├── .gitignore ├── README ├── TODO ├── scripts ├── sslsnoop ├── sslsnoop-openssh ├── sslsnoop-openssh-dump └── sslsnoop-openssl ├── setup.py ├── sslsnoop ├── Makefile ├── __init__.py ├── align.py ├── cleaner.py ├── ctypes_nss.c ├── ctypes_nss.py ├── ctypes_nss_generated.py ├── ctypes_openssh.py ├── ctypes_openssl.c ├── ctypes_openssl.py ├── ctypes_openssl_generated.py ├── ctypes_putty.py ├── ctypes_putty_cryptoapi_generated.py ├── ctypes_putty_generated.py ├── engine.py ├── finder.py ├── generate.py ├── lrucache.py ├── myputty.h ├── network.py ├── openssh.py ├── openssl.py ├── openvpn.py ├── output.py ├── paramiko_packet.py ├── preprocess.py ├── ssh-types.c ├── stream.py ├── test-scapy.py ├── test.py └── utils.py └── test ├── __init__.py ├── dsa.orig ├── dsa.orig.pub ├── id_dsa-1.key ├── ssh ├── types.c.log └── types.py.log └── sslsnoop ├── __init__.py └── test_ssh_data.py /.gitignore: -------------------------------------------------------------------------------- 1 | biblio/ 2 | build/ 3 | dist/ 4 | memdiff/ 5 | outputs/ 6 | sslsnoop.egg-info/ 7 | structures/ 8 | *~ 9 | *.pyc 10 | test/data/ 11 | test/dumps/ 12 | *.xml 13 | log.* 14 | log 15 | .scapy_prestart.py 16 | 17 | -------------------------------------------------------------------------------- /README: -------------------------------------------------------------------------------- 1 | HOWTO: 2 | ------ 3 | https://github.com/trolldbois/sslsnoop/wiki/Screencast 4 | 5 | $ sudo easy_install sslsnoop 6 | $ mkdir outputs 7 | 8 | You really have to. Please. 9 | 10 | $ sudo sslsnoop # try ssh, sshd and ssh-agent... for various things 11 | $ sudo sslsnoop-openssh live `pgrep ssh` # dumps SSH decrypted traffic in outputs/ 12 | $ sudo sslsnoop-openssh offline --help # dumps SSH decrypted traffic in outputs/ from a pcap file 13 | $ sudo sslsnoop-openssl `pgrep ssh-agent` # dumps RSA and DSA keys 14 | 15 | and go and check outputs/. 16 | 17 | hints : 18 | ------- 19 | a) works if scapy doesn't drop packets. 
using pcap instead of SOCK_RAW helps a lot now. 20 | b) works better on interactive traffic with no traffic at the time of the ptrace. It follows the flow, after that. 21 | c) Dumps one file per fd in outputs/ 22 | d) Attaching to a process is quicker with --addr 0xb788aa98 as provided by haystack 23 | INFO:abouchet:found instance @ 0xb788aa98 24 | e) how to get a pickled session_state file : 25 | $ sudo haystack --pid `pgrep ssh` sslsnoop.ctypes_openssh.session_state search > ss.pickled 26 | 27 | 28 | not so FAQ : 29 | ============ 30 | 31 | What does it do, really ?: 32 | -------------------------- 33 | It dumps live session keys from an OpenSSH process, and decrypts the traffic on the fly. 34 | Not all ciphers are implemented. 35 | 36 | Working ciphers : aes128-ctr, aes192-ctr, aes256-ctr, blowfish-cbc, cast128-cbc 37 | Partially working ciphers (INBOUND only ?!): aes128-cbc, aes192-cbc, aes256-cbc 38 | Non-working ciphers : 3des-cbc, 3des, ssh1-blowfish, arcfour, arcfour1280 39 | 40 | It can also dump DSA and RSA keys from ssh-agent or sshd ( or others ). 41 | 42 | How does it know that the structure is valid ? : 43 | ------------------------------------------------ 44 | You add some constraints ( expectedValues ) on the fields. Pointers are also a good start. 45 | 46 | Yeah, but you have to be root, so what's the use ? : 47 | ---------------------------------------------------- 48 | Monitoring ssh traffic on honeypots ? 49 | Monitoring encrypted traffic on honeypots ? 50 | Monitoring encrypted traffic on ... somewhere you are root ? 51 | 52 | It does not work on my openssh ? : 53 | ----------------------------------- 54 | Tested on OpenSSH 5.5. 55 | Should work on most recent versions. I didn't check for structure modifications, but that would explain a lot. 56 | It works really well on interactive sessions with no traffic at the time of execution.
(clean cipher state in memory) 57 | It can work on a busy ssh stream, *IF* a) the cipher state is clean, b) scapy doesn't lose packets (CPU ?). 58 | -> yeah the GIL really sucks 59 | 60 | How can I decrypt a pcap file ? : 61 | ---------------------------------- 62 | Use the offline mode. 63 | 64 | Where does the idea come from ? : 65 | ----------------------------------- 66 | use http://www.hsc.fr/ressources/breves/passe-partout.html.fr to get keys 67 | use http://pauldotcom.com/2010/10/tsharkwireshark-ssl-decryption.html 68 | or http://www.rtfm.com/ssldump/ to read streams 69 | use scapy, because it's fun ? but we need IP reassembly. 70 | pynids could be more useful... 71 | dsniff is now in python ? 72 | flowgrep 73 | use python. 74 | 75 | 76 | What are the dependencies ? : 77 | ---------------------------- 78 | python-haystack (same author) 79 | python-ptrace 80 | scapy 81 | python-pcap / python-xxxpcap ( recommended for perf issues ) 82 | paramiko (for ssh decryption) [ TODO, extract & kill dep. we only need Message and Packetizer ] 83 | python-psutil 84 | 85 | Conclusion : 86 | ------------ 87 | poc done. 88 | Next, `pgrep firefox`.
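Concretely, the on-the-fly decryption reduces to walking SSH's binary packet format (RFC 4253): decrypt the first cipher block of a record, read the 4-byte packet_length, sanity-check it, then decrypt the rest of the record and strip the padding and the (unencrypted) MAC. Here is a minimal illustrative sketch of that framing with a stand-in null cipher; the names (read_ssh_record, the 35000-byte bound) are this sketch's own, not sslsnoop's API, and it is written to run under modern Python while the project itself targets Python 2:

```python
import struct

PACKET_MAX_SIZE = 35000  # plausibility bound on packet_length (same bound paramiko uses)

def read_ssh_record(decrypt, data, block_size=16, mac_len=20):
    """Peel one SSH binary-protocol record (RFC 4253) off `data`.
    `decrypt` stands in for the cipher state recovered from memory;
    a null cipher (identity) works for illustration."""
    # the first decrypted block starts with the uint32 packet_length field
    head = decrypt(data[:block_size])
    packet_size = struct.unpack('>I', head[:4])[0]
    if not 0 < packet_size <= PACKET_MAX_SIZE:
        raise ValueError('implausible packet_length %d' % packet_size)
    # decrypt the remainder of the record; the trailing MAC is not encrypted
    body = head[4:] + decrypt(data[block_size:4 + packet_size])
    padding = struct.unpack('>B', body[:1])[0]
    payload = body[1:packet_size - padding]
    consumed = 4 + packet_size + mac_len  # length field + record + MAC
    return payload, consumed

if __name__ == '__main__':
    # forge a record with a null cipher: packet_length=12, padding=6, payload='hello'
    record = struct.pack('>IB', 12, 6) + b'hello' + b'\x00' * 6 + b'M' * 20
    payload, consumed = read_ssh_record(lambda block: block, record)
    assert payload == b'hello' and consumed == 36
```

The same plausibility check on packet_length is what the alignment code in align.py uses to decide whether a candidate offset in a captured stream is the start of a packet.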
89 | 90 | 91 | Biblio 92 | ------- 93 | 94 | Bringing Volatility to Linux 95 | http://dfsforensics.blogspot.com/2011/03/bringing-linux-support-to-volatility.html 96 | 97 | Extracting truecrypt keys from memory 98 | http://jessekornblum.com/tools/volatility/cryptoscan.py 99 | 100 | python-ptrace ( hey, haypo again) 101 | https://bitbucket.org/haypo/python-ptrace/wiki/Home 102 | https://bitbucket.org/haypo/python-ptrace/wiki/Documentation 103 | 104 | from ptrace.debugger.memory_mapping import readProcessMappings 105 | 106 | openssl.py is passe-partout.py - OK - 04/03/2011 107 | 108 | there is a 2008 paper on recovering AES keys from memory (cold boot) 109 | https://citp.princeton.edu/research/memory/ 110 | 111 | OpenSSH, testing ciphers 112 | ======================== 113 | Ciphers 114 | Specifies the ciphers allowed for protocol version 2 in order of preference. Multiple ciphers must be comma-separated. The supported ciphers 115 | are “3des-cbc”, “aes128-cbc”, “aes192-cbc”, “aes256-cbc”, “aes128-ctr”, “aes192-ctr”, “aes256-ctr”, “arcfour128”, “arcfour256”, “arcfour”, 116 | “blowfish-cbc”, and “cast128-cbc”. The default is: 117 | 118 | aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128, 119 | aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc, 120 | aes256-cbc,arcfour 121 | 122 | force one : 123 | 124 | ssh -c aes192-ctr log@host 125 | 126 | 127 | firefox & NSS 128 | ============= 129 | INFO:abouchet:found instance @ 0xbfe12c20 => on the stack 130 | 131 | INFO:abouchet:Looking at 0x85f00000-0x86000000 (rw-p) 132 | INFO:abouchet:processed 6465536 bytes 133 | ptrace.debugger.process_error.ProcessError: readBytes(0x84d28ae4, 392) error: [Errno 5] Input/output error 134 | ## weird .... 135 | 136 | 4894720 137 | 138 | 139 | Architecture 140 | ============ 141 | 142 | 143 | openssh creates an OpenSSHLiveDecryptatator which inherits from OpenSSHKeysFinder 144 | OpenSSHLiveDecryptatator : 145 | * connects to/launches a network.Sniffer.
(scapy) 146 | * OpenSSHKeysFinder calls haystack to fetch the session_state 147 | - memory capture/ptrace is done in a subprocess 148 | - target process is not under ptrace anymore when openssh runs. 149 | - keys are acquired 150 | * SessionCiphers are created from pickled values from haystack 151 | - one for inbound traffic 152 | - one for outbound traffic 153 | * each SessionCipher is coupled with : 154 | - a socket given by a TCPStream ( Inbound and Outbound TCPstate) 155 | - a paramiko Packetizer which is an ssh protocol handler. 156 | * a cipher engine is used by the paramiko.Packetizer to decrypt data from the TCPStream socket 157 | * the Packetizer uses : 158 | - the socket to read its data from the 'network'. 159 | - the cipher to decrypt the data 160 | * a SSHStreamToFile is created for each stream and is given the packetizer and the overall context ( cipher, socket ) 161 | - the SSHStreamToFile tries to process the packetizer's outputs into a file. 162 | * a Supervisor is created to handle traffic ( select on socket ) 163 | - both SSHStreamToFile are given to the Supervisor with their respective socket 164 | 165 | 166 | TODO: 167 | 168 | SSHStream uses the packets in its orderedQueue and the cipher, to try to find an SSH packet 169 | - algo 1 : copy original cipher state, decrypt first block of packet [0], 170 | if not valid, drop packet and loop to next one (for x packets) 171 | if valid, switch to go-through mode and queue current + all packets data to socket 172 | 173 | - algo 2 : try to find a valid packet, block per block/long by long 174 | if valid, switch to go-through mode and queue current + all packets data to socket 175 | 176 | 177 | 178 | 179 | 180 | 181 | 182 | 183 | 184 | -------------------------------------------------------------------------------- /TODO: -------------------------------------------------------------------------------- 1 | TODO: 2 | ==== 3 | use psutil to get p.get_connections(), to construct the filter or update the socket with a
new stream filter ( doable ) 4 | automate ssh/sshd detection through name matching // get_connections ( done in finder.py ) 5 | find the bug that makes abouchet sigsegv sometimes. probably a pointer going in loadMembers.else 6 | 7 | find interesting structs in firefox. 8 | 9 | look at Windows portability. 10 | 11 | look at Volatility compatibility. 12 | 13 | -------------------------------------------------------------------------------- /scripts/sslsnoop: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import sys 10 | from sslsnoop import finder 11 | 12 | if __name__ == "__main__": 13 | finder.main(sys.argv[1:]) 14 | 15 | 16 | -------------------------------------------------------------------------------- /scripts/sslsnoop-openssh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import sys 10 | from sslsnoop import openssh 11 | 12 | if __name__ == "__main__": 13 | openssh.main(sys.argv[1:]) 14 | 15 | 16 | -------------------------------------------------------------------------------- /scripts/sslsnoop-openssh-dump: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | __doc__ = ''' 10 | Dump OpenSSH session_state.
11 | 12 | Python code ready to be frozen by : 13 | /usr/share/doc/python2.7/examples/Tools/freeze/freeze.py 14 | -X apport -X apt -X distutils -X doctest -X pydoc -X paramiko -X scapy -X email 15 | -X xml -X unittest -X urllib -X urllib2 -X ssl -X threading -X meliae -X readline 16 | sslsnoop-openssh-dump.py 17 | ''' 18 | 19 | import sslsnoop 20 | from sslsnoop import ctypes_openssh 21 | import logging, pickle, os, sys, argparse 22 | log = logging.getLogger('sslsnoop-openssh-dump') 23 | def argparser(): 24 | parser = argparse.ArgumentParser(prog='sslsnoop-openssh-dump', description='Dump OpenSSH session_state for later offline decryption.') 25 | parser.add_argument('--debug', action='store_const', const=True, default=False, help='debug mode') 26 | parser.add_argument('--quiet', action='store_const', const=True, default=False, help='quiet mode') 27 | 28 | subparsers = parser.add_subparsers(help='sub-command help') 29 | dump_parser = subparsers.add_parser('dump', help='Dump openssh session_state to a file for later use by offline mode.') 30 | dump_parser.add_argument('pid', type=int, help='Target PID') 31 | dump_parser.add_argument('sessionstatefile', type=argparse.FileType('w'), help='Output file for the pickled session_state.') 32 | dump_parser.set_defaults(func=dumpToFile) 33 | return parser 34 | 35 | def dumpToFile(args): 36 | from haystack import memory_mapper, abouchet, model 37 | mappings = memory_mapper.MemoryMapper(args).getMappings() #args.pid /args.memfile 38 | targetMapping = [m for m in mappings if m.pathname == '[heap]'] 39 | if len(targetMapping) == 0: 40 | log.warning('No [heap] memory mapping found. Searching everywhere.') 41 | targetMapping = mappings 42 | finder = abouchet.StructFinder(mappings, targetMapping) 43 | outs = finder.find_struct( ctypes_openssh.session_state, maxNum=1) # exactly one match expected 44 | if len(outs) == 0 : 45 | log.error('openssh session_state not found') 46 | return 47 | ss, addr = outs[0] 48 | res = ss.toPyObject() 49 | if model.findCtypesInPyObj(res): 50 | log.error('=========************======= CTYPES STILL IN pyOBJ !!!!
') 51 | args.sessionstatefile.write(pickle.dumps([(res,addr)])) 52 | return 53 | 54 | 55 | def main(argv): 56 | parser = argparser() 57 | opts = parser.parse_args(argv) 58 | opts.func(opts) 59 | return 60 | 61 | if __name__ == "__main__": 62 | sys.path.append(os.getcwd()) 63 | main(sys.argv[1:]) 64 | 65 | -------------------------------------------------------------------------------- /scripts/sslsnoop-openssl: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import sys 10 | from sslsnoop import openssl 11 | 12 | if __name__ == "__main__": 13 | openssl.main(sys.argv[1:]) 14 | 15 | 16 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | from setuptools import setup, find_packages 3 | from glob import glob 4 | 5 | setup(name="sslsnoop", 6 | version="0.15", 7 | description="Dumps the live traffic of an ssl-encrypted stream.", 8 | long_description=open('README').read(), 9 | 10 | url="http://packages.python.org/sslsnoop/", 11 | download_url="http://github.com/trolldbois/sslsnoop/tree/master", 12 | license='GPL', 13 | classifiers=[ 14 | "Topic :: System :: Networking", 15 | "Topic :: Security", 16 | "Environment :: Console", 17 | "Intended Audience :: Developers", 18 | "License :: OSI Approved :: GNU General Public License (GPL)", 19 | "Programming Language :: Python", 20 | "Development Status :: 5 - Production/Stable", 21 | ], 22 | keywords=['memory','analysis','forensics','struct','ptrace','openssh','openssl','decrypt'], 23 | author="Loic Jaquemet", 24 | author_email="loic.jaquemet+python@gmail.com", 25 | packages = ['sslsnoop'], 26 | #exclude=['biblio'], 27 | 
#packages=find_packages(exclude=['biblio', 'build']), 28 | scripts = ['scripts/sslsnoop-openssh', 'scripts/sslsnoop-openssl', 'scripts/sslsnoop', 'scripts/sslsnoop-openssh-dump'], 29 | install_requires = [ "haystack >= 0.15","psutil >= 0.1"], # python-scapy, pypcap neither are in pypi... deadlink 30 | test_suite= "test.alltests", 31 | ) 32 | -------------------------------------------------------------------------------- /sslsnoop/Makefile: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | 4 | 5 | clean: 6 | rm -f ctypes_openssl_generated.c ctypes_openssl_generated_clean.c ctypes_openssl_generated.py ctypes_openssl_generated.pyc 7 | #rm -f ctypes_nss_generated.c ctypes_nss_generated_clean.c ctypes_nss_generated.py ctypes_nss_generated.pyc 8 | 9 | -------------------------------------------------------------------------------- /sslsnoop/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/trolldbois/sslsnoop/29a348f2bf2c9c7eb4f6b9e1cae276f119e033f1/sslsnoop/__init__.py -------------------------------------------------------------------------------- /sslsnoop/align.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import argparse 10 | import ctypes 11 | import copy 12 | import os 13 | import logging 14 | import pickle 15 | import struct 16 | import sys 17 | import time 18 | import threading 19 | import Queue 20 | 21 | # todo : replace by one empty shell of ours 22 | #from paramiko.transport import Transport 23 | 24 | from sslsnoop import openssh 25 | from sslsnoop import ctypes_openssl 26 | from sslsnoop import ctypes_openssh 27 | import output 28 | import haystack 29 | import network 30 | import 
utils 31 | import openssh 32 | 33 | from engine import CIPHERS 34 | #our impl 35 | from paramiko_packet import Packetizer, PACKET_MAX_SIZE 36 | 37 | log=logging.getLogger('align') 38 | 39 | 40 | 41 | 42 | def alignEncryption(way, data): 43 | ''' 44 | # get data[] in input 45 | # try to get an offset 46 | # split to output previous, after 47 | # the index of the alignement 48 | ''' 49 | log.info('trying to align on data ') 50 | blocksize = way.engine.block_size 51 | # try to find a packetlen 52 | read_offset = 1 # len(data) # check only start of packet 53 | #for i in range(0, len(data)-blocksize, len(data) ): # check only start of packet 54 | for i in range(0, len(data)-blocksize, read_offset ): # check all offsets 55 | # tests shows Message are often data[blocksize:] 56 | log.debug('trying index %d'%(i)) 57 | header = way.engine.decrypt( data[i:i+blocksize] ) 58 | #log.debug('after align decrypt %s'%repr(way.engine.getCounter())) 59 | packet_size = struct.unpack('>I', header[:4])[0] 60 | # reset engine to initial state 61 | way.engine.sync(way.context) 62 | if 0 < packet_size <= PACKET_MAX_SIZE: 63 | log.debug('Auto align done: We found a acceptable packet size(%d) at %d '%(packet_size, i)) 64 | if (packet_size - (blocksize-4)) % blocksize == 0 : 65 | log.debug('Auto align found : packet size(%d) at %d '%(packet_size, i)) 66 | # save previous data 67 | prev = data[:i] 68 | next = data[i:] 69 | return prev, next 70 | else: 71 | log.debug('bad blocking packetsize %d is not correct for blocksize'%((packet_size - (blocksize-4)) % blocksize)) 72 | # goto next byte 73 | return None 74 | 75 | 76 | def rfindAlign(way, data): 77 | ''' find alignement backwards 78 | we need to backtrack from prev_i = (index-blocksize)-- # smallest packet is at least mac_len+int_size+padding 79 | with counter = counter-2 ( // block_size) // counter-1 is useless when > 16 80 | smallest case scenarios packet_size = 28, 12 for disconnects ? 
81 | And this is only to find a valid packet_size 82 | and find a value x for which x+prev_i < index # packet_size is not in clear stoupid.. 83 | ''' 84 | logging.getLogger('align').setLevel(logging.DEBUG) 85 | #logging.getLogger('engine').setLevel(logging.DEBUG) 86 | 87 | log.debug('trying to backwards align on data ') 88 | blocksize = way.engine.block_size 89 | # try to find a packetlen 90 | read_offset = 1 # len(data) # check only start of packet 91 | 92 | # reset engine to initial state 93 | way.engine.sync(way.context) 94 | # save current counter 95 | lastGoodCounter = way.engine.counter 96 | lastGoodIndex = len(data) 97 | 98 | log.debug('orig counter:') 99 | log.debug(repr(way.engine.getCounter())) 100 | log.debug('backwards counters:') 101 | 102 | #for i in range(0, len(data)-blocksize, len(data) ): # check only start of packet 103 | #for i in range(len(data)-blocksize, -1 , -1 ): # check all offsets 104 | two = True 105 | i = len(data) 106 | while i>0: 107 | 108 | ## STEP 1 : prepare counter backwards 2 (packet_size+) 109 | ## # possible corner case, if no packet payload, only one decCounter should be needed ( disconnections ?? ) 110 | ## # test1 : if index-prev_i < mac_len + 4 + 12(padding), test with simple decCounter 111 | ## # lets ignore that for now 112 | 113 | # reset engine ctr to previous iteration and decrease counter 2 times 114 | way.engine.counter = (ctypes.c_ubyte*blocksize).from_buffer_copy( lastGoodCounter ) 115 | way.engine.decCounter() 116 | way.engine.decCounter() 117 | log.debug('after decCounter %s'%repr(way.engine.getCounter())) 118 | counter = (ctypes.c_ubyte*blocksize).from_buffer_copy( way.engine.counter ) 119 | 120 | ## STEP 2 : test all offset for a valid packet_size 121 | for i in range(lastGoodIndex-blocksize, -1 , -1 ): # check all offsets ( -blocksize+4 ?) 
122 | header = way.engine.decrypt( data[i:i+blocksize] ) # read from end to start 123 | #log.debug('after decrypt %s'%repr(way.engine.getCounter())) 124 | packet_size = struct.unpack('>I', header[:4])[0] 125 | #log.debug('packet_size %d'%( packet_size )) 126 | if 0 <= packet_size <= PACKET_MAX_SIZE: 127 | log.debug('Auto align done: We found a acceptable packet size(%d) at %d '%(packet_size, i)) 128 | if (packet_size - (blocksize-4)) % blocksize == 0 : 129 | log.debug('Auto align found : packet size(%d) at %d '%(packet_size, i)) 130 | # save previous data 131 | lastGoodIndex = i 132 | lastGoodCounter = (ctypes.c_ubyte*blocksize).from_buffer_copy(counter) # save current counter for next iteration 133 | break # go to search the previous packet 134 | else: 135 | log.debug('bad blocking packetsize %d is not correct for blocksize'%((packet_size - (blocksize-4)) % blocksize)) 136 | # clean and reset 137 | way.engine.counter = (ctypes.c_ubyte*blocksize).from_buffer_copy( counter ) 138 | 139 | #if lastGoodIndex-i > 20000: 140 | # raise IndexError('did not find a valid offset') 141 | 142 | if lastGoodIndex == len(data): 143 | return -1, None 144 | 145 | return lastGoodIndex, lastGoodCounter 146 | 147 | 148 | 149 | from network import Sniffer 150 | from stream import TCPStream 151 | class PcapFileSniffer2(Sniffer): 152 | ''' Use scapy's offline mode to simulate network by reading a pcap file. 
153 | ''' 154 | def __init__(self, pcapfile, filterRules='tcp', packetCount=0): 155 | Sniffer.__init__(self, filterRules=filterRules, packetCount=packetCount) 156 | self.pcapfile = pcapfile 157 | def run(self): 158 | from scapy.all import sniff 159 | sniff(store=0, prn=self.enqueue, offline=self.pcapfile) 160 | return 161 | def addStream(self, connection): 162 | ''' forget that stream ''' 163 | shost,sport = connection.local_address 164 | dhost,dport = connection.remote_address 165 | if shost.startswith('127.') or shost.startswith('::1'): 166 | log.warning('=============================================================') 167 | log.warning('scapy is gonna truncate big packet on the loopback interface.') 168 | log.warning('please change your test params,or use offline mode with pcap.') 169 | log.warning('=============================================================') 170 | #q = multiprocessing.Queue(QUEUE_SIZE) 171 | q = Queue.Queue(50000) 172 | st = TCPStream2(q, connection) 173 | #save in both directions 174 | self.streams[(shost,sport,dhost,dport)] = (st,q) 175 | self.streams[(dhost,dport,shost,sport)] = (st,q) 176 | return st 177 | 178 | class TCPStream2(TCPStream): 179 | def run(self): 180 | ''' loops on self.inQueue and calls triage ''' 181 | retry=1 182 | while self.check(): 183 | try: 184 | for p in self.inQueue.get(block=True, timeout=1): 185 | self.triage(p) 186 | self.inQueue.task_done() 187 | except Queue.Empty,e: 188 | if retry > 2: 189 | break 190 | retry+=1 191 | log.debug('Empty queue') 192 | pass 193 | self.finish() 194 | pass 195 | 196 | 197 | class RawFileDumper: 198 | def __init__(self, socket, filename): 199 | self.fout=file(filename,'w') 200 | self.socket = socket 201 | 202 | def process(self): 203 | #log.debug('Processing stuff') 204 | ok = True 205 | while ok: 206 | data = self.socket.recv(16) 207 | if data is None or len(data) <= 0: 208 | raise EOFError('end of stream') 209 | self.fout.write(data) 210 | self.fout.flush() 211 | 212 | from 
openssh import OpenSSHPcapDecrypt 213 | 214 | class PrevDecrypt(OpenSSHPcapDecrypt): 215 | #def __init__(self, pcapfilename, connection, ssfile): 216 | def run(self): 217 | ''' launch sniffer and decrypter threads ''' 218 | # capure network before keys 219 | self._initSniffer() 220 | self._initStream() 221 | self._launchStreamProcessing() 222 | # we can get keys now 223 | self._initCiphers() 224 | self._initSSH() 225 | self._initOutputs() 226 | self._initWorker() 227 | self.worker.pleaseStop() 228 | self.loop() 229 | log.info("[+] done %s"%(self)) 230 | return 231 | 232 | def _initSniffer(self): 233 | self.scapy = PcapFileSniffer2(self.pcapfilename) 234 | self.scapy.thread = threading.Thread(target=self.scapy.run, ) 235 | #self.scapy.thread.start() 236 | log.info('[+] read pcap online') 237 | 238 | def _initOutputs(self): 239 | self.inbound.state.setActiveMode() 240 | self.outbound.state.setActiveMode() 241 | self.inbound.filewriter = RawFileDumper(self.inbound.state.getSocket(), '%s.inbound.raw'%(self.pcapfilename)) 242 | self.outbound.filewriter = RawFileDumper(self.outbound.state.getSocket(), '%s.outbound.raw'%(self.pcapfilename)) 243 | log.debug('Outputs created') 244 | return 245 | 246 | 247 | class Dummy: 248 | pass 249 | 250 | 251 | 252 | BUFSIZE = 4096 253 | class SimpleBufferSocket(): 254 | def __init__(self, buf): 255 | self.buf = buf 256 | self.offset = 0 257 | self.closed = False 258 | 259 | def recv(self, n = BUFSIZE): 260 | if self.closed: 261 | raise IOError() 262 | ret = self.buf[self.offset:self.offset+n] 263 | self.offset += len(ret) 264 | return ret 265 | 266 | def close(self): 267 | self.closed = True 268 | 269 | class FakeState(): 270 | def __init__(self, buf): 271 | self.socket = SimpleBufferSocket(buf) 272 | 273 | def getSocket(self): 274 | return self.socket 275 | 276 | 277 | def decrypt(engine, data, block_size, mac_len): 278 | #logging.getLogger('align').setLevel(logging.DEBUG) 279 | #logging.getLogger('engine').setLevel(logging.DEBUG) 280 
| 281 | i = 0 282 | ret = '' 283 | remainder='' 284 | while i < len(data): 285 | 286 | #log.debug('read %d i:%d : \n header = %s'%(block_size,i, repr(data[i:i+block_size]))) 287 | tmp = engine.decrypt( data[i:i+block_size] ) 288 | i += block_size 289 | packet_size = struct.unpack('>I', tmp[:4])[0] 290 | if packet_size > 35000: 291 | raise ValueError(packet_size) 292 | 293 | ret += tmp[:4] 294 | 295 | #log.debug('read packet_size %d i:%d'%(packet_size,i)) 296 | leftover = tmp[4:] 297 | 298 | #log.debug('read packet (%d+%d-%d)=%d i:%d \n body = %s'%(packet_size, mac_len, len(leftover), 299 | # packet_size+mac_len-len(leftover), i , repr(data[i:i+packet_size+mac_len-len(leftover)]) )) 300 | packet = engine.decrypt( data[i:i+packet_size-len(leftover)] ) # do not decrypt +mac_len 301 | i += packet_size-len(leftover) 302 | # but we read +mac_len 303 | mac = data[i:i+mac_len] 304 | i += mac_len 305 | 306 | packet = leftover+packet 307 | padding = ord(packet[0]) 308 | #log.debug('padding %d'%(padding)) 309 | #payload = packet[1:packet_size - padding] 310 | #ret += packet[1+4+4+1:-padding] # content only 311 | 312 | log.debug('read packet_size:%d padding:%d mac:%d total:%d'%(packet_size,padding, mac_len, packet_size+mac_len+padding )) 313 | 314 | ret += packet+mac 315 | return ret 316 | 317 | 318 | def parseProtocol(way, data_clear, outfilename): 319 | way.state = FakeState(data_clear) 320 | way.packetizer = Packetizer( way.state.getSocket() ) 321 | way.packetizer.set_log(logging.getLogger('packetizer')) 322 | #logging.getLogger('packetizer').setLevel(logging.DEBUG) 323 | #use a null block_engine 324 | way.packetizer.set_inbound_cipher(None, way.context.block_size, None, way.context.mac.mac_len , None) 325 | if way.context.comp.enabled != 0: 326 | from paramiko.transport import Transport 327 | name = way.context.comp.name 328 | compress_in = Transport._compression_info[name][1] 329 | way.packetizer.set_inbound_compressor(compress_in()) 330 | # use output stream 331 | 
ssh_decrypt = output.SSHStreamToFile(way.packetizer, way, '%s.clear'%outfilename, folder=".", fmt='%Y') 332 | try: 333 | way.engine.sync(way.context) 334 | while True: 335 | ssh_decrypt.process() 336 | except EOFError,e: 337 | log.debug('offset in socket offset:%d/%d'%(way.state.getSocket().offset, len(way.state.getSocket().buf))) 338 | pass 339 | 340 | def dec(way, basename): 341 | rawfilename = '%s.raw'%(basename) 342 | postfilename = '%s.post.dec'%(basename) 343 | prevfilename = '%s.prev.dec'%(basename) 344 | decodedfilename = '%s.dec'%(basename) 345 | 346 | attachEngine(way) 347 | 348 | ## STEP 1 : open the raw file, and find the point of alignement 349 | prev, post = alignEncryption(way, file(rawfilename).read()) 350 | index = len(prev) 351 | log.info('Memdump was done at index %d with cipher %s'%(index, way.context.name)) 352 | 353 | ## STEP 2 : decrypt data 354 | log.info('Decrypting data POST memdump ') 355 | way.engine.sync(way.context) 356 | fout = file('%s.raw'%postfilename,'w') 357 | 358 | log.info('before post decrypt %s'%repr(way.engine.getCounter())) 359 | 360 | post_data = decrypt( way.engine, post , way.context.block_size, way.context.mac.mac_len ) 361 | fout.write(post_data) 362 | fout.close() 363 | log.info('POST memdump - decryption completed in %s'%(fout.name)) 364 | 365 | #logging.getLogger('align').setLevel(logging.DEBUG) 366 | # debug 367 | way.engine.sync(way.context) 368 | log.debug('before test decrypt %s'%repr(way.engine.getCounter())) 369 | header = way.engine.decrypt( post[:way.context.block_size] ) 370 | log.debug('after decrypt %s'%repr(way.engine.getCounter())) 371 | packet_size = struct.unpack('>I', header[:4])[0] 372 | log.debug('packet_size %d'%( packet_size )) 373 | 374 | ## STEP 2b : parse the ssh protocol of data after the 375 | #data_clear = file('%s.raw'%postfilename,'r').read() 376 | #parseProtocol(way, data_clear, postfilename) 377 | 378 | #logging.getLogger('align').setLevel(logging.DEBUG) 379 | ## STEP 3 : go backwards 
and find alignement as far as possible 380 | log.info('Decrypting data BEFORE memdump ') 381 | 382 | way.engine.sync(way.context) 383 | log.debug('before ralign decrypt %s'%repr(way.engine.getCounter())) 384 | 385 | prev_index, prev_counter = rfindAlign(way, prev) 386 | if prev_index == -1: 387 | log.warning('could not go backwards') 388 | prev_data = '' 389 | else: 390 | log.info('Backwards decryption managed up to %d bytes offset:%d'%(len(prev)-prev_index, prev_index)) 391 | 392 | ## STEP 4 : decrypt backwards data 393 | fakecontext = Dummy() 394 | fakecontext.__dict__ = dict(way.context.__dict__) 395 | way.engine.sync(fakecontext) 396 | way.engine.counter = prev_counter 397 | fout = file('%s.raw'%prevfilename,'w') 398 | prev_data = decrypt( way.engine, prev[prev_index:] , way.context.block_size, way.context.mac.mac_len ) 399 | fout.write(prev_data) 400 | fout.close() 401 | log.info('BEFORE memdump - decryption of %d bytes completed in %s'%(len(prev_data), fout.name)) 402 | 403 | ## STEP 4b : parse the ssh protocol of data after the 404 | #parseProtocol(way, data_clear, prevfilename) 405 | 406 | #logging.getLogger('output').setLevel(logging.DEBUG) 407 | 408 | ## STEP 5 : parse the ssh protocol of as many data as possible 409 | data_clear = prev_data + post_data 410 | parseProtocol(way, data_clear, decodedfilename) 411 | 412 | return 413 | 414 | def attachEngine(way): 415 | way.engine = openssh.CIPHERS[way.context.name](way.context) 416 | 417 | 418 | 419 | logging.basicConfig(level=logging.INFO) 420 | logging.getLogger('align').setLevel(logging.INFO) 421 | logging.getLogger('engine').setLevel(logging.INFO) 422 | 423 | ''' 424 | ssfilename='test1.ss' 425 | pcapfilename='test1.pcap' 426 | connection = Dummy() 427 | connection.remote_address = ('::1', 44204) 428 | connection.local_address = ('::1', 22) 429 | ''' 430 | ssfilename='test2.ss' 431 | pcapfilename='test2.pcap' 432 | connection = Dummy() 433 | connection.remote_address = ('::1', 53373) 434 | 
connection.local_address = ('::1', 22) 435 | 436 | 437 | ## work for separating streams 438 | # decryptor = PrevDecrypt(pcapfilename, connection, file(ssfilename)) 439 | # decryptor.run() 440 | 441 | #import sys 442 | #sys.exit() 443 | 444 | inbound = Dummy() 445 | outbound = Dummy() 446 | 447 | ss = openssh.SessionCiphers(pickle.load( file(ssfilename))[0][0]) 448 | inbound.context, outbound.context = ss.getCiphers() 449 | 450 | base_in = '%s.inbound'%(pcapfilename) 451 | base_out = '%s.outbound'%(pcapfilename) 452 | 453 | 454 | 455 | #dec(inbound, base_in) 456 | dec(outbound, base_out) 457 | 458 | ''' 459 | invoke_shell 460 | '\x00\x00\x00\x1c\x0cb\x00\x00\x00\x00\x00\x00\x00\x05shell\x01' 461 | 462 | b \x00\x00\x00\x00 \x00\x00\x00\x05 shell \x01' 463 | channel_request of channel.invoke_shell() 464 | 465 | '\x00\x00\x00\x1c\x0c 466 | get_pty tronque ? 467 | 468 | si, c'est completement impossible que 0x1c et 0x0c soit width et height 469 | 470 | channel.get_pty 471 | m.add_byte(chr(MSG_CHANNEL_REQUEST)) 472 | m.add_int(self.remote_chanid) 473 | m.add_string('pty-req') 474 | m.add_boolean(True) 475 | m.add_string(term) 476 | m.add_int(width) 477 | m.add_int(height) 478 | # pixel height, width (usually useless) 479 | m.add_int(0).add_int(0) 480 | m.add_string('') 481 | 482 | 483 | ''' 484 | 485 | 486 | 487 | 488 | 489 | 490 | 491 | 492 | 493 | 494 | 495 | 496 | 497 | 498 | -------------------------------------------------------------------------------- /sslsnoop/cleaner.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import logging, re 10 | 11 | log=logging.getLogger('preprocess') 12 | 13 | class HeaderCleaner: 14 | ''' 15 | Cleans a Preprocessed File for gccxml comsumption. 
16 | Strips off static functions and extern references. 17 | 18 | @param preprocessed: the preprocessed file name 19 | @param out: the desired output file 20 | ''' 21 | functionsReOneLine = r""" # static inline bool kernel_page_present(struct page *page) { return true; } 22 | ^ ((static\ (inline|__inline__)) 23 | (\s+__attribute__\(\(always_inline\)\))* (?P<type>\s+\w+)* (\s*[*]\s*)* 24 | (?P<name> \w+ ) (?P<args> \s*\([^{;]+?\)\s* ) 25 | ( { }$ |{ .*? }$ ) 26 | ) 27 | """ 28 | functionsRe = r""" # nice - ok for pointers 29 | ^ ((static\ (inline|__inline__)) 30 | (\s+__attribute__\(\(always_inline\)\))* (?P<type>\s+\w+)* (\s*[*]\s*)* 31 | (?P<name> \w+ ) (?P<args> \s*\([^{;]+?\)\s* ) 32 | ( { }$ |{ . }$ | { .*? ^}$ ) 33 | ) 34 | """ 35 | functionsRe2 = r""" # nice - ok for pointers 36 | ^ ( (__extension__\s+)* (extern\ (inline|__inline__|__inline)) 37 | (\s+__attribute__\(\((always_inline|__gnu_inline__)\)\))* (?P<type>\s+\w+)* (\s*[*]\s*)* 38 | (?P<name> \w+ ) (?P<args> \s*\([^{;]+?\)\s* ) 39 | ( { }$ |{ . }$ | { .*? ^}$ ) 40 | ) 41 | """ 42 | externsRe = r""" # extern functions sig. 43 | ^((extern) \s+ (?!struct|enum) [^{]*? ;$ ) 44 | """ 45 | externsRe2 = r""" # functions sig. on one line 46 | ^((void|unsigned|long|int) \s+ (?!struct|enum) .*?
;$ ) 47 | """ 48 | def __init__(self, preprocessed, out): 49 | self.preprocessed = file(preprocessed).read() 50 | self.out = out 51 | 52 | def stripFunctions(self, data): 53 | # delete one-liners 54 | REGEX_OBJ = re.compile( self.functionsReOneLine, re.MULTILINE| re.VERBOSE) 55 | data1 = REGEX_OBJ.sub(r'/** // removed function a: */',data) 56 | 57 | REGEX_OBJ = re.compile( self.functionsRe, re.MULTILINE| re.VERBOSE | re.DOTALL) 58 | data2 = REGEX_OBJ.sub(r'/** // removed function b: */',data1) 59 | 60 | REGEX_OBJ = re.compile( self.functionsRe2, re.MULTILINE| re.VERBOSE | re.DOTALL) 61 | data3 = REGEX_OBJ.sub(r'/** // removed function c: */',data2) 62 | return data3 63 | 64 | 65 | def stripExterns(self, data): 66 | REGEX_OBJ2 = re.compile( self.externsRe, re.MULTILINE| re.VERBOSE | re.DOTALL) 67 | data1 = REGEX_OBJ2.sub(r'/** // removed extern */', data) 68 | 69 | REGEX_OBJ2 = re.compile( self.externsRe2, re.MULTILINE| re.VERBOSE | re.DOTALL) 70 | data2 = REGEX_OBJ2.sub(r'/** // removed function sig */', data1) 71 | return data2 72 | 73 | def changeReservedWords(self, data): 74 | data1 = data.replace('*new);','*new1);') 75 | data1 = data1.replace('*proc_handler;','*proc_handler1;') 76 | mre = re.compile(r'\bprivate;') 77 | data2 = mre.sub('private1;',data1) 78 | mre = re.compile(r'\bnamespace\b') 79 | data3 = mre.sub('namespace1',data2) 80 | return data3 81 | 82 | def clean(self): 83 | data2 = self.stripFunctions(self.preprocessed) 84 | data3 = self.stripExterns(data2) 85 | #data3 = data2 86 | data4 = self.changeReservedWords(data3) 87 | self.fout = file(self.out,'w') 88 | return self.fout.write(data4) 89 | 90 | 91 | 92 | def clean(prepro, out): 93 | cleaner = HeaderCleaner(prepro, out) 94 | return cleaner.clean() 95 | 96 | #clean('ctypes_linux_generated.c','ctypes_linux_generated_clean.c') 97 | 98 | 99 | data=''' 100 | typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr, 101 | void *data); 102 | extern int apply_to_page_range(struct
mm_struct *mm, unsigned long address, 103 | unsigned long size, pte_fn_t fn, void *data); 104 | void vm_stat_account(struct mm_struct *, unsigned long, struct file *, long); 105 | static inline void 106 | kernel_map_pages(struct page *page, int numpages, int enable) {} 107 | static inline void enable_debug_pagealloc(void) 108 | { 109 | } 110 | static inline bool kernel_page_present(struct page *page) { return true; } 111 | extern struct vm_area_struct *get_gate_vma(struct task_struct *tsk); 112 | int in_gate_area_no_task(unsigned long addr); 113 | int in_gate_area(struct task_struct *task, unsigned long addr); 114 | ''' 115 | 116 | from cleaner import HeaderCleaner 117 | import re 118 | def testF(data): 119 | obj = re.compile( HeaderCleaner.functionsRe, re.MULTILINE| re.VERBOSE | re.DOTALL) 120 | for p in obj.findall(data): 121 | print p[0] 122 | print '----------------------' 123 | return 124 | 125 | def testF2(data): 126 | obj = re.compile( HeaderCleaner.functionsRe2, re.MULTILINE| re.VERBOSE | re.DOTALL) 127 | for p in obj.findall(data): 128 | print p[0] 129 | print '----------------------' 130 | return 131 | 132 | def testE(data): 133 | obj = re.compile( HeaderCleaner.externsRe, re.MULTILINE| re.VERBOSE | re.DOTALL) 134 | for p in obj.findall(data): 135 | print p[0] 136 | print '----------------------' 137 | return 138 | 139 | def testE2(data): 140 | obj = re.compile( HeaderCleaner.externsRe2, re.MULTILINE| re.VERBOSE | re.DOTALL) 141 | for p in obj.findall(data): 142 | print p[0] 143 | print '----------------------' 144 | return 145 | 146 | ''' 147 | HeaderCleaner.externsRe = r""" # extern functions sig. 148 | ^((extern) \s+ (?!struct|enum) .*? ;$ ) 149 | """ 150 | HeaderCleaner.externsRe2 = r""" # functions sig. on one line 151 | ^((void|unsigned|long|int) \s+ (?!struct|enum) .*?
;$ ) 152 | """ 153 | 154 | 155 | c = HeaderCleaner('/dev/null','/dev/null') 156 | data1 = c.stripFunctions( data ) 157 | data2 = c.stripExterns( data1 ) 158 | 159 | print data1 160 | 161 | 162 | 'mf_flags' in data 163 | 'mf_flags' in data1 164 | 'mf_flags' in data2 165 | 166 | 167 | testE(data) 168 | 169 | 170 | ''' 171 | 172 | 173 | 174 | 175 | 176 | 177 | 178 | 179 | -------------------------------------------------------------------------------- /sslsnoop/ctypes_nss.c: -------------------------------------------------------------------------------- 1 | 2 | #undef explicit 3 | #define explicit xplicit 4 | 5 | //typedef enum { implicit, explicit } SSL3PublicValueEncoding; 6 | 7 | #include 8 | //#include "cryptohi/keyt.h" 9 | //#include "cryptohi/keythi.h" 10 | #include 11 | #include 12 | #include "sslimpl.h" 13 | 14 | -------------------------------------------------------------------------------- /sslsnoop/ctypes_nss.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import ctypes 10 | import logging, sys 11 | 12 | ''' insure ctypes basic types are subverted ''' 13 | from haystack import model 14 | 15 | from haystack.model import is_valid_address,is_valid_address_value,getaddress,array2bytes,bytes2array 16 | from haystack.model import LoadableMembersStructure,RangeValue,NotNull,CString, IgnoreMember 17 | 18 | import ctypes_nss_generated as gen 19 | 20 | log=logging.getLogger('ctypes_nss') 21 | 22 | 23 | # ============== Internal type defs ============== 24 | 25 | ''' 26 | http://mxr.mozilla.org/firefox/source/security/nss/lib/ssl/ssl.h 27 | http://mxr.mozilla.org/firefox/source/security/nss/lib/ssl/sslt.h 28 | http://mxr.mozilla.org/firefox/source/security/nss/lib/ssl/sslimpl.h#912 29 | ''' 30 | 31 | class 
NSSStruct(LoadableMembersStructure): 32 | ''' defines classRef ''' 33 | pass 34 | 35 | 36 | 37 | # replace c_char_p with our String handler 38 | if type(gen.STRING) != type(CString): 39 | print 'STRING is not model.CString. Please correct ctypes_nss_generated with :' 40 | print 'from model import CString' 41 | print 'STRING = CString' 42 | import sys 43 | sys.exit() 44 | 45 | 46 | ################ START copy generated classes ########################## 47 | 48 | # copy generated classes (gen.*) to this module as wrapper 49 | model.copyGeneratedClasses(gen, sys.modules[__name__]) 50 | 51 | # register all classes (gen.*, locally defines, and local duplicates) to haystack 52 | # create plain old python object from ctypes.Structure's, to picke them 53 | model.registerModule(sys.modules[__name__]) 54 | 55 | ################ END copy generated classes ########################## 56 | 57 | 58 | 59 | 60 | ############# Start expectedValues and methods overrides ################# 61 | 62 | 63 | 64 | sslSocket.expectedValues = { 65 | "fd": RangeValue(1,0xffff), 66 | #"version": [0x0002, 0x0300, 0x0301 ], #sslproto.h 67 | "version": RangeValue(1,0x0301), 68 | "clientAuthRequested": [0,1], # 69 | "delayDisabled": [0,1], # 70 | "firstHsDone": [0,1], # 71 | "handshakeBegun": [0,1], # 72 | "TCPconnected": [0,1], # 73 | "lastWriteBlocked": [0,1], # 74 | "url": NotNull, 75 | } 76 | 77 | 78 | 79 | sslOptions.expectedValues = { 80 | # "useSecurity": [0,1], # doh... 
81 | "handshakeAsClient" : [1], 82 | "handshakeAsServer" : [0], 83 | } 84 | 85 | sslSecurityInfo.expectedValues = { 86 | "cipherType": RangeValue(1,0xffffffff), 87 | #"writeBuf" : IgnoreMember, 88 | } 89 | 90 | 91 | ssl3State.expectedValues = { 92 | } 93 | 94 | 95 | ssl3CipherSpec.expectedValues = { 96 | "cipher_def" : NotNull, 97 | "mac_def" : NotNull, 98 | } 99 | 100 | 101 | ''' 102 | # set expected values 103 | SSLCipherSuiteInfo.expectedValues={ 104 | "cipherSuite": RangeValue(0,0x0100), # sslproto.h , ECC is 0xc00 105 | "authAlgorithm": RangeValue(0,4), 106 | "keaType": RangeValue(0,4), 107 | "symCipher": RangeValue(0,9), 108 | "macAlgorithm": RangeValue(0,4), 109 | } 110 | 111 | SSLChannelInfo.expectedValues={ 112 | 'compressionMethod':RangeValue(0,1), 113 | } 114 | 115 | SECItem.expectedValues={ 116 | 'type':RangeValue(0,15), # security/nss/lib/util/seccomon.h:64 117 | } 118 | 119 | SECKEYPublicKey.expectedValues={ 120 | 'keyType':RangeValue(0,6), # security/nss/lib/cryptohi/keythi.h:189 121 | } 122 | 123 | #NSSLOWKEYPublicKey.expectedValues={ 124 | # 'keyType':RangeValue(0,5), # security/nss/lib/softoken/legacydb/lowkeyti.h:123 125 | #} 126 | 127 | CERTCertificate.expectedValues={ # XXX TODO CHECK VALUES security/nss/lib/certdb/certt.h 128 | 'keyIDGenerated': [0,1], # ok 129 | 'keyUsage': [RangeValue(0,gen.KU_ALL), # security/nss/lib/certdb/certt.h:570 130 | gen.KU_KEY_AGREEMENT_OR_ENCIPHERMENT , 131 | RangeValue(gen.KU_NS_GOVT_APPROVED, gen.KU_ALL | gen.KU_NS_GOVT_APPROVED), 132 | gen.KU_KEY_AGREEMENT_OR_ENCIPHERMENT | gen.KU_NS_GOVT_APPROVED ] , 133 | 'rawKeyUsage': [RangeValue(0,gen.KU_ALL), # security/nss/lib/certdb/certt.h:570 134 | gen.KU_KEY_AGREEMENT_OR_ENCIPHERMENT , 135 | RangeValue(gen.KU_NS_GOVT_APPROVED, gen.KU_ALL | gen.KU_NS_GOVT_APPROVED), 136 | gen.KU_KEY_AGREEMENT_OR_ENCIPHERMENT | gen.KU_NS_GOVT_APPROVED ] , 137 | 'keyUsagePresent': [0,1], # ok 138 | 'nsCertType': [ RangeValue(0,0xff), # security/nss/lib/certdb/certt.h:459 139 | 
RangeValue(gen.EXT_KEY_USAGE_TIME_STAMP , gen.EXT_KEY_USAGE_TIME_STAMP | 0xff ), # not sure 140 | RangeValue(gen.EXT_KEY_USAGE_STATUS_RESPONDER , gen.EXT_KEY_USAGE_STATUS_RESPONDER | 0xff ), # not sure 141 | RangeValue(gen.EXT_KEY_USAGE_TIME_STAMP | gen.EXT_KEY_USAGE_STATUS_RESPONDER , 142 | gen.EXT_KEY_USAGE_TIME_STAMP | gen.EXT_KEY_USAGE_STATUS_RESPONDER | 0xff ) ] , # not sure 143 | 'keepSession': [0,1], # ok 144 | 'timeOK': [0,1], # ok 145 | 'isperm': [0,1], # ok 146 | 'istemp': [0,1], # ok 147 | 'isRoot': [0,1], # ok 148 | 'ownSlot': [0,1], # ok 149 | 'referenceCount': RangeValue(1,0xffff), # no idea... should be positive for sure. more than 65535 ref ? naah 150 | 'subjectName': NotNull, # a certificate must have one... 151 | } 152 | 153 | CERTSignedCrl.expectedValues = { 154 | 'isperm': [0,1], # ok 155 | 'istemp': [0,1], # ok 156 | 'referenceCount': RangeValue(1,0xffff), # no idea... should be positive for sure. more than 65535 ref ? naah 157 | } 158 | 159 | CERTCertTrustStr.expectedValues = { 160 | #'sslFlags': [1< mini: 175 | print '%s:'%name,ctypes.sizeof(klass) 176 | #print 'SSLCipherSuiteInfo:',ctypes.sizeof(SSLCipherSuiteInfo) 177 | #print 'SSLChannelInfo:',ctypes.sizeof(SSLChannelInfo) 178 | 179 | if __name__ == '__main__': 180 | printSizeof() 181 | 182 | -------------------------------------------------------------------------------- /sslsnoop/ctypes_openssl.c: -------------------------------------------------------------------------------- 1 | 2 | 3 | #include 4 | #include 5 | #include 6 | #include 7 | #include 8 | #include 9 | 10 | #include 11 | 12 | #include 13 | #include 14 | 15 | // not tested 16 | #include 17 | #include 18 | #include 19 | 20 | 21 | -------------------------------------------------------------------------------- /sslsnoop/ctypes_openssl.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet 
loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import logging, sys 10 | 11 | ''' insure ctypes basic types are subverted ''' 12 | from haystack import model 13 | 14 | from haystack.utils import get_pointee_address,array2bytes,bytes2array 15 | from haystack.constraints import RangeValue,NotNull 16 | 17 | import ctypes_openssl_generated as gen 18 | 19 | import ctypes 20 | 21 | log=logging.getLogger('ctypes_openssl') 22 | 23 | ''' hmac.h:69 ''' 24 | HMAC_MAX_MD_CBLOCK=128 25 | ''' evp.h:91 ''' 26 | EVP_MAX_BLOCK_LENGTH=32 27 | EVP_MAX_IV_LENGTH=16 28 | AES_MAXNR=14 # aes.h:66 29 | RIJNDAEL_MAXNR=14 30 | 31 | 32 | # ============== Internal type defs ============== 33 | 34 | 35 | class OpenSSLStruct(ctypes.Structure): 36 | ''' defines classRef ''' 37 | pass 38 | 39 | BN_ULONG = ctypes.c_ulong 40 | 41 | 42 | ################ START copy generated classes ########################## 43 | 44 | # copy generated classes (gen.*) to this module as wrapper 45 | model.copyGeneratedClasses(gen, sys.modules[__name__]) 46 | 47 | # register all classes (gen.*, locally defines, and local duplicates) to haystack 48 | # create plain old python object from ctypes.Structure's, to picke them 49 | model.registerModule(sys.modules[__name__]) 50 | 51 | ################ END copy generated classes ########################## 52 | 53 | 54 | 55 | 56 | ############# Start expectedValues and methods overrides ################# 57 | 58 | NIDs = dict( [(getattr(gen, s), s) for s in gen.__dict__ if s.startswith('NID_') ]) 59 | def getCipherName(nid): 60 | if nid not in NIDs: 61 | return None 62 | nidname = NIDs[nid] 63 | LNattr = 'SN'+nidname[3:] # del prefix 'NID' 64 | return getattr(gen, LNattr) 65 | 66 | def getCipherDataType(nid): 67 | name = getCipherName(nid) 68 | if name is None: 69 | return None 70 | for t in EVP_CIPHER.CIPHER_DATA: 71 | if name.startswith( t ): 72 | return EVP_CIPHER.CIPHER_DATA[t] 73 | return None 74 | 75 | 76 | 
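# getCipherName() above recovers a cipher's human-readable short name by
# rewriting its NID_* constant name into the matching SN* constant. A minimal,
# self-contained sketch of that lookup; the gen class below is only a stand-in
# for ctypes_openssl_generated, seeded with two sample constants whose values
# follow OpenSSL's obj_mac.h:

```python
# Stand-in for the generated module (real code imports ctypes_openssl_generated).
class gen:
    NID_aes_128_cbc = 419
    SN_aes_128_cbc = 'AES-128-CBC'
    NID_bf_cbc = 91
    SN_bf_cbc = 'BF-CBC'

# Reverse map: NID value -> 'NID_xxx' constant name.
NIDs = dict((getattr(gen, s), s) for s in vars(gen) if s.startswith('NID_'))

def getCipherName(nid):
    # 'NID_aes_128_cbc' -> 'SN_aes_128_cbc' -> 'AES-128-CBC'
    if nid not in NIDs:
        return None
    return getattr(gen, 'SN' + NIDs[nid][3:])
```

# getCipherDataType() then prefix-matches this short name against the
# EVP_CIPHER.CIPHER_DATA keys ('AES', 'BF', 'RC4', ...) to pick the ctypes
# key structure used to interpret the cipher_data pointer.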
''' rc4.h:71 ''' 77 | ####### RC4_KEY ####### 78 | def RC4_KEY_getData(self): 79 | return array2bytes(self.data) 80 | def RC4_KEY_fromPyObj(self,pyobj): 81 | #copy P and S 82 | self.data = bytes2array(pyobj.data, ctypes.c_uint) 83 | self.x = pyobj.x 84 | self.y = pyobj.y 85 | return self 86 | RC4_KEY.getData = RC4_KEY_getData 87 | RC4_KEY.fromPyObj = RC4_KEY_fromPyObj 88 | ####### 89 | 90 | ''' cast.h:80 ''' 91 | ####### CAST_KEY ####### 92 | def CAST_KEY_getData(self): 93 | return array2bytes(self.data) 94 | def CAST_KEY_getShortKey(self): 95 | return self.short_key 96 | def CAST_KEY_fromPyObj(self,pyobj): 97 | #copy P and S 98 | self.data = bytes2array(pyobj.data, ctypes.c_uint) 99 | self.short_key = pyobj.short_key 100 | return self 101 | CAST_KEY.getData = CAST_KEY_getData 102 | CAST_KEY.getShortKey = CAST_KEY_getShortKey 103 | CAST_KEY.fromPyObj = CAST_KEY_fromPyObj 104 | ####### 105 | 106 | 107 | 108 | ''' blowfish.h:101 ''' 109 | ####### BF_KEY ####### 110 | def BF_KEY_getP(self): 111 | return array2bytes(self.P) 112 | #return ','.join(["0x%lx"%key for key in self.rd_key]) 113 | def BF_KEY_getS(self): 114 | return array2bytes(self.S) 115 | def BF_KEY_fromPyObj(self,pyobj): 116 | #copy P and S 117 | self.P = bytes2array(pyobj.P, ctypes.c_ulong) 118 | self.S = bytes2array(pyobj.S, ctypes.c_ulong) 119 | return self 120 | BF_KEY.getP = BF_KEY_getP 121 | BF_KEY.getS = BF_KEY_getS 122 | BF_KEY.fromPyObj = BF_KEY_fromPyObj 123 | ####### 124 | 125 | 126 | ''' aes.h:78 ''' 127 | ####### AES_KEY ####### 128 | def AES_KEY_getKey(self): 129 | #FIXME 130 | return array2bytes(self.rd_key) 131 | #return ','.join(["0x%lx"%key for key in self.rd_key]) 132 | def AES_KEY_getRounds(self): 133 | return self.rounds 134 | def AES_KEY_fromPyObj(self,pyobj): 135 | #copy rd_key 136 | self.rd_key = bytes2array(pyobj.rd_key,ctypes.c_ulong) 137 | #copy rounds 138 | self.rounds = pyobj.rounds 139 | return self 140 | AES_KEY.getKey = AES_KEY_getKey 141 | AES_KEY.getRounds = 
AES_KEY_getRounds 142 | AES_KEY.fromPyObj = AES_KEY_fromPyObj 143 | ####### 144 | 145 | 146 | ############ BIGNUM 147 | BIGNUM.expectedValues={ 148 | "neg": [0,1] 149 | } 150 | def BIGNUM_loadMembers(self, mappings, maxDepth): 151 | ''' 152 | #self._d = process.readArray(attr_obj_address, ctypes.c_ulong, self.top) 153 | ## or 154 | #ulong_array= (ctypes.c_ulong * self.top) 155 | ''' 156 | if not self.isValid(mappings): 157 | log.debug('BigNUm tries to load members when its not validated') 158 | return False 159 | # Load and memcopy d / BN_ULONG * 160 | attr_obj_address = get_pointee_address(self.d) 161 | if not bool(self.d): 162 | log.debug('BIGNUM has a Null pointer d') 163 | return True 164 | memoryMap = mappings.is_valid_address_value(attr_obj_address) 165 | # TODO - challenge buffer_copy use, 166 | contents=(BN_ULONG*self.top).from_buffer_copy(memoryMap.readArray(attr_obj_address, BN_ULONG, self.top)) 167 | mappings.keepRef( contents, model.getSubtype(self.d), attr_obj_address ) 168 | log.debug('contents acquired %d'%ctypes.sizeof(contents)) 169 | return True 170 | 171 | def BIGNUM_get_d(self): 172 | return self._mapping_.getRef( model.getSubtype(self.d), get_pointee_address(self.d)) 173 | 174 | def BIGNUM_isValid(self,mappings): 175 | if ( self.dmax < 0 or self.top < 0 or self.dmax < self.top ): 176 | return False 177 | return LoadableMembersStructure.isValid(self,mappings) 178 | 179 | def BIGNUM___str__(self): 180 | d= get_pointee_address(self.d) 181 | return ("BN { d=0x%lx, top=%d, dmax=%d, neg=%d, flags=%d }"% 182 | (d, self.top, self.dmax, self.neg, self.flags) ) 183 | 184 | BIGNUM.loadMembers = BIGNUM_loadMembers 185 | BIGNUM.isValid = BIGNUM_isValid 186 | BIGNUM.__str__ = BIGNUM___str__ 187 | ################# 188 | 189 | 190 | # CRYPTO_EX_DATA crypto.h:158: 191 | def CRYPTO_EX_DATA_loadMembers(self, mappings, maxDepth): 192 | ''' erase self.sk''' 193 | #self.sk=ctypes.POINTER(STACK)() 194 | return LoadableMembersStructure.loadMembers(self, mappings, 
maxDepth) 195 | def CRYPTO_EX_DATA_isValid(self,mappings): 196 | ''' erase self.sk''' 197 | # TODO why ? 198 | #self.sk=ctypes.POINTER(STACK)() 199 | return LoadableMembersStructure.isValid(self,mappings) 200 | 201 | CRYPTO_EX_DATA.loadMembers = CRYPTO_EX_DATA_loadMembers 202 | CRYPTO_EX_DATA.isValid = CRYPTO_EX_DATA_isValid 203 | ################# 204 | 205 | 206 | ######## RSA key 207 | RSA.expectedValues={ 208 | "pad": [0], 209 | "version": [0], 210 | "references": RangeValue(0,0xfff), 211 | "n": [NotNull], 212 | "e": [NotNull], 213 | "d": [NotNull], 214 | "p": [NotNull], 215 | "q": [NotNull], 216 | "dmp1": [NotNull], 217 | "dmq1": [NotNull], 218 | "iqmp": [NotNull] 219 | } 220 | def RSA_printValid(self,mappings): 221 | log.debug( '----------------------- LOADED: %s'%self.loaded) 222 | log.debug('pad: %d version %d ref %d'%(self.pad,self.version,self.references) ) 223 | log.debug(mappings.is_valid_address( self.n) ) 224 | log.debug(mappings.is_valid_address( self.e) ) 225 | log.debug(mappings.is_valid_address( self.d) ) 226 | log.debug(mappings.is_valid_address( self.p) ) 227 | log.debug(mappings.is_valid_address( self.q) ) 228 | log.debug(mappings.is_valid_address( self.dmp1) ) 229 | log.debug(mappings.is_valid_address( self.dmq1) ) 230 | log.debug(mappings.is_valid_address( self.iqmp) ) 231 | return 232 | def RSA_loadMembers(self, mappings, maxDepth): 233 | #self.meth = 0 # from_address(0) 234 | # ignore bignum_data. 
235 | #self.bignum_data = 0 236 | self.bignum_data.ptr.value = 0 237 | #self.blinding = 0 238 | #self.mt_blinding = 0 239 | 240 | if not LoadableMembersStructure.loadMembers(self, mappings, maxDepth): 241 | log.debug('RSA not loaded') 242 | return False 243 | return True 244 | 245 | RSA.printValid = RSA_printValid 246 | RSA.loadMembers = RSA_loadMembers 247 | 248 | 249 | 250 | ########## DSA Key 251 | DSA.expectedValues={ 252 | "pad": [0], 253 | "version": [0], 254 | "references": RangeValue(0,0xfff), 255 | "p": [NotNull], 256 | "q": [NotNull], 257 | "g": [NotNull], 258 | "pub_key": [NotNull], 259 | "priv_key": [NotNull] 260 | } 261 | def DSA_printValid(self,mappings): 262 | log.debug( '----------------------- \npad: %d version %d ref %d'%(self.pad,self.version,self.write_params) ) 263 | log.debug(mappings.is_valid_address( self.p) ) 264 | log.debug(mappings.is_valid_address( self.q) ) 265 | log.debug(mappings.is_valid_address( self.g) ) 266 | log.debug(mappings.is_valid_address( self.pub_key) ) 267 | log.debug(mappings.is_valid_address( self.priv_key) ) 268 | return 269 | def DSA_loadMembers(self, mappings, maxDepth): 270 | # clean other structs 271 | # r and kinv can be null 272 | self.meth = None 273 | self._method_mod_p = None 274 | #self.engine = None 275 | 276 | if not LoadableMembersStructure.loadMembers(self, mappings, maxDepth): 277 | log.debug('DSA not loaded') 278 | return False 279 | 280 | return True 281 | 282 | DSA.printValid = DSA_printValid 283 | DSA.loadMembers = DSA_loadMembers 284 | 285 | ######### EVP_CIPHER 286 | EVP_CIPHER.expectedValues={ 287 | #crypto/objects/objects.h 0 is undef .. crypto cipher is a smaller subset : 288 | # 1-10 19 29-46 60-70 91-98 104 108-123 166 289 | # but for argument's sake, we have to keep an open mind 290 | "nid": RangeValue( min(NIDs.keys()), max(NIDs.keys()) ), 291 | "block_size": [1,2,4,6,8,16,24,32,48,64,128], # more or less 292 | "key_len": RangeValue(1,0xff), # key_len * 8 bits .. 2040 bits for a key is enough?
293 | # Default value for variable length ciphers 294 | "iv_len": RangeValue(0,0xff), # rc4 has no IV ? 295 | #"init": [NotNull], 296 | #"do_cipher": [NotNull], 297 | #"cleanup": [NotNull], # aes-cbc ? 298 | "ctx_size": RangeValue(0,0xffff), # app_data struct should not be too big 299 | } 300 | EVP_CIPHER.CIPHER_DATA = { 301 | "DES": DES_key_schedule, 302 | "3DES": DES_key_schedule, 303 | "BF": BF_KEY, 304 | "CAST": CAST_KEY, 305 | "RC4": RC4_KEY, 306 | "ARCFOUR": RC4_KEY, 307 | "AES": AES_KEY, 308 | } 309 | 310 | 311 | ########### EVP_CIPHER_CTX 312 | EVP_CIPHER_CTX.expectedValues={ 313 | "cipher": [NotNull], 314 | "encrypt": [0,1], 315 | "buf_len": RangeValue(0,EVP_MAX_BLOCK_LENGTH), ## number we have left, so must be less than buffer_size 316 | #"engine": , # can be null 317 | #"app_data": , # can be null if cipher_data is not 318 | #"cipher_data": , # can be null if app_data is not 319 | "key_len": RangeValue(1,0xff), # key_len * 8 bits .. 2040 bits for a key is enough? 320 | } 321 | 322 | # loadMembers: if nid & cipher_data -> we can assess the cipher_data format to be a XX_KEY 323 | def EVP_CIPHER_CTX_loadMembers(self, mappings, maxDepth): 324 | if not super(EVP_CIPHER_CTX,self).loadMembers(mappings, maxDepth): 325 | return False 326 | log.debug('trying to load cipher_data Structs.') 327 | ''' 328 | if bool(cipher) and bool(self.cipher.nid) and mappings.is_valid_address(cipher_data): 329 | memcopy( self.cipher_data, cipher_data_addr, self.cipher.ctx_size) 330 | # cast possible on cipher.nid -> cipherType 331 | ''' 332 | cipher = mappings.getRef( evp_cipher_st, get_pointee_address(self.cipher) ) 333 | if cipher.nid == 0: # NID_undef, not openssl's doing 334 | log.info('The cipher is home-made - the cipher context data should be application-dependent (app_data)') 335 | return True 336 | 337 | struct = getCipherDataType( cipher.nid) 338 | log.debug('cipher type is %s - loading %s'%( getCipherName(cipher.nid), struct )) 339 | if struct is None: 340 |
log.warning("Unsupported cipher %s"%(cipher.nid)) 341 | return True 342 | 343 | # c_void_p is a basic type. 344 | attr_obj_address = self.cipher_data 345 | memoryMap = mappings.is_valid_address_value( attr_obj_address, struct) 346 | log.debug( "cipher_data CAST into : %s "%(struct) ) 347 | if not memoryMap: 348 | log.warning('in CTX: on second thoughts, cipher_data seems to be at an invalid address. That should not happen (often).') 349 | log.warning('%s addr:0x%lx size:0x%lx addr+size:0x%lx '%(mappings.is_valid_address_value(attr_obj_address), 350 | attr_obj_address, ctypes.sizeof(struct), attr_obj_address+ctypes.sizeof(struct))) 351 | return True 352 | #ok 353 | st = memoryMap.readStruct(attr_obj_address, struct ) 354 | mappings.keepRef(st, struct, attr_obj_address) 355 | self.cipher_data = ctypes.c_void_p(ctypes.addressof(st)) 356 | ###print 'self.cipher_data in loadmembers',self.cipher_data 357 | # check debug 358 | attr=getattr(self, 'cipher_data') 359 | log.debug('Copied 0x%lx into %s (0x%lx)'%(ctypes.addressof(st), 'cipher_data', attr)) 360 | log.debug('LOADED cipher_data as %s from 0x%lx (%s) into 0x%lx'%(struct, 361 | attr_obj_address, mappings.is_valid_address_value(attr_obj_address, struct), attr )) 362 | from haystack.outputters import text 363 | parser = text.RecursiveTextOutputter(mappings) 364 | log.debug('\t\t---------\n%s\t\t---------'%(parser.parse(st))) 365 | return True 366 | 367 | def EVP_CIPHER_CTX_toPyObject(self): 368 | d = super(EVP_CIPHER_CTX,self).toPyObject() 369 | log.debug('Cast an EVP_CIPHER_CTX into PyObj') 370 | # cast app_data or cipher_data to right struct 371 | if bool(self.cipher_data): 372 | cipher = self._mappings_.getRef( evp_cipher_st, get_pointee_address(self.cipher) ) 373 | struct = getCipherDataType( cipher.nid) 374 | if struct is not None: 375 | # CAST c_void_p to struct 376 | d.cipher_data = struct.from_address(self.cipher_data).toPyObject() 377 | return d 378 | 379 | def EVP_CIPHER_CTX_getOIV(self): 380 | return
array2bytes(self.oiv) 381 | def EVP_CIPHER_CTX_getIV(self): 382 | return array2bytes(self.iv) 383 | 384 | EVP_CIPHER_CTX.loadMembers = EVP_CIPHER_CTX_loadMembers 385 | EVP_CIPHER_CTX.toPyObject = EVP_CIPHER_CTX_toPyObject 386 | EVP_CIPHER_CTX.getOIV = EVP_CIPHER_CTX_getOIV 387 | EVP_CIPHER_CTX.getIV = EVP_CIPHER_CTX_getIV 388 | 389 | ########## 390 | 391 | 392 | # checks 393 | ''' 394 | import sys,inspect 395 | src=sys.modules[__name__] 396 | for (name, klass) in inspect.getmembers(src, inspect.isclass): 397 | #if klass.__module__ == src.__name__ or klass.__module__.endswith('%s_generated'%(src.__name__) ) : 398 | # #if not klass.__name__.endswith('_py'): 399 | print klass, type(klass) #, len(klass.classRef) 400 | ''' 401 | import inspect 402 | def printSizeof(mini=-1): 403 | for (name,klass) in inspect.getmembers(sys.modules[__name__], inspect.isclass): 404 | if type(klass) == type(ctypes.Structure) and klass.__module__.endswith('%s_generated'%(__name__) ) : 405 | if ctypes.sizeof(klass) > mini: 406 | print '%s:'%name,ctypes.sizeof(klass) 407 | #print 'SSLCipherSuiteInfo:',ctypes.sizeof(SSLCipherSuiteInfo) 408 | #print 'SSLChannelInfo:',ctypes.sizeof(SSLChannelInfo) 409 | 410 | 411 | if __name__ == '__main__': 412 | printSizeof() 413 | 414 | -------------------------------------------------------------------------------- /sslsnoop/ctypes_putty.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | ''' 8 | 9 | h2xml $PWD/myssh.h -o ssh.xml -I$PWD/ -I$PWD/unix/ -I$PWD/charset/ 10 | && xml2py ssh.xml -o ctypes_putty_generated.py 11 | && cp ctypes_putty_generated.py ../../sslsnoop/ 12 | 13 | ''' 14 | 15 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 16 | 17 | import ctypes 18 | import logging, sys 19 | 20 | ''' ensure ctypes basic types are subverted ''' 21 | from haystack import model 22 | 23 |
from haystack.model import is_valid_address,is_valid_address_value,getaddress,array2bytes,bytes2array 24 | from haystack.model import LoadableMembersStructure,RangeValue,NotNull,CString 25 | 26 | import ctypes_putty_generated as gen 27 | 28 | log=logging.getLogger('ctypes_putty') 29 | 30 | 31 | # ============== Internal type defs ============== 32 | 33 | 34 | class PuttyStruct(LoadableMembersStructure): 35 | ''' defines classRef ''' 36 | pass 37 | 38 | 39 | ################ START copy generated classes ########################## 40 | 41 | # copy generated classes (gen.*) to this module as wrapper 42 | model.copyGeneratedClasses(gen, sys.modules[__name__]) 43 | 44 | # register all classes (gen.*, locally defines, and local duplicates) to haystack 45 | # create plain old python object from ctypes.Structure's, to picke them 46 | model.registerModule(sys.modules[__name__]) 47 | 48 | ################ END copy generated classes ########################## 49 | 50 | 51 | BN_ULONG = ctypes.c_ulong 52 | 53 | ############# Start expectedValues and methods overrides ################# 54 | ''' 55 | NIDs = dict( [(getattr(gen, s), s) for s in gen.__dict__ if s.startswith('NID_') ]) 56 | def getCipherName(nid): 57 | if nid not in NIDs: 58 | return None 59 | nidname = NIDs[nid] 60 | LNattr = 'SN'+nidname[3:] # del prefix 'NID' 61 | return getattr(gen, LNattr) 62 | 63 | def getCipherDataType(nid): 64 | name = getCipherName(nid) 65 | if name is None: 66 | return None 67 | for t in EVP_CIPHER.CIPHER_DATA: 68 | if name.startswith( t ): 69 | return EVP_CIPHER.CIPHER_DATA[t] 70 | return None 71 | ''' 72 | 73 | ''' rc4.h:71 ''' 74 | ####### RC4_KEY ####### 75 | ''' 76 | def RC4_KEY_getData(self): 77 | return array2bytes(self.data) 78 | def RC4_KEY_fromPyObj(self,pyobj): 79 | #copy P and S 80 | self.data = bytes2array(pyobj.data, ctypes.c_uint) 81 | self.x = pyobj.x 82 | self.y = pyobj.y 83 | return self 84 | RC4_KEY.getData = RC4_KEY_getData 85 | RC4_KEY.fromPyObj = RC4_KEY_fromPyObj 86 
| ####### 87 | ''' 88 | 89 | ''' cast.h:80 ''' 90 | ''' 91 | ####### CAST_KEY ####### 92 | def CAST_KEY_getData(self): 93 | return array2bytes(self.data) 94 | def CAST_KEY_getShortKey(self): 95 | return self.short_key 96 | def CAST_KEY_fromPyObj(self,pyobj): 97 | #copy P and S 98 | self.data = bytes2array(pyobj.data, ctypes.c_uint) 99 | self.short_key = pyobj.short_key 100 | return self 101 | CAST_KEY.getData = CAST_KEY_getData 102 | CAST_KEY.getShortKey = CAST_KEY_getShortKey 103 | CAST_KEY.fromPyObj = CAST_KEY_fromPyObj 104 | ####### 105 | ''' 106 | 107 | 108 | ''' blowfish.h:101 ''' 109 | ''' 110 | ####### BF_KEY ####### 111 | def BF_KEY_getP(self): 112 | return array2bytes(self.P) 113 | #return ','.join(["0x%lx"%key for key in self.rd_key]) 114 | def BF_KEY_getS(self): 115 | return array2bytes(self.S) 116 | def BF_KEY_fromPyObj(self,pyobj): 117 | #copy P and S 118 | self.P = bytes2array(pyobj.P, ctypes.c_ulong) 119 | self.S = bytes2array(pyobj.S, ctypes.c_ulong) 120 | return self 121 | BF_KEY.getP = BF_KEY_getP 122 | BF_KEY.getS = BF_KEY_getS 123 | BF_KEY.fromPyObj = BF_KEY_fromPyObj 124 | ####### 125 | ''' 126 | 127 | ''' aes.h:78 ''' 128 | ''' 129 | ####### AES_KEY ####### 130 | def AES_KEY_getKey(self): 131 | #return array2bytes(self.rd_key) 132 | return ','.join(["0x%lx"%key for key in self.rd_key]) 133 | def AES_KEY_getRounds(self): 134 | return self.rounds 135 | def AES_KEY_fromPyObj(self,pyobj): 136 | #copy rd_key 137 | self.rd_key=bytes2array(pyobj.rd_key,ctypes.c_ulong) 138 | #copy rounds 139 | self.rounds=pyobj.rounds 140 | return self 141 | AES_KEY.getKey = AES_KEY_getKey 142 | AES_KEY.getRounds = AES_KEY_getRounds 143 | AES_KEY.fromPyObj = AES_KEY_fromPyObj 144 | ####### 145 | ''' 146 | 147 | ''' 148 | ############ BIGNUM 149 | BIGNUM.expectedValues={ 150 | "neg": [0,1] 151 | } 152 | def BIGNUM_loadMembers(self, mappings, maxDepth): 153 | #self._d = process.readArray(attr_obj_address, ctypes.c_ulong, self.top) 154 | ## or 155 | #ulong_array= 
(ctypes.c_ulong * self.top) 156 | if not self.isValid(mappings): 157 | log.debug('BigNUm tries to load members when its not validated') 158 | return False 159 | # Load and memcopy d / BN_ULONG * 160 | attr_obj_address=getaddress(self.d) 161 | if not bool(self.d): 162 | log.debug('BIGNUM has a Null pointer d') 163 | return True 164 | memoryMap = is_valid_address_value( attr_obj_address, mappings) 165 | contents=(BN_ULONG*self.top).from_buffer_copy(memoryMap.readArray(attr_obj_address, BN_ULONG, self.top)) 166 | log.debug('contents acquired %d'%ctypes.sizeof(contents)) 167 | self.d.contents=BN_ULONG.from_address(ctypes.addressof(contents)) 168 | self.d=ctypes.cast(contents, ctypes.POINTER(BN_ULONG) ) 169 | return True 170 | 171 | def BIGNUM_isValid(self,mappings): 172 | if ( self.dmax < 0 or self.top < 0 or self.dmax < self.top ): 173 | return False 174 | return LoadableMembersStructure.isValid(self,mappings) 175 | 176 | def BIGNUM___str__(self): 177 | d= getaddress(self.d) 178 | return ("BN { d=0x%lx, top=%d, dmax=%d, neg=%d, flags=%d }"% 179 | (d, self.top, self.dmax, self.neg, self.flags) ) 180 | 181 | BIGNUM.loadMembers = BIGNUM_loadMembers 182 | BIGNUM.isValid = BIGNUM_isValid 183 | BIGNUM.__str__ = BIGNUM___str__ 184 | ################# 185 | ''' 186 | 187 | 188 | ''' 189 | # CRYPTO_EX_DATA crypto.h:158: 190 | def CRYPTO_EX_DATA_loadMembers(self, mappings, maxDepth): 191 | return LoadableMembersStructure.loadMembers(self, mappings, maxDepth) 192 | def CRYPTO_EX_DATA_isValid(self,mappings): 193 | return LoadableMembersStructure.isValid(self,mappings) 194 | 195 | CRYPTO_EX_DATA.loadMembers = CRYPTO_EX_DATA_loadMembers 196 | CRYPTO_EX_DATA.isValid = CRYPTO_EX_DATA_isValid 197 | ################# 198 | ''' 199 | 200 | 201 | 202 | ######## RSA key 203 | RSAKey.expectedValues={ 204 | 'bits': [1024,2048,4096], 205 | 'bytes': [NotNull], 206 | 'modulus': [NotNull], 207 | 'exponent': [NotNull], 208 | 'private_exponent': [NotNull], 209 | 'p': [NotNull], 210 | 'q': 
[NotNull], 211 | 'iqmp': [NotNull] 212 | } 213 | def RSAKey_loadMembers(self, mappings, maxDepth): 214 | #self.meth = 0 # from_address(0) 215 | # ignore bignum_data. 216 | #self.bignum_data = 0 217 | #self.bignum_data.ptr.value = 0 218 | #self.blinding = 0 219 | #self.mt_blinding = 0 220 | 221 | if not LoadableMembersStructure.loadMembers(self, mappings, maxDepth): 222 | log.debug('RSA not loaded') 223 | return False 224 | return True 225 | 226 | RSAKey.loadMembers = RSAKey_loadMembers 227 | 228 | 229 | config_tag.expectedValues={ 230 | 'sshprot': [ 0,2,3], 231 | #'version': [ 0,1,2], # mostly 1,2 232 | } 233 | 234 | tree234_Tag.expectedValues={ 235 | 'root': [ NotNull], 236 | } 237 | node234_Tag.expectedValues={ 238 | 'parent': [ NotNull], 239 | } 240 | 241 | 242 | ######## ssh_tag main context 243 | ssh_tag.expectedValues={ 244 | # 'fn': [NotNull], 245 | 'state': [ gen.SSH_STATE_PREPACKET, 246 | gen.SSH_STATE_BEFORE_SIZE, 247 | gen.SSH_STATE_INTERMED, 248 | gen.SSH_STATE_SESSION, 249 | gen.SSH_STATE_CLOSED 250 | ], 251 | 'agentfwd_enabled': [0,1], 252 | 'X11_fwd_enabled': [0,1], 253 | } 254 | 255 | ''' 256 | ('v_c', STRING), 257 | ('v_s', STRING), 258 | ('exhash', c_void_p), 259 | ('s', Socket), 260 | ('ldisc', c_void_p), 261 | ('logctx', c_void_p), 262 | ('session_key', c_ubyte * 32), 263 | ('v1_compressing', c_int), 264 | ('v1_remote_protoflags', c_int), 265 | ('v1_local_protoflags', c_int), 266 | ('remote_bugs', c_int), 267 | ('cipher', POINTER(ssh_cipher)), 268 | ('v1_cipher_ctx', c_void_p), 269 | ('crcda_ctx', c_void_p), 270 | ('cscipher', POINTER(ssh2_cipher)), 271 | ('sccipher', POINTER(ssh2_cipher)), 272 | ('cs_cipher_ctx', c_void_p), 273 | ('sc_cipher_ctx', c_void_p), 274 | ('csmac', POINTER(ssh_mac)), 275 | ('scmac', POINTER(ssh_mac)), 276 | ('cs_mac_ctx', c_void_p), 277 | ('sc_mac_ctx', c_void_p), 278 | ('cscomp', POINTER(ssh_compress)), 279 | ('sccomp', POINTER(ssh_compress)), 280 | ('cs_comp_ctx', c_void_p), 281 | ('sc_comp_ctx', c_void_p), 282 | 
('kex', POINTER(ssh_kex)), 283 | ('hostkey', POINTER(ssh_signkey)), 284 | ('v2_session_id', c_ubyte * 32), 285 | ('v2_session_id_len', c_int), 286 | ('kex_ctx', c_void_p), 287 | ('savedhost', STRING), 288 | ('savedport', c_int), 289 | ('send_ok', c_int), 290 | ('echoing', c_int), 291 | ('editing', c_int), 292 | ('frontend', c_void_p), 293 | ('ospeed', c_int), 294 | ('ispeed', c_int), 295 | ('term_width', c_int), 296 | ('term_height', c_int), 297 | ('channels', POINTER(tree234)), 298 | ('mainchan', POINTER(ssh_channel)), 299 | ('ncmode', c_int), 300 | ('exitcode', c_int), 301 | ('close_expected', c_int), 302 | ('clean_exit', c_int), 303 | ('rportfwds', POINTER(tree234)), 304 | ('portfwds', POINTER(tree234)), 305 | 306 | 307 | ('size_needed', c_int), 308 | ('eof_needed', c_int), 309 | ('queue', POINTER(POINTER(Packet))), 310 | ('queuelen', c_int), 311 | ('queuesize', c_int), 312 | ('queueing', c_int), 313 | ('deferred_send_data', POINTER(c_ubyte)), 314 | ('deferred_len', c_int), 315 | ('deferred_size', c_int), 316 | ('fallback_cmd', c_int), 317 | ('banner', bufchain), 318 | ('pkt_kctx', Pkt_KCtx), 319 | ('pkt_actx', Pkt_ACtx), 320 | ('x11disp', POINTER(X11Display)), 321 | ('version', c_int), 322 | ('conn_throttle_count', c_int), 323 | ('overall_bufsize', c_int), 324 | ('throttled_all', c_int), 325 | ('v1_stdout_throttling', c_int), 326 | ('v2_outgoing_sequence', c_ulong), 327 | ('ssh1_rdpkt_crstate', c_int), 328 | ('ssh2_rdpkt_crstate', c_int), 329 | ('do_ssh_init_crstate', c_int), 330 | ('ssh_gotdata_crstate', c_int), 331 | ('do_ssh1_login_crstate', c_int), 332 | ('do_ssh1_connection_crstate', c_int), 333 | ('do_ssh2_transport_crstate', c_int), 334 | ('do_ssh2_authconn_crstate', c_int), 335 | ('do_ssh_init_state', c_void_p), 336 | ('do_ssh1_login_state', c_void_p), 337 | ('do_ssh2_transport_state', c_void_p), 338 | ('do_ssh2_authconn_state', c_void_p), 339 | ('rdpkt1_state', rdpkt1_state_tag), 340 | ('rdpkt2_state', rdpkt2_state_tag), 341 | 
('protocol_initial_phase_done', c_int), 342 | ('protocol', CFUNCTYPE(None, Ssh, c_void_p, c_int, POINTER(Packet))), 343 | ('s_rdpkt', CFUNCTYPE(POINTER(Packet), Ssh, POINTER(POINTER(c_ubyte)), POINTER(c_int))), 344 | ('cfg', Config), 345 | ('agent_response', c_void_p), 346 | ('agent_response_len', c_int), 347 | ('user_response', c_int), 348 | ('frozen', c_int), 349 | ('queued_incoming_data', bufchain), 350 | ('packet_dispatch', handler_fn_t * 256), 351 | ('qhead', POINTER(queued_handler)), 352 | ('qtail', POINTER(queued_handler)), 353 | ('pinger', Pinger), 354 | ('incoming_data_size', c_ulong), 355 | ('outgoing_data_size', c_ulong), 356 | ('deferred_data_size', c_ulong), 357 | ('max_data_size', c_ulong), 358 | ('kex_in_progress', c_int), 359 | ('next_rekey', c_long), 360 | ('last_rekey', c_long), 361 | ('deferred_rekey_reason', STRING), 362 | ('fullhostname', STRING), 363 | ('gsslibs', POINTER(ssh_gss_liblist)), 364 | ''' 365 | 366 | 367 | ''' 368 | ########## DSA Key 369 | DSA.expectedValues={ 370 | "pad": [0], 371 | "version": [0], 372 | "references": RangeValue(0,0xfff), 373 | "p": [NotNull], 374 | "q": [NotNull], 375 | "g": [NotNull], 376 | "pub_key": [NotNull], 377 | "priv_key": [NotNull] 378 | } 379 | def DSA_printValid(self,mappings): 380 | log.debug( '----------------------- \npad: %d version %d ref %d'%(self.pad,self.version,self.write_params) ) 381 | log.debug(is_valid_address( self.p, mappings) ) 382 | log.debug(is_valid_address( self.q, mappings) ) 383 | log.debug(is_valid_address( self.g, mappings) ) 384 | log.debug(is_valid_address( self.pub_key, mappings) ) 385 | log.debug(is_valid_address( self.priv_key, mappings) ) 386 | return 387 | def DSA_loadMembers(self, mappings, maxDepth): 388 | # clean other structs 389 | # r and kinv can be null 390 | self.meth = None 391 | self._method_mod_p = None 392 | #self.engine = None 393 | 394 | if not LoadableMembersStructure.loadMembers(self, mappings, maxDepth): 395 | log.debug('DSA not loaded') 396 | return 
False 397 | 398 | return True 399 | 400 | DSA.printValid = DSA_printValid 401 | DSA.loadMembers = DSA_loadMembers 402 | ''' 403 | 404 | ''' 405 | ######### EVP_CIPHER 406 | EVP_CIPHER.expectedValues={ 407 | #crypto/objects/objects.h 0 is undef .. crypto cipher is a smaller subset : 408 | # 1-10 19 29-46 60-70 91-98 104 108-123 166 409 | # but for argument's sake, we have to keep an open mind 410 | "nid": RangeValue( min(NIDs.keys()), max(NIDs.keys()) ), 411 | "block_size": [1,2,4,6,8,16,24,32,48,64,128], # more or less 412 | "key_len": RangeValue(1,0xff), # key_len *8 bits ..2040 bits for a key is enough? 413 | # Default value for variable length ciphers 414 | "iv_len": RangeValue(0,0xff), # rc4 has no IV ? 415 | "init": [NotNull], 416 | "do_cipher": [NotNull], 417 | #"cleanup": [NotNull], # aes-cbc ? 418 | "ctx_size": RangeValue(0,0xffff), # app_data struct should not be too big 419 | } 420 | EVP_CIPHER.CIPHER_DATA = { 421 | "DES": DES_key_schedule, 422 | "3DES": DES_key_schedule, 423 | "BF": BF_KEY, 424 | "CAST": CAST_KEY, 425 | "RC4": RC4_KEY, 426 | "ARCFOUR": RC4_KEY, 427 | "AES": AES_KEY, 428 | } 429 | 430 | 431 | ########### EVP_CIPHER_CTX 432 | EVP_CIPHER_CTX.expectedValues={ 433 | "cipher": [NotNull], 434 | "encrypt": [0,1], 435 | "buf_len": RangeValue(0,EVP_MAX_BLOCK_LENGTH), ## number we have left, so must be less than buffer_size 436 | #"engine": , # can be null 437 | #"app_data": , # can be null if cipher_data is not 438 | #"cipher_data": , # can be null if app_data is not 439 | "key_len": RangeValue(1,0xff), # key_len *8 bits ..2040 bits for a key is enough?
440 | } 441 | 442 | # loadMembers, if nid & cipher_data-> we can assess cipher_data format to be a XX_KEY 443 | def EVP_CIPHER_CTX_loadMembers(self, mappings, maxDepth): 444 | if not super(EVP_CIPHER_CTX,self).loadMembers(mappings, maxDepth): 445 | return False 446 | log.debug('trying to load cipher_data Structs.') 447 | # 448 | #if bool(cipher) and bool(self.cipher.nid) and is_valid_address(cipher_data): 449 | # memcopy( self.cipher_data, cipher_data_addr, self.cipher.ctx_size) 450 | # # cast possible on cipher.nid -> cipherType 451 | 452 | if self.cipher.contents.nid == 0: # NID_undef, not of OpenSSL's doing 453 | log.info('The cipher is home-made - the cipher context data should be application-dependent (app_data)') 454 | return True 455 | 456 | struct = getCipherDataType( self.cipher.contents.nid) 457 | log.debug('cipher type is %s - loading %s'%( getCipherName(self.cipher.contents.nid), struct )) 458 | if(struct is None): 459 | log.warning("Unsupported cipher %s"%(self.cipher.contents.nid)) 460 | return True 461 | 462 | # c_void_p is a basic type. 463 | attr_obj_address = self.cipher_data 464 | memoryMap = is_valid_address_value( attr_obj_address, mappings, struct) 465 | log.debug( "cipher_data CAST into : %s "%(struct) ) 466 | if not memoryMap: 467 | log.warning('in CTX On second thoughts, cipher_data seems to be at an invalid address.
That should not happen (often).') 468 | log.warning('%s addr:0x%lx size:0x%lx addr+size:0x%lx '%(is_valid_address_value( attr_obj_address, mappings), 469 | attr_obj_address, ctypes.sizeof(struct), attr_obj_address+ctypes.sizeof(struct))) 470 | return True 471 | #ok 472 | st = memoryMap.readStruct(attr_obj_address, struct ) 473 | model.keepRef(st, struct, attr_obj_address) 474 | self.cipher_data = ctypes.c_void_p(ctypes.addressof(st)) 475 | # check debug 476 | attr=getattr(self, 'cipher_data') 477 | log.debug('Copied 0x%lx into %s (0x%lx)'%(ctypes.addressof(st), 'cipher_data', attr)) 478 | log.debug('LOADED cipher_data as %s from 0x%lx (%s) into 0x%lx'%(struct, 479 | attr_obj_address, is_valid_address_value(attr_obj_address, mappings, struct), attr )) 480 | log.debug('\t\t---------\n%s\t\t---------'%st.toString()) 481 | return True 482 | 483 | def EVP_CIPHER_CTX_toPyObject(self): 484 | d=super(EVP_CIPHER_CTX,self).toPyObject() 485 | log.debug('Cast an EVP_CIPHER_CTX into PyObj') 486 | # cast app_data or cipher_data to right struct 487 | if bool(self.cipher_data): 488 | struct = getCipherDataType( self.cipher.contents.nid) 489 | if struct is not None: 490 | # CAST c_void_p to struct 491 | d.cipher_data = struct.from_address(self.cipher_data).toPyObject() 492 | return d 493 | 494 | def EVP_CIPHER_CTX_getOIV(self): 495 | return array2bytes(self.oiv) 496 | def EVP_CIPHER_CTX_getIV(self): 497 | return array2bytes(self.iv) 498 | 499 | EVP_CIPHER_CTX.loadMembers = EVP_CIPHER_CTX_loadMembers 500 | EVP_CIPHER_CTX.toPyObject = EVP_CIPHER_CTX_toPyObject 501 | EVP_CIPHER_CTX.getOIV = EVP_CIPHER_CTX_getOIV 502 | EVP_CIPHER_CTX.getIV = EVP_CIPHER_CTX_getIV 503 | 504 | ########## 505 | ''' 506 | 507 | # checks 508 | # 509 | #import sys,inspect 510 | #src=sys.modules[__name__] 511 | #for (name, klass) in inspect.getmembers(src, inspect.isclass): 512 | # #if klass.__module__ == src.__name__ or klass.__module__.endswith('%s_generated'%(src.__name__) ) : 513 | # # #if not
klass.__name__.endswith('_py'): 514 | # print klass, type(klass) #, len(klass.classRef) 515 | 516 | 517 | def printSizeof(mini=-1): 518 | for (name,klass) in inspect.getmembers(sys.modules[__name__], inspect.isclass): 519 | if type(klass) == type(ctypes.Structure) and klass.__module__.endswith('%s_generated'%(__name__) ) : 520 | if ctypes.sizeof(klass) > mini: 521 | print '%s:'%name,ctypes.sizeof(klass) 522 | #print 'SSLCipherSuiteInfo:',ctypes.sizeof(SSLCipherSuiteInfo) 523 | #print 'SSLChannelInfo:',ctypes.sizeof(SSLChannelInfo) 524 | 525 | 526 | if __name__ == '__main__': 527 | printSizeof() 528 | 529 | -------------------------------------------------------------------------------- /sslsnoop/engine.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import os,logging,sys, copy 10 | 11 | import ctypes 12 | from ctypes import cdll 13 | from ctypes_openssh import AES_BLOCK_SIZE, ssh_aes_ctr_ctx 14 | from ctypes_openssl import AES_KEY, BF_KEY, RC4_KEY, CAST_KEY, DES_key_schedule 15 | 16 | from haystack import model 17 | 18 | log=logging.getLogger('engine') 19 | 20 | from ctypes.util import find_library 21 | _libssl = find_library('ssl') 22 | libopenssl=cdll.LoadLibrary(_libssl) 23 | 24 | 25 | 26 | class Engine: 27 | block_size = 16 28 | 29 | def decrypt(self,block): 30 | ''' decrypts ''' 31 | bLen=len(block) 32 | data=(ctypes.c_ubyte*bLen )() ## TODO string_at should work here 33 | for i in range(0, bLen ): 34 | #print i, block[i] 35 | data[i]=ord(block[i]) 36 | # no way.... 
37 | return self._decrypt(data,bLen) 38 | 39 | def _decrypt(self,data,bLen): 40 | raise NotImplementedError 41 | 42 | 43 | def myhex(bstr): 44 | s='' 45 | for el in bstr: 46 | s+='\\'+hex(ord(el))[1:] 47 | return s 48 | 49 | 50 | class StatefulAES_CBC_Engine(Engine): 51 | def __init__(self, context ): 52 | self.sync(context) 53 | self._AES_cbc=libopenssl.AES_cbc_encrypt 54 | log.debug('cipher:%s block_size: %d key_len: %d '%(context.name, context.block_size, context.key_len)) 55 | 56 | def _decrypt(self, src, bLen): 57 | if bLen % AES_BLOCK_SIZE: 58 | log.error("Sugar, why do you give me a block the wrong size: %d not a multiple of %d"%(bLen, AES_BLOCK_SIZE)) 59 | return None 60 | buf=(ctypes.c_ubyte*AES_BLOCK_SIZE)() # TODO string_at + from_address 61 | dest=(ctypes.c_ubyte*bLen)() # TODO string_at + from_address 62 | enc=ctypes.c_uint(0) ## 0 is decrypt for inbound traffic 63 | #log.debug('BEFORE %s'%( myhex(self.aes_key_ctx.getCounter())) ) 64 | #void AES_cbc_encrypt( 65 | # const unsigned char *in, unsigned char *out, const unsigned long length, 66 | # const AES_KEY *key, unsigned char ivec[AES_BLOCK_SIZE], const int enc 67 | # ) 68 | self._AES_cbc( ctypes.byref(src), ctypes.byref(dest), bLen, ctypes.byref(self.key), 69 | ctypes.byref(self.iv), enc ) 70 | ##log.debug('AFTER %s'%( myhex(self.aes_key_ctx.getCounter())) ) 71 | #print self, repr(model.array2bytes(dest)) 72 | return model.array2bytes(dest) 73 | 74 | def sync(self, context): 75 | ''' refresh the crypto state ''' 76 | self.block_size = context.block_size 77 | self.key = AES_KEY().fromPyObj(context.evpCtx.cipher_data) # 78 | #print self.key 79 | # copy IV content 80 | self.iv = model.bytes2array(context.evpCtx.iv, ctypes.c_ubyte) 81 | log.info('IV value is %s'%(myhex(context.evpCtx.iv)) ) 82 | 83 | 84 | class StatefulAES_Ctr_Engine(Engine): 85 | #ctx->cipher->do_cipher(ctx,out,in,inl); 86 | # -> openssl.AES_ctr128_encrypt(&in,&out,length,&aes_key, ivecArray, ecount_bufArray, &num ) 87 |
#AES_encrypt(ivec, ecount_buf, key); # aes_key is struct with cnt, key is really AES_KEY->aes_ctx 88 | #AES_ctr128_inc(ivec); # ssh_Ctr128_inc seems to be different, but paramiko does it, doesn't it? 89 | def __init__(self, context ): 90 | self.sync(context) 91 | self._AES_ctr=libopenssl.AES_ctr128_encrypt 92 | log.debug('cipher:%s block_size: %d key_len: %d '%(context.name, context.block_size, context.key_len)) 93 | 94 | def _decrypt(self,block, bLen): 95 | if bLen % AES_BLOCK_SIZE: 96 | log.error("Sugar, why do you give me a block the wrong size: %d not a multiple of %d"%(bLen, AES_BLOCK_SIZE)) 97 | return None 98 | buf=(ctypes.c_ubyte*AES_BLOCK_SIZE)() 99 | dest=(ctypes.c_ubyte*bLen)() 100 | num=ctypes.c_uint() 101 | log.debug('BEFORE a %s : decrypt %d bytes'%( repr(self.getCounter()) , bLen ) ) 102 | #void AES_ctr128_encrypt( 103 | # const unsigned char *in, unsigned char *out, const unsigned long length, 104 | # const AES_KEY *key, unsigned char ivec[AES_BLOCK_SIZE], 105 | # unsigned char ecount_buf[AES_BLOCK_SIZE], unsigned int *num) 106 | # debug counter overflow 107 | ###last=self.aes_key_ctx.getCounter()[-1] 108 | ###before=self.getCounter() 109 | self._AES_ctr( ctypes.byref(block), ctypes.byref(dest), bLen, ctypes.byref(self.key), 110 | ctypes.byref(self.counter), ctypes.byref(buf), ctypes.byref(num) ) 111 | ''' 112 | newlast=self.aes_key_ctx.getCounter()[-1] 113 | if newlast < last : 114 | log.warning('Counter has overflown') 115 | after=self.getCounter() 116 | log.warning('Before %s'%(before)) 117 | log.warning('After %s'%(after)) 118 | ''' 119 | log.debug('AFTER a %s'%repr(self.getCounter())) 120 | #log.debug('AFTER x %s'%( myhex(self.aes_key_ctx.getCounter())) ) 121 | return model.array2bytes(dest) 122 | 123 | def sync(self, context): 124 | ''' refresh the crypto state ''' 125 | self.block_size = context.block_size 126 | self.aes_key_ctx = ssh_aes_ctr_ctx().fromPyObj(context.app_data) 127 | # we need nothing else 128 | self.key = self.aes_key_ctx.aes_ctx 129 | #
copy counter content 130 | self.counter = self.aes_key_ctx.aes_counter 131 | #log.debug('Counter value is %s'%(myhex(self.aes_key_ctx.getCounter())) ) 132 | #log.debug('Key CTX:%s'%( self.aes_key_ctx.toString() ) ) 133 | 134 | def getCounter(self): 135 | #return myhex(self.aes_key_ctx.getCounter()) 136 | return model.array2bytes(self.counter) 137 | 138 | def incCounter(self): 139 | ctr=self.counter 140 | for i in range(len(ctr)-1,-1,-1): 141 | ctr[i] += 1 142 | if ctr[i] != 0: 143 | return 144 | 145 | def decCounter(self): 146 | ctr=self.counter 147 | for i in range(len(ctr)-1,-1,-1): 148 | ctr[i] -= 1 149 | if ctr[i] != 0xff: # underflow 150 | return 151 | 152 | ''' 153 | # reverse aes ... 154 | 155 | so forward AES is : 156 | while ((len--) > 0) { 157 | if (n == 0) { 158 | AES_encrypt(c->aes_counter, buf, &c->aes_ctx); 159 | ssh_ctr_inc(c->aes_counter, AES_BLOCK_SIZE); 160 | } 161 | *(dest++) = *(src++) ^ buf[n]; 162 | n = (n + 1) % AES_BLOCK_SIZE; 163 | } 164 | 165 | that means backwards AES is : 166 | while ((len--) > 0) { 167 | if (n == 0) { 168 | src-=AES_BLOCK_SIZE 169 | ssh_ctr_dec(c->aes_counter, AES_BLOCK_SIZE); 170 | AES_encrypt(c->aes_counter, buf, &c->aes_ctx); 171 | } 172 | *(dest++) = *(src++) ^ buf[n]; 173 | n = (n + 1) % AES_BLOCK_SIZE; 174 | } 175 | 176 | ''' 177 | 178 | 179 | 180 | class StatefulBlowfish_CBC_Engine(Engine): 181 | def __init__(self, context ): 182 | self.sync(context) 183 | self._BF_cbc=libopenssl.BF_cbc_encrypt 184 | log.debug('cipher:%s block_size: %d key_len: %d '%(context.name, context.block_size, context.key_len)) 185 | 186 | def _decrypt(self, src, bLen): 187 | BF_ROUNDS = 16 188 | BF_BLOCK = 8 189 | dest=(ctypes.c_ubyte*bLen)() 190 | enc=ctypes.c_uint(0) ## 0 is decrypt for inbound traffic ## ctx.evpCipherCtx.encrypt [0,1] 191 | #void BF_cbc_encrypt(const unsigned char *in, unsigned char *out, long length, 192 | # const BF_KEY *schedule, unsigned char *ivec, int enc); 193 | self._BF_cbc( ctypes.byref(src), 
ctypes.byref(dest), bLen, ctypes.byref(self.key), 194 | ctypes.byref(self.iv), enc ) 195 | #print self, repr(model.array2bytes(dest)) 196 | return model.array2bytes(dest) 197 | 198 | def sync(self, context): 199 | ''' refresh the crypto state ''' 200 | self.block_size = context.block_size 201 | self.key = BF_KEY().fromPyObj(context.evpCtx.cipher_data) # 202 | log.debug('BF Key: %s'%self.key) 203 | # copy counter content 204 | self.iv = model.bytes2array(context.evpCtx.iv, ctypes.c_ubyte) 205 | log.info('IV value is %s'%(myhex(context.evpCtx.iv)) ) 206 | 207 | class StatefulCAST_CBC_Engine(Engine): 208 | def __init__(self, context ): 209 | self.sync(context) 210 | self._CAST_cbc=libopenssl.CAST_cbc_encrypt 211 | log.debug('cipher:%s block_size: %d key_len: %d '%(context.name, context.block_size, context.key_len)) 212 | 213 | def _decrypt(self, src, bLen): 214 | dest=(ctypes.c_ubyte*bLen)() 215 | enc=ctypes.c_uint(0) ## 0 is decrypt for inbound traffic 216 | #void CAST_cbc_encrypt(const unsigned char *in, unsigned char *out, long length, 217 | # const CAST_KEY *ks, unsigned char *iv, int enc); 218 | self._CAST_cbc( ctypes.byref(src), ctypes.byref(dest), bLen, ctypes.byref(self.key), 219 | ctypes.byref(self.iv), enc ) 220 | #print self, repr(model.array2bytes(dest)) 221 | return model.array2bytes(dest) 222 | 223 | def sync(self, context): 224 | ''' refresh the crypto state ''' 225 | self.block_size = context.block_size 226 | self.key = CAST_KEY().fromPyObj(context.evpCtx.cipher_data) # 227 | log.debug('CAST Key: %s'%self.key) 228 | # copy counter content 229 | self.iv = model.bytes2array(context.evpCtx.iv, ctypes.c_ubyte) 230 | log.info('IV value is %s'%(myhex(context.evpCtx.iv)) ) 231 | 232 | class StatefulDES_CBC_Engine(Engine): 233 | def __init__(self, context ): 234 | self.sync(context) 235 | self._DES_cbc=libopenssl.DES_cbc_encrypt 236 | log.debug('cipher:%s block_size: %d key_len: %d '%(context.name, context.block_size, context.key_len)) 237 | 238 | def 
_decrypt(self, src, bLen): 239 | dest=(ctypes.c_ubyte*bLen)() 240 | enc=ctypes.c_uint(0) ## 0 is decrypt for inbound traffic 241 | #void DES_cbc_encrypt(const unsigned char *input, unsigned char *output, long length, 242 | # DES_key_schedule *ks, DES_cblock *ivec, int enc); 243 | self._DES_cbc( ctypes.byref(src), ctypes.byref(dest), bLen, ctypes.byref(self.key), 244 | ctypes.byref(self.iv), enc ) 245 | #print self, repr(model.array2bytes(dest)) 246 | return model.array2bytes(dest) 247 | 248 | def sync(self, context): 249 | ''' refresh the crypto state ''' 250 | self.block_size = context.block_size 251 | self.key = DES_key_schedule().fromPyObj(context.evpCtx.cipher_data) # 252 | log.debug('DES Key: %s'%self.key) 253 | # copy IV content 254 | self.iv = model.bytes2array(context.evpCtx.iv, ctypes.c_ubyte) 255 | log.info('IV value is %s'%(myhex(context.evpCtx.iv)) ) 256 | 257 | class StatefulRC4_Engine(Engine): 258 | def __init__(self, context ): 259 | self.sync(context) 260 | self._RC4=libopenssl.RC4 261 | log.debug('cipher:%s block_size: %d key_len: %d '%(context.name, context.block_size, context.key_len)) 262 | 263 | def _decrypt(self, src, bLen): 264 | dest=(ctypes.c_ubyte*bLen)() 265 | 266 | #void RC4(RC4_KEY *key, unsigned long len, const unsigned char *indata, 267 | # unsigned char *outdata); 268 | self._RC4( ctypes.byref(self.key), bLen, ctypes.byref(src), ctypes.byref(dest) ) 269 | 270 | #print self, repr(model.array2bytes(dest)) 271 | return model.array2bytes(dest) 272 | 273 | def sync(self, context): 274 | ''' refresh the crypto state ''' 275 | self.block_size = context.block_size 276 | self.key = RC4_KEY().fromPyObj(context.evpCtx.cipher_data) # 277 | log.debug('RC4 Key: %s'%self.key) 278 | 279 | 280 | CIPHERS = { 281 | "none": None, #( SSH_CIPHER_NONE, 8, 0, 0, 0, EVP_enc_null ), 282 | "des": None, # ssh1 283 | "3des": None, # ssh1 284 | "blowfish": None, # ssh1 285 | "3des-cbc": None, # not implemented 286 | "blowfish-cbc": StatefulBlowfish_CBC_Engine, # works
287 | "cast128-cbc": StatefulCAST_CBC_Engine, # works 288 | "arcfour": StatefulRC4_Engine, # doesn't work 289 | "arcfour128": StatefulRC4_Engine, # doesn't work 290 | "arcfour256": StatefulRC4_Engine, # doesn't work 291 | "aes128-cbc": StatefulAES_CBC_Engine, # inbound works, outbound does not 292 | "aes192-cbc": StatefulAES_CBC_Engine, # inbound works, outbound does not 293 | "aes256-cbc": StatefulAES_CBC_Engine, # inbound works, outbound does not 294 | "rijndael-cbc@lysator.liu.se": StatefulAES_CBC_Engine, 295 | "aes128-ctr": StatefulAES_Ctr_Engine, # works 296 | "aes192-ctr": StatefulAES_Ctr_Engine, # works 297 | "aes256-ctr": StatefulAES_Ctr_Engine, # works 298 | } 299 | 300 | 301 | 302 | def testDecrypt(): 303 | buf='?A\xb7\ru\xc9\x08\xe2em\x16\x06\x1a\x18\xfb\x805,\xd8\x1f\x11\xa3\x1b )G\xe2\r`\xfaw\x87\xef\xfa\xa7\x95\xe1\x84>\xe1\x90\xec\xe1\xfa\xe5\x1e\x9c\xe3' 304 | 305 | 306 | 307 | def main(argv): 308 | logging.basicConfig(level=logging.INFO) 309 | logging.debug(argv) 310 | 311 | testDecrypt() 312 | return -1 313 | 314 | 315 | if __name__ == "__main__": 316 | main(sys.argv[1:]) 317 | 318 | 319 | -------------------------------------------------------------------------------- /sslsnoop/finder.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import os 10 | import logging 11 | import subprocess 12 | import sys 13 | import time 14 | 15 | import psutil 16 | 17 | import openssh 18 | import openssl 19 | import utils 20 | 21 | log=logging.getLogger('finder') 22 | 23 | class Args: 24 | pid=None 25 | memfile=None 26 | memdump=None 27 | debug=None 28 | mmap=True 29 | 30 | def parseSSL(pid, sniffer): 31 | args=Args() 32 | args.pid = pid 33 | return openssl.search(args) 34 | 35 | 36 | 37 | 38 | _targets={ 39 | 'ssh': 
openssh.launchLiveDecryption, 40 | 'ssh-agent': parseSSL, 41 | 'sshd': parseSSL, # sshd has SSL keys too 42 | #'firefox': [] 43 | } 44 | 45 | Processes = [] 46 | 47 | def buildTuples(targets): 48 | rets=[] 49 | for proc in psutil.process_iter(): 50 | if proc.name in targets: 51 | rets.append( (proc.pid,proc) ) 52 | return rets 53 | 54 | 55 | def pgrep(name): 56 | pids=[] 57 | for proc in psutil.process_iter(): 58 | if proc.name == name: 59 | pids.append(proc.pid) 60 | return pids 61 | 62 | def makeFilter(conn): 63 | # [connection(fd=3, family=2, type=1, local_address=('192.168.1.101', 36386), remote_address=('213.186.33.2', 22), status='ESTABLISHED')] 64 | pcap_filter = "host %s and port %s and host %s and port %s" %(conn.local_address[0],conn.local_address[1] , 65 | conn.remote_address[0],conn.remote_address[1] ) 66 | return pcap_filter 67 | 68 | 69 | 70 | def runthread(callable, sniffer, proc,conn): 71 | ##from multiprocessing import Process 72 | ##p = Process(target=s1.run) 73 | from threading import Thread 74 | args = (proc.pid, sniffer) 75 | p = Thread(target=callable, args=args) 76 | p.start() 77 | Processes.append(p) 78 | log.info('Thread launched') 79 | return 80 | 81 | 82 | def main(argv): 83 | logging.basicConfig(level=logging.INFO) 84 | #logging.getLogger('model').setLevel(logging.INFO) 85 | 86 | 87 | # we must have big privileges... 88 | if os.getuid() + os.geteuid() != 0: 89 | log.error("You must be root/using sudo to read memory and sniff traffic.
So there's no point in going further") 90 | return 91 | 92 | if not os.access('outputs', os.X_OK) : 93 | os.mkdir('outputs/') 94 | 95 | options=buildTuples(_targets) 96 | threads=[] 97 | forked=0 98 | # get sniffer up 99 | sniffer = utils.launchScapy() 100 | for pid,proc in options: 101 | log.info("Searching in %s/%d memory"%(proc.name,proc.pid)) 102 | conn = utils.checkConnections(proc) 103 | if not conn and 'ssh-agent' != proc.name: 104 | continue 105 | log.info('Adding this pid to watch list') 106 | runthread(_targets[proc.name], sniffer, proc,conn) 107 | 108 | forked+=1 109 | log.info('Thread launched for pid %d'%(proc.pid)) 110 | 111 | for p in Processes: 112 | p.join() 113 | time.sleep(5) 114 | log.info(' ============== %d threads launched. Look into outputs/ for data '%(forked)) 115 | sys.exit(0) 116 | return 0 117 | 118 | 119 | if __name__ == "__main__": 120 | main(sys.argv[1:]) 121 | 122 | 123 | 124 | 125 | 126 | -------------------------------------------------------------------------------- /sslsnoop/generate.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import logging, os, subprocess, sys 10 | 11 | import cleaner, preprocess 12 | 13 | log=logging.getLogger('generate') 14 | 15 | class Generator: 16 | ''' 17 | Wrapper around Ctypeslib 18 | Make python objects out of structures from C headers 19 | 20 | @param cleaned: the desired input c file name, preprocessed and cleaned 21 | @param pymodulename: the desired output module name 22 | ''' 23 | def __init__(self, cleaned, pymodulename='ctypes_linux_generated'): 24 | if not os.access(cleaned, os.F_OK): 25 | raise IOError('The cleaned file %s does not exist'%(cleaned)) 26 | self.cleaned = cleaned 27 | self.py = pymodulename 28 | self.xmlfile = '%s.%s'%(self.py,
'xml') 29 | self.pyfile = '%s.%s'%(self.py, 'py') 30 | self.gccxml = 'gccxml' 31 | self.h2xml = 'h2xml' 32 | self.xml2py = 'xml2py' 33 | 34 | def makeXml(self): 35 | cmd_line = [ self.gccxml, self.cleaned, '-fxml=%s'%(self.xmlfile), '-fextended-identifiers', '-fpreprocessed'] 36 | p = subprocess.Popen(cmd_line, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) 37 | p.wait() 38 | build = p.stderr.read().strip() 39 | #print 'makexml', build 40 | if len(build) == 0: 41 | log.info( "GENERATED XML %s"%(self.xmlfile)) 42 | else: 43 | log.error('Please clean %s\n%s'%(self.cleaned, build)) 44 | return len(build) 45 | 46 | def makeH2Xml(self, args): 47 | cmd_line = [ self.h2xml, '-c', self.cleaned, '-o', self.xmlfile ] 48 | cmd_line.extend(args) 49 | p = subprocess.Popen(cmd_line, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) 50 | p.wait() 51 | build = p.stderr.read().strip() 52 | print 'makeh2xml', build 53 | if len(build) == 0: 54 | log.info( "GENERATED XML %s"%(self.xmlfile)) 55 | else: 56 | log.error('Please clean %s\n%s'%(self.cleaned, build)) 57 | return len(build) 58 | 59 | def makePy(self): 60 | if not os.access(self.xmlfile, os.F_OK): 61 | log.error('The XML file %s has not been generated'%(self.xmlfile)) 62 | return -1 63 | cmd_line = [self.xml2py, self.xmlfile, '-o', self.pyfile] 64 | # we need define's 65 | # '-k', 'd', '-k', 'e', '-k', 's', '-k', 't'] 66 | p = subprocess.Popen(cmd_line, stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE, close_fds=True) 67 | p.wait() 68 | build = p.stderr.read().strip() 69 | if len(build) != 0: 70 | log.warning('\n%s'%build) 71 | log.info( "GENERATED PY module %s"%(self.pyfile)) 72 | return 0 73 | 74 | def run(self): 75 | ret = self.makeXml() 76 | if ret > 0: 77 | return ret 78 | ret = self.makePy() 79 | if ret > 0: 80 | return ret 81 | log.info('GENERATION Done.
Enjoy %s'%(self.pyfile)) 82 | return 0 83 | 84 | def gen( cleaned, modulename): 85 | p = Generator(cleaned, modulename) 86 | return p.run() 87 | 88 | def gen2( cleaned, modulename, args): 89 | p = Generator(cleaned, modulename) 90 | p.makeH2Xml(args) 91 | p.makePy() 92 | return 0 93 | 94 | 95 | def make(sourcefile, modulename, target=False): 96 | ''' using gccxml directly distorts ctypeslib performances 97 | but on some libraries, we don't have a choice. 98 | ''' 99 | if not os.access(sourcefile, os.F_OK): 100 | raise IOError(sourcefile) 101 | #sourcefile 102 | basename = os.path.basename(sourcefile) 103 | preprocessed = "%s.c"%(modulename) 104 | cleaned = "%s_clean.c"%(modulename) 105 | xml = "%s.xml"%(modulename) 106 | pyfinal = "%s.py"%(modulename) 107 | if target: 108 | gen2(sourcefile, modulename, target) 109 | log.info('PYFINAL - OK') 110 | else: 111 | if not os.access(pyfinal, os.F_OK): 112 | if not os.access(cleaned, os.F_OK): 113 | if not os.access(preprocessed, os.F_OK): 114 | # preprocess the file 115 | if preprocess.process(sourcefile, preprocessed) > 0: 116 | return 117 | log.info('PREPROCESS - OK') 118 | # clean it 119 | if cleaner.clean(preprocessed, cleaned) > 0: 120 | return 121 | log.info('CLEAN - OK') 122 | # generate pyfinal 123 | if gen(cleaned, modulename) > 0: 124 | return 125 | log.info('PYFINAL - OK') 126 | __import__(modulename) 127 | import inspect 128 | nbClass = len(inspect.getmembers(sys.modules[modulename], inspect.isclass)) 129 | nbMembers = len(inspect.getmembers(sys.modules[modulename])) 130 | log.info("module %s has %d members for %d classes"%(modulename, nbMembers, nbClass)) 131 | 132 | logging.basicConfig(level=logging.INFO) 133 | 134 | #generate.gen('ctypes_linux_generated_clean.c','ctypes_linux_generated') 135 | 136 | # generate.make('ctypes_linux.c','ctypes_linux_generated') 137 | make('ctypes_openssl.c','ctypes_openssl_generated', preprocess.OPENSSL_ARGS) 138 | 139 | #make('ctypes_nss.c','ctypes_nss_generated',
preprocess.NSS_ARGS) 140 | 141 | -------------------------------------------------------------------------------- /sslsnoop/lrucache.py: -------------------------------------------------------------------------------- 1 | # lrucache.py -- a simple LRU (Least-Recently-Used) cache class 2 | 3 | # Copyright 2004 Evan Prodromou 4 | # Licensed under the Academic Free License 2.1 5 | 6 | # arch-tag: LRU cache main module 7 | 8 | """a simple LRU (Least-Recently-Used) cache module 9 | 10 | This module provides very simple LRU (Least-Recently-Used) cache 11 | functionality. 12 | 13 | An *in-memory cache* is useful for storing the results of an 14 | 'expensive' process (one that takes a lot of time or resources) for 15 | later re-use. Typical examples are accessing data from the filesystem, 16 | a database, or a network location. If you know you'll need to re-read 17 | the data again, it can help to keep it in a cache. 18 | 19 | You *can* use a Python dictionary as a cache for some purposes. 20 | However, if the results you're caching are large, or you have a lot of 21 | possible results, this can be impractical memory-wise. 22 | 23 | An *LRU cache*, on the other hand, only keeps _some_ of the results in 24 | memory, which keeps you from overusing resources. The cache is bounded 25 | by a maximum size; if you try to add more values to the cache, it will 26 | automatically discard the values that you haven't read or written to 27 | in the longest time. In other words, the least-recently-used items are 28 | discarded. [1]_ 29 | 30 | .. [1]: 'Discarded' here means 'removed from the cache'. 
31 | 32 | """ 33 | 34 | from __future__ import generators 35 | import time 36 | from heapq import heappush, heappop, heapify 37 | 38 | __version__ = "0.2" 39 | __all__ = ['CacheKeyError', 'LRUCache', 'DEFAULT_SIZE'] 40 | __docformat__ = 'reStructuredText en' 41 | 42 | DEFAULT_SIZE = 16 43 | """Default size of a new LRUCache object, if no 'size' argument is given.""" 44 | 45 | class CacheKeyError(KeyError): 46 | """Error raised when cache requests fail 47 | 48 | When a cache record is accessed which no longer exists (or never did), 49 | this error is raised. To avoid it, you may want to check for the existence 50 | of a cache record before reading or deleting it.""" 51 | pass 52 | 53 | class LRUCache(object): 54 | """Least-Recently-Used (LRU) cache. 55 | 56 | Instances of this class provide a least-recently-used (LRU) cache. They 57 | emulate a Python mapping type. You can use an LRU cache more or less like 58 | a Python dictionary, with the exception that objects you put into the 59 | cache may be discarded before you take them out. 60 | 61 | Some example usage:: 62 | 63 | cache = LRUCache(32) # new cache 64 | cache['foo'] = get_file_contents('foo') # or whatever 65 | 66 | if 'foo' in cache: # if it's still in cache... 67 | # use cached version 68 | contents = cache['foo'] 69 | else: 70 | # recalculate 71 | contents = get_file_contents('foo') 72 | # store in cache for next time 73 | cache['foo'] = contents 74 | 75 | print cache.size # Maximum size 76 | 77 | print len(cache) # 0 <= len(cache) <= cache.size 78 | 79 | cache.size = 10 # Auto-shrink on size assignment 80 | 81 | for i in range(50): # note: larger than cache size 82 | cache[i] = i 83 | 84 | if 0 not in cache: print 'Zero was discarded.' 85 | 86 | if 42 in cache: 87 | del cache[42] # Manual deletion 88 | 89 | for j in cache: # iterate (in LRU order) 90 | print j, cache[j] # iterator produces keys, not values 91 | """ 92 | 93 | class __Node(object): 94 | """Record of a cached value. 
Not for public consumption.""" 95 | 96 | def __init__(self, key, obj, timestamp): 97 | object.__init__(self) 98 | self.key = key 99 | self.obj = obj 100 | self.atime = timestamp 101 | self.mtime = self.atime 102 | 103 | def __cmp__(self, other): 104 | return cmp(self.atime, other.atime) 105 | 106 | def __repr__(self): 107 | return "<%s %s => %s (%s)>" % \ 108 | (self.__class__, self.key, self.obj, \ 109 | time.asctime(time.localtime(self.atime))) 110 | 111 | def __init__(self, size=DEFAULT_SIZE): 112 | # Check arguments 113 | if size <= 0: 114 | raise ValueError, size 115 | elif type(size) is not type(0): 116 | raise TypeError, size 117 | object.__init__(self) 118 | self.__heap = [] 119 | self.__dict = {} 120 | self.size = size 121 | """Maximum size of the cache. 122 | If more than 'size' elements are added to the cache, 123 | the least-recently-used ones will be discarded.""" 124 | 125 | def __len__(self): 126 | return len(self.__heap) 127 | 128 | def __contains__(self, key): 129 | return self.__dict.has_key(key) 130 | 131 | def __setitem__(self, key, obj): 132 | if self.__dict.has_key(key): 133 | node = self.__dict[key] 134 | node.obj = obj 135 | node.atime = time.time() 136 | node.mtime = node.atime 137 | heapify(self.__heap) 138 | else: 139 | # size may have been reset, so we loop 140 | while len(self.__heap) >= self.size: 141 | lru = heappop(self.__heap) 142 | del self.__dict[lru.key] 143 | node = self.__Node(key, obj, time.time()) 144 | self.__dict[key] = node 145 | heappush(self.__heap, node) 146 | 147 | def __getitem__(self, key): 148 | if not self.__dict.has_key(key): 149 | raise CacheKeyError(key) 150 | else: 151 | node = self.__dict[key] 152 | node.atime = time.time() 153 | heapify(self.__heap) 154 | return node.obj 155 | 156 | def __delitem__(self, key): 157 | if not self.__dict.has_key(key): 158 | raise CacheKeyError(key) 159 | else: 160 | node = self.__dict[key] 161 | del self.__dict[key] 162 | self.__heap.remove(node) 163 | heapify(self.__heap) 164 | 
return node.obj 165 | 166 | def __iter__(self): 167 | copy = self.__heap[:] 168 | while len(copy) > 0: 169 | node = heappop(copy) 170 | yield node.key 171 | raise StopIteration 172 | 173 | def __setattr__(self, name, value): 174 | object.__setattr__(self, name, value) 175 | # automagically shrink heap on resize 176 | if name == 'size': 177 | while len(self.__heap) > value: 178 | lru = heappop(self.__heap) 179 | del self.__dict[lru.key] 180 | 181 | def __repr__(self): 182 | return "<%s (%d elements)>" % (str(self.__class__), len(self.__heap)) 183 | 184 | def mtime(self, key): 185 | """Return the last modification time for the cache record with key. 186 | May be useful for cache instances where the stored values can get 187 | 'stale', such as caching file or network resource contents.""" 188 | if not self.__dict.has_key(key): 189 | raise CacheKeyError(key) 190 | else: 191 | node = self.__dict[key] 192 | return node.mtime 193 | 194 | if __name__ == "__main__": 195 | cache = LRUCache(25) 196 | print cache 197 | for i in range(50): 198 | cache[i] = str(i) 199 | print cache 200 | if 46 in cache: 201 | del cache[46] 202 | print cache 203 | cache.size = 10 204 | print cache 205 | cache[46] = '46' 206 | print cache 207 | print len(cache) 208 | for c in cache: 209 | print c 210 | print cache 211 | print cache.mtime(46) 212 | for c in cache: 213 | print c 214 | -------------------------------------------------------------------------------- /sslsnoop/network.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import logging,os,socket,select, sys,time 10 | import multiprocessing, Queue 11 | import scapy.config 12 | 13 | from lrucache import LRUCache 14 | 15 | import stream 16 | 17 | log = logging.getLogger('network') 18 | 19 | 
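The sniffer defined below registers every TCP stream under its connection 4-tuple in both directions, so packets from either side of a connection land in the same queue. A minimal standalone sketch of that bidirectional lookup (the `StreamTable` and `Packet` names are illustrative, not sslsnoop's own classes):

```python
# Minimal sketch (standalone, hypothetical names) of the bidirectional stream
# table used by Sniffer: one queue is registered under both
# (src, sport, dst, dport) and (dst, dport, src, sport), so packets from either
# direction of a TCP connection reach the same queue.
from collections import namedtuple
try:
    import Queue as queue  # Python 2
except ImportError:
    import queue           # Python 3

Packet = namedtuple('Packet', 'src sport dst dport payload')

class StreamTable(object):
    def __init__(self, maxsize=15000):
        self.streams = {}
        self.maxsize = maxsize

    def add(self, shost, sport, dhost, dport):
        q = queue.Queue(self.maxsize)
        # register the same queue in both directions
        self.streams[(shost, sport, dhost, dport)] = q
        self.streams[(dhost, dport, shost, sport)] = q
        return q

    def enqueue(self, pkt):
        # look the packet's own 4-tuple up; either direction hits the queue
        q = self.streams.get((pkt.src, pkt.sport, pkt.dst, pkt.dport))
        if q is not None:
            q.put_nowait(pkt)
        return q

table = StreamTable()
q = table.add('10.0.0.1', 22, '10.0.0.2', 40000)
# both directions reach the same queue
table.enqueue(Packet('10.0.0.1', 22, '10.0.0.2', 40000, b'srv->cli'))
table.enqueue(Packet('10.0.0.2', 40000, '10.0.0.1', 22, b'cli->srv'))
```

This is the same double-registration trick `Sniffer.addStream` performs with its `(st, q)` pairs; dropping a stream then has to delete both keys, as `dropStream` does.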
QUEUE_SIZE = 15000 20 | 21 | def hexify(data): 22 | s='' 23 | for i in range(0,len(data)): 24 | s+="%02x"% ord(data[i]) 25 | if i%16==15: 26 | s+="\r\n" 27 | elif i%2==1: 28 | s+=" " 29 | #s+="\r\n" 30 | return s 31 | 32 | class Sniffer(): 33 | worker=None 34 | def __init__(self,filterRules='tcp', packetCount=0, timeout=None): 35 | ''' 36 | This sniffer can run in a thread, but it should be one of the few threads running (because of the GIL) 37 | 38 | @param filterRules: a pcap compatible filter string 39 | @param packetCount: packet capture limit, or 0 for unlimited 40 | @param timeout: stop after that many seconds, or None for unlimited 41 | ''' 42 | # set scapy to use native pcap instead of SOCK_RAW 43 | scapy.config.conf.use_pcap = True 44 | ## if using SOCK_RAW, we need to mess with filter to up the capture size higher than 1514/1600 bytes 45 | #maxSize="\' -s \'0xffff" # abusing scapy-to-tcpdump string format 46 | #self.filterRules=filterRules + maxSize 47 | self.filterRules = filterRules 48 | self.packetCount = packetCount 49 | self.timeout = timeout 50 | # 51 | self.streams = {} 52 | self._running_thread = None 53 | return 54 | 55 | 56 | def run(self): 57 | # scapy - with config initialised 58 | #scapy.sendrecv.sniff(count=self.packetCount,timeout=self.timeout,store=0,filter=self.filterRules,prn=self.cbSSHPacket) 59 | from scapy.all import sniff 60 | log.info('Using L2listen = %s'%(scapy.config.conf.L2listen)) 61 | # XXX TODO, define iface from saddr and daddr // scapy.all.read_routes() 62 | sniff(count=self.packetCount, timeout=self.timeout, store=0, filter=self.filterRules, prn=self.enqueue, iface='any') 63 | log.warning('============ SNIFF Terminated ====================') 64 | return 65 | 66 | def hasStream(self, packet): 67 | ''' checks if the stream has a queue ''' 68 | (shost,sport,dhost,dport) = getConnectionTuple(packet) 69 | return (shost,sport,dhost,dport) in self.streams 70 | 71 | def getStream(self, packet): 72 | ''' returns the queue for that stream ''' 73 | if self.hasStream(packet):
74 | return self.streams[getConnectionTuple(packet)] 75 | #log.debug('Packet with no such connection %s %s %s %s'%(getConnectionTuple(packet))) 76 | return None,None 77 | 78 | def addStream(self, connection): 79 | ''' register that stream, in both directions ''' 80 | shost,sport = connection.local_address 81 | dhost,dport = connection.remote_address 82 | if shost.startswith('127.') or shost.startswith('::1'): 83 | log.warning('=============================================================') 84 | log.warning('scapy is gonna truncate big packets on the loopback interface.') 85 | log.warning('please change your test params, or use offline mode with pcap.') 86 | log.warning('=============================================================') 87 | #q = multiprocessing.Queue(QUEUE_SIZE) 88 | q = Queue.Queue(QUEUE_SIZE) 89 | st = stream.TCPStream(q, connection) 90 | #save in both directions 91 | self.streams[(shost,sport,dhost,dport)] = (st,q) 92 | self.streams[(dhost,dport,shost,sport)] = (st,q) 93 | return st 94 | 95 | def dropStream(self, packet): 96 | ''' forget that stream ''' 97 | if self.hasStream(packet): 98 | (shost,sport,dhost,dport) = getConnectionTuple(packet) 99 | if (shost,sport,dhost,dport) in self.streams: 100 | del self.streams[(shost,sport,dhost,dport)] 101 | if (dhost,dport,shost,sport) in self.streams: 102 | del self.streams[(dhost,dport,shost,sport)] 103 | log.info('Dropped %s,%s,%s,%s from valid connections.'%(shost,sport,dhost,dport)) 104 | return None 105 | 106 | def enqueue(self, packet): 107 | st,q = self.getStream(packet) 108 | if q is None: 109 | return 110 | try: 111 | log.debug('Queuing a packet from %s:%s\t-> %s:%s'%(getConnectionTuple(packet))) 112 | q.put_nowait(packet) 113 | except Queue.Full: 114 | log.warning('a Queue is Full (%d).
lost packet for %s'%(q.qsize(), repr(st.connection))) 115 | self.dropStream( packet ) 116 | except Exception,e: 117 | log.error(e) 118 | return 119 | 120 | def makeStream(self, connection): 121 | ''' create a TCP Stream recognized by sniffer 122 | the Stream can be used to read captured data 123 | 124 | The Stream should be run is a thread or a subprocess (better because of GIL). 125 | ''' 126 | shost,sport = connection.local_address 127 | dhost,dport = connection.remote_address 128 | if (shost,sport,dhost,dport) in self.streams: 129 | raise ValueError('Stream already exists') 130 | tcpstream = self.addStream(connection) 131 | log.debug('Created a TCPStream for %s'%(tcpstream)) 132 | return tcpstream 133 | 134 | 135 | def getConnectionTuple(packet): 136 | ''' Supposedly an IP/IPv6 model''' 137 | try: 138 | shost = packet.payload.src 139 | sport = packet.payload.payload.sport 140 | dhost = packet.payload.dst 141 | dport = packet.payload.payload.dport 142 | except Exception, e: 143 | log.debug("%s - %s"% (type(packet), packet.show2())) 144 | raise e 145 | return (shost,sport,dhost,dport) 146 | 147 | 148 | 149 | class PcapFileSniffer(Sniffer): 150 | ''' Use scapy's offline mode to simulate network by reading a pcap file. 
151 | ''' 152 | def __init__(self, pcapfile, filterRules='tcp', packetCount=0): 153 | Sniffer.__init__(self, filterRules=filterRules, packetCount=packetCount) 154 | self.pcapfile = pcapfile 155 | def run(self): 156 | from scapy.all import sniff 157 | sniff(store=0, prn=self.enqueue, offline=self.pcapfile) 158 | log.info('Finishing the pcap reading') 159 | for v, k in list(self.streams.items()): 160 | st,q = k 161 | if not q.empty(): 162 | log.debug('waiting on %s'%(repr(v))) 163 | q.join() 164 | del self.streams[v] 165 | del q 166 | st.pleaseStop() 167 | 168 | log.info('============ SNIFF Terminated ====================') 169 | 170 | return 171 | 172 | 173 | 174 | 175 | 176 | 177 | 178 | 179 | 180 | 181 | 182 | 183 | 184 | 185 | 186 | -------------------------------------------------------------------------------- /sslsnoop/openssl.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import os,logging,sys, argparse 10 | 11 | import ctypes 12 | import ctypes_openssl,ctypes_openssh 13 | 14 | from ctypes import * # TODO delete 15 | from ptrace.ctypes_libc import libc 16 | from haystack.abouchet import StructFinder 17 | from haystack.memory_mapper import MemoryMapper 18 | from output import FileWriter 19 | 20 | # linux only 21 | from ptrace.debugger.debugger import PtraceDebugger 22 | from ptrace.debugger.memory_mapping import readProcessMappings 23 | 24 | log=logging.getLogger('openssl') 25 | 26 | from ctypes.util import find_library 27 | _libssl = find_library('ssl') 28 | _libssl = ctypes.cdll.LoadLibrary(_libssl) 29 | 30 | 31 | class RSAFileWriter(FileWriter): 32 | def __init__(self,folder='outputs'): 33 | FileWriter.__init__(self,'id_rsa','key',folder) 34 | def writeToFile(self,instance): 35 | prefix=self.prefix 36 | ''' 37 | 
PEM_write_RSAPrivateKey(f, rsa_p, None, None, 0, None, None) 38 | PEM_write_RSAPrivateKey(fp,x, [enc,kstr,klen,cb,u]) 39 | -> PEM_ASN1_write((int (*)())i2d_RSAPrivateKey,PEM_STRING_RSA,fp, (char *)x, [enc,kstr,klen,cb,u]) 40 | int PEM_ASN1_write(i2d_of_void *i2d, const char *name, FILE *fp, char *x, [const EVP_CIPHER *, unsigned char *kstr,int , pem_password_cb *, void *]) 41 | -> PEM_ASN1_write_bio(i2d, name, b, x [,enc,kstr,klen,callback,u] ); 42 | int PEM_ASN1_write_bio(i2d_of_void *i2d, const char *name, BIO *bp, char *x , [..] 43 | -> i2d_RSAPrivateKey( on x ) 44 | -> ASN1_item_i2d_bio(ASN1_ITEM_rptr(RSAPrivateKey), bp, rsa); 45 | -> i=BIO_write(out,&(b[j]),n); 46 | -> i=PEM_write_bio(bp,name,buf,data,i); 47 | 48 | in short, this amounts to ctypes_openssl.RSA().writeASN1(file) 49 | ''' 50 | filename=self.get_valid_filename() 51 | f=libc.fopen(filename,"w") 52 | ret=_libssl.PEM_write_RSAPrivateKey(f, ctypes.byref(instance), None, None, 0, None, None) 53 | libc.fclose(f) 54 | if ret < 1: 55 | log.error("Error saving key to file %s"% filename) 56 | return False 57 | log.info("[X] Key saved to file %s"%filename) 58 | return True 59 | 60 | class DSAFileWriter(FileWriter): 61 | def __init__(self,folder='outputs'): 62 | FileWriter.__init__(self,'id_dsa','key',folder) 63 | def writeToFile(self,instance): 64 | prefix=self.prefix 65 | filename=self.get_valid_filename() 66 | f=libc.fopen(filename,"w") 67 | ret=_libssl.PEM_write_DSAPrivateKey(f, ctypes.byref(instance), None, None, 0, None, None) 68 | libc.fclose(f) 68 | if ret < 1: 69 | log.error("Error saving key to file %s"% filename) 70 | return False 71 | log.info("[X] Key saved to file %s"%filename) 72 | return True 73 | 74 | 75 | class OpenSSLStructFinder(StructFinder): 76 | ''' Must not fork to ptrace.
We need the real ctypes structs ''' 77 | # interesting structs 78 | rsaw=RSAFileWriter() 79 | dsaw=DSAFileWriter() 80 | def __init__(self, mappings, targetmapping): 81 | StructFinder.__init__(self, mappings, targetmapping) 82 | self.OPENSSL_STRUCTS={ # name, ( struct, callback) 83 | 'RSA': (ctypes_openssl.RSA, self.rsaw.writeToFile ), 84 | 'DSA': (ctypes_openssl.DSA, self.dsaw.writeToFile ) 85 | } 86 | def findAndSave(self, maxNum=1, fullScan=False, nommap=False): 87 | log.debug('look for RSA keys') 88 | outs=self.find_struct(ctypes_openssl.RSA, maxNum=maxNum ) 89 | for rsa,addr in outs: 90 | self.save(rsa) 91 | log.debug('look for DSA keys') 92 | outs=self.find_struct(ctypes_openssl.DSA, maxNum=maxNum) 93 | for dsa,addr in outs: 94 | self.save(dsa) 95 | return 96 | #'BIGNUM': 'RSA': (ctypes_openssl.BIGNUM, ) 97 | def save(self,instance): 98 | if type(instance) == ctypes_openssl.RSA: 99 | self.rsaw.writeToFile(instance) 100 | elif type(instance) == ctypes_openssl.DSA: 101 | self.dsaw.writeToFile(instance) 102 | else: 103 | log.error('I dont know how to save that : %s'%(instance)) 104 | 105 | def usage(txt): 106 | log.error("Usage : %s [offset] # find SSL Structs in process"% txt) 107 | sys.exit(-1) 108 | 109 | 110 | def argparser(): 111 | parser = argparse.ArgumentParser(prog='openssl.py', description='Capture of RSA and DSA keys.') 112 | parser.add_argument('pid', type=int, help='Target PID') 113 | parser.add_argument('--nommap', dest='mmap', action='store_const', const=False, default=True, help='disable mmap()-ing') 114 | parser.add_argument('--debug', dest='debug', action='store_const', const=True, help='setLevel to DEBUG') 115 | parser.set_defaults(func=search) 116 | return parser 117 | 118 | 119 | def search(args): 120 | if args.debug: 121 | logging.basicConfig(level=logging.DEBUG) 122 | log.info("Target has pid %d"%args.pid) 123 | mappings = MemoryMapper(args).getMappings() 124 | targetMapping = [m for m in mappings if m.pathname == '[heap]'] 125 | if 
len(targetMapping) == 0: 126 | log.warning('No [heap] memorymapping found. Searching everywhere.') 127 | targetMapping = mappings 128 | finder = OpenSSLStructFinder(mappings, targetMapping) 129 | outs=finder.findAndSave() 130 | return 131 | 132 | 133 | def main(argv): 134 | logging.basicConfig(level=logging.INFO) 135 | 136 | # use optarg on v, a and to 137 | parser = argparser() 138 | opts = parser.parse_args(argv) 139 | try: 140 | opts.func(opts) 141 | except ImportError,e: 142 | log.error('Struct type does not exists.') 143 | print e 144 | 145 | 146 | log.info("done for pid %d"%opts.pid) 147 | 148 | return -1 149 | 150 | 151 | if __name__ == "__main__": 152 | main(sys.argv[1:]) 153 | 154 | -------------------------------------------------------------------------------- /sslsnoop/openvpn.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import argparse, os, logging, sys, time, pickle, struct, threading 10 | 11 | import ctypes 12 | import ctypes_openssh, ctypes_openssl 13 | import output 14 | 15 | from engine import CIPHERS 16 | import haystack 17 | 18 | 19 | from output import FileWriter 20 | 21 | log=logging.getLogger('openvpn') 22 | 23 | 24 | 25 | 26 | 27 | class OpenvpnKeysFinder(): 28 | ''' wrapper around a fork/exec to abouchet StructFinder ''' 29 | 30 | def __init__(self, pid, fullScan=False): 31 | self.pid = pid 32 | self.fullScan = fullScan 33 | return 34 | 35 | def findCipherCtx(self, maxNum=3): 36 | ''' ''' 37 | outs=haystack.findStruct(self.pid, ctypes_openssl.EVP_CIPHER_CTX, maxNum=maxNum) 38 | if outs is None: 39 | log.error("The session_state has not been found. 
maybe it's not OpenSSH ?") 40 | return None,None 41 | # 42 | return outs 43 | 44 | def refreshCipherCtx(self, offset): 45 | ''' ''' 46 | instance,validated=haystack.refreshStruct(self.pid, ctypes_openssl.EVP_CIPHER_CTX, offset) 47 | if not validated: 48 | log.error("The session_state has not been re-validated. You should look for it again.") 49 | return None,None 50 | return instance,offset 51 | 52 | def findKeys(self, maxNum=3): 53 | ''' ''' 54 | contexts = self.findCipherCtx(maxNum) 55 | log.debug('received %d EVP contexts'%( len(contexts))) 56 | self.printCiphers(contexts) 57 | return contexts 58 | 59 | def printCiphers(self, cipherContexts): 60 | for ctx, addr in cipherContexts: 61 | print 'Cipher : %s-%d'%(ctypes_openssl.getCipherName(ctx.cipher.nid), 8*ctx.cipher.key_len) 62 | return 63 | 64 | def save(self, instance): 65 | # pickle ? 66 | ssfw=FileWriter(self.process.pid) 67 | ssfw.writeToFile(instance) 68 | return 69 | 70 | 71 | 72 | def connectionToString(connection, reverse=False): 73 | log.debug('make a string for %s'%(repr(connection))) 74 | if reverse: 75 | return "%s:%d-%s:%d"%(connection.remote_address[0],connection.remote_address[1], connection.local_address[0],connection.local_address[1]) 76 | else: 77 | return "%s:%d-%s:%d"%(connection.local_address[0],connection.local_address[1],connection.remote_address[0],connection.remote_address[1]) 78 | 79 | 80 | 81 | class OpenvpnLiveDecryptatator(OpenvpnKeysFinder): 82 | ''' Decrypt traffic live ''' 83 | def __init__(self, pid, sessionStateAddr=None, stream = None, scapyThread = None, serverMode=None, fullScan=False, maxNum=1): 84 | OpenvpnKeysFinder.
__init__(self, pid, fullScan=fullScan) 85 | self.stream = stream 86 | self.scapy = scapyThread 87 | #self.serverMode=serverMode 88 | self.maxNum = maxNum 89 | self.session_state_addr=sessionStateAddr 90 | self.inbound=dict() 91 | self.outbound=dict() 92 | return 93 | 94 | def run(self): 95 | ''' launch sniffer and decrypter threads ''' 96 | if self.scapy is None: 97 | from finder import launchScapy, getConnectionForPID 98 | self.scapy=launchScapy() 99 | conn = getConnectionForPID(self.pid) 100 | self.stream = self.scapy.makeStream(conn) 101 | elif not self.scapy.thread.isAlive(): 102 | self.scapy.thread.start() 103 | # ptrace ssh 104 | if self.session_state_addr is None: 105 | self.ciphers,self.session_state_addr=self.findActiveKeys(maxNum = self.maxNum) 106 | else: 107 | self.ciphers,self.session_state_addr=self.findActiveKeys(offset=self.session_state_addr) 108 | if self.ciphers is None: 109 | raise ValueError('Struct not found') 110 | # unstop() the process 111 | ### Forked no useful self.process.cont() 112 | # process is running... 
sniffer is listening 113 | log.info('Please make some ssh traffic') 114 | self.init() 115 | self.loop() 116 | log.info("done for pid %d, struct at 0x%lx"%(self.process.pid, self.session_state_addr)) 117 | return 118 | 119 | def init(self): 120 | ''' plug sockets, packetizer and outputs together ''' 121 | receiveCtx,sendCtx = self.ciphers.getCiphers() 122 | # Inbound 123 | log.info('activate INBOUND receive') 124 | self.inbound['context'] = receiveCtx 125 | self.inbound['socket'] = self.stream.getInbound().getSocket() 126 | self.inbound['packetizer'] = Packetizer( self.inbound['socket'] ) 127 | self.inbound['packetizer'].set_log(logging.getLogger('inbound.packetizer')) 128 | self.inbound['engine'] = self.activate_cipher(self.inbound['packetizer'], receiveCtx ) 129 | name = 'ssh-%s'%( connectionToString(self.stream.connection) ) 130 | self.inbound['filewriter'] = output.SSHStreamToFile(self.inbound['packetizer'], self.inbound, name) 131 | 132 | # out bound 133 | log.info('activate OUTBOUND send') 134 | self.outbound['context'] = sendCtx 135 | self.outbound['socket'] = self.stream.getOutbound().getSocket() 136 | self.outbound['packetizer'] = Packetizer(self.outbound['socket']) 137 | self.outbound['packetizer'].set_log(logging.getLogger('outbound.packetizer')) 138 | self.outbound['engine'] = self.activate_cipher(self.outbound['packetizer'], self.outbound['context'] ) 139 | name = 'ssh-%s'%( connectionToString(self.stream.connection, reverse=True) ) 140 | self.outbound['filewriter'] = output.SSHStreamToFile(self.outbound['packetizer'], self.outbound, name) 141 | 142 | # worker to watch out for data for decryption 143 | self.worker = output.Supervisor() 144 | self.worker.add( self.inbound['socket'], self.inbound['filewriter'].process ) 145 | self.worker.add( self.outbound['socket'], self.outbound['filewriter'].process ) 146 | ## run streams 147 | self.stream.getInbound().setActiveMode() 148 | self.stream.getOutbound().setActiveMode() 149 | 
threading.Thread(target=self.stream.run).start() 150 | return 151 | 152 | def loop(self): 153 | self.worker.run() 154 | return 155 | 156 | def activate_cipher(self, packetizer, context): 157 | "switch on newly negotiated encryption parameters for inbound traffic" 158 | #packetizer.set_log(log) 159 | #packetizer.set_hexdump(True) 160 | # find Engine from engine.ciphers 161 | engine = CIPHERS[context.name](context) 162 | log.debug( 'cipher:%s block_size: %d key_len: %d '%(context.name, context.block_size, context.key_len ) ) 163 | #print engine, type(engine) 164 | mac = context.mac 165 | if mac is not None: 166 | mac_key = mac.key #XXX TODO 167 | mac_engine = Transport._mac_info[mac.name]['class'] 168 | mac_len = mac.mac_len 169 | # again , we need a stateful HMAC engine. 170 | # we disable HMAC checking to get around. 171 | # fix our engines in packetizer 172 | ## packetizer.set_hexdump(True) 173 | packetizer.set_inbound_cipher(engine, context.block_size, mac_engine, mac_len , mac_key) 174 | if context.comp.enabled != 0: 175 | name = context.comp.name 176 | compress_in = Transport._compression_info[name][1] 177 | log.debug('Switching on inbound compression ...') 178 | packetizer.set_inbound_compressor(compress_in()) 179 | #ok 180 | return engine 181 | 182 | def refresh(self): 183 | log.warning('Refreshing Engine states') 184 | # ptrace ssh 185 | if self.session_state_addr is None: 186 | self.ciphers,self.session_state_addr=self.findActiveKeys(maxNum = self.maxNum) 187 | else: 188 | self.ciphers,self.session_state_addr=self.findActiveKeys(offset=self.session_state_addr) 189 | if self.ciphers is None: 190 | raise ValueError('Struct not found') 191 | # refresh both engine 192 | receiveCtx,sendCtx = self.ciphers.getCiphers() 193 | # Inbound 194 | log.info('activate INBOUND receive') 195 | self.inbound['context'] = receiveCtx 196 | self.outbound['context'] = sendCtx 197 | self.inbound['engine'].sync(self.inbound['context']) 198 | 
self.outbound['engine'].sync(self.outbound['context']) 199 | pass 200 | 201 | 202 | 203 | def argparser(): 204 | parser = argparse.ArgumentParser(prog='sshsnoop', description='Live decription of Openssh traffic.') 205 | parser.add_argument('pid', type=int, help='Target PID') 206 | parser.add_argument('--addr', type=str, help='active_context memory address') 207 | parser.add_argument('--debug', action='store_const', const=True, default=False, help='debug mode') 208 | parser.set_defaults(func=search) 209 | return parser 210 | 211 | def search(args): 212 | if args.debug: 213 | logging.basicConfig(level=logging.DEBUG) 214 | log.debug("==================- DEBUG MODE -================== ") 215 | pid=int(args.pid) 216 | log.info("Target has pid %d"%pid) 217 | if args.addr != None: 218 | offset = int(args.addr,16) 219 | 220 | test = OpenvpnKeysFinder(args.pid, fullScan=True) 221 | test.findKeys() 222 | 223 | #decryptatator=OpenSSHLiveDecryptatator(pid, sessionStateAddr=sessionStateAddr, serverMode=serverMode) 224 | #decryptatator.run() 225 | #log.info("done for pid %d, struct at 0x%lx"%(pid,decryptatator.session_state_addr)) 226 | sys.exit(0) 227 | return -1 228 | 229 | 230 | def main(argv): 231 | logging.basicConfig(level=logging.INFO) 232 | 233 | # we must have big privileges... 
234 | if os.getuid() + os.geteuid() != 0: 235 | log.error("You must be root/using sudo to read memory and sniff traffic.") 236 | return 237 | 238 | parser = argparser() 239 | opts = parser.parse_args(argv) 240 | try: 241 | opts.func(opts) 242 | except ImportError,e: 243 | log.error('Struct type does not exist.') 244 | print e 245 | 246 | return 0 247 | 248 | 249 | 250 | 251 | if __name__ == "__main__": 252 | main(sys.argv[1:]) 253 | 254 | 255 | -------------------------------------------------------------------------------- /sslsnoop/output.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import os 10 | import logging 11 | import sys 12 | import time 13 | import io 14 | import select 15 | import socket 16 | import pickle 17 | import threading 18 | from threading import Thread 19 | 20 | from paramiko_packet import NeedRekeyException 21 | from paramiko.ssh_exception import SSHException 22 | from paramiko.common import * 23 | 24 | from stream import MissingDataException 25 | 26 | 27 | log=logging.getLogger('output') 28 | 29 | MAX_KEYS=255 30 | 31 | 32 | class FileWriter: 33 | def __init__(self,prefix,suffix,folder): 34 | self.prefix=prefix 35 | self.suffix=suffix 36 | self.folder=folder 37 | def get_valid_filename(self): 38 | filename_FMT="%s-%d.%s" 39 | for i in xrange(1,MAX_KEYS): 40 | filename=filename_FMT%(self.prefix,i,self.suffix) 41 | afilename=os.path.normpath(os.path.sep.join([self.folder,filename])) 42 | if not os.access(afilename,os.F_OK): 43 | return afilename 44 | # 45 | log.error("Too many file keys extracted in %s directory"%(self.folder)) 46 | return None 47 | def writeToFile(self,instance): 48 | fname = self.get_valid_filename() 49 | with open(fname, 'wb') as f: 50 | pickle.dump(instance, f) 51 | return fname 52 | 53 | 
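`FileWriter` above probes `prefix-N.suffix` names until it hits one that does not exist yet, which is how the dumped keys end up as `id_rsa-1.key`, `id_rsa-2.key`, and so on in `outputs/`. The same scheme as a small standalone function (a hypothetical helper for illustration, not part of sslsnoop):

```python
import os
import tempfile

def get_valid_filename(folder, prefix, suffix, max_keys=255):
    # try prefix-1.suffix, prefix-2.suffix, ... until an unused name is found
    for i in range(1, max_keys):
        afilename = os.path.normpath(
            os.path.join(folder, '%s-%d.%s' % (prefix, i, suffix)))
        if not os.access(afilename, os.F_OK):
            return afilename
    return None  # max_keys names already taken, as FileWriter logs

folder = tempfile.mkdtemp()
first = get_valid_filename(folder, 'id_rsa', 'key')   # id_rsa-1.key
open(first, 'w').close()                              # occupy the first slot
second = get_valid_filename(folder, 'id_rsa', 'key')  # id_rsa-2.key
```

Note the `os.access(..., os.F_OK)` probe is racy between check and create; that is acceptable here because only one extraction process writes at a time.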
class SSHStreamToFile(): 54 | ''' Pipes the data from a (ssh) socket into a different file for each packet type. 55 | supposedly, this would demux channels into files. 56 | We still need to differentiate at a higher level, with two SSHStreamToFile, 57 | between upload and download. 58 | ''' 59 | BUFSIZE=4096 60 | def __init__(self, packetizer, ctx, basename, folder='outputs', fmt="%Y%m%d-%H%M%S"): 61 | self.packetizer = packetizer 62 | self.datename = "%s"%time.strftime(fmt,time.gmtime()) 63 | self.fname = os.path.sep.join([folder,basename]) 64 | self.outs = dict() 65 | self.engine = ctx.engine 66 | self.socket = ctx.state.getSocket() 67 | ## 68 | self.lastMessage=None 69 | self.decrypt_errors=0 70 | return 71 | 72 | def _outputStream(self, channel): 73 | name="%s.%s.%d"%(self.fname, self.datename, channel ) 74 | if name in self.outs: 75 | return self.outs[name] 76 | else: 77 | self.outs[name] = io.FileIO(name , 'w' ) 78 | log.info("New Output Filename is %s"%(name)) 79 | log.debug("Output Filename is %s"%(name)) 80 | return self.outs[name] 81 | 82 | def process(self): 83 | try: 84 | m = self._process() 85 | if self.decrypt_errors: 86 | log.info("we read %d blocks/%d bytes and couldn't make sense out of it"%(self.decrypt_errors, self.decrypt_errors*16 )) 87 | log.info("But we made it to %s"%(str(m) ) ) 88 | self.decrypt_errors = 0 89 | except SSHException,e: # only size error... should be only one exception. 90 | #self.decrypt_errors+=1 91 | log.error('SSH exception caught on %s - %s - killing this channel'%(self.fname,e)) 92 | #return 93 | raise EOFError(e) 94 | 95 | 96 | def _process(self): 97 | ''' m can be rewind()-ed , __str__ ()-ed or others...
98 | ''' 99 | _expected_packet = tuple() 100 | try: 101 | ptype, m = self.packetizer.read_message() 102 | self.lastMessage=m 103 | chanid = m.get_int() 104 | log.debug('Got msg:%d for channel:%d value:%s'%(ptype, chanid, repr(m))) 105 | m.rewind() 106 | #log.error("now message was (%d) : %s"%(len(str(m)),repr(str(m))) ) 107 | #self.lastCounter=self.engine.getCounter() 108 | if ptype != MSG_CHANNEL_DATA: # MSG_CHANNEL_DATA 109 | log.debug("===================== ptype:%d len:%d "%(ptype, len(str(m)) ) ) 110 | except NeedRekeyException,e: 111 | log.warning('=============================== Please refresh keys for rekey') 112 | return e 113 | except OverflowError,e: 114 | log.warning('SSH exception catched/bad packet size on %s'%(self.fname)) 115 | #self.refresher.refresh() 116 | return e 117 | except MissingDataException, e: 118 | log.warning('=============================== Missing data. Please refresh keys for rekey') 119 | return e 120 | 121 | if ptype == MSG_IGNORE: 122 | log.warning('================================== MSG_IGNORE') 123 | return 'MSG_IGNORE' 124 | elif ptype == MSG_DISCONNECT: 125 | chan = m.get_int() 126 | log.info( "==================================== DISCONNECT MESSAGE chan %d"%(chan)) 127 | log.info( m.get_string()) 128 | self.packetizer.close() 129 | raise EOFError('MSG_DISCONNECT') 130 | elif ptype == MSG_DEBUG: 131 | always_display = m.get_boolean() 132 | msg = m.get_string() 133 | lang = m.get_string() 134 | log.warning('Debug msg: ' + util.safe_string(msg)) 135 | return 'MSG_DEBUG' 136 | else: 137 | if len(_expected_packet) > 0: 138 | if ptype not in _expected_packet: 139 | raise SSHException('Expecting packet from %r, got %d' % (_expected_packet, ptype)) 140 | _expected_packet = tuple() 141 | if (ptype >= 30) and (ptype <= 39): 142 | log.info("KEX Message, we need to rekey") 143 | return 'KEX' 144 | # 145 | if ptype == MSG_CHANNEL_DATA: 146 | chanid = m.get_int() 147 | out=self._outputStream(chanid) 148 | ret=out.write( 
str(m.get_string()) ) # TODO: as this is a PoC we assume it's printable characters by default on channel_data 149 | out.flush() # beuahhh 150 | log.debug("%d bytes written for channel %d"%(ret, chanid)) 151 | return m 152 | 153 | class pcapWriter(): 154 | ''' Use scapy to write a data packet to a pcap file ''' 155 | def __init__(self,fname,src,sport,dst,dport): 156 | self.fname = fname 157 | from scapy.all import IP,TCP 158 | self.pkt = IP(src=src,dst=dst)/TCP(sport=sport,dport=dport) 159 | def write(self,data): 160 | from scapy.utils import wrpcap 161 | pkt = self.pkt / data 162 | return wrpcap(self.fname, pkt, append=True) # append, so successive writes do not overwrite the file 163 | def flush(self): 164 | pass 165 | 166 | class SSHStreamToPcap(SSHStreamToFile): 167 | ''' Write a decoded packet to a pcap file 168 | ''' 169 | pcapwriter = None 170 | def __init__(self, packetizer, ctx, basename, folder='outputs', fmt="%Y%m%d-%H%M%S"): 171 | SSHStreamToFile.__init__(self, packetizer, ctx, basename, folder=folder, fmt=fmt) 172 | self.fname = os.path.sep.join([self.fname,'pcap']) 173 | self.pcapwriter = pcapWriter(self.fname, src,sport,dst,dport) # FIXME: src/sport/dst/dport should come from the connection state in ctx 174 | def _outputStream(self, channel): 175 | name="%s.%s.%d"%(self.fname, self.datename, channel ) 176 | log.debug("Output Filename is %s"%(name)) 177 | return self.pcapwriter 178 | 179 | 180 | class Supervisor(threading.Thread): 181 | def __init__(self ): 182 | threading.Thread.__init__(self); self.stopSwitch=threading.Event() 183 | self.readables=dict() 184 | self.selectables=set() 185 | self._readables=dict() 186 | self._selectables=set() 187 | self.lock=threading.Lock() 188 | self.todo=False 189 | return 190 | 191 | def add(self, socket, handler): 192 | ''' 193 | @param socket: the socket to select() onto 194 | @param handler: the callable to run when data arrives.
195 | ''' 196 | self.lock.acquire() 197 | self.readables[socket] = handler 198 | self.selectables.add(socket) 199 | self.todo=True 200 | self.lock.release() 201 | return 202 | 203 | def sub(self, socket): 204 | self.lock.acquire() 205 | del self.readables[socket] 206 | self.selectables.discard(socket) 207 | self.todo=True 208 | self.lock.release() 209 | return 210 | 211 | def _syncme(self): 212 | self.lock.acquire() 213 | self._readables = dict(self.readables) 214 | self._selectables = set(self.selectables) 215 | self.todo=False 216 | self.lock.release() 217 | return 218 | 219 | def runCheck(self): 220 | return not self.stopSwitch.isSet() 221 | 222 | def pleaseStop(self): 223 | return self.stopSwitch.set() 224 | 225 | def run(self): 226 | # thread inbound reads and writes 227 | while True: 228 | # check 229 | if self.todo: 230 | self._syncme() # actually call the sync, then use the private snapshots below 231 | r,w,o = select.select(self._selectables,[],[],2) 232 | if len(r) == 0: 233 | log.debug("select woke up without anything to read... going back to select()") 234 | if not self.runCheck(): 235 | return # quit 236 | continue 237 | # read them and write them 238 | for socket_ in r: 239 | try: 240 | self._readables[socket_]() 241 | log.debug("read and write done for %s"%(socket_)) 242 | except EOFError, e: 243 | log.debug('forgetting about this output engine: %s'%(e)) 244 | self.sub(socket_) 245 | continue 246 | #loop 247 | log.info('Supervisor finished running') 248 | return 249 | 250 | 251 | class RawPacketsToFile: 252 | ''' Dumps a simplex stream to a raw data file.
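The Supervisor above is a classic select()-and-dispatch loop: a map of sockets to handler callables, where a handler that raises EOFError gets unregistered. A minimal standalone sketch of one iteration (`pump_once` and the handler map are illustrative names, not sslsnoop API):

```python
import select


def pump_once(handlers, timeout=2.0):
    """One Supervisor-style iteration: select() over the registered
    sockets and call the handler of each readable one. A handler that
    raises EOFError is unregistered, like Supervisor.sub() above."""
    readable, _, _ = select.select(list(handlers), [], [], timeout)
    for sock in readable:
        try:
            handlers[sock]()
        except EOFError:
            # this stream is done, forget about it
            del handlers[sock]
    return len(readable)
```

Calling this in a `while` loop, with registration guarded by a lock, gives the same shape as Supervisor.run().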
253 | ''' 254 | def __init__(self, socket_, fname): 255 | self.socket = socket_ 256 | self.file = file(fname,'w') 257 | 258 | def run(self): 259 | try: 260 | while True: 261 | data = self.socket.recv(4096) 262 | if len(data) == 0: 263 | break 264 | self.file.write(data) 265 | log.debug('Written %d bytes'%(len(data))) 266 | except socket.error, e: 267 | pass # bad file descriptor 268 | self.file.close() 269 | log.info('RawPacketsToFile finished - %s'%(self.file.name)) 270 | 271 | 272 | 273 | 274 | 275 | 276 | 277 | 278 | 279 | 280 | 281 | 282 | 283 | 284 | 285 | 286 | -------------------------------------------------------------------------------- /sslsnoop/paramiko_packet.py: -------------------------------------------------------------------------------- 1 | # Copyright (C) 2003-2007 Robey Pointer 2 | # 3 | # This file is part of paramiko. 4 | # 5 | # Paramiko is free software; you can redistribute it and/or modify it under the 6 | # terms of the GNU Lesser General Public License as published by the Free 7 | # Software Foundation; either version 2.1 of the License, or (at your option) 8 | # any later version. 9 | # 10 | # Paramiko is distrubuted in the hope that it will be useful, but WITHOUT ANY 11 | # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR 12 | # A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more 13 | # details. 14 | # 15 | # You should have received a copy of the GNU Lesser General Public License 16 | # along with Paramiko; if not, write to the Free Software Foundation, Inc., 17 | # 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. 18 | 19 | """ 20 | Packetizer. 
21 | """ 22 | 23 | import errno 24 | import select 25 | import socket 26 | import struct 27 | import threading 28 | import time 29 | 30 | from paramiko.common import * 31 | from paramiko import util 32 | from paramiko.ssh_exception import SSHException 33 | from paramiko.message import Message 34 | 35 | PACKET_MAX_SIZE = (256 * 1024) #packet.c 36 | 37 | got_r_hmac = False 38 | try: 39 | import r_hmac 40 | got_r_hmac = True 41 | except ImportError: 42 | pass 43 | def compute_hmac(key, message, digest_class): 44 | if got_r_hmac: 45 | return r_hmac.HMAC(key, message, digest_class).digest() 46 | from Crypto.Hash import HMAC 47 | return HMAC.HMAC(key, message, digest_class).digest() 48 | 49 | 50 | class NeedRekeyException (Exception): 51 | pass 52 | 53 | class SSHException2 (SSHException): 54 | pass 55 | 56 | 57 | class Packetizer (object): 58 | """ 59 | Implementation of the base SSH packet protocol. 60 | """ 61 | 62 | # READ the secsh RFC's before raising these values. if anything, 63 | # they should probably be lower.
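compute_hmac() above prefers the optional r_hmac module and falls back to PyCrypto. With only the standard library, the per-packet MAC that read_message() later verifies (an HMAC over the big-endian sequence number and packet length plus the decrypted packet, truncated to the negotiated MAC size) can be sketched as follows; `compute_ssh_mac` and its defaults are illustrative names, not sslsnoop API:

```python
import hashlib
import hmac
import struct


def compute_ssh_mac(key, seqno, packet, digestmod=hashlib.md5, mac_len=16):
    """MAC over seqno + packet length + packet body, truncated to
    mac_len, mirroring the mac_payload built in read_message()."""
    mac_payload = struct.pack('>II', seqno, len(packet)) + packet
    return hmac.new(key, mac_payload, digestmod).digest()[:mac_len]
```

Comparing this value against the trailing `mac_size_in` bytes of the wire packet is exactly the 'Mismatched MAC' check performed below.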
64 | REKEY_PACKETS = pow(2, 30) 65 | REKEY_BYTES = pow(2, 30) 66 | 67 | def __init__(self, socket): 68 | self.__socket = socket 69 | self.__logger = None 70 | self.__closed = False 71 | self.__dump_packets = False 72 | self.__need_rekey = False 73 | self.__init_count = 0 74 | self.__remainder = '' 75 | 76 | # used for noticing when to re-key: 77 | self.__sent_bytes = 0 78 | self.__sent_packets = 0 79 | self.__received_bytes = 0 80 | self.__received_packets = 0 81 | self.__received_packets_overflow = 0 82 | 83 | # current inbound/outbound ciphering: 84 | self.__block_size_out = 8 85 | self.__block_size_in = 8 86 | self.__mac_size_out = 0 87 | self.__mac_size_in = 0 88 | self.__block_engine_out = None 89 | self.__block_engine_in = None 90 | self._mac_enabled = False 91 | self.__mac_engine_out = None 92 | self.__mac_engine_in = None 93 | self.__mac_key_out = '' 94 | self.__mac_key_in = '' 95 | self.__compress_engine_out = None 96 | self.__compress_engine_in = None 97 | self.__sequence_number_out = 0L 98 | self.__sequence_number_in = 0L 99 | 100 | # lock around outbound writes (packet computation) 101 | self.__write_lock = threading.RLock() 102 | 103 | # keepalives: 104 | self.__keepalive_interval = 0 105 | self.__keepalive_last = time.time() 106 | self.__keepalive_callback = None 107 | 108 | def set_log(self, log): 109 | """ 110 | Set the python log object to use for logging. 111 | """ 112 | self.__logger = log 113 | 114 | def set_inbound_cipher(self, block_engine, block_size, mac_engine, mac_size, mac_key): 115 | """ 116 | Switch inbound data cipher. 
117 | """ 118 | self.__block_engine_in = block_engine 119 | self.__block_size_in = block_size 120 | self.__mac_engine_in = mac_engine 121 | self.__mac_size_in = mac_size 122 | self.__mac_key_in = mac_key 123 | self.__received_bytes = 0 124 | self.__received_packets = 0 125 | self.__received_packets_overflow = 0 126 | # wait until the reset happens in both directions before clearing rekey flag 127 | self.__init_count |= 2 128 | if self.__init_count == 3: 129 | self.__init_count = 0 130 | self.__need_rekey = False 131 | 132 | def set_outbound_compressor(self, compressor): 133 | self.__compress_engine_out = compressor 134 | 135 | def set_inbound_compressor(self, compressor): 136 | self.__compress_engine_in = compressor 137 | 138 | def close(self): 139 | self.__closed = True 140 | self.__socket.close() 141 | 142 | def set_hexdump(self, hexdump): 143 | self.__dump_packets = hexdump 144 | 145 | def get_hexdump(self): 146 | return self.__dump_packets 147 | 148 | def get_mac_size_in(self): 149 | return self.__mac_size_in 150 | 151 | def get_mac_size_out(self): 152 | return self.__mac_size_out 153 | 154 | def need_rekey(self): 155 | """ 156 | Returns C{True} if a new set of keys needs to be negotiated. This 157 | will be triggered during a packet read or write, so it should be 158 | checked after every read or write, or at least after every few. 159 | 160 | @return: C{True} if a new set of keys needs to be negotiated 161 | """ 162 | return self.__need_rekey 163 | 164 | def set_keepalive(self, interval, callback): 165 | """ 166 | Turn on/off the callback keepalive. If C{interval} seconds pass with 167 | no data read from or written to the socket, the callback will be 168 | executed and the timer will be reset. 
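The keepalive contract described above (fire the callback when `interval` seconds pass with no traffic, then restart the timer) is small enough to sketch in isolation; the `Keepalive` class below is illustrative, not the Packetizer's actual bookkeeping:

```python
import time


class Keepalive(object):
    """Sketch of the interval/callback logic above: tick() fires the
    callback once `interval` seconds elapse with no traffic, then
    restarts the timer. Any read or write would reset `last`."""
    def __init__(self, interval, callback):
        self.interval = interval
        self.callback = callback
        self.last = time.time()

    def tick(self, now=None):
        now = time.time() if now is None else now
        if self.interval and now > self.last + self.interval:
            self.callback()
            self.last = now
            return True
        return False
```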
169 | """ 170 | self.__keepalive_interval = interval 171 | self.__keepalive_callback = callback 172 | self.__keepalive_last = time.time() 173 | 174 | def read_all(self, n, check_rekey=False): 175 | """ 176 | Read as close to N bytes as possible, blocking as long as necessary. 177 | 178 | @param n: number of bytes to read 179 | @type n: int 180 | @return: the data read 181 | @rtype: str 182 | @raise EOFError: if the socket was closed before all the bytes could 183 | be read 184 | """ 185 | out = '' 186 | # handle over-reading from reading the banner line 187 | if len(self.__remainder) > 0: 188 | out = self.__remainder[:n] 189 | self.__remainder = self.__remainder[n:] 190 | n -= len(out) 191 | if PY22: 192 | return self._py22_read_all(n, out) 193 | while n > 0: 194 | got_timeout = False 195 | try: 196 | self._log(DEBUG,'self.__socket.recv(%d) %d'%(n,self.__received_bytes)) 197 | x = self.__socket.recv(n) 198 | if len(x) == 0: 199 | raise EOFError() 200 | out += x 201 | n -= len(x) 202 | except socket.timeout: 203 | got_timeout = True 204 | except socket.error, e: 205 | # on Linux, sometimes instead of socket.timeout, we get 206 | # EAGAIN. this is a bug in recent (> 2.6.9) kernels but 207 | # we need to work around it. 208 | if (type(e.args) is tuple) and (len(e.args) > 0) and (e.args[0] == errno.EAGAIN): 209 | got_timeout = True 210 | elif (type(e.args) is tuple) and (len(e.args) > 0) and (e.args[0] == errno.EINTR): 211 | # syscall interrupted; try again 212 | pass 213 | elif self.__closed: 214 | raise EOFError() 215 | else: 216 | raise 217 | if got_timeout: 218 | if self.__closed: 219 | raise EOFError() 220 | if check_rekey and (len(out) == 0) and self.__need_rekey: 221 | raise NeedRekeyException() 222 | self._check_keepalive() 223 | return out 224 | 225 | 226 | def readline(self, timeout): 227 | """ 228 | Read a line from the socket. We assume no data is pending after the 229 | line, so it's okay to attempt large reads. 
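read_all() above boils down to "recv() until N bytes arrive, retrying on EAGAIN/EINTR, and treating an empty read as end-of-file". A standalone sketch of that loop (`read_exact` is an illustrative name):

```python
import errno
import socket


def read_exact(sock, n):
    """Keep recv()-ing until exactly n bytes arrive, treating
    EAGAIN/EINTR as retryable (like read_all() above) and an empty
    read as end-of-file."""
    out = b''
    while n > 0:
        try:
            chunk = sock.recv(n)
        except socket.timeout:
            continue
        except socket.error as e:
            # EAGAIN can show up instead of socket.timeout; EINTR means
            # the syscall was interrupted -- both are retryable
            if e.args and e.args[0] in (errno.EAGAIN, errno.EINTR):
                continue
            raise
        if not chunk:
            raise EOFError('socket closed with %d bytes still expected' % n)
        out += chunk
        n -= len(chunk)
    return out
```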
230 | """ 231 | buf = self.__remainder 232 | while not '\n' in buf: 233 | buf += self._read_timeout(timeout) 234 | n = buf.index('\n') 235 | self.__remainder = buf[n+1:] 236 | buf = buf[:n] 237 | if (len(buf) > 0) and (buf[-1] == '\r'): 238 | buf = buf[:-1] 239 | return buf 240 | 241 | 242 | def read_message(self): 243 | """ 244 | Only one thread should ever be in this function (no other locking is 245 | done). 246 | 247 | @raise SSHException: if the packet is mangled 248 | @raise NeedRekeyException: if the transport should rekey 249 | """ 250 | header = self.read_all(self.__block_size_in, check_rekey=True) 251 | if self.__block_engine_in != None: 252 | self._log(DEBUG, 'read %d header in paramiko before decrypt: %s '%(self.__block_size_in, repr(header) )); 253 | header = self.__block_engine_in.decrypt(header) 254 | self._log(DEBUG, 'DECRYPTING HEADER : %s'%( repr(header) )); 255 | if self.__dump_packets: 256 | self._log(DEBUG, util.format_binary(header, 'IN: ')); 257 | 258 | packet_size = struct.unpack('>I', header[:4])[0] 259 | self._log(DEBUG, 'packet_size: %d max:%d'%(packet_size,PACKET_MAX_SIZE)); 260 | if (packet_size > PACKET_MAX_SIZE): 261 | self._log(DEBUG, 'packet_size: %d max:%d'%(packet_size,PACKET_MAX_SIZE)); 262 | raise SSHException2('Invalid packet size') 263 | # leftover contains decrypted bytes from the first block (after the length field) 264 | leftover = header[4:] 265 | if (packet_size - len(leftover)) % self.__block_size_in != 0: 266 | raise SSHException('Invalid packet blocking') 267 | buf = self.read_all(packet_size + self.__mac_size_in - len(leftover)) 268 | self._log(DEBUG,"%d self.read_all(packet_size(%d) + self.__mac_size_in(%d) - len(leftover)(%d))"%( len(buf), 269 | packet_size,self.__mac_size_in,len(leftover)) ) 270 | 271 | packet = buf[:packet_size - len(leftover)] 272 | post_packet = buf[packet_size - len(leftover):] 273 | if self.__block_engine_in != None: 274 | self._log(DEBUG, 'body in paramiko before decrypt: %s '%( repr(buf) )); 
275 | packet = self.__block_engine_in.decrypt(packet) 276 | self._log(DEBUG, 'DECRYPTING PACKET %s'%( repr(packet) )); 277 | if self.__dump_packets: 278 | self._log(DEBUG, util.format_binary(packet, 'IN: ')); 279 | packet = leftover + packet 280 | ## XXX Tsssi... a protected method would have been nice 281 | if self._mac_enabled and self.__mac_size_in > 0: 282 | mac = post_packet[:self.__mac_size_in] 283 | mac_payload = struct.pack('>II', self.__sequence_number_in, packet_size) + packet 284 | my_mac = compute_hmac(self.__mac_key_in, mac_payload, self.__mac_engine_in)[:self.__mac_size_in] 285 | if my_mac != mac: 286 | raise SSHException('Mismatched MAC') 287 | padding = ord(packet[0]) 288 | payload = packet[1:packet_size - padding] 289 | #randpool.add_event() 290 | if self.__dump_packets: 291 | self._log(DEBUG, 'Got payload (%d bytes, %d padding)' % (packet_size, padding)) 292 | 293 | if self.__compress_engine_in is not None: 294 | payload = self.__compress_engine_in(payload) 295 | self._log(DEBUG, 'Decompressed payload ') 296 | 297 | msg = Message(payload[1:]) 298 | msg.seqno = self.__sequence_number_in 299 | self.__sequence_number_in = (self.__sequence_number_in + 1) & 0xffffffffL 300 | 301 | # check for rekey 302 | self.__received_bytes += packet_size + self.__mac_size_in + 4 303 | self.__received_packets += 1 304 | if self.__need_rekey: 305 | # we've asked to rekey -- give them 20 packets to comply before 306 | # dropping the connection 307 | self._log(DEBUG, 'Rekey needed') 308 | 309 | self.__received_packets_overflow += 1 310 | if self.__received_packets_overflow >= 20: 311 | raise SSHException('Remote transport is ignoring rekey requests') 312 | elif (self.__received_packets >= self.REKEY_PACKETS) or \ 313 | (self.__received_bytes >= self.REKEY_BYTES): 314 | # only ask once for rekeying 315 | self._log(DEBUG, 'Rekeying (hit %d packets, %d bytes received)' % 316 | (self.__received_packets, self.__received_bytes)) 317 | self.__received_packets_overflow = 0 318 |
self._trigger_rekey() 319 | 320 | cmd = ord(payload[0]) 321 | if cmd in MSG_NAMES: 322 | cmd_name = MSG_NAMES[cmd] 323 | else: 324 | cmd_name = '$%x' % cmd 325 | if self.__dump_packets: 326 | self._log(DEBUG, 'Read packet <%s>, length %d' % (cmd_name, len(payload))) 327 | return cmd, msg 328 | 329 | 330 | ########## protected 331 | 332 | 333 | def _log(self, level, msg): 334 | if self.__logger is None: 335 | return 336 | if issubclass(type(msg), list): 337 | for m in msg: 338 | self.__logger.log(level, m) 339 | else: 340 | self.__logger.log(level, msg) 341 | 342 | def _check_keepalive(self): 343 | if (not self.__keepalive_interval) or (not self.__block_engine_out) or \ 344 | self.__need_rekey: 345 | # wait till we're encrypting, and not in the middle of rekeying 346 | return 347 | now = time.time() 348 | if now > self.__keepalive_last + self.__keepalive_interval: 349 | self.__keepalive_callback() 350 | self.__keepalive_last = now 351 | 352 | def _py22_read_all(self, n, out): 353 | while n > 0: 354 | r, w, e = select.select([self.__socket], [], [], 0.1) 355 | if self.__socket not in r: 356 | if self.__closed: 357 | raise EOFError() 358 | self._check_keepalive() 359 | else: 360 | x = self.__socket.recv(n) 361 | if len(x) == 0: 362 | raise EOFError() 363 | out += x 364 | n -= len(x) 365 | return out 366 | 367 | def _py22_read_timeout(self, timeout): 368 | start = time.time() 369 | while True: 370 | r, w, e = select.select([self.__socket], [], [], 0.1) 371 | if self.__socket in r: 372 | x = self.__socket.recv(1) 373 | if len(x) == 0: 374 | raise EOFError() 375 | break 376 | if self.__closed: 377 | raise EOFError() 378 | now = time.time() 379 | if now - start >= timeout: 380 | raise socket.timeout() 381 | return x 382 | 383 | def _read_timeout(self, timeout): 384 | if PY22: 385 | return self._py22_read_timeout(timeout) 386 | start = time.time() 387 | while True: 388 | try: 389 | x = self.__socket.recv(128) 390 | if len(x) == 0: 391 | raise EOFError() 392 | break 393 | 
except socket.timeout: 394 | pass 395 | if self.__closed: 396 | raise EOFError() 397 | now = time.time() 398 | if now - start >= timeout: 399 | raise socket.timeout() 400 | return x 401 | 402 | def _trigger_rekey(self): 403 | # outside code should check for this flag 404 | self.__need_rekey = True 405 | 406 | def __str__(self): 407 | return " -------------------------------------------------------------------------------- /sslsnoop/ssh-types.c: -------------------------------------------------------------------------------- 4 | * Copyright (c) 1995 Tatu Ylonen , Espoo, Finland 5 | * All rights reserved 6 | * This file contains code implementing the packet protocol and communication 7 | * with the other side. This same code is used both on client and server side. 8 | * 9 | * As far as I am concerned, the code I have written for this software 10 | * can be used freely for any purpose. Any derived versions of this 11 | * software must be clearly marked as such, and if the derived work is 12 | * incompatible with the protocol description in the RFC file, it must be 13 | * called by a name other than "ssh" or "Secure Shell". 14 | * 15 | * 16 | * SSH2 packet format added by Markus Friedl. 17 | * Copyright (c) 2000, 2001 Markus Friedl. All rights reserved. 18 | * 19 | * Redistribution and use in source and binary forms, with or without 20 | * modification, are permitted provided that the following conditions 21 | * are met: 22 | * 1. Redistributions of source code must retain the above copyright 23 | * notice, this list of conditions and the following disclaimer. 24 | * 2. Redistributions in binary form must reproduce the above copyright 25 | * notice, this list of conditions and the following disclaimer in the 26 | * documentation and/or other materials provided with the distribution. 27 | * 28 | * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR 29 | * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES 30 | * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
31 | * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, 32 | * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT 33 | * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 34 | * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 35 | * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 36 | * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 37 | * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 38 | */ 39 | 40 | #include 41 | 42 | #include 43 | #include "openbsd-compat/sys-queue.h" 44 | #include 45 | #include 46 | #ifdef HAVE_SYS_TIME_H 47 | # include 48 | #endif 49 | 50 | #include 51 | #include 52 | #include 53 | 54 | #include 55 | #include 56 | #include 57 | #include 58 | #include 59 | #include 60 | #include 61 | 62 | #include "xmalloc.h" 63 | #include "buffer.h" 64 | #include "packet.h" 65 | #include "crc32.h" 66 | #include "compress.h" 67 | #include "deattack.h" 68 | #include "channels.h" 69 | #include "compat.h" 70 | #include "ssh1.h" 71 | #include "ssh2.h" 72 | #include "cipher.h" 73 | #include "key.h" 74 | #include "kex.h" 75 | #include "mac.h" 76 | #include "log.h" 77 | #include "canohost.h" 78 | #include "misc.h" 79 | #include "ssh.h" 80 | #include "roaming.h" 81 | 82 | #ifdef PACKET_DEBUG 83 | #define DBG(x) x 84 | #else 85 | #define DBG(x) 86 | #endif 87 | 88 | #define PACKET_MAX_SIZE (256 * 1024) 89 | 90 | struct packet_state { 91 | u_int32_t seqnr; 92 | u_int32_t packets; 93 | u_int64_t blocks; 94 | u_int64_t bytes; 95 | }; 96 | 97 | struct packet { 98 | TAILQ_ENTRY(packet) next; 99 | u_char type; 100 | Buffer payload; 101 | }; 102 | 103 | struct session_state { 104 | /* 105 | * This variable contains the file descriptors used for 106 | * communicating with the other side. connection_in is used for 107 | * reading; connection_out for writing. 
These can be the same 108 | * descriptor, in which case it is assumed to be a socket. 109 | */ 110 | int connection_in; 111 | int connection_out; 112 | 113 | /* Protocol flags for the remote side. */ 114 | u_int remote_protocol_flags; 115 | 116 | /* Encryption context for receiving data. Only used for decryption. */ 117 | CipherContext receive_context; 118 | 119 | /* Encryption context for sending data. Only used for encryption. */ 120 | CipherContext send_context; 121 | 122 | /* Buffer for raw input data from the socket. */ 123 | Buffer input; 124 | 125 | /* Buffer for raw output data going to the socket. */ 126 | Buffer output; 127 | 128 | /* Buffer for the partial outgoing packet being constructed. */ 129 | Buffer outgoing_packet; 130 | 131 | /* Buffer for the incoming packet currently being processed. */ 132 | Buffer incoming_packet; 133 | 134 | /* Scratch buffer for packet compression/decompression. */ 135 | Buffer compression_buffer; 136 | int compression_buffer_ready; 137 | 138 | /* 139 | * Flag indicating whether packet compression/decompression is 140 | * enabled. 141 | */ 142 | int packet_compression; 143 | 144 | /* default maximum packet size */ 145 | u_int max_packet_size; 146 | 147 | /* Flag indicating whether this module has been initialized. */ 148 | int initialized; 149 | 150 | /* Set to true if the connection is interactive. */ 151 | int interactive_mode; 152 | 153 | /* Set to true if we are the server side. */ 154 | int server_side; 155 | 156 | /* Set to true if we are authenticated. 
*/ 157 | int after_authentication; 158 | 159 | int keep_alive_timeouts; 160 | 161 | /* The maximum time that we will wait to send or receive a packet */ 162 | int packet_timeout_ms; 163 | 164 | /* Session key information for Encryption and MAC */ 165 | Newkeys *newkeys[MODE_MAX]; 166 | struct packet_state p_read, p_send; 167 | 168 | u_int64_t max_blocks_in, max_blocks_out; 169 | u_int32_t rekey_limit; 170 | 171 | /* Session key for protocol v1 */ 172 | u_char ssh1_key[SSH_SESSION_KEY_LENGTH]; 173 | u_int ssh1_keylen; 174 | 175 | /* roundup current message to extra_pad bytes */ 176 | u_char extra_pad; 177 | 178 | /* XXX discard incoming data after MAC error */ 179 | u_int packet_discard; 180 | Mac *packet_discard_mac; 181 | 182 | /* Used in packet_read_poll2() */ 183 | u_int packlen; 184 | 185 | /* Used in packet_send2 */ 186 | int rekeying; 187 | 188 | /* Used in packet_set_interactive */ 189 | int set_interactive_called; 190 | 191 | /* Used in packet_set_maxsize */ 192 | int set_maxsize_called; 193 | 194 | TAILQ_HEAD(, packet) outgoing; 195 | }; 196 | 197 | static struct session_state *active_state, *backup_state; 198 | 199 | static struct session_state * 200 | alloc_session_state(void) 201 | { 202 | struct session_state *s = xcalloc(1, sizeof(*s)); 203 | 204 | s->connection_in = -1; 205 | s->connection_out = -1; 206 | s->max_packet_size = 32768; 207 | s->packet_timeout_ms = -1; 208 | return s; 209 | } 210 | #include "includes.h" 211 | 212 | #include 213 | 214 | #include 215 | 216 | #include 217 | #include 218 | 219 | #include "xmalloc.h" 220 | #include "log.h" 221 | #include "cipher.h" 222 | 223 | /* compatibility with old or broken OpenSSL versions */ 224 | #include "openbsd-compat/openssl-compat.h" 225 | 226 | 227 | struct Cipher { 228 | char *name; 229 | int number; /* for ssh1 only */ 230 | u_int block_size; 231 | u_int key_len; 232 | u_int discard_len; 233 | u_int cbc_mode; 234 | const EVP_CIPHER *(*evptype)(void); 235 | }; 236 | 237 | /** umac */ 238 | 239 
| typedef u_int8_t UINT8; /* 1 byte */ 240 | typedef u_int16_t UINT16; /* 2 byte */ 241 | typedef u_int32_t UINT32; /* 4 byte */ 242 | typedef u_int64_t UINT64; /* 8 bytes */ 243 | typedef unsigned int UWORD; /* Register */ 244 | 245 | #define AES_BLOCK_LEN 16 246 | #define UMAC_OUTPUT_LEN 8 /* Allowable: 4, 8, 12, 16 */ 247 | 248 | /* OpenSSL's AES */ 249 | //#include "openbsd-compat/openssl-compat.h" 250 | //#ifndef USE_BUILTIN_RIJNDAEL 251 | # include <openssl/aes.h> 252 | //#endif 253 | typedef AES_KEY aes_int_key[1]; 254 | 255 | //#include 256 | 257 | typedef struct pdf_ctx { 258 | UINT8 cache[AES_BLOCK_LEN]; /* Previous AES output is saved */ 259 | UINT8 nonce[AES_BLOCK_LEN]; /* The AES input making above cache */ 260 | aes_int_key prf_key; /* Expanded AES key for PDF */ 261 | } pdf_ctx; 262 | 263 | 264 | #define STREAMS (UMAC_OUTPUT_LEN / 4) /* Number of times hash is applied */ 265 | #define L1_KEY_LEN 1024 /* Internal key bytes */ 266 | #define L1_KEY_SHIFT 16 /* Toeplitz key shift between streams */ 267 | #define L1_PAD_BOUNDARY 32 /* pad message to boundary multiple */ 268 | #define ALLOC_BOUNDARY 16 /* Keep buffers aligned to this */ 269 | #define HASH_BUF_BYTES 64 /* nh_aux_hb buffer multiple */ 270 | 271 | typedef struct nh_ctx{ 272 | UINT8 nh_key [L1_KEY_LEN + L1_KEY_SHIFT * (STREAMS - 1)]; /* NH Key */ 273 | UINT8 data [HASH_BUF_BYTES]; /* Incoming data buffer */ 274 | int next_data_empty; /* Bookkeeping variable for data buffer. */ 275 | int bytes_hashed; /* Bytes (out of L1_KEY_LEN) incorporated.
*/ 276 | UINT64 state[STREAMS]; /* on-line state */ 277 | } nh_ctx; 278 | 279 | typedef struct uhash_ctx { 280 | nh_ctx hash; /* Hash context for L1 NH hash */ 281 | UINT64 poly_key_8[STREAMS]; /* p64 poly keys */ 282 | UINT64 poly_accum[STREAMS]; /* poly hash result */ 283 | UINT64 ip_keys[STREAMS*4]; /* Inner-product keys */ 284 | UINT32 ip_trans[STREAMS]; /* Inner-product translation */ 285 | UINT32 msg_len; /* Total length of data passed */ 286 | /* to uhash */ 287 | } uhash_ctx; 288 | 289 | 290 | typedef struct umac_ctx { 291 | uhash_ctx hash; /* Hash function for message compression */ 292 | pdf_ctx pdf; /* PDF for hashed output */ 293 | void *free_ptr; /* Address to free this struct via */ 294 | } umac_ctx; 295 | 296 | 297 | 298 | //#include 299 | #include 300 | 301 | 302 | 303 | /** 304 | INCLUDES="-I./biblio/openssh-5.5p1/ -I./biblio/openssh-5.5p1/build-deb/ -I./biblio/openssl-0.9.8o/crypto/ -I./biblio/openssl-0.9.8o/" 305 | gcc -g -O2 -Wall -Wpointer-arith -Wuninitialized -Wsign-compare -Wno-pointer-sign -Wformat-security -fno-strict-aliasing -fno-builtin-memset -fstack-protector-all -Os -DSSH_EXTRAVERSION=\"Debian-4ubuntu5\" $INCLUDES -DSSHDIR=\"/etc/ssh\" -D_PATH_SSH_PROGRAM=\"/usr/bin/ssh\" -D_PATH_SSH_ASKPASS_DEFAULT=\"/usr/bin/ssh-askpass\" -D_PATH_SFTP_SERVER=\"/usr/lib/openssh/sftp-server\" -D_PATH_SSH_KEY_SIGN=\"/usr/lib/openssh/ssh-keysign\" -D_PATH_SSH_PKCS11_HELPER=\"/usr/lib/openssh/ssh-pkcs11-helper\" -D_PATH_SSH_PIDDIR=\"/var/run\" -D_PATH_PRIVSEP_CHROOT_DIR=\"/var/run/sshd\" -DSSH_RAND_HELPER=\"/usr/lib/openssh/ssh-rand-helper\" -D_PATH_SSH_DATADIR=\"/usr/share/ssh\" -DHAVE_CONFIG_H ssh-types.c -o ssh-types 306 | 307 | */ 308 | 309 | #define MAX_PACKETS (1U<<31) 310 | 311 | int main(){ 312 | 313 | #ifdef __SIZE_SSL 314 | printf("BIGNUM: %d\n",sizeof(BIGNUM)); 315 | printf("STACK: %d\n",sizeof(STACK)); 316 | printf("CRYPTO_EX_DATA: %d\n",sizeof(CRYPTO_EX_DATA)); 317 | printf("BN_MONT_CTX: %d\n",sizeof(BN_MONT_CTX)); 318 | printf("EVP_PKEY: 
%d\n",sizeof(EVP_PKEY)); 319 | printf("ENGINE_CMD_DEFN: %d\n",sizeof(ENGINE_CMD_DEFN)); 320 | printf("ENGINE: %d\n",sizeof(ENGINE)); 321 | printf("RSA: %d\n",sizeof(RSA)); 322 | printf("DSA: %d\n",sizeof(DSA)); 323 | printf("EVP_CIPHER: %d\n",sizeof(EVP_CIPHER)); 324 | printf("EVP_CIPHER_CTX: %d\n",sizeof(EVP_CIPHER_CTX)); 325 | printf("EVP_MD: %d\n",sizeof(EVP_MD)); 326 | printf("EVP_MD_CTX: %d\n",sizeof(EVP_MD_CTX)); 327 | printf("HMAC_CTX: %d\n",sizeof(HMAC_CTX)); 328 | printf("AES_KEY: %d\n",sizeof(AES_KEY)); 329 | printf("HMAC_MAX_MD_CBLOCK: %d\n", HMAC_MAX_MD_CBLOCK); 330 | printf("EVP_MAX_BLOCK_LENGTH: %d\n", EVP_MAX_BLOCK_LENGTH); 331 | printf("EVP_MAX_IV_LENGTH: %d\n",EVP_MAX_IV_LENGTH); 332 | printf("AES_MAXNR: %d\n",AES_MAXNR); 333 | #else 334 | printf("Cipher: %d\n",sizeof(Cipher)); 335 | printf("CipherContext: %d\n",sizeof(CipherContext)); 336 | printf("Enc: %d\n",sizeof(Enc)); 337 | printf("nh_ctx: %d\n",sizeof(struct nh_ctx)); 338 | printf("uhash_ctx: %d\n",sizeof(struct uhash_ctx)); 339 | printf("pdf_ctx: %d\n",sizeof(struct pdf_ctx)); 340 | printf("umac_ctx: %d\n",sizeof(struct umac_ctx)); 341 | printf("Mac: %d\n",sizeof(Mac)); 342 | printf("Comp: %d\n",sizeof(Comp)); 343 | printf("Newkeys: %d\n",sizeof(Newkeys)); 344 | printf("Buffer: %d\n",sizeof(Buffer)); 345 | printf("packet: %d\n",sizeof(struct packet)); 346 | printf("packet_state: %d\n",sizeof(struct packet_state)); 347 | printf("TAILQ_HEAD_PACKET: %d\n",sizeof(TAILQ_HEAD(,packet))); 348 | printf("TAILQ_ENTRY_PACKET: %d\n",sizeof(TAILQ_ENTRY(packet))); 349 | printf("session_state: %d\n",sizeof(struct session_state)); 350 | printf("UINT32: %d\n",sizeof(UINT32)); 351 | printf("UINT64: %d\n",sizeof(UINT64)); 352 | printf("UINT8: %d\n",sizeof(UINT8)); 353 | printf("AES_BLOCK_LEN: %d\n",AES_BLOCK_LEN); 354 | printf("HASH_BUF_BYTES: %d\n",HASH_BUF_BYTES); 355 | printf("UMAC_OUTPUT_LEN: %d\n",UMAC_OUTPUT_LEN); 356 | printf("SSH_SESSION_KEY_LENGTH: %d\n",SSH_SESSION_KEY_LENGTH); 357 | 
printf("L1_KEY_LEN: %d\n",L1_KEY_LEN); 358 | printf("L1_KEY_SHIFT: %d\n",L1_KEY_SHIFT); 359 | printf("MODE_MAX: %d\n",MODE_MAX); 360 | printf("STREAMS: %d\n",STREAMS); 361 | #endif 362 | } 363 | 364 | 365 | 366 | -------------------------------------------------------------------------------- /sslsnoop/stream.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import logging, os, socket, sys, time 10 | import multiprocessing, Queue 11 | 12 | from lrucache import LRUCache 13 | 14 | WAIT_RETRANSMIT = 20 15 | QSIZE = 5000 16 | 17 | log=logging.getLogger('stream') 18 | 19 | 20 | def hexify(data): 21 | s='' 22 | for i in range(0,len(data)): 23 | s+="%02x"% ord(data[i]) 24 | if i%16==15: 25 | s+="\r\n" 26 | elif i%2==1: 27 | s+=" " 28 | #s+="\r\n" 29 | return s 30 | 31 | 32 | class MissingDataException (Exception): 33 | ''' declare we are lost in translation ''' 34 | def __init__(self,nb=0): 35 | self.nb=nb 36 | 37 | class BadStateError (Exception): 38 | pass 39 | 40 | 41 | class State: 42 | pass 43 | 44 | 45 | class TCPState(State): 46 | ''' TCP state. 47 | enqueueRaw is used by TCP Stream to triage packets in one direction. 48 | STATE SEARCH: packets are then ordered into orderedQueue following a simple TCP seq state machine 49 | STATE ACTIVE: packet payloads are added to the socket in an ordered fashion following a simple TCP seq state machine 50 | orderedQueue contains ordered packets waiting to be processed 51 | - at some point, packets data from ordered Queue must be put in write_socket 52 | - an override can change condition of write.... (decryption testing ?)
53 | - setSearchMode() sets the state to queue packets 54 | - setActiveMode() sets the state to write packets payload 55 | write_socket is used to put data from ordered TCP packets. 56 | read_socket is used by a reader ( Packetizer) to read data in active mode. 57 | 58 | ''' 59 | name = None 60 | packet_count = 0 61 | byte_count = 0 62 | start_seq = None 63 | max_seq = 0 64 | expected_seq = 0 65 | rawQueue = None 66 | orderedQueue = None 67 | write_socket = None 68 | read_socket = None 69 | ts_missing = None 70 | # for variable size retransmission 71 | packets = None 72 | activeLock = None 73 | def __init__(self,name): 74 | self.name=name 75 | self.rawQueue = {} 76 | self.orderedQueue = Queue.Queue(QSIZE) 77 | self.packets = LRUCache(QSIZE) 78 | self.activeLock = multiprocessing.Lock() 79 | # make socket UNIX way. non-existent on Windows 80 | read, write = socket.socketpair() 81 | self.read_socket = socket.socket(_sock=read) ## useless to get a python object we do not override after all? 82 | self.write_socket = write 83 | # ok 84 | self.setSearchMode() 85 | log.debug('%s: created in search mode'%(self.name)) 86 | return 87 | 88 | def _enqueueRaw(self, packet): 89 | ''' the rawQueue gets all unexpected packets. 90 | reordering will happen before adding them to orderedQueue or socket ''' 91 | self.rawQueue[packet.seq]= packet 92 | return 93 | 94 | def _isMissing(self): 95 | return not (self.ts_missing is None) 96 | def _resetMissing(self): 97 | self.ts_missing = None 98 | def _setMissing(self): 99 | self.ts_missing = time.time() 100 | 101 | 102 | def _requeue(self): 103 | ''' Internal func . 104 | get packets from rawQueue and put them in processing orderedQueue or in socket ''' 105 | queue = self.rawQueue.values() 106 | queue.sort(key=lambda p: p.seq) 107 | # add the first and the next ones that are in perfect sequence.
108 | toadd = [queue.pop(0),] 109 | log.debug('%s: _requeue : Popped packet seq : %s'%(self.name, toadd[0].seq)) 110 | for i in xrange(0,len(queue)): 111 | if toadd[-1].seq + len(toadd[-1].payload) == queue[0].seq : 112 | toadd.append(queue.pop(0)) 113 | else: 114 | log.debug('Stop prequeuing toadd[-1].seq+len(toadd[-1].payload): %d , queue[0].seq: %d'%(toadd[-1].seq+len(toadd[-1].payload) , queue[0].seq )) 115 | break 116 | # leave the remaining packets in rawQueue 117 | self.rawQueue=dict( [ (p.seq, p) for p in queue]) 118 | # add to output 119 | for p in toadd: 120 | self.addPacket( p ) 121 | log.debug('Prequeued %d packets, remaining %d, queued from %d to %d '%( 122 | len(toadd), len(self.rawQueue), toadd[0].seq, (toadd[-1].seq + len(toadd[-1].payload)) 123 | )) 124 | # set expected 125 | self.max_seq = toadd[-1].seq 126 | self.expected_seq = toadd[-1].seq + len(toadd[-1].payload) 127 | # reset time counter 128 | if len(self.rawQueue) > 0: 129 | self._setMissing() 130 | else: 131 | self._resetMissing() 132 | return 133 | 134 | def _addPacketToOrderedQueue(self, packet ): 135 | ''' search mode. 136 | Ordered packets are added here before their contents get to the socket. ''' 137 | log.debug('%s: _addPacketToOrderedQueue'%(self.name)) 138 | self.orderedQueue.put(packet) 139 | return 1 140 | 141 | def _addPacketToSocket(self, packet): 142 | ''' active mode ''' 143 | self.activeLock.acquire() 144 | cnt = self.write_socket.send( packet.payload.load ) 145 | self.byte_count += cnt 146 | self.packet_count += 1 147 | self.activeLock.release() 148 | log.debug('%s: Adding packet to socket - %d bytes added'%(self.name, cnt)) 149 | return True 150 | 151 | def _checkStateFalse(self, packet): 152 | return False 153 | 154 | def _checkState1(self, packet): 155 | ''' temp method to get the initial seqnum from the first packet ''' 156 | seq = packet.seq 157 | ## debug head initialisation 158 | if self.start_seq is None: 159 | self.start_seq=seq 160 | self.expected_seq=seq 161 | # head done.
switch to normal behaviour 162 | self.checkState = self._checkState 163 | log.debug('%s: Switching to regular checkState'%(self.name)) 164 | return self._checkState( packet ) 165 | 166 | def _checkState(self, packet): 167 | ''' if packet is expected return True. 168 | else, add it to queues and return False 169 | ''' 170 | ## we should not process empty packets 171 | payloadLen = len(packet.payload) 172 | if payloadLen == 0: 173 | return False 174 | seq = packet.seq 175 | log.debug('%s: Checking state of packet %d exp: %d len: %d'%(self.name, packet.seq, self.expected_seq, len(packet.payload.load))) 176 | 177 | # packet is expected 178 | if seq == self.expected_seq: # JIT 179 | log.debug('%s: got a good packet, adding..'%(self.name)) 180 | self.max_seq = seq 181 | self.expected_seq = seq + payloadLen 182 | self.addPacket(packet) 183 | # 184 | self.packets[seq] = packet # debug the packets 185 | self._resetMissing() 186 | # check if the next one is already in self.rawQueue, now that expected has changed 187 | self.checkForExpectedPackets() 188 | return True 189 | 190 | # packet is future 191 | elif seq > self.expected_seq: 192 | log.debug('%s: Future packet, queuing it...'%(self.name)) 193 | # seq is in advance, add it to queue 194 | self._enqueueRaw(packet) 195 | #log.debug('Queuing packet seq: %d len: %d %s'%(seq, payloadLen, state)) 196 | 197 | # we are waiting for something... 198 | if not self._isMissing(): 199 | self._setMissing() 200 | 201 | # check if the next one is already in self.rawQueue 202 | self.checkForExpectedPackets() 203 | # do not add this one, it may already have been added anyway...
204 | return False 205 | 206 | # packet is a retransmission 207 | elif seq < self.expected_seq: 208 | # TCP retransmission 209 | log.debug('TCP retransmit - We just received %d when we already processed %d'%(seq, self.max_seq)) 210 | # never hit 211 | if seq+payloadLen > self.expected_seq : 212 | log.warning(' ***** EXTRA DATA FOUND ON TCP RETRANSMISSION ') 213 | # we need to recover the extra data to put it in the stream 214 | nb = (seq+payloadLen) - self.expected_seq 215 | data = packet.payload.load[-nb:] 216 | log.warning('packet seq %d has already been received'%(seq)) 217 | log.warning('recent packet : %s'%(repr(packet.underlayer) )) 218 | log.warning('first packet : %s'%(repr(self.packets[seq].underlayer ) )) 219 | seq2 = seq+len(self.packets[seq].payload) 220 | log.warning('first+1 packet : %s'%(repr(self.packets[seq2].underlayer ) )) 221 | packet.payload.load = data 222 | log.warning('NEW packet : %s'%(repr(packet.underlayer) )) 223 | # update seqs 224 | self.max_seq = seq 225 | self.expected_seq = seq2 226 | # save it 227 | self.packets[seq]=packet 228 | # we can process this (new) one, it now contains only the unconsumed remainder 229 | self.addPacket(packet) 230 | return True 231 | # ignore it, it's a retransmission 232 | return False 233 | # else, seq < expected_seq and seq >= self.max_seq. That's not possible... it's the current packet. 234 | # a dup already dedupped ? a partial fragment ? 235 | log.warning('received a bogus fragment seq: %d for %s'%(seq, self)) 236 | return False 237 | 238 | ################ PUBLIC METHODS 239 | ''' check the state of the packet. 240 | if the packet is expected return True. 241 | else, add it to the queues and return False 242 | at first, we expect to find our first seq ''' 243 | checkState = _checkState1 244 | ''' at first, add to queue.
When the state goes into active mode, switch to socket ''' 245 | addPacket = _addPacketToOrderedQueue 246 | 247 | def checkForExpectedPackets(self): 248 | ''' check for some internal expectations. ''' 249 | ret = False 250 | log.debug('time to check the raw queue for expected packets ') 251 | if self.expected_seq in self.rawQueue : 252 | log.debug('requeue all expected packets to ordered queue ') 253 | self._requeue() 254 | ret = True 255 | # waiting for too long 256 | elif self._isMissing() and time.time() > ( self.ts_missing + WAIT_RETRANSMIT) : 257 | log.error('%s: Some data is missing. did the sniffer lose some packets ? Dying. '%(self.name)) 258 | #raise MissingDataException() 259 | self.checkState = self._checkStateFalse 260 | return ret 261 | 262 | def getSocket(self): 263 | return self.read_socket 264 | 265 | def getFirstPacketData(self, block=False): 266 | ''' pop the first packet ''' 267 | if not self.searchMode: 268 | raise BadStateError('State %s is in Active Mode. No popping allowed'%(self.name)) 269 | # wait for it if self.orderedQueue.qsize() == 0: or except Queue.Empty 270 | p = self.orderedQueue.get(block=block) 271 | d = p.payload.load 272 | self.orderedQueue.task_done() 273 | #log.info('orderedQueue size : %d'%(self.orderedQueue.qsize())) 274 | return d, self.orderedQueue.qsize() 275 | 276 | def setSearchMode(self): 277 | self.addPacket = self._addPacketToOrderedQueue 278 | self.searchMode = True 279 | 280 | def setActiveMode(self, data=None): 281 | ''' go into active mode. 282 | all ordered packets will be written to the socket for subsequent use... 283 | 284 | WARNING: you should have a thread running on read_socket, otherwise 285 | the socket buffer will quickly fill up and this will block on socket.send() 286 | 287 | @param data: some data can be pre-written to the socket. 288 | ''' 289 | # stop any data from being inserted in the socket without proper timing 290 | log.debug('Activating the active mode.
Data will be decrypted') 291 | self.activeLock.acquire() 292 | self.searchMode = False 293 | self.addPacket = self._addPacketToSocket 294 | if data is not None: 295 | log.debug('Prepended %d bytes of data before remaining packets'%(len(data) ) ) 296 | self.byte_count += self.write_socket.send( data ) 297 | self.packet_count += 1 298 | # push data 299 | queue = self.orderedQueue 300 | self.orderedQueue = Queue.Queue(QSIZE) 301 | while not queue.empty(): 302 | packet = queue.get() 303 | self.byte_count += self.write_socket.send( packet.payload.load ) 304 | self.packet_count += 1 305 | log.debug('%d bytes written for %d packets'%(self.byte_count, self.packet_count)) 306 | self.activeLock.release() 307 | # operation can now resume. the socket is active 308 | return 309 | 310 | def finish(self): 311 | ''' use current data to the best of its knowledge. ''' 312 | self.orderedQueue.join() 313 | # if orderedQueue is empty, data is in the socket. 314 | # no more data is coming up from the stream/network, we can close the socket. 315 | self.activeLock.acquire() 316 | self.write_socket.close() 317 | self.activeLock.release() 318 | 319 | def __str__(self): 320 | return "%s: %d bytes/%d packets max_seq:%d expected_seq:%d q:%d"%(self.name, self.byte_count,self.packet_count, 321 | self.max_seq,self.expected_seq, self.orderedQueue.qsize()) 322 | 323 | class stack: 324 | ''' A stream is duplex. ''' 325 | def __init__(self): 326 | self.inbound=TCPState('inbound') 327 | self.outbound=TCPState('outbound') 328 | def __str__(self): 329 | return "\n%s\n%s"%(self.inbound,self.outbound) 330 | 331 | class Stream: 332 | ''' 333 | A Stream is a duplex communication. 334 | 335 | All packets coming from a scapy socket are triaged here, for both directions. 336 | 337 | If the TCPState is in active mode (setActiveMode()) data will be available in the socket.
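The write_socket/read_socket hand-off relies on a plain socketpair: ordered payloads written on one end come out the other as a contiguous byte stream. A small standalone sketch of that plumbing (UNIX-only, like socketpair() above; the payloads and names are illustrative, not this class's API):

```python
import socket

# One end plays write_socket (fed by the TCP state machine), the other
# plays read_socket (consumed by a reader such as a Packetizer).
read_sock, write_sock = socket.socketpair()
for payload in (b'SSH-2.0-', b'OpenSSH_5.5'):
    write_sock.send(payload)   # active mode: ordered payloads go to the socket
write_sock.close()             # like finish(): no more data is coming

chunks = []
while True:
    data = read_sock.recv(4096)
    if not data:               # peer closed: the stream is finished
        break
    chunks.append(data)
read_sock.close()
assert b''.join(chunks) == b'SSH-2.0-OpenSSH_5.5'
```

This is also why setActiveMode() warns about having a reader thread: without one draining read_sock, the kernel buffer eventually fills and send() blocks.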
338 | 339 | ''' 340 | worker=None 341 | def __init__(self, inQueue, connection, protocolName): 342 | ''' 343 | @param inQueue: packet Queue from socket_scapy ## from multiprocessing import Process, Queue 344 | @param connection: connection metadata to identify inbound/outbound 345 | ''' 346 | self.inQueue = inQueue # triage must happen 347 | self.connection = connection 348 | self.protocolName = protocolName 349 | # contains TCP state & packets queue before reordering 350 | self.stack = stack() # duplex context 351 | self.running = True 352 | 353 | def getInbound(self): 354 | return self.stack.inbound 355 | 356 | def getOutbound(self): 357 | return self.stack.outbound 358 | 359 | def _isInbound(self, packet): 360 | raise NotImplementedError() 361 | 362 | def check(self): 363 | return self.running 364 | 365 | def pleaseStop(self): 366 | self.running = False 367 | 368 | def run(self): 369 | ''' loops on self.inQueue and calls triage ''' 370 | while self.check(): 371 | try: 372 | for p in self.inQueue.get(block=True, timeout=1): 373 | self.triage(p) 374 | self.inQueue.task_done() 375 | except Queue.Empty,e: 376 | log.debug('Empty queue') 377 | pass 378 | self.finish() 379 | pass 380 | 381 | def triage(self, obj): 382 | ''' pile packets into the right state machine and call the processing 383 | @param obj: the packet 384 | ''' 385 | # check queues only 386 | if obj is None: 387 | self.stack.inbound.checkForExpectedPackets() 388 | self.stack.outbound.checkForExpectedPackets() 389 | return None 390 | 391 | # triage 392 | packet=obj[self.protocolName] 393 | pLen=len(packet.payload) 394 | if pLen == 0: # ignore acks and stuff 395 | return None 396 | # real triage 397 | if self._isInbound(packet): 398 | log.debug('Packet is Inbound') 399 | if self.stack.inbound.checkState( packet ) and pLen > 0: 400 | log.debug('packet added') 401 | elif self._isOutbound(packet): 402 | log.debug('Packet is Outbound') 403 | if self.stack.outbound.checkState( packet ) and pLen > 0: 404 | 
log.debug('packet added') 405 | else: 406 | log.warning('This packet is neither outbound nor inbound by my standards. Your network sniffer queuing system sucks.') 407 | return 408 | 409 | def finish(self): 410 | self.getInbound().finish() 411 | self.getOutbound().finish() 412 | log.info('Closing Stream - closing both sockets') 413 | 414 | def __str__(self): 415 | return "<Stream %s>"%str(self.connection) 416 | 417 | 418 | class TCPStream(Stream): 419 | ''' Simple subclass for TCP packets ''' 420 | def __init__(self, inQueue, connection ): 421 | Stream.__init__(self, inQueue, connection, protocolName='TCP') 422 | 423 | def _isInbound(self, packet): 424 | ''' check if the packet matches the connection metadata ''' 425 | host,port = self.connection.local_address 426 | return host == packet.underlayer.dst and port == packet.dport 427 | 428 | def _isOutbound(self, packet): 429 | ''' check if the packet matches the connection metadata ''' 430 | host,port = self.connection.local_address 431 | return host == packet.underlayer.src and port == packet.sport 432 | 433 | 434 | 435 | 436 | 437 | 438 | 439 | -------------------------------------------------------------------------------- /sslsnoop/test-scapy.py: -------------------------------------------------------------------------------- 1 | 2 | import scapy.config 3 | 4 | class a: 5 | def __init__(self): 6 | scapy.config.conf.use_pcap=True 7 | self.packetCount = 10 8 | self.timeout = 10 9 | self.filterRules = 'tcp and port 22' 10 | return 11 | def enqueue(self,p): 12 | print len(p), p['TCP'].seq, p.summary() 13 | def run(self): 14 | from scapy.all import sniff 15 | print ('Using L2listen = %s'%(scapy.config.conf.L2listen)) 16 | sniff(iface='any', count=self.packetCount,timeout=self.timeout,store=0,filter=self.filterRules,prn=self.enqueue) 17 | print ('============ SNIFF Terminated ====================') 18 | 19 | 20 | o=a() 21 | 22 | o.run() 23 | 24 | -------------------------------------------------------------------------------- /sslsnoop/test.py:
-------------------------------------------------------------------------------- 1 | 2 | #test read memory 3 | 4 | #import ptrace 5 | #f=file('/proc/8902/maps') 6 | #lines=f.readlines() 7 | 8 | import logging,sys,os 9 | logging.basicConfig(level=logging.INFO) 10 | 11 | import haystack 12 | from haystack.reverse import reversers 13 | 14 | import math 15 | 16 | log=logging.getLogger('test') 17 | 18 | #context = reversers.getContext('skype.1.i') 19 | #heap = context.heap 20 | #bytes = heap.mmap().getByteBuffer() 21 | 22 | def H(data): 23 | if not data: 24 | return 0 25 | entropy = 0 26 | for x in range(256): 27 | p_x = float(data.count(chr(x)))/len(data) 28 | if p_x > 0: 29 | entropy += - p_x*math.log(p_x, 2) 30 | return entropy 31 | 32 | def entropy_scan (data, block_size) : 33 | # creates blocks of block_size for all possible offsets ('x'): 34 | blocks = (data[x : block_size + x] for x in range (len (data) - block_size)) 35 | i = 0 36 | for block in (blocks) : 37 | i += 1 38 | yield H (block) 39 | 40 | 41 | results = [] 42 | if False: 43 | for vaddr,s in zip(context._malloc_addresses, context._malloc_sizes): 44 | #print vaddr, 45 | vals = heap.readBytes(int(vaddr), int(s)) 46 | ent = H(vals) 47 | results.append( (ent, vaddr) ) 48 | else: 49 | from haystack.reverse.win32 import win7heapwalker 50 | from haystack import dump_loader 51 | mappings = dump_loader.load('test/dumps/putty/putty.1.dump') 52 | for vaddr,s in win7heapwalker.getUserAllocations(mappings, mappings.getMmapForAddr(0x5c0000) ): 53 | #print vaddr, 54 | heap = mappings.getMmapForAddr(vaddr) 55 | vals = heap.readBytes(int(vaddr), int(s)) 56 | ent = H(vals) 57 | results.append( (ent, int(vaddr), int(s)) ) 58 | 59 | results.sort() 60 | #print results 61 | 62 | for i in range(1,6): 63 | ent,addr,size = results[-i] 64 | print '%2.2f @%x size:%d'%(results[-i]) 65 | heap = mappings.getMmapForAddr(addr) 66 | vals = heap.readBytes(int(addr), int(size)) 67 | print repr(vals) 68 | continue # need fscking context 
69 | ent,addr = results[-i] 70 | st = context.getStructureForAddr(addr) 71 | st.decodeFields() 72 | print st.toString() 73 | print '## # ', ent 74 | print '----------------' 75 | 76 | # it actually works.... 77 | # so multiple previous work on google. 78 | 79 | 80 | 81 | 82 | 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | def printBytes(data): 91 | for i in range(0,len(data)/8,8): 92 | print "0x%lx"%data[i], 93 | 94 | 95 | 96 | 97 | 98 | def dbg_read(addr): 99 | from ptrace.cpu_info import CPU_64BITS, CPU_WORD_SIZE, CPU_POWERPC 100 | for a in range(addr,addr+88,CPU_WORD_SIZE): 101 | print "0x%lx"%process.readWord(a) 102 | 103 | 104 | def readRsa(addr): 105 | dbg=PtraceDebugger() 106 | process=dbg.addProcess(PID, is_attached=False) 107 | if process is None: 108 | log.error("Error initializing Process debugging for %d"% PID) 109 | sys.exit(-1) 110 | # read where it is 111 | ######################### RAAAAAAAAAAAAH 112 | rsa=ctypes_openssl.RSA.from_buffer_copy(process.readStruct(addr,ctypes_openssl.RSA)) 113 | mappings=readProcessMappings(process) 114 | print "isValid : ", rsa.isValid(mappings) 115 | #rsa.printValid(maps) 116 | print rsa 117 | #print rsa.n 118 | #print rsa.n.contents 119 | #print ctypes.byref(rsa.n.contents) 120 | #print rsa 121 | print '------------ === Loading members' 122 | ret=rsa.loadMembers(process,mappings) 123 | print '------------ === Loading members finished' 124 | print ret,rsa 125 | return rsa 126 | 127 | def readDsa(addr): 128 | dbg=PtraceDebugger() 129 | process=dbg.addProcess(PID, is_attached=False) 130 | if process is None: 131 | log.error("Error initializing Process debugging for %d"% pid) 132 | sys.exit(-1) 133 | # read where it is 134 | dsa=ctypes_openssl.DSA.from_buffer_copy(process.readStruct(addr,ctypes_openssl.DSA)) 135 | mappings=readProcessMappings(process) 136 | print "isValid : ", dsa.isValid(mappings) 137 | #dsa.printValid(maps) 138 | #print 'DSA1 -> ', dsa 139 | #print '------------' 140 | #print 'DSA1.q -> ', dsa.q 141 | print 
'------------ === Loading members' 142 | dsa.loadMembers(process,mappings) 143 | #print '------------ ===== ==== ' 144 | #print 'DSA2.q -> ', dsa.q 145 | #print 'DSA2.q.contents -> ', dsa.q.contents 146 | #print ctypes.byref(rsa.n.contents) 147 | #print dsa 148 | return dsa 149 | 150 | def writeWithLibRSA(addr): 151 | ssl=cdll.LoadLibrary("libssl.so") 152 | # need original data struct 153 | #rsa=process.readBytes(addr, ctypes.sizeof(ctypes_openssl.RSA) ) 154 | #rsa=ctypes.addressof(process.readStruct(addr,ctypes_openssl.RSA)) 155 | rsa=readRsa(addr) 156 | rsa_p=ctypes.addressof(rsa) 157 | print 'rsa acquired 0x%lx copied to 0x%lx'%(addr,rsa_p) 158 | f=libc.fopen("test.out","w") 159 | print 'file opened',f 160 | ret=ssl.PEM_write_RSAPrivateKey(f, rsa_p, None, None, 0, None, None) 161 | print 'key written' 162 | print ret,f 163 | 164 | def writeWithLibDSA(addr): 165 | ssl=cdll.LoadLibrary("libssl.so") 166 | dsa=readDsa(addr) 167 | dsa_p=ctypes.addressof(dsa) 168 | print 'dsa acquired 0x%lx copied to 0x%lx'%(addr,dsa_p) 169 | f=libc.fopen("test.out","w") 170 | print 'file opened',f 171 | ret=ssl.PEM_write_DSAPrivateKey(f, dsa_p, None, None, 0, None, None) 172 | print 'key written' 173 | print ret,f 174 | 175 | def withM2(addr): 176 | import M2Crypto 177 | from M2Crypto.BIO import MemoryBuffer 178 | from M2Crypto import RSA as mRSA 179 | rsa=process.readBytes(addr, ctypes.sizeof(ctypes_openssl.RSA) ) 180 | bio=MemoryBuffer(rsa) 181 | # tsssi need PEM 182 | myrsa=mRSA.load_key_bio(bio) 183 | return myrsa 184 | 185 | def printSize(): 186 | ctypes_openssl.printSizeof() 187 | ctypes_openssh.printSizeof() 188 | 189 | def printme(obj): 190 | log.info(obj) 191 | 192 | def findCipherContext(): 193 | dbg=PtraceDebugger() 194 | process=dbg.addProcess(pid,is_attached=False) 195 | if process is None: 196 | log.error("Error initializing Process debugging for %d"% pid) 197 | sys.exit(-1) 198 | mappings=readProcessMappings(process) 199 | stack=process.findStack() 200 | for m in 
mappings: 201 | #if m.pathname != '[heap]': 202 | # continue 203 | if not abouchet.hasValidPermissions(m): 204 | continue 205 | print m,m.permissions 206 | abouchet.find_struct(process, m, ctypes_openssh.CipherContext, printme) 207 | 208 | def isMemOf(addr,mpid): 209 | class p: 210 | pid=mpid 211 | myP=p() 212 | for m in readProcessMappings(myP): 213 | if addr in m: 214 | print myP, m 215 | return True 216 | return False 217 | 218 | def testScapy(): 219 | import socket_scapy 220 | socket_scapy.test() 221 | 222 | def testScapyThread(): 223 | import socket_scapy,select 224 | from threading import Thread 225 | port=22 226 | sshfilter="tcp and port %d"%(port) 227 | soscapy=socket_scapy.socket_scapy(sshfilter,packetCount=100) 228 | log.info('Please make some ssh traffic') 229 | sniffer = Thread(target=soscapy.run) 230 | sniffer.start() 231 | # sniffer is strted, let's consume 232 | nbblocks=0 233 | data='' 234 | readso=soscapy.getInboundSocket() 235 | while sniffer.isAlive(): 236 | r,w,oo=select.select([readso],[],[],1) 237 | if len(r)>0: 238 | data+=readso.recv(16) 239 | nbblocks+=1 240 | # try to finish socket 241 | print 'sniffer is finished' 242 | r,w,oo=select.select([readso],[],[],0) 243 | while len(r)>0: 244 | data+=readso.recv(16) 245 | nbblocks+=1 246 | r,w,oo=select.select([readso],[],[],0) 247 | #end 248 | print "received %d blocks/ %d bytes"%(nbblocks,len(data)) 249 | print 'sniffer captured : ',soscapy 250 | 251 | 252 | 253 | def testwatwever(): 254 | #rsa=readRsa(addr) 255 | 256 | #writeWithLibRSA(addr) 257 | #print '---------------' 258 | #abouchet.find_keys(process,stack) 259 | 260 | #dsa=readDsa(0xb835b4e8) 261 | #print dsa 262 | #rsa=readRsa(0xb835c9a0) 263 | #writeWithLibDSA(addr) 264 | 265 | #printSize() 266 | 267 | #x pourPEM_write_RSAPrivateKey 268 | 269 | class B(ctypes.Structure): 270 | _fields_=[("b1",ctypes.c_ulong),('b2',ctypes.c_ulonglong)] 271 | 272 | class A(ctypes.Structure): 273 | 
_fields_=[("a1",ctypes.c_ulong),('b',ctypes.POINTER(B)),('a2',ctypes.c_ulonglong)] 274 | 275 | myaddr=0xb7c16884 276 | # rsa-> n 277 | myaddr=0xb8359148 278 | pid=12563 279 | if len(sys.argv) == 2: 280 | myaddr=int(sys.argv[1],16) 281 | if isMemOf(myaddr, pid ): 282 | print "isMemOf(0x%lx, 8831) - python"%myaddr 283 | if isMemOf(myaddr, PID): 284 | print "isMemOf(0x%lx, 27477) - ssh-agent"%myaddr 285 | 286 | #findCipherContext() 287 | 288 | 289 | soscapy=testScapyThread() 290 | 291 | 292 | 293 | 294 | 295 | -------------------------------------------------------------------------------- /sslsnoop/utils.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | # 4 | # Copyright (C) 2011 Loic Jaquemet loic.jaquemet+python@gmail.com 5 | # 6 | 7 | __author__ = "Loic Jaquemet loic.jaquemet+python@gmail.com" 8 | 9 | import logging 10 | import pickle 11 | import sys 12 | import socket 13 | import threading 14 | 15 | import argparse 16 | import psutil 17 | 18 | import output 19 | import network 20 | import openssh #OpenSSHLiveDecryptatator 21 | from paramiko_packet import Packetizer 22 | 23 | log = logging.getLogger('utils') 24 | 25 | 26 | 27 | def connectionToString(connection, reverse=False): 28 | log.debug('make a string for %s'%(repr(connection))) 29 | if reverse: 30 | return "%s:%d-%s:%d"%(connection.remote_address[0],connection.remote_address[1], connection.local_address[0],connection.local_address[1]) 31 | else: 32 | return "%s:%d-%s:%d"%(connection.local_address[0],connection.local_address[1],connection.remote_address[0],connection.remote_address[1]) 33 | 34 | def getConnectionForPID(pid): 35 | proc = psutil.Process(pid) 36 | return checkConnections(proc) 37 | 38 | def checkConnections(proc): 39 | conns=proc.get_connections() 40 | conns = [ c for c in conns if c.status == 'ESTABLISHED'] 41 | if len(conns) == 0 : 42 | return False 43 | elif len(conns) > 1 : 44 | log.warning(' %s has 
more than 1 connection ?'%(proc.name)) 45 | return False 46 | elif conns[0].status != 'ESTABLISHED' : 47 | log.warning(' %s has no ESTABLISHED connections (got %s)'%(proc.name, conns[0].status)) 48 | return False 49 | log.info('Found connection %s for %s'%(conns[0], proc.name)) 50 | return conns[0] 51 | 52 | class Connection: 53 | ''' Mimic the psutil connection class ''' 54 | def __init__(self, src, sport, dst, dport): 55 | self.local_address = (src,sport) 56 | self.remote_address = (dst, dport) 57 | self.status = 'ESTABLISHED' 58 | def __str__(self): 59 | return "%s:%d-%s:%d"%(self.local_address[0],self.local_address[1],self.remote_address[0],self.remote_address[1]) 60 | 61 | 62 | 63 | def launchScapy(): 64 | from threading import Thread 65 | sshfilter = "tcp " 66 | soscapy = network.Sniffer(sshfilter) 67 | sniffer = Thread(target=soscapy.run) 68 | soscapy.thread = sniffer 69 | sniffer.start() 70 | return soscapy 71 | 72 | 73 | 74 | 75 | 76 | def dumpPcapToFiles(pcapfile, connection, fname='raw'): 77 | # create the pcap reader 78 | soscapy = network.PcapFileSniffer(pcapfile) 79 | sniffer = threading.Thread(target=soscapy.run, name='scapy') 80 | soscapy.thread = sniffer 81 | # create output engines 82 | stream = soscapy.makeStream( connection ) 83 | # prepare them 84 | stream.getInbound().setActiveMode() 85 | stream.getOutbound().setActiveMode() 86 | in_s = stream.getInbound().getSocket() 87 | out_s = stream.getOutbound().getSocket() 88 | win = output.RawPacketsToFile(in_s, fname+'-in.raw') 89 | wout = output.RawPacketsToFile(out_s, fname+'-out.raw') 90 | # make thread for all that 91 | st = threading.Thread(target=stream.run, name='stream') 92 | st.start() 93 | win_th = threading.Thread(target=win.run, name='inbound data writer') 94 | wout_th = threading.Thread(target=wout.run, name='outbound data writer') 95 | win_th.start() 96 | wout_th.start() 97 | # launch scapy and start the dumping 98 | sniffer.start() 99 | # wait for them 100 | for t in [sniffer, st, win_th, wout_th]: 101
| log.debug('waiting for %s'%(t.name)) 102 | t.join() 103 | in_s.close() 104 | return (fname+'-in.raw', fname+'-out.raw') 105 | 106 | 107 | def rawFilesFromPcap(pcapfile, connection, fname='raw'): 108 | fin,fout = dumpPcapToFiles(pcapfile, connection, fname=fname) 109 | return file(fin), file(fout) 110 | 111 | 112 | def writeToSocket(socket_,file_): 113 | while True: 114 | data = file_.read(4096) 115 | if len(data) == 0: 116 | break 117 | socket_.send(data) 118 | 119 | 120 | 121 | def dump(args): 122 | connection = Connection(args.src,args.sport, args.dst,args.dport) 123 | fname = 'raw-%s'%(connection) 124 | args.pcapfile.close() 125 | dumpPcapToFiles(args.pcapfile.name, connection, fname) 126 | 127 | 128 | def offline(args): 129 | connection = Connection(args.src,args.sport, args.dst,args.dport) 130 | fname = 'raw-%s'%(connection) 131 | args.pcapfile.close() 132 | decrypt = openssh.OpenSSHPcapDecrypt(args.pcapfile.name, connection, args.sessionstatefile) 133 | decrypt.run() 134 | log.info('Decrypt done -- ') 135 | 136 | 137 | 138 | 139 | -------------------------------------------------------------------------------- /test/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """Unit test module.""" 5 | 6 | import sys 7 | if sys.version_info < (2, 7): 8 | import unittest2 as unittest 9 | else: 10 | import unittest 11 | 12 | __author__ = "Loic Jaquemet" 13 | __copyright__ = "Copyright (C) 2012 Loic Jaquemet" 14 | __email__ = "loic.jaquemet+python@gmail.com" 15 | __license__ = "GPL" 16 | __maintainer__ = "Loic Jaquemet" 17 | __status__ = "Production" 18 | 19 | 20 | 21 | def alltests(): 22 | ret = unittest.TestLoader().discover('test/sslsnoop/') 23 | return ret 24 | 25 | 26 | #alltests = suite() 27 | 28 | if __name__ == '__main__': 29 | unittest.main(verbosity=0) 30 | #suite = unittest.TestLoader().loadTestsFromTestCase(TestFunctions) 31 |
#unittest.TextTestRunner(verbosity=2).run(suite) 32 | -------------------------------------------------------------------------------- /test/dsa.orig: -------------------------------------------------------------------------------- 1 | -----BEGIN DSA PRIVATE KEY----- 2 | MIIBvAIBAAKBgQCb324iVmFLE+r3Jxk95ubtB4hYYa7quGawwYEqhyxmM0BqkBEp 3 | epv2RBNY+put0SbIw7RXP/onrAXNCFBs5JCwRkbb+q9tEdBvGaCl7ZkXB41VHs4N 4 | 3w6NMZ3CW1FBDmR/5lEjd/hWLyKRnYhjBXIix3TGldDK3+x/uds7IJ47GwIVAOAx 5 | gol7ZRCr1dqdcSyhkAzRT+OhAoGBAIwvNhh+vIxEze+F/7/tBHmZ8V2EYZENEuQI 6 | 3UORP1DhpwH530niJKCuLVqfbPoQr42Go+hZvQwd7wezAoa6WZLYlsQy/T9UTmxF 7 | EYrMk3b8vEIaGA9eDmj1VdWkq6LtJ1w/XxXsYfleWl8vhKg+rINUQXPwgIeJNgQd 8 | Cn+TuGq+AoGAS+m7idWUbgj2yTsQRM/SEKi7ADaeHYq2wOhN977ogSTl/OpTX6nG 9 | 24Q+fe0FHh9FkjS/Yb9snHjFmT4IAz6bQVfPwvrjavwcgjUlA76bY93DwK1TXFvq 10 | a2PRrmMarWUpJMCQmvgoiWn3vOOM6N9OWDqYOawqhkL7RHYduB28QpgCFQCi1BL/ 11 | aW5vEe6hZEtFW3oCrjjpzw== 12 | -----END DSA PRIVATE KEY----- 13 | # in case you ask, this is not a production key. It was created for the sole purpose of this ssh decryption project. Go away. 
14 | -------------------------------------------------------------------------------- /test/dsa.orig.pub: -------------------------------------------------------------------------------- 1 | ssh-dss AAAAB3NzaC1kc3MAAACBAJvfbiJWYUsT6vcnGT3m5u0HiFhhruq4ZrDBgSqHLGYzQGqQESl6m/ZEE1j6m63RJsjDtFc/+iesBc0IUGzkkLBGRtv6r20R0G8ZoKXtmRcHjVUezg3fDo0xncJbUUEOZH/mUSN3+FYvIpGdiGMFciLHdMaV0Mrf7H+52zsgnjsbAAAAFQDgMYKJe2UQq9XanXEsoZAM0U/joQAAAIEAjC82GH68jETN74X/v+0EeZnxXYRhkQ0S5AjdQ5E/UOGnAfnfSeIkoK4tWp9s+hCvjYaj6Fm9DB3vB7MChrpZktiWxDL9P1RObEURisyTdvy8QhoYD14OaPVV1aSrou0nXD9fFexh+V5aXy+EqD6sg1RBc/CAh4k2BB0Kf5O4ar4AAACAS+m7idWUbgj2yTsQRM/SEKi7ADaeHYq2wOhN977ogSTl/OpTX6nG24Q+fe0FHh9FkjS/Yb9snHjFmT4IAz6bQVfPwvrjavwcgjUlA76bY93DwK1TXFvqa2PRrmMarWUpJMCQmvgoiWn3vOOM6N9OWDqYOawqhkL7RHYduB28Qpg= jal@skippy 2 | -------------------------------------------------------------------------------- /test/id_dsa-1.key: -------------------------------------------------------------------------------- 1 | -----BEGIN DSA PRIVATE KEY----- 2 | MIIBvAIBAAKBgQCb324iVmFLE+r3Jxk95ubtB4hYYa7quGawwYEqhyxmM0BqkBEp 3 | epv2RBNY+put0SbIw7RXP/onrAXNCFBs5JCwRkbb+q9tEdBvGaCl7ZkXB41VHs4N 4 | 3w6NMZ3CW1FBDmR/5lEjd/hWLyKRnYhjBXIix3TGldDK3+x/uds7IJ47GwIVAOAx 5 | gol7ZRCr1dqdcSyhkAzRT+OhAoGBAIwvNhh+vIxEze+F/7/tBHmZ8V2EYZENEuQI 6 | 3UORP1DhpwH530niJKCuLVqfbPoQr42Go+hZvQwd7wezAoa6WZLYlsQy/T9UTmxF 7 | EYrMk3b8vEIaGA9eDmj1VdWkq6LtJ1w/XxXsYfleWl8vhKg+rINUQXPwgIeJNgQd 8 | Cn+TuGq+AoGAS+m7idWUbgj2yTsQRM/SEKi7ADaeHYq2wOhN977ogSTl/OpTX6nG 9 | 24Q+fe0FHh9FkjS/Yb9snHjFmT4IAz6bQVfPwvrjavwcgjUlA76bY93DwK1TXFvq 10 | a2PRrmMarWUpJMCQmvgoiWn3vOOM6N9OWDqYOawqhkL7RHYduB28QpgCFQCi1BL/ 11 | aW5vEe6hZEtFW3oCrjjpzw== 12 | -----END DSA PRIVATE KEY----- 13 | # in case you ask, this is not a production key. It was created for the sole purpose of this ssh decryption project. Go away. 
14 | -------------------------------------------------------------------------------- /test/ssh/types.c.log: -------------------------------------------------------------------------------- 1 | session_state: 572 2 | Buffer: 16 3 | CipherContext: 148 4 | Newkeys: 256 5 | Mac: 216 6 | Cipher: 28 7 | Comp: 12 8 | EVP_CIPHER_CTX: 140 9 | EVP_MD: 72 10 | Enc: 28 11 | nh_ctx: 1128 12 | packet: 28 13 | packet_state: 24 14 | pdf_ctx: 276 15 | AES_KEY: 244 16 | uhash_ctx: 1236 17 | umac_ctx: 1516 18 | HMAC_CTX: 184 19 | TAILQ_ENTRY_PACKET: 8 20 | TAILQ_HEAD_PACKET: 8 21 | UINT32: 4 22 | UINT64: 8 23 | UINT8: 1 24 | AES_BLOCK_LEN: 16 25 | HASH_BUF_BYTES: 64 26 | UMAC_OUTPUT_LEN: 8 27 | SSH_SESSION_KEY_LENGTH: 32 28 | L1_KEY_LEN: 16 29 | L1_KEY_SHIFT: 16 30 | MODE_MAX: 2 31 | STREAMS: 2 32 | -------------------------------------------------------------------------------- /test/ssh/types.py.log: -------------------------------------------------------------------------------- 1 | session_state: 572 2 | Buffer: 16 3 | CipherContext: 148 4 | Newkeys: 256 5 | Mac: 216 6 | Cipher: 28 7 | Comp: 12 8 | EVP_CIPHER_CTX: 140 9 | EVP_MD: 72 10 | Enc: 28 11 | nh_ctx: 1128 12 | packet: 28 13 | packet_state: 24 14 | pdf_ctx: 276 15 | AES_KEY: 244 16 | uhash_ctx: 1236 17 | umac_ctx: 1516 18 | HMAC_CTX: 184 19 | TAILQ_ENTRY_PACKET: 8 20 | TAILQ_HEAD_PACKET: 8 21 | UINT32: 4 22 | UINT64: 8 23 | UINT8: 1 24 | AES_BLOCK_LEN: 16 25 | HASH_BUF_BYTES: 64 26 | UMAC_OUTPUT_LEN: 8 27 | SSH_SESSION_KEY_LENGTH: 32 28 | L1_KEY_LEN: 16 29 | L1_KEY_SHIFT: 16 30 | MODE_MAX: 2 31 | STREAMS: 2 32 | -------------------------------------------------------------------------------- /test/sslsnoop/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """Unit test module.""" 5 | 6 | import unittest 7 | 8 | __author__ = "Loic Jaquemet" 9 | __copyright__ = "Copyright (C) 2012 Loic Jaquemet" 10 | __email__ = 
"loic.jaquemet+python@gmail.com" 11 | __license__ = "GPL" 12 | __maintainer__ = "Loic Jaquemet" 13 | __status__ = "Production" 14 | 15 | 16 | 17 | if __name__ == '__main__': 18 | unittest.main(verbosity=0) 19 | #suite = unittest.TestLoader().loadTestsFromTestCase(TestFunctions) 20 | #unittest.TextTestRunner(verbosity=2).run(suite) 21 | -------------------------------------------------------------------------------- /test/sslsnoop/test_ssh_data.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | # -*- coding: utf-8 -*- 3 | 4 | """Tests end results data on ssh dump.""" 5 | 6 | import logging 7 | import unittest 8 | import sys 9 | 10 | import sslsnoop 11 | #need Config before from sslsnoop import ctypes_openssh as cssh 12 | 13 | from haystack import dump_loader 14 | from haystack import utils 15 | from haystack import abouchet 16 | from haystack import memory_mapper 17 | 18 | __author__ = "Loic Jaquemet" 19 | __copyright__ = "Copyright (C) 2012 Loic Jaquemet" 20 | __email__ = "loic.jaquemet+python@gmail.com" 21 | __license__ = "GPL" 22 | __maintainer__ = "Loic Jaquemet" 23 | __status__ = "Production" 24 | 25 | log = logging.getLogger('test_ssh_data') 26 | 27 | 28 | class SSH_1_Data(object): 29 | ''' dict of expected values. 
30 | This class will be dynamically generated with test_xxx_xxx methods ''' 31 | expected = { 32 | "connection_in": 3, 33 | "connection_out": 3, 34 | "remote_protocol_flags": 0L, 35 | "receive_context": { # 36 | "plaintext": 0, 37 | "evp": { # 38 | "cipher": { # 0xb7832b20, #(FIELD NOT LOADED) 39 | "nid": 0, 40 | "block_size": 16, 41 | "key_len": 16, 42 | "iv_len": 16, 43 | "flags": 58L, 44 | "init": 0x0, #(FIELD NOT LOADED) 45 | "do_cipher": 0x0, #(FIELD NOT LOADED) 46 | "cleanup": 0x0, #(FIELD NOT LOADED) 47 | "ctx_size": 0, 48 | "set_asn1_parameters": 0x0, #(FIELD NOT LOADED) 49 | "get_asn1_parameters": 0x0, #(FIELD NOT LOADED) 50 | "ctrl": 0x0, #(FIELD NOT LOADED) 51 | "app_data": 0x0, 52 | }, 53 | "engine": 0x0, 54 | "encrypt": 0, 55 | "buf_len": 0, 56 | "oiv": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 57 | "iv": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 58 | "buf": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 59 | "num": 0, 60 | "app_data": 0xb84f3910, #Void pointer NOT LOADED 61 | "key_len": 16, 62 | "flags": 0L, 63 | "cipher_data": 0x0, 64 | "final_used": 0, 65 | "block_mask": 15, 66 | "final": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 67 | }, 68 | "cipher": { # 0xb7832288, #(FIELD NOT LOADED) 69 | "name": "aes128-ctr" , #(CString) 70 | "number": -3, 71 | "block_size": 16L, 72 | "key_len": 16L, 73 | "discard_len": 0L, 74 | "cbc_mode": 0L, 75 | "evptype": 0xb77f5d40, #(FIELD NOT LOADED) 76 | }, 77 | }, 78 | "send_context": { # 79 | "plaintext": 0, 80 | "evp": { # 81 | "cipher": 0xb7832b20, #(FIELD NOT LOADED) 82 | "engine": 0x0, 83 | "encrypt": 1, 84 | "buf_len": 0, 85 | "oiv": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 86 | "iv": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 87 | 
"buf": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 88 | "num": 0, 89 | "app_data": 0xb84f4068, #Void pointer NOT LOADED 90 | "key_len": 16, 91 | "flags": 0L, 92 | "cipher_data": 0x0, 93 | "final_used": 0, 94 | "block_mask": 15, 95 | "final": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 96 | }, 97 | "cipher": { # 0xb7832288, #(FIELD NOT LOADED) 98 | "name": "aes128-ctr" , #(CString) 99 | "number": -3, 100 | "block_size": 16L, 101 | "key_len": 16L, 102 | "discard_len": 0L, 103 | "cbc_mode": 0L, 104 | "evptype": 0xb77f5d40, #(FIELD NOT LOADED) 105 | }, 106 | }, 107 | "input": { # 108 | "buf": 0xb84ee558, #(FIELD NOT LOADED) 109 | "alloc": 4096L, 110 | "offset": 48L, 111 | "end": 48L, 112 | }, 113 | "output": { # 114 | "buf": 0xb84ef560, #(FIELD NOT LOADED) 115 | "alloc": 4096L, 116 | "offset": 448L, 117 | "end": 448L, 118 | }, 119 | "outgoing_packet": { # 120 | "buf": 0xb84f0568, #(FIELD NOT LOADED) 121 | "alloc": 4096L, 122 | "offset": 0L, 123 | "end": 0L, 124 | }, 125 | "incoming_packet": { # 126 | "buf": 0xb84f1570, #(FIELD NOT LOADED) 127 | "alloc": 4096L, 128 | "offset": 28L, 129 | "end": 28L, 130 | }, 131 | "compression_buffer": { # 132 | "buf": 0x0, 133 | "alloc": 0L, 134 | "offset": 0L, 135 | "end": 0L, 136 | }, 137 | "compression_buffer_ready": 0, 138 | "packet_compression": 0, 139 | "max_packet_size": 32768L, 140 | "initialized": 1, 141 | "interactive_mode": 1, 142 | "server_side": 0, 143 | "after_authentication": 1, 144 | "keep_alive_timeouts": 0, 145 | "packet_timeout_ms": -1, 146 | "newkeys" :[ 0xb84f4240, #(FIELD NOT LOADED) 147 | 0xb84f4360, #(FIELD NOT LOADED) 148 | ], 149 | "p_read": { # 150 | "seqnr": 12L, 151 | "packets": 9L, 152 | "blocks": 33L, 153 | "bytes": 2024L, 154 | }, 155 | "p_send": { # 156 | "seqnr": 12L, 157 | "packets": 9L, 158 | "blocks": 97L, 159 | "bytes": 
2792L, 160 | }, 161 | "max_blocks_in": 4294967296L, 162 | "max_blocks_out": 4294967296L, 163 | "rekey_limit": 0L, 164 | "ssh1_key": b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', 165 | "ssh1_keylen": 0L, 166 | "extra_pad": 0, 167 | "packet_discard": 0L, 168 | "packet_discard_mac": 0x0, 169 | "packlen": 0L, 170 | "rekeying": 0, 171 | "set_interactive_called": 1, 172 | "set_maxsize_called": 0, 173 | "outgoing": { # 174 | "tqh_first": 0x0, 175 | "tqh_last": 0xb84ee54c, #(FIELD NOT LOADED) 176 | }, 177 | } 178 | pass 179 | 180 | def get_py_object(self): 181 | raise NotImplementedError 182 | 183 | def get_dict_value(root, attrlist): 184 | tmp = root 185 | for member in attrlist: 186 | tmp = tmp[member] 187 | return tmp 188 | 189 | def get_member_value(root, attrlist): 190 | tmp = root 191 | for member in attrlist: 192 | tmp = getattr(tmp, member) 193 | return tmp 194 | 195 | def test_tpl(self, attr): 196 | print 'testing ', attr 197 | expected = get_dict_value( self.expected, attr) 198 | found = get_member_value( self.get_py_object(), attr) 199 | print type(found) 200 | if utils.isFunctionType(type(found)): 201 | print 'nope' 202 | else: 203 | self.assertEqual( expected, found, '%s expected: %s found: %s'%('.'.join(attr), expected, found) ) 204 | #import code # debug leftover; interact() would block every automated run 205 | #code.interact(local=locals()) 206 | 207 | def _gen_attr_tests(cls): 208 | _gen_recurse_dict( cls, cls.expected, [] ) 209 | 210 | import functools 211 | def _gen_recurse_dict( cls, d, attr ): 212 | for k,v in d.items(): 213 | next = attr+[k] 214 | if type(v) == dict: 215 | #print type(v), next 216 | _gen_recurse_dict( cls, v, next ) 217 | elif type(v) == tuple: 218 | print 'tuple', v, k 219 | #elif type(v) == list: 220 | # for subel in enumerate(v): 221 | # next = attr 222 | # _gen_recurse_dict( cls, subel, next ) 223 | else: 224 | name = 'test_%s'%('_'.join(next)) 225 | setattr(cls, name, lambda s, attr=next: test_tpl(s, attr) ) # default arg binds the current path; a bare closure would capture the loop's last value 226 | 
227 | # generate a dynamic value-test method for each member of the structure 228 | _gen_attr_tests(SSH_1_Data) 229 | 230 | 231 | class Test_SSH_1_Data_pickled(unittest.TestCase, SSH_1_Data): 232 | ''' 233 | ssh.1 234 | session_state is at 0xb84ee318 235 | 236 | test_xxx will be generated in SSH_1_Data 237 | ''' 238 | 239 | @classmethod 240 | def setUpClass(self): 241 | d = {'pickled': True, 242 | 'dumpname': 'test/dumps/ssh/ssh.1/', 243 | 'structName': 'sslsnoop.ctypes_openssh.session_state', 244 | 'addr': '0xb84ee318', 245 | 'pid': None, 246 | 'memfile': None, 247 | 'interactive': None, 248 | 'human': None, 249 | 'json': None, 250 | } 251 | args = type('args', ( object,), d) 252 | # setup haystack 253 | from haystack import config 254 | config.make_config_from_memdump(d['dumpname']) 255 | # 256 | addr = int(args.addr,16) 257 | structType = abouchet.getKlass(args.structName) 258 | self.mappings = memory_mapper.MemoryMapper(dumpname=args.dumpname).getMappings() 259 | self.finder = abouchet.StructFinder(self.mappings) 260 | memoryMap = utils.is_valid_address_value(addr, self.finder.mappings) 261 | # done 262 | self.session_state, self.found = self.finder.loadAt( memoryMap, addr, structType) 263 | self.pyobj = self.session_state.toPyObject() 264 | # if both are None, that dies.
#self.mappings = None 266 | #self.finder = None 267 | 268 | def get_py_object(self): 269 | return self.pyobj 270 | 271 | @classmethod 272 | def tearDownClass(self): 273 | self.mappings = None 274 | self.finder = None 275 | self.found = None 276 | self.session_state = None 277 | from haystack import model 278 | model.reset() 279 | pass 280 | 281 | def test_session_state(self): 282 | ''' test session_state against expected values''' 283 | self.assertTrue(self.found) 284 | 285 | 286 | def _recurse_expect(self, foundRoot, expectedRoot, attr ): 287 | ''' go through the expected values and compare results ''' 288 | for k,v in expectedRoot.items(): 289 | next = attr+[k] # attrname list 290 | foundNext = getattr(foundRoot, k) 291 | if type(v) == dict: 292 | self._recurse_expect( foundNext, v, next ) 293 | else: 294 | test_tpl(self, next) 295 | 296 | 297 | if __name__ == '__main__': 298 | #logging.basicConfig(level=logging.INFO) 299 | #log.setLevel(level=logging.DEBUG) 300 | unittest.main(verbosity=0) 301 | 302 | --------------------------------------------------------------------------------
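test_ssh_data.py above builds its test methods at import time: `_gen_recurse_dict` walks the `expected` dict, and for each leaf it `setattr`s one `test_*` method that compares a key path in the dict against the same attribute path on the loaded object. The following standalone Python 3 sketch reproduces that pattern with invented sample data (`Node`, `ExpectedData` and the values are illustrative, not from the dump); note the `path=path` default argument, which pins the current key path where a bare closure would only ever see the loop's last value:

```python
import unittest


class Node(object):
    """Hypothetical stand-in for the loaded ctypes object tree."""
    def __init__(self, **kw):
        self.__dict__.update(kw)


def get_dict_value(root, attrlist):
    # walk the expected-values dict by key path
    tmp = root
    for member in attrlist:
        tmp = tmp[member]
    return tmp


def get_member_value(root, attrlist):
    # walk the loaded object by attribute path
    tmp = root
    for member in attrlist:
        tmp = getattr(tmp, member)
    return tmp


class ExpectedData(object):
    # illustrative subset of an "expected" dict
    expected = {
        "connection_in": 3,
        "p_read": {"seqnr": 12, "packets": 9},
    }

    def get_py_object(self):
        # stands in for session_state.toPyObject()
        return Node(connection_in=3, p_read=Node(seqnr=12, packets=9))


def _gen_recurse_dict(cls, d, attr):
    for k, v in d.items():
        path = attr + [k]
        if isinstance(v, dict):
            _gen_recurse_dict(cls, v, path)
        else:
            name = 'test_%s' % '_'.join(path)
            # default arg binds the *current* path for this generated test
            setattr(cls, name,
                    lambda self, path=path: self.assertEqual(
                        get_dict_value(self.expected, path),
                        get_member_value(self.get_py_object(), path)))


class TestGenerated(unittest.TestCase, ExpectedData):
    pass


# generate one value-test per leaf of the expected dict
_gen_recurse_dict(TestGenerated, TestGenerated.expected, [])
```

Loading `TestGenerated` with `unittest.TestLoader` then yields one test per leaf (three here: `test_connection_in`, `test_p_read_seqnr`, `test_p_read_packets`), each failing independently when a single field of the dumped structure diverges from its expected value.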