├── .gitignore ├── .travis.yml ├── LICENSE ├── README.md ├── data └── .gitignore ├── docs ├── .gitignore ├── Makefile ├── conf.py ├── docs.rst ├── docs │ ├── cell.rst │ ├── cell │ │ ├── cell.rst │ │ ├── exceptions.rst │ │ ├── fixedlen.rst │ │ ├── relay.rst │ │ ├── util.rst │ │ └── varlen.rst │ ├── circuit.rst │ ├── circuit │ │ ├── circuit.rst │ │ ├── circuitmanager.rst │ │ ├── exceptions.rst │ │ └── ntorfsm.rst │ ├── connection.rst │ ├── connection │ │ ├── connection.rst │ │ ├── connectionpool.rst │ │ ├── exceptions.rst │ │ └── v3.rst │ ├── crypto.rst │ ├── crypto │ │ ├── exceptions.rst │ │ ├── ntorhandshake.rst │ │ ├── relaycrypto.rst │ │ └── util.rst │ ├── netstatus.rst │ ├── netstatus │ │ ├── exceptions.rst │ │ └── netstatus.rst │ ├── path.rst │ ├── path │ │ ├── exceptions.rst │ │ └── path.rst │ ├── socks.rst │ ├── socks │ │ └── socks.rst │ ├── stream.rst │ ├── stream │ │ └── stream.rst │ ├── util.rst │ └── util │ │ ├── exitrequest.rst │ │ └── tools.rst ├── index.rst ├── installation.rst ├── make.bat ├── overview.rst ├── roadmap.rst ├── simplifications.rst └── usage.rst ├── oppy ├── __init__.py ├── cell │ ├── __init__.py │ ├── cell.py │ ├── definitions.py │ ├── exceptions.py │ ├── fixedlen.py │ ├── relay.py │ ├── util.py │ └── varlen.py ├── circuit │ ├── __init__.py │ ├── circuit.py │ ├── circuitbuildtask.py │ ├── circuitmanager.py │ └── definitions.py ├── connection │ ├── __init__.py │ ├── connection.py │ ├── connectionbuildtask.py │ ├── connectionmanager.py │ └── definitions.py ├── crypto │ ├── .gitignore │ ├── __init__.py │ ├── ntor.py │ └── util.py ├── history │ ├── __init__.py │ └── guards.py ├── netstatus │ ├── __init__.py │ ├── microconsensusmanager.py │ ├── microdescriptormanager.py │ └── netstatus.py ├── oppy ├── path │ ├── __init__.py │ ├── exceptions.py │ ├── path.py │ └── util.py ├── socks │ ├── __init__.py │ └── socks.py ├── stream │ ├── __init__.py │ └── stream.py ├── tests │ ├── __init__.py │ ├── integration │ │ ├── __init__.py │ │ └── cell │ │ │ ├── __init__.py │ │ │ ├── cellbase.py │ │ │ ├── test_fixedlen.py │ │ │ └── test_varlen.py │ └── unit │ │ ├── __init__.py │ │ ├── circuit │ │ ├── __init__.py │ │ ├── test_circuit.py │ │ ├── test_circuitbuildtask.py │ │ └── test_circuitmanager.py │ │ ├── connection │ │ ├── __init__.py │ │ ├── cert_der.py │ │ ├── test_connection.py │ │ ├── test_connectionbuildtask.py │ │ └── test_connectionmanager.py │ │ ├── crypto │ │ ├── __init__.py │ │ ├── test_ntor.py │ │ └── test_util.py │ │ ├── netstatus │ │ ├── __init__.py │ │ ├── test_microconsensusmanager.py │ │ ├── test_microdescriptormanager.py │ │ └── test_netstatus.py │ │ ├── path │ │ ├── __init__.py │ │ ├── test_path.py │ │ └── test_util.py │ │ ├── socks │ │ ├── __init__.py │ │ └── test_socks.py │ │ ├── stream │ │ ├── __init__.py │ │ └── test_stream.py │ │ └── util │ │ ├── __init__.py │ │ ├── test_exitrequest.py │ │ └── test_tools.py └── util │ ├── __init__.py │ ├── exitrequest.py │ └── tools.py ├── requirements.txt └── simplifications.md /.gitignore: -------------------------------------------------------------------------------- 1 | ## Testing Files ## 2 | _trial_temp/ 3 | *.swo 4 | 5 | ## Data files ## 6 | data/ 7 | 8 | ### Python ### 9 | # Byte-compiled / optimized / DLL files 10 | __pycache__/ 11 | *.py[cod] 12 | 13 | # C extensions 14 | *.so 15 | 16 | # Distribution / packaging 17 | .Python 18 | env/ 19 | build/ 20 | develop-eggs/ 21 | dist/ 22 | downloads/ 23 | eggs/ 24 | .eggs/ 25 | lib/ 26 | lib64/ 27 | parts/ 28 | sdist/ 29 | var/ 30 | *.egg-info/ 31 | .installed.cfg 32 | 
*.egg 33 | 34 | # PyInstaller 35 | # Usually these files are written by a python script from a template 36 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 37 | *.manifest 38 | *.spec 39 | 40 | # Installer logs 41 | pip-log.txt 42 | pip-delete-this-directory.txt 43 | 44 | # Unit test / coverage reports 45 | htmlcov/ 46 | .tox/ 47 | .coverage 48 | .coverage.* 49 | .cache 50 | nosetests.xml 51 | coverage.xml 52 | *,cover 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Sphinx documentation 59 | docs/_build/ 60 | 61 | ### Vim ### 62 | [._]*.s[a-w][a-z] 63 | [._]s[a-w][a-z] 64 | *.un~ 65 | Session.vim 66 | .netrwhist 67 | *~ 68 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: 2 | python 3 | python: 4 | - "2.7" 5 | install: 6 | "pip install coveralls -r requirements.txt" 7 | branches: 8 | only: 9 | - master 10 | script: 11 | - export PYTHONPATH=${PYTHONPATH}:$(pwd) 12 | - coverage run --branch --source oppy $(which trial) oppy 13 | after_success: 14 | - coveralls 15 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2014, 2015 Nik Kinkel 2 | 3 | Redistribution and use in source and binary forms, with or without 4 | modification, are permitted provided that the following conditions are 5 | met: 6 | 7 | * Redistributions of source code must retain the above copyright 8 | notice, this list of conditions and the following disclaimer. 9 | 10 | * Redistributions in binary form must reproduce the above 11 | copyright notice, this list of conditions and the following disclaimer 12 | in the documentation and/or other materials provided with the 13 | distribution. 14 | 15 | * Neither the names of the copyright owners nor the names of its 16 | contributors may be used to endorse or promote products derived from 17 | this software without specific prior written permission. 18 | 19 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 20 | "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 21 | LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 22 | A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 23 | OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 24 | SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 25 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 26 | DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 27 | THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 28 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![Build Status](https://travis-ci.org/nskinkel/oppy.svg?branch=master)](https://travis-ci.org/nskinkel/oppy) 2 | 3 | [![Coverage Status](https://coveralls.io/repos/nskinkel/oppy/badge.svg?branch=master)](https://coveralls.io/r/nskinkel/oppy?branch=master) 4 | 5 | #oppy 6 | `oppy` is a Tor onion proxy implementation written in Python. 
Any further references to "Tor" or "tor" 7 | refer to the protocol, unless otherwise noted, and do not imply endorsement 8 | from The Tor Project organization. `oppy` is produced independently from the 9 | Tor® anonymity software and carries no guarantee from The Tor Project about 10 | quality, suitability or anything else. 11 | 12 | `oppy` is [free software](https://fsf.org), distributed under the "modified" 13 | (or 3-clause) BSD license. 14 | 15 | To learn more about what Onion Proxies do, please see `tor-spec.txt`, the Tor 16 | protocol specification. 17 | 18 | For full documentation, see: [oppy-docs](https://nskinkel.github.com/oppy) 19 | 20 | 21 | ###Warning 22 | `oppy` is provided in the hope it will be useful, however **oppy will NOT 23 | provide strong anonymity**. `oppy` is just a prototype: it's not very well 24 | tested yet, and it makes a number of simplifications. 25 | 26 | If you need strong anonymity, please use the 27 | [official Tor software](https://www.torproject.org/download/download-easy.html) 28 | from The Tor Project. 29 | 30 | `oppy` is, at the moment, mainly meant for developers and hackers to play 31 | with. 32 | 33 | 34 | ###Installation 35 | 36 | First, install the dependencies: 37 | 38 | ``` 39 | $ pip install -r requirements.txt 40 | ``` 41 | 42 | Now you're ready to clone this repository: 43 | 44 | ``` 45 | $ git clone https://github.com/nskinkel/oppy 46 | ``` 47 | 48 | Next, `cd` to the top-level `oppy` directory and add it to your python path: 49 | 50 | ``` 51 | $ export PYTHONPATH=$PYTHONPATH:$(pwd) 52 | ``` 53 | 54 | *Note*: the "top-level" directory that should be added to your python path 55 | is the directory containing the `oppy`, `docs`, and `data` directories. 56 | 57 | ###Usage 58 | `oppy` aims to be a fully functional Tor client and can be used just the 59 | same way as a regular Tor client. 60 | 61 | `oppy` supports the following arguments: 62 | 63 | ``` 64 | -l --log-level python log level, defaults to INFO 65 | -f --log-file filename to write logs to, defaults to sys.stdout 66 | -p --SOCKS-port local port for oppy's SOCKS interface to listen on (defaults to 10050) 67 | -h --help print these options 68 | ``` 69 | 70 | To run oppy at the DEBUG log level on port 10050, from the oppy/oppy directory 71 | run: 72 | 73 | ``` 74 | $ ./oppy -l debug -p 10050 75 | ``` 76 | 77 | `oppy` will print some information as it gathers network status documents and 78 | starts building circuits. After the first circuit opens up, `oppy` will be 79 | listening on port 10050 for incoming SOCKS 5 connections. 80 | 81 | You can tell any application that can use a SOCKS 5 proxy to use `oppy` (e.g. 82 | SSH or Firefox) - just configure that application to use SOCKS 5 on localhost 83 | on the port that `oppy` is running on. 84 | 85 | You can also tell the Tor Browser to use `oppy` instead of its own Tor process. 86 | 87 | If you're using a web browser with `oppy`, browse to 88 | [Tor check](https://check.torproject.org) to verify `oppy` is working. 89 | 90 | ####Warning: 91 | You will **not** get strong anonymity by running, say, vanilla Firefox through 92 | a tor process and using "normal" browsing habits. See [a list of warnings](https://www.torproject.org/download/download#warning) for some reasons why this is not sufficient for strong anonymity.
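
If you want a quick, scriptable way to check the SOCKS interface without configuring a browser, you can speak SOCKS 5 to `oppy` directly from the Python standard library. The snippet below is an illustrative sketch only (it is not part of `oppy`): it assumes `oppy` is already running and listening on the default port 10050, that the NO_AUTH method is in use, and that hostname (ATYP 3) CONNECT requests are accepted.

```python
# socks_check.py - minimal SOCKS 5 smoke test against a locally running oppy.
# Illustrative sketch only; error handling and partial reads are omitted.
import socket
import struct

PROXY = ("127.0.0.1", 10050)   # oppy's default SOCKS port
DEST, DEST_PORT = b"check.torproject.org", 80

s = socket.create_connection(PROXY)

# Method negotiation: version 5, one method offered, 0x00 (NO_AUTH).
s.sendall(b"\x05\x01\x00")
assert s.recv(2) == b"\x05\x00", "proxy refused the NO_AUTH method"

# CONNECT request: version 5, cmd 0x01 (CONNECT), reserved, ATYP 0x03 (domain).
request = b"\x05\x01\x00\x03" + struct.pack("!B", len(DEST)) + DEST
request += struct.pack("!H", DEST_PORT)
s.sendall(request)

reply = s.recv(10)             # VER REP RSV ATYP BND.ADDR BND.PORT
assert reply[1:2] == b"\x00", "SOCKS CONNECT failed: %r" % reply

# The socket is now tunneled through the circuit; speak plain HTTP over it.
s.sendall(b"GET / HTTP/1.0\r\nHost: check.torproject.org\r\n\r\n")
print(s.recv(512))
s.close()
```

If `oppy` hasn't finished building its first circuit yet, the SOCKS port won't be listening and the connection will simply be refused, so give it a moment after startup.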
94 | 95 | ###Bugs and Simplifications Made 96 | A few of the major "noticeable" simplifications that directly impact regular 97 | usage include: 98 | 99 | - oppy doesn't know how to recover from RelayEnd cells sent because of 100 | reasons like EXIT_POLICY. In these cases oppy just closes the stream, so 101 | this can sometimes look, to the user, like oppy is just not working. 102 | - oppy doesn't currently calculate circuit build timeouts or try to 103 | rebuild slow circuits (or circuits which become unresponsive). Again, 104 | this can look to the user like oppy has stopped working (e.g. web 105 | pages may stop loading if a stream gets assigned to a slow/unresponsive 106 | circuit). 107 | - oppy doesn't yet put a timeout on downloading server descriptors, 108 | so sometimes this will hang if oppy chooses a bad V2Dir cache. 109 | 110 | For a more complete list of the simplifications oppy makes, see: 111 | simplifications.md. 112 | -------------------------------------------------------------------------------- /data/.gitignore: -------------------------------------------------------------------------------- 1 | cached-consensus 2 | cached-descriptors 3 | -------------------------------------------------------------------------------- /docs/.gitignore: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/docs/.gitignore -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = ../../oppy-docs 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " devhelp to make HTML files and a Devhelp project" 34 | @echo " epub to make an epub" 35 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 36 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 37 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 38 | @echo " text to make text files" 39 | @echo " man to make manual pages" 40 | @echo " texinfo to make Texinfo files" 41 | @echo " info to make Texinfo files and run them through makeinfo" 42 | @echo " gettext to make PO message catalogs" 43 | @echo " changes to make an overview of all changed/added/deprecated items" 44 | @echo " xml to make Docutils-native XML files" 45 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 46 | @echo " linkcheck to check all external links for integrity" 47 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 48 | 49 | clean: 50 | rm -rf $(BUILDDIR)/* 51 | 52 | html: 53 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 54 | @echo 55 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 56 | 57 | dirhtml: 58 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 59 | @echo 60 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 61 | 62 | singlehtml: 63 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 64 | @echo 65 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 66 | 67 | pickle: 68 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 69 | @echo 70 | @echo "Build finished; now you can process the pickle files." 71 | 72 | json: 73 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 74 | @echo 75 | @echo "Build finished; now you can process the JSON files." 76 | 77 | htmlhelp: 78 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 79 | @echo 80 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 81 | ".hhp project file in $(BUILDDIR)/htmlhelp." 82 | 83 | qthelp: 84 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 85 | @echo 86 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 87 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 88 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/oppy.qhcp" 89 | @echo "To view the help file:" 90 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/oppy.qhc" 91 | 92 | devhelp: 93 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 94 | @echo 95 | @echo "Build finished." 96 | @echo "To view the help file:" 97 | @echo "# mkdir -p $$HOME/.local/share/devhelp/oppy" 98 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/oppy" 99 | @echo "# devhelp" 100 | 101 | epub: 102 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 103 | @echo 104 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 
105 | 106 | latex: 107 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 108 | @echo 109 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 110 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 111 | "(use \`make latexpdf' here to do that automatically)." 112 | 113 | latexpdf: 114 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 115 | @echo "Running LaTeX files through pdflatex..." 116 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 117 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 118 | 119 | latexpdfja: 120 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 121 | @echo "Running LaTeX files through platex and dvipdfmx..." 122 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 123 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 124 | 125 | text: 126 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 127 | @echo 128 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 129 | 130 | man: 131 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 132 | @echo 133 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 134 | 135 | texinfo: 136 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 137 | @echo 138 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 139 | @echo "Run \`make' in that directory to run these through makeinfo" \ 140 | "(use \`make info' here to do that automatically)." 141 | 142 | info: 143 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 144 | @echo "Running Texinfo files through makeinfo..." 145 | make -C $(BUILDDIR)/texinfo info 146 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 147 | 148 | gettext: 149 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 150 | @echo 151 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 152 | 153 | changes: 154 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 155 | @echo 156 | @echo "The overview file is in $(BUILDDIR)/changes." 157 | 158 | linkcheck: 159 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 160 | @echo 161 | @echo "Link check complete; look for any errors in the above output " \ 162 | "or in $(BUILDDIR)/linkcheck/output.txt." 163 | 164 | doctest: 165 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 166 | @echo "Testing of doctests in the sources finished, look at the " \ 167 | "results in $(BUILDDIR)/doctest/output.txt." 168 | 169 | xml: 170 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 171 | @echo 172 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 173 | 174 | pseudoxml: 175 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 176 | @echo 177 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 178 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # oppy documentation build configuration file, created by 4 | # sphinx-quickstart on Fri Jan 16 00:51:37 2015. 5 | # 6 | # This file is execfile()d with the current directory set to its 7 | # containing dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 
14 | 15 | import sys 16 | import os 17 | 18 | # If extensions (or modules to document with autodoc) are in another directory, 19 | # add these directories to sys.path here. If the directory is relative to the 20 | # documentation root, use os.path.abspath to make it absolute, like shown here. 21 | sys.path.insert(0, os.path.abspath('../oppy')) 22 | 23 | # -- General configuration ------------------------------------------------ 24 | 25 | # If your documentation needs a minimal Sphinx version, state it here. 26 | #needs_sphinx = '1.0' 27 | 28 | # Add any Sphinx extension module names here, as strings. They can be 29 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 30 | # ones. 31 | extensions = [ 32 | 'sphinx.ext.autodoc', 33 | 'sphinx.ext.doctest', 34 | 'sphinx.ext.todo', 35 | 'sphinx.ext.coverage', 36 | 'sphinx.ext.viewcode', 37 | ] 38 | 39 | autodoc_member_order = 'bysource' 40 | autodoc_default_flags = [ 41 | 'members', 42 | 'show-inheritance', 43 | 'undoc-members', 44 | ] 45 | 46 | # Add any paths that contain templates here, relative to this directory. 47 | templates_path = ['_templates'] 48 | 49 | # The suffix of source filenames. 50 | source_suffix = '.rst' 51 | 52 | # The encoding of source files. 53 | #source_encoding = 'utf-8-sig' 54 | 55 | # The master toctree document. 56 | master_doc = 'index' 57 | 58 | # General information about the project. 59 | project = u'oppy' 60 | copyright = u'2015, Nik Kinkel' 61 | 62 | # The version info for the project you're documenting, acts as replacement for 63 | # |version| and |release|, also used in various other places throughout the 64 | # built documents. 65 | # 66 | # The short X.Y version. 67 | version = '0.1' 68 | # The full version, including alpha/beta/rc tags. 69 | release = '0.1' 70 | 71 | # The language for content autogenerated by Sphinx. Refer to documentation 72 | # for a list of supported languages. 73 | #language = None 74 | 75 | # There are two options for replacing |today|: either, you set today to some 76 | # non-false value, then it is used: 77 | #today = '' 78 | # Else, today_fmt is used as the format for a strftime call. 79 | #today_fmt = '%B %d, %Y' 80 | 81 | # List of patterns, relative to source directory, that match files and 82 | # directories to ignore when looking for source files. 83 | exclude_patterns = ['_build'] 84 | 85 | # The reST default role (used for this markup: `text`) to use for all 86 | # documents. 87 | #default_role = None 88 | 89 | # If true, '()' will be appended to :func: etc. cross-reference text. 90 | #add_function_parentheses = True 91 | 92 | # If true, the current module name will be prepended to all description 93 | # unit titles (such as .. function::). 94 | #add_module_names = True 95 | 96 | # If true, sectionauthor and moduleauthor directives will be shown in the 97 | # output. They are ignored by default. 98 | #show_authors = False 99 | 100 | # The name of the Pygments (syntax highlighting) style to use. 101 | pygments_style = 'sphinx' 102 | 103 | # A list of ignored prefixes for module index sorting. 104 | #modindex_common_prefix = [] 105 | 106 | # If true, keep warnings as "system message" paragraphs in the built documents. 107 | #keep_warnings = False 108 | 109 | 110 | # -- Options for HTML output ---------------------------------------------- 111 | 112 | # The theme to use for HTML and HTML Help pages. See the documentation for 113 | # a list of builtin themes. 
114 | html_theme = 'default' 115 | 116 | # Theme options are theme-specific and customize the look and feel of a theme 117 | # further. For a list of options available for each theme, see the 118 | # documentation. 119 | #html_theme_options = {} 120 | 121 | # Add any paths that contain custom themes here, relative to this directory. 122 | #html_theme_path = [] 123 | 124 | # The name for this set of Sphinx documents. If None, it defaults to 125 | # " v documentation". 126 | #html_title = None 127 | 128 | # A shorter title for the navigation bar. Default is the same as html_title. 129 | #html_short_title = None 130 | 131 | # The name of an image file (relative to this directory) to place at the top 132 | # of the sidebar. 133 | #html_logo = None 134 | 135 | # The name of an image file (within the static path) to use as favicon of the 136 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 137 | # pixels large. 138 | #html_favicon = None 139 | 140 | # Add any paths that contain custom static files (such as style sheets) here, 141 | # relative to this directory. They are copied after the builtin static files, 142 | # so a file named "default.css" will overwrite the builtin "default.css". 143 | html_static_path = ['_static'] 144 | 145 | # Add any extra paths that contain custom files (such as robots.txt or 146 | # .htaccess) here, relative to this directory. These files are copied 147 | # directly to the root of the documentation. 148 | #html_extra_path = [] 149 | 150 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 151 | # using the given strftime format. 152 | #html_last_updated_fmt = '%b %d, %Y' 153 | 154 | # If true, SmartyPants will be used to convert quotes and dashes to 155 | # typographically correct entities. 156 | #html_use_smartypants = True 157 | 158 | # Custom sidebar templates, maps document names to template names. 159 | #html_sidebars = {} 160 | 161 | # Additional templates that should be rendered to pages, maps page names to 162 | # template names. 163 | #html_additional_pages = {} 164 | 165 | # If false, no module index is generated. 166 | #html_domain_indices = True 167 | 168 | # If false, no index is generated. 169 | #html_use_index = True 170 | 171 | # If true, the index is split into individual pages for each letter. 172 | #html_split_index = False 173 | 174 | # If true, links to the reST sources are added to the pages. 175 | #html_show_sourcelink = True 176 | 177 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 178 | #html_show_sphinx = True 179 | 180 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 181 | #html_show_copyright = True 182 | html_show_copyright = False 183 | 184 | # If true, an OpenSearch description file will be output, and all pages will 185 | # contain a tag referring to it. The value of this option must be the 186 | # base URL from which the finished HTML is served. 187 | #html_use_opensearch = '' 188 | 189 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 190 | #html_file_suffix = None 191 | 192 | # Output file base name for HTML help builder. 193 | htmlhelp_basename = 'oppydoc' 194 | 195 | 196 | # -- Options for LaTeX output --------------------------------------------- 197 | 198 | latex_elements = { 199 | # The paper size ('letterpaper' or 'a4paper'). 200 | #'papersize': 'letterpaper', 201 | 202 | # The font size ('10pt', '11pt' or '12pt'). 
203 | #'pointsize': '10pt', 204 | 205 | # Additional stuff for the LaTeX preamble. 206 | #'preamble': '', 207 | } 208 | 209 | # Grouping the document tree into LaTeX files. List of tuples 210 | # (source start file, target name, title, 211 | # author, documentclass [howto, manual, or own class]). 212 | latex_documents = [ 213 | ('index', 'oppy.tex', u'oppy Documentation', 214 | u'Nik Kinkel', 'manual'), 215 | ] 216 | 217 | # The name of an image file (relative to this directory) to place at the top of 218 | # the title page. 219 | #latex_logo = None 220 | 221 | # For "manual" documents, if this is true, then toplevel headings are parts, 222 | # not chapters. 223 | #latex_use_parts = False 224 | 225 | # If true, show page references after internal links. 226 | #latex_show_pagerefs = False 227 | 228 | # If true, show URL addresses after external links. 229 | #latex_show_urls = False 230 | 231 | # Documents to append as an appendix to all manuals. 232 | #latex_appendices = [] 233 | 234 | # If false, no module index is generated. 235 | #latex_domain_indices = True 236 | 237 | 238 | # -- Options for manual page output --------------------------------------- 239 | 240 | # One entry per manual page. List of tuples 241 | # (source start file, name, description, authors, manual section). 242 | man_pages = [ 243 | ('index', 'oppy', u'oppy Documentation', 244 | [u'Nik Kinkel'], 1) 245 | ] 246 | 247 | # If true, show URL addresses after external links. 248 | #man_show_urls = False 249 | 250 | 251 | # -- Options for Texinfo output ------------------------------------------- 252 | 253 | # Grouping the document tree into Texinfo files. List of tuples 254 | # (source start file, target name, title, author, 255 | # dir menu entry, description, category) 256 | texinfo_documents = [ 257 | ('index', 'oppy', u'oppy Documentation', 258 | u'Nik Kinkel', 'oppy', 'One line description of project.', 259 | 'Miscellaneous'), 260 | ] 261 | 262 | # Documents to append as an appendix to all manuals. 263 | #texinfo_appendices = [] 264 | 265 | # If false, no module index is generated. 266 | #texinfo_domain_indices = True 267 | 268 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 269 | #texinfo_show_urls = 'footnote' 270 | 271 | # If true, do not generate a @detailmenu in the "Top" node's menu. 272 | #texinfo_no_detailmenu = False 273 | 274 | 275 | # Example configuration for intersphinx: refer to the Python standard library. 276 | intersphinx_mapping = {'http://docs.python.org/': None} 277 | -------------------------------------------------------------------------------- /docs/docs.rst: -------------------------------------------------------------------------------- 1 | Documentation 2 | ------------- 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | docs/cell 8 | docs/circuit 9 | docs/connection 10 | docs/crypto 11 | docs/netstatus 12 | docs/path 13 | docs/socks 14 | docs/stream 15 | docs/util 16 | 17 | -------------------------------------------------------------------------------- /docs/docs/cell.rst: -------------------------------------------------------------------------------- 1 | cell 2 | ---- 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | cell/cell 8 | cell/fixedlen 9 | cell/relay 10 | cell/varlen 11 | cell/exceptions 12 | cell/util 13 | 14 | -------------------------------------------------------------------------------- /docs/docs/cell/cell.rst: -------------------------------------------------------------------------------- 1 | cell 2 | ---- 3 | 4 | .. 
automodule:: cell.cell 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/cell/exceptions.rst: -------------------------------------------------------------------------------- 1 | exceptions 2 | ---------- 3 | 4 | .. automodule:: cell.exceptions 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/cell/fixedlen.rst: -------------------------------------------------------------------------------- 1 | fixedlen 2 | -------- 3 | 4 | .. automodule:: cell.fixedlen 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/cell/relay.rst: -------------------------------------------------------------------------------- 1 | relay 2 | ----- 3 | 4 | .. automodule:: cell.relay 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/cell/util.rst: -------------------------------------------------------------------------------- 1 | util 2 | ---- 3 | 4 | .. automodule:: cell.util 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/cell/varlen.rst: -------------------------------------------------------------------------------- 1 | varlen 2 | ------ 3 | 4 | .. automodule:: cell.varlen 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/circuit.rst: -------------------------------------------------------------------------------- 1 | circuit 2 | ------- 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | circuit/circuitmanager 8 | circuit/circuit 9 | circuit/ntorfsm 10 | circuit/exceptions 11 | 12 | -------------------------------------------------------------------------------- /docs/docs/circuit/circuit.rst: -------------------------------------------------------------------------------- 1 | circuit 2 | ------- 3 | 4 | .. automodule:: circuit.circuit 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/circuit/circuitmanager.rst: -------------------------------------------------------------------------------- 1 | circuitmanager 2 | -------------- 3 | 4 | .. automodule:: circuit.circuitmanager 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/circuit/exceptions.rst: -------------------------------------------------------------------------------- 1 | exceptions 2 | ---------- 3 | 4 | .. automodule:: circuit.handshake.exceptions 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/circuit/ntorfsm.rst: -------------------------------------------------------------------------------- 1 | ntorfsm 2 | -------- 3 | 4 | .. automodule:: circuit.handshake.ntorfsm 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/connection.rst: -------------------------------------------------------------------------------- 1 | connection 2 | ---------- 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | connection/connectionpool 8 | connection/connection 9 | connection/v3 10 | connection/exceptions 11 | 12 | -------------------------------------------------------------------------------- /docs/docs/connection/connection.rst: -------------------------------------------------------------------------------- 1 | connection 2 | ---------- 3 | 4 | .. 
automodule:: connection.connection 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/connection/connectionpool.rst: -------------------------------------------------------------------------------- 1 | connectionpool 2 | -------------- 3 | 4 | .. automodule:: connection.connectionpool 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/connection/exceptions.rst: -------------------------------------------------------------------------------- 1 | exceptions 2 | ---------- 3 | 4 | .. automodule:: connection.handshake.exceptions 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/connection/v3.rst: -------------------------------------------------------------------------------- 1 | v3 2 | -- 3 | 4 | .. automodule:: connection.handshake.v3 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/crypto.rst: -------------------------------------------------------------------------------- 1 | crypto 2 | ------ 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | crypto/ntorhandshake 8 | crypto/relaycrypto 9 | crypto/util 10 | crypto/exceptions 11 | 12 | -------------------------------------------------------------------------------- /docs/docs/crypto/exceptions.rst: -------------------------------------------------------------------------------- 1 | exceptions 2 | ---------- 3 | 4 | .. automodule:: crypto.exceptions 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/crypto/ntorhandshake.rst: -------------------------------------------------------------------------------- 1 | ntorhandshake 2 | ------------- 3 | 4 | .. automodule:: crypto.ntorhandshake 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/crypto/relaycrypto.rst: -------------------------------------------------------------------------------- 1 | relaycrypto 2 | ----------- 3 | 4 | .. automodule:: crypto.relaycrypto 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/crypto/util.rst: -------------------------------------------------------------------------------- 1 | util 2 | ---- 3 | 4 | .. automodule:: crypto.util 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/netstatus.rst: -------------------------------------------------------------------------------- 1 | netstatus 2 | --------- 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | netstatus/netstatus 8 | netstatus/exceptions 9 | 10 | -------------------------------------------------------------------------------- /docs/docs/netstatus/exceptions.rst: -------------------------------------------------------------------------------- 1 | exceptions 2 | ---------- 3 | 4 | .. automodule:: netstatus.exceptions 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/netstatus/netstatus.rst: -------------------------------------------------------------------------------- 1 | netstatus 2 | --------- 3 | 4 | .. automodule:: netstatus.netstatus 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/path.rst: -------------------------------------------------------------------------------- 1 | path 2 | ---- 3 | 4 | .. 
toctree:: 5 | :maxdepth: 1 6 | 7 | path/path 8 | path/exceptions 9 | 10 | -------------------------------------------------------------------------------- /docs/docs/path/exceptions.rst: -------------------------------------------------------------------------------- 1 | exceptions 2 | ---------- 3 | 4 | .. automodule:: path.exceptions 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/path/path.rst: -------------------------------------------------------------------------------- 1 | path 2 | ---- 3 | 4 | .. automodule:: path.path 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/socks.rst: -------------------------------------------------------------------------------- 1 | socks 2 | ----- 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | socks/socks 8 | 9 | -------------------------------------------------------------------------------- /docs/docs/socks/socks.rst: -------------------------------------------------------------------------------- 1 | socks 2 | ----- 3 | 4 | .. automodule:: socks.socks 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/stream.rst: -------------------------------------------------------------------------------- 1 | stream 2 | ------ 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | stream/stream 8 | 9 | -------------------------------------------------------------------------------- /docs/docs/stream/stream.rst: -------------------------------------------------------------------------------- 1 | stream 2 | ------ 3 | 4 | .. automodule:: stream.stream 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/util.rst: -------------------------------------------------------------------------------- 1 | util 2 | ---- 3 | 4 | .. toctree:: 5 | :maxdepth: 1 6 | 7 | util/exitrequest 8 | util/tools 9 | 10 | -------------------------------------------------------------------------------- /docs/docs/util/exitrequest.rst: -------------------------------------------------------------------------------- 1 | exitrequest 2 | ----------- 3 | 4 | .. automodule:: util.exitrequest 5 | 6 | -------------------------------------------------------------------------------- /docs/docs/util/tools.rst: -------------------------------------------------------------------------------- 1 | tools 2 | ----- 3 | 4 | .. automodule:: util.tools 5 | 6 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | Welcome to oppy's documentation! 2 | ================================ 3 | 4 | oppy is an Onion Proxy (OP) written in Python, aiming to implement the OP 5 | functionality of the Tor protocol as outlined in tor-spec. oppy does not 6 | implement Onion Routing (OR) functionality. Any further references to "Tor" or "tor" refer to the protocol, unless otherwise noted, and do not imply 7 | endorsement from The Tor Project organization. oppy is produced independently 8 | from the Tor® anonymity software and carries no guarantee from The Tor Project 9 | about quality, suitability or anything else. 10 | 11 | oppy is `free software `_, licensed under the "modified" 12 | (or 3-clause) BSD license. 13 | 14 | .. warning:: 15 | 16 | oppy is provided in the hope it will be useful, however **oppy will NOT 17 | provide strong anonymity**. 
If you need strong anonymity, please use the 18 | `official Tor software `_ 19 | from The Tor Project. 20 | 21 | A short, non-exhaustive list of the reasons you should not use oppy for 22 | anonymity purposes: 23 | 24 | - oppy is not well tested. It has bugs, probably lots of them, many 25 | probably severe. 26 | - oppy (probably) leaks DNS requests under some conditions. 27 | - oppy will leave you vulnerable to certain kinds of profiling 28 | attacks. 29 | - oppy does not safely handle cryptographic key material. 30 | 31 | Again, do **NOT** use oppy if you want anonymity. 32 | 33 | 34 | Contents: 35 | --------- 36 | 37 | .. toctree:: 38 | :maxdepth: 1 39 | 40 | overview 41 | installation 42 | usage 43 | simplifications 44 | roadmap 45 | docs 46 | 47 | 48 | Indices and tables 49 | ================== 50 | 51 | * :ref:`genindex` 52 | * :ref:`modindex` 53 | * :ref:`search` 54 | 55 | -------------------------------------------------------------------------------- /docs/installation.rst: -------------------------------------------------------------------------------- 1 | Installation 2 | ------------ 3 | 4 | First, clone the git repo:: 5 | 6 | git clone https://github.com/nskinkel/oppy 7 | 8 | oppy needs pynacl >= 0.3.0 to support the c.crypto_scalarmult() function. 9 | The version in pypi is old and does not support this function yet, so clone 10 | pynacl and follow the installation instructions in the repo:: 11 | 12 | git clone https://github.com/pyca/pynacl 13 | 14 | Then make sure you have the following packages installed (these can all be 15 | installed using pip):: 16 | 17 | twisted >= 14.0 18 | ipaddress 19 | stem 20 | hkdf 21 | pycrypto 22 | pyopenssl 23 | 24 | Finally, cd into the oppy directory and add oppy to your $PYTHONPATH:: 25 | 26 | export PYTHONPATH=$PYTHONPATH:$(pwd) 27 | 28 | oppy should be working now! From the oppy/oppy directory, run:: 29 | 30 | ./oppy 31 | 32 | To see the command line arguments, run:: 33 | 34 | ./oppy -h 35 | 36 | or see :ref:`usage `. 37 | 38 | Coming soon: a *setup.py* file and better installation process! 39 | 40 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^` where ^ is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 31 | echo. text to make text files 32 | echo. man to make manual pages 33 | echo. texinfo to make Texinfo files 34 | echo. gettext to make PO message catalogs 35 | echo. 
changes to make an overview over all changed/added/deprecated items 36 | echo. xml to make Docutils-native XML files 37 | echo. pseudoxml to make pseudoxml-XML files for display purposes 38 | echo. linkcheck to check all external links for integrity 39 | echo. doctest to run all doctests embedded in the documentation if enabled 40 | goto end 41 | ) 42 | 43 | if "%1" == "clean" ( 44 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 45 | del /q /s %BUILDDIR%\* 46 | goto end 47 | ) 48 | 49 | 50 | %SPHINXBUILD% 2> nul 51 | if errorlevel 9009 ( 52 | echo. 53 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 54 | echo.installed, then set the SPHINXBUILD environment variable to point 55 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 56 | echo.may add the Sphinx directory to PATH. 57 | echo. 58 | echo.If you don't have Sphinx installed, grab it from 59 | echo.http://sphinx-doc.org/ 60 | exit /b 1 61 | ) 62 | 63 | if "%1" == "html" ( 64 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 65 | if errorlevel 1 exit /b 1 66 | echo. 67 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 68 | goto end 69 | ) 70 | 71 | if "%1" == "dirhtml" ( 72 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 73 | if errorlevel 1 exit /b 1 74 | echo. 75 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 76 | goto end 77 | ) 78 | 79 | if "%1" == "singlehtml" ( 80 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 81 | if errorlevel 1 exit /b 1 82 | echo. 83 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 84 | goto end 85 | ) 86 | 87 | if "%1" == "pickle" ( 88 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 89 | if errorlevel 1 exit /b 1 90 | echo. 91 | echo.Build finished; now you can process the pickle files. 92 | goto end 93 | ) 94 | 95 | if "%1" == "json" ( 96 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 97 | if errorlevel 1 exit /b 1 98 | echo. 99 | echo.Build finished; now you can process the JSON files. 100 | goto end 101 | ) 102 | 103 | if "%1" == "htmlhelp" ( 104 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 105 | if errorlevel 1 exit /b 1 106 | echo. 107 | echo.Build finished; now you can run HTML Help Workshop with the ^ 108 | .hhp project file in %BUILDDIR%/htmlhelp. 109 | goto end 110 | ) 111 | 112 | if "%1" == "qthelp" ( 113 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 114 | if errorlevel 1 exit /b 1 115 | echo. 116 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 117 | .qhcp project file in %BUILDDIR%/qthelp, like this: 118 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\oppy.qhcp 119 | echo.To view the help file: 120 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\oppy.ghc 121 | goto end 122 | ) 123 | 124 | if "%1" == "devhelp" ( 125 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 126 | if errorlevel 1 exit /b 1 127 | echo. 128 | echo.Build finished. 129 | goto end 130 | ) 131 | 132 | if "%1" == "epub" ( 133 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 134 | if errorlevel 1 exit /b 1 135 | echo. 136 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 137 | goto end 138 | ) 139 | 140 | if "%1" == "latex" ( 141 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 142 | if errorlevel 1 exit /b 1 143 | echo. 144 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 
145 | goto end 146 | ) 147 | 148 | if "%1" == "latexpdf" ( 149 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 150 | cd %BUILDDIR%/latex 151 | make all-pdf 152 | cd %BUILDDIR%/.. 153 | echo. 154 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 155 | goto end 156 | ) 157 | 158 | if "%1" == "latexpdfja" ( 159 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 160 | cd %BUILDDIR%/latex 161 | make all-pdf-ja 162 | cd %BUILDDIR%/.. 163 | echo. 164 | echo.Build finished; the PDF files are in %BUILDDIR%/latex. 165 | goto end 166 | ) 167 | 168 | if "%1" == "text" ( 169 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 170 | if errorlevel 1 exit /b 1 171 | echo. 172 | echo.Build finished. The text files are in %BUILDDIR%/text. 173 | goto end 174 | ) 175 | 176 | if "%1" == "man" ( 177 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 178 | if errorlevel 1 exit /b 1 179 | echo. 180 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 181 | goto end 182 | ) 183 | 184 | if "%1" == "texinfo" ( 185 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 186 | if errorlevel 1 exit /b 1 187 | echo. 188 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 189 | goto end 190 | ) 191 | 192 | if "%1" == "gettext" ( 193 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 194 | if errorlevel 1 exit /b 1 195 | echo. 196 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 197 | goto end 198 | ) 199 | 200 | if "%1" == "changes" ( 201 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 202 | if errorlevel 1 exit /b 1 203 | echo. 204 | echo.The overview file is in %BUILDDIR%/changes. 205 | goto end 206 | ) 207 | 208 | if "%1" == "linkcheck" ( 209 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 210 | if errorlevel 1 exit /b 1 211 | echo. 212 | echo.Link check complete; look for any errors in the above output ^ 213 | or in %BUILDDIR%/linkcheck/output.txt. 214 | goto end 215 | ) 216 | 217 | if "%1" == "doctest" ( 218 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 219 | if errorlevel 1 exit /b 1 220 | echo. 221 | echo.Testing of doctests in the sources finished, look at the ^ 222 | results in %BUILDDIR%/doctest/output.txt. 223 | goto end 224 | ) 225 | 226 | if "%1" == "xml" ( 227 | %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml 228 | if errorlevel 1 exit /b 1 229 | echo. 230 | echo.Build finished. The XML files are in %BUILDDIR%/xml. 231 | goto end 232 | ) 233 | 234 | if "%1" == "pseudoxml" ( 235 | %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml 236 | if errorlevel 1 exit /b 1 237 | echo. 238 | echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. 239 | goto end 240 | ) 241 | 242 | :end 243 | -------------------------------------------------------------------------------- /docs/overview.rst: -------------------------------------------------------------------------------- 1 | Overview 2 | -------- 3 | 4 | oppy is an Onion Proxy written in Python, implementing the client 5 | functionality of the Tor protocol. Right now oppy is just a prototype, and 6 | it does not fully implement the Tor protocol due to a number of 7 | :ref:`simplifications `. 8 | 9 | oppy uses Twisted for asynchronous networking, Stem for parsing and 10 | representing network status documents and relay descriptors, and PyCrypto 11 | and PyOpenSSL operations. 12 | 13 | In general, oppy can be used the same way as a normal tor process. See 14 | :ref:`usage ` for more information. 
15 | 16 | This documentation, like the rest of oppy, is still a work in progress :) 17 | 18 | -------------------------------------------------------------------------------- /docs/roadmap.rst: -------------------------------------------------------------------------------- 1 | Project Roadmap 2 | --------------- 3 | 4 | Coming Soon! 5 | 6 | -------------------------------------------------------------------------------- /docs/simplifications.rst: -------------------------------------------------------------------------------- 1 | .. _simplifications-label: 2 | 3 | Simplifications and Unimplemented Functionality 4 | ----------------------------------------------- 5 | 6 | Here we aim to document the major simplifications oppy makes and OP 7 | functionality that oppy does not yet implement. Some of the items listed here 8 | are **required** for Tor OPs, and some just reflect the behavior tor itself has 9 | when running as an OP. 10 | 11 | This is a living document, subject to (possibly frequent) change. It is not 12 | comprehensive; oppy still makes some simplifications that are not listed here. 13 | 14 | **Major user-facing simplifications** 15 | 16 | These are the major "noticeable" things that are simplified/not implented. 17 | These may also appear below in their appropriate section. 18 | 19 | - circuits and streams don't know how to recover from RelayEnd cells sent 20 | because of reasons other than CONNECTION_DONE. If we get a RelayEnd 21 | due to, say, reason EXIT_POLICY, oppy will just not be able to handle 22 | this request. oppy doesn't know how to try this request on a different 23 | circuit yet. 24 | - circuit build timeouts are not calculated. this means that sometimes 25 | circuits are slooooow to be built or may not be built at all. this also 26 | means that oppy sometimes gets slower circuits than it should. 27 | - oppy doesn't know how to tear-down a slow/broken circuit yet. If a 28 | circuit is just too slow to be usable or, at some point, just stops 29 | responding, oppy doesn't yet know that it should tear it down and 30 | build a new one. 31 | - oppy doesn't set a timeout on network status downloads, so sometimes 32 | these will just hang if we choose a bad V2Dir cache. 33 | 34 | **Cells** 35 | 36 | - oppy does not implement all types of cells - only (most of) the kinds 37 | that an OP needs. 38 | - RELAY_RESOLVE(D) cells are not implemented 39 | - oppy does not implement the "make()" helper method for all types of 40 | implemented cells - only those that we currently need to build (e.g. 41 | for backward only cells, oppy doesn't implement the helper method) 42 | 43 | **Circuits** 44 | 45 | - oppy does not rotate circuits 46 | - oppy does not attempt to recover from RelayEnd cells received due to 47 | reasons other than CONNECTION_DONE. For instance, if oppy receives a 48 | RelayEnd cell with reason EXIT_POLICY, oppy doesn't know how to try 49 | this connection on another circuit and just closes the stream. 50 | - oppy does not calculate circuit build timeouts 51 | - oppy does not tear-down slow circuits. sometimes circuits may be really 52 | slow or stop working properly. oppy doesn't know how to recover from this 53 | yet. 54 | - oppy does not support the TAP handshake. 55 | - oppy doesn't know how to build internal circuits and/or access hidden 56 | services. 57 | - oppy doesn't know how to rebuild circuits. If oppy receives a 58 | RelayTruncated cell, the circuit is just immediately destroyed. 59 | - oppy does not cannabalize circuits. 
60 | - oppy does not take into account bandwidth usage/history when assigning 61 | new streams to open circuits. 62 | - oppy doesn't currently mark circuits as "clean" or "dirty". circuits 63 | are either "PENDING" (i.e. being built and currently trying to extend), 64 | "OPEN" (i.e. accepting new streams and forwarding traffic), or 65 | "BUFFERING" (waiting for a RelaySendMeCell), and that's the only real 66 | state information circuits have. 67 | - oppy does not know how to use RELAY_RESOLVE cells and, consequently, 68 | does not make any *resolve* circuits 69 | - oppy doesn't know how to build directory circuits 70 | 71 | **Connections** 72 | 73 | - oppy only knows how to talk Link Protocol Version 3 (although 74 | functionality for version 4 is mostly there, at least in cells, just not 75 | tested yet) 76 | - oppy does not use the "this_or_addresses" field in a received NetInfoCell 77 | to verify we've connected to the correct OR address 78 | 79 | **Crypto** 80 | 81 | - oppy doesn't handle clearing/wiping private keys properly (really, crypto 82 | should be handled in C modules) 83 | 84 | **Path Selection** 85 | 86 | - oppy does not take bandwidth into account when choosing paths 87 | - oppy always uses a default set of required flags for each node position 88 | in a path. these flags are probably not the correct flags to be using. 89 | - oppy only chooses relays that support the ntor handshake. 90 | - oppy does not use entry guards. 91 | - oppy does not mark relays as *down* if they are unreachable. 92 | 93 | **Network Status Documents** 94 | 95 | - oppy doesn't know how to build or use directory circuits, so all 96 | network status document requests are just HTTP requests to V2Dir caches 97 | or directory authorities 98 | - oppy just downloads all server descriptors at once instead of splitting 99 | up the downloads between multiple V2Dir caches 100 | - oppy does not check whether or not we have the "best" server descriptor 101 | before downloading new descriptors. Currently, oppy just downloads all 102 | server descriptors everytime it grabs a fresh consensus. 103 | - oppy does not schedule new consensus downloads at the correct time 104 | interval. currently oppy just downloads new network status documents 105 | every hour. 106 | 107 | **SOCKS** 108 | 109 | - oppy only supports the NO_AUTH method 110 | - oppy does not yet implement the tor socks extensions (e.g. for the 111 | RESOLVE command) 112 | - oppy does not implement the "HTTP-Resistance" that tor does 113 | - oppy does not support optimistic data 114 | 115 | **Streams** 116 | 117 | - streams do not check how many cells still need to be flushed before 118 | sending a RelaySendMeCell. streams just send a SendMe cell as soon as 119 | their window reaches the SendMe threshold. 120 | 121 | -------------------------------------------------------------------------------- /docs/usage.rst: -------------------------------------------------------------------------------- 1 | .. _usage-label: 2 | 3 | Usage 4 | ----- 5 | 6 | oppy aims to be a fully functional Tor client and can be used just the same 7 | way as a regular Tor client. 
8 | 9 | oppy currently supports the following command line arguments:: 10 | 11 | -l --log-level python log level, defaults to INFO 12 | -f --log-file filename to write logs to, defaults to sys.stdout 13 | -p --SOCKS-port local port for SOCKS interface to listen on 14 | -h --help print these help options 15 | 16 | To run oppy at the DEBUG log level on port 10050, from the oppy/oppy directory 17 | run:: 18 | 19 | $ ./oppy -l debug -p 10050 20 | 21 | Now just configure any local application to use this SOCKS port like you 22 | would for a regular tor process. 23 | 24 | oppy will print some information as it gathers network status documents and 25 | starts building circuits. After the first circuit opens up, oppy will be 26 | listening on port 10050 for incoming SOCKS 5 connections. 27 | 28 | You can tell any application that can use a SOCKS 5 proxy to use oppy (e.g. 29 | SSH or Firefox) - just configure that application to use SOCKS 5 on localhost 30 | on the port that oppy is running on. 31 | 32 | You can also tell the Tor Browser to use oppy instead of its own Tor process. 33 | 34 | If you're using a web browser with oppy, browse to 35 | `Tor check `_ to verify oppy is working. 36 | 37 | .. warning:: 38 | 39 | You will **not** get strong anonymity by running, say, vanilla Firefox 40 | through a tor process and using "normal" browsing habits. See 41 | `a list of warnings `_ 42 | for some reasons why this is not sufficient for strong anonymity. 43 | 44 | -------------------------------------------------------------------------------- /oppy/__init__.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | import os 5 | base_dir = os.path.dirname(os.path.abspath(__file__)) 6 | data_dir = base_dir + "/../data/" 7 | -------------------------------------------------------------------------------- /oppy/cell/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/cell/__init__.py -------------------------------------------------------------------------------- /oppy/cell/cell.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel and David Johnston 2 | # See LICENSE for licensing information 3 | 4 | import abc 5 | import struct 6 | 7 | import oppy.cell.definitions as DEF 8 | 9 | from oppy.cell.exceptions import NotEnoughBytes, UnknownCellCommand 10 | 11 | 12 | class Cell(object): 13 | '''An abstract base class for other kinds of cells.''' 14 | 15 | __metaclass__ = abc.ABCMeta 16 | _subclass_map = None 17 | 18 | def getPayload(self): 19 | '''Return just the payload bytes of a cell. 20 | 21 | For fixed-length cells, pad with null bytes to the appropriate length 22 | according to Link Protocol version. Primarily useful for encrypting or 23 | decrypting cells. 24 | 25 | :returns: **str** cell payload bytes 26 | ''' 27 | start, _ = self.payloadRange() 28 | return self.getBytes()[start:] 29 | 30 | @abc.abstractmethod 31 | def payloadRange(self): 32 | '''Return the (start, end) indices of this cell's payload as a 33 | 2-tuple. 34 | 35 | :returns: **tuple, int** (start, end) payload indices.''' 36 | pass 37 | 38 | @staticmethod 39 | def enoughDataForCell(data, link_version=3): 40 | '''Return True iff the str **data** contains enough bytes to build a 41 | cell. 
42 | 43 | The command byte is checked to determine the general type of 44 | cell to look for. For fixed-length cells, this is enough to know 45 | how much data is required. For variable-length cells, additionally 46 | check the length bytes. 47 | 48 | :param str data: The raw string to check. 49 | :param int link_version: Link Protocol version in use. In version 50 | 3, 512 bytes are required for a fixed-length cell. In version 51 | 4, 514 bytes are required. 52 | :returns: **bool** that's **True** iff data is long enough to 53 | build the type of cell indicated by the command byte. 54 | ''' 55 | if link_version < 4: 56 | fmt = "!HB" 57 | header_len = DEF.PAYLOAD_START_V3 58 | required_length = DEF.FIXED_LEN_V3_LEN 59 | else: 60 | fmt = "!IB" 61 | header_len = DEF.PAYLOAD_START_V4 62 | required_length = DEF.FIXED_LEN_V4_LEN 63 | 64 | if len(data) < header_len: 65 | return False 66 | _, cmd = struct.unpack(fmt, data[:header_len]) 67 | if cmd in DEF.FIXED_LEN_CMD_IDS: 68 | return len(data) >= required_length 69 | elif cmd in DEF.VAR_LEN_CMD_IDS: 70 | required_len = struct.unpack('!H', 71 | data[header_len:header_len + 2])[0] 72 | return len(data) >= required_len 73 | else: 74 | msg = "Unknown cell cmd: {}.".format(cmd) 75 | raise UnknownCellCommand(msg) 76 | 77 | # TODO: document exceptions that can be raised 78 | @staticmethod 79 | def parse(data, link_version=3, encrypted=False): 80 | '''Return an instance of a cell constructed from the str data. 81 | 82 | If encrypted is True and the type if cell is RELAY or RELAY_EARLY, 83 | don't try to parse the payload and just return a 84 | :class:`~oppy.cell.fixedlen.EncryptedCell`. 85 | Otherwise, instantiate and return the appropriate cell type. 86 | 87 | .. note:: *data* str is not modified. 88 | 89 | :param str data: raw bytes to parse and extract a cell from 90 | :param int link_version: Link Protocol version in use. For fixed- 91 | length cells, this parameter dictates whether we expect 512 92 | bytes (Link Protocol <= 3) or 514 bytes. 93 | :param bool encrypted: whether or not we think this cell is 94 | encrypted. If True and we see a RELAY or RELAY_EARLY command 95 | do not attempt to parse payload. 96 | 97 | :returns: instantiated cell type as dictated by the command byte, 98 | parsed and extracted from data. 99 | ''' 100 | if not 1 <= link_version <= 4: 101 | msg = "link_version must be leq 4, but found {} instead" 102 | raise ValueError(msg.format(link_version)) 103 | 104 | fmt = "!HB" if link_version <= 3 else "!IB" 105 | header_len = struct.calcsize(fmt) 106 | 107 | if len(data) < header_len: 108 | raise NotEnoughBytes() 109 | 110 | circ_id, cmd = struct.unpack(fmt, data[:header_len]) 111 | 112 | if cmd not in DEF.CELL_CMD_IDS: 113 | msg = "When parsing cell data, found an unknown cmd: {}." 114 | raise UnknownCellCommand(msg.format(cmd)) 115 | 116 | if cmd in DEF.VAR_LEN_CMD_IDS: 117 | from oppy.cell.varlen import VarLenCell 118 | cls = VarLenCell 119 | # only try to create a concrete relay cell subclass if payload 120 | # is not encrypted 121 | elif encrypted is False and (cmd == DEF.RELAY_CMD or cmd == DEF.RELAY_EARLY_CMD): 122 | from oppy.cell.relay import RelayCell 123 | cls = RelayCell 124 | else: 125 | from oppy.cell.fixedlen import FixedLenCell 126 | cls = FixedLenCell 127 | 128 | # Instantiate the appropriate kind of header, variable-length or fixed- 129 | # length. 
130 | h = cls.Header(circ_id=circ_id, cmd=cmd, link_version=link_version) 131 | return cls._parse(data, h) 132 | 133 | @classmethod 134 | def _parse(cls, data, header): 135 | '''Use the given cell data and (partial) cell header information to 136 | instantiate a cell object of the appropriate type. 137 | 138 | .. note:: *header.cmd* and *header.link_version* must be set by the 139 | caller. 140 | 141 | This is expected to be called only by *Cell.parse()*. *cls* is 142 | expected to be one of the three abstract types of cells: 143 | 144 | - :class:`~oppy.cell.fixedlen.FixedLenCell` 145 | - :class:`~oppy.cell.varlen.VarLenCell` 146 | - :class:`~oppy.cell.relay.RelayCell` 147 | 148 | This function uses attributes of the *cls* object to parse 149 | the given data. 150 | 151 | :param str data: The data to be converted into a cell instance. 152 | :param :class:`~oppy.cell.cell.Cell.Header` header: header 153 | containing some previously parsed info (may be either a 154 | :class:`~oppy.cell.fixedlen.FixedLenCell.Header` or 155 | :class:`~oppy.cell.varlen.VarLenCell.Header`). 156 | ''' 157 | 158 | if not isinstance(header, cls.Header): 159 | raise TypeError("The given header object has the wrong type.") 160 | if header.cmd is None or header.link_version is None: 161 | raise ValueError("Fields of the given header object are invalid.") 162 | 163 | # Construct a cell of the appropriate concrete type. 164 | subclass = cls._getSubclass(header, data) 165 | cell = subclass(header) 166 | 167 | # Parse additional information from data and add it to the new cell. 168 | cell._parseHeader(data) 169 | if len(data) < len(cell): 170 | fmt = "Needed {} bytes to finish parsing data; only found {}." 171 | msg = fmt.format(len(cell), len(data)) 172 | raise NotEnoughBytes(msg) 173 | cell._parsePayload(data) 174 | return cell 175 | 176 | @classmethod 177 | def _getSubclass(cls, header, data): 178 | '''Use *header* to interpret the given cell data. 179 | 180 | A cell type which will be appropriate for encapsulating/representing 181 | this cell data is then selected and returned. 182 | 183 | :param cls.Header header: the header in use for this cell 184 | :param str data: raw str to parse 185 | :returns: Concrete subclass of *cls* 186 | ''' 187 | if cls._subclass_map is None: 188 | cls._initSubclassMap() 189 | return cls._subclass_map[header.cmd] 190 | 191 | @abc.abstractmethod 192 | def _parseHeader(self, data): 193 | '''Parse any remaining header information from *data*. 194 | ''' 195 | pass 196 | 197 | @abc.abstractmethod 198 | def _parsePayload(self, data): 199 | '''Parse payload information from *data*. 200 | 201 | This process depends upon the header-parsing process being complete. 
202 | ''' 203 | pass 204 | 205 | def __repr__(self): 206 | fmt = type(self).__name__ + "(header={}, payload={})" 207 | return fmt.format(self.header, repr(self.payload)) 208 | 209 | def __len__(self): 210 | _, end = self.payloadRange() 211 | return end 212 | 213 | def __eq__(self, other): 214 | if type(self) is type(other): 215 | return self.__dict__ == other.__dict__ 216 | return False 217 | 218 | class Header(object): 219 | '''A dummy header type that exists only to be overridden by classes 220 | that inherit from :class:`~oppy.cell.cell.Cell`.''' 221 | def __init__(self): 222 | raise NotImplementedError("This is an abstract class.") 223 | -------------------------------------------------------------------------------- /oppy/cell/definitions.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel and David Johnston 2 | # See LICENSE for licensing information 3 | 4 | PAYLOAD_START_V3 = 3 5 | PAYLOAD_START_V4 = 5 6 | 7 | PAYLOAD_FIELD_LEN = 2 8 | 9 | MAX_PAYLOAD_LEN = 509 10 | MAX_RPAYLOAD_LEN = 498 11 | 12 | FIXED_LEN_V3_LEN = 512 13 | FIXED_LEN_V4_LEN = 514 14 | 15 | V3_CIRCUIT_LEN = 2 16 | V4_CIRCUIT_LEN = 4 17 | 18 | IPv4_ADDR_LEN = 4 19 | IPv6_ADDR_LEN = 16 20 | 21 | HOSTNAME_TYPE = 0x00 22 | IPv4_ADDR_TYPE = 0x04 23 | IPv6_ADDR_TYPE = 0x06 24 | 25 | # Cell command fields 26 | PADDING_CMD = 0 27 | CREATE_CMD = 1 28 | CREATED_CMD = 2 29 | RELAY_CMD = 3 30 | DESTROY_CMD = 4 31 | CREATE_FAST_CMD = 5 32 | CREATED_FAST_CMD = 6 33 | VERSIONS_CMD = 7 34 | NETINFO_CMD = 8 35 | RELAY_EARLY_CMD = 9 36 | CREATE2_CMD = 10 37 | CREATED2_CMD = 11 38 | VPADDING_CMD = 128 39 | CERTS_CMD = 129 40 | AUTH_CHALLENGE_CMD = 130 41 | AUTHENTICATE_CMD = 131 42 | AUTHORIZE_CMD = 132 43 | 44 | PADDING_CMD_IDS = ( 45 | PADDING_CMD, 46 | VPADDING_CMD, 47 | ) 48 | 49 | # Commands used in fixed-length cells 50 | FIXED_LEN_CMD_IDS = ( 51 | PADDING_CMD, 52 | CREATE_CMD, 53 | CREATED_CMD, 54 | RELAY_CMD, 55 | DESTROY_CMD, 56 | CREATE_FAST_CMD, 57 | CREATED_FAST_CMD, 58 | NETINFO_CMD, 59 | RELAY_EARLY_CMD, 60 | CREATE2_CMD, 61 | CREATED2_CMD, 62 | ) 63 | 64 | # Commands used in variable-length cells 65 | VAR_LEN_CMD_IDS = ( 66 | VERSIONS_CMD, 67 | VPADDING_CMD, 68 | CERTS_CMD, 69 | AUTH_CHALLENGE_CMD, 70 | AUTHENTICATE_CMD, 71 | AUTHORIZE_CMD, 72 | ) 73 | 74 | CELL_CMD_IDS = FIXED_LEN_CMD_IDS + VAR_LEN_CMD_IDS 75 | 76 | # Relay cell commands 77 | RELAY_BEGIN_CMD = 1 78 | RELAY_DATA_CMD = 2 79 | RELAY_END_CMD = 3 80 | RELAY_CONNECTED_CMD = 4 81 | RELAY_SENDME_CMD = 5 82 | RELAY_EXTEND_CMD = 6 83 | RELAY_EXTENDED_CMD = 7 84 | RELAY_TRUNCATE_CMD = 8 85 | RELAY_TRUNCATED_CMD = 9 86 | RELAY_DROP_CMD = 10 87 | RELAY_RESOLVE_CMD = 11 88 | RELAY_RESOLVED_CMD = 12 89 | RELAY_BEGIN_DIR_CMD = 13 90 | RELAY_EXTEND2_CMD = 14 91 | RELAY_EXTENDED2_CMD = 15 92 | 93 | RELAY_CELL_CMDS = ( 94 | RELAY_BEGIN_CMD, 95 | RELAY_DATA_CMD, 96 | RELAY_END_CMD, 97 | RELAY_CONNECTED_CMD, 98 | RELAY_SENDME_CMD, 99 | RELAY_EXTEND_CMD, 100 | RELAY_EXTENDED_CMD, 101 | RELAY_TRUNCATE_CMD, 102 | RELAY_TRUNCATED_CMD, 103 | RELAY_DROP_CMD, 104 | RELAY_RESOLVE_CMD, 105 | RELAY_RESOLVED_CMD, 106 | RELAY_BEGIN_DIR_CMD, 107 | RELAY_EXTEND2_CMD, 108 | RELAY_EXTENDED2_CMD, 109 | ) 110 | 111 | # Reasons a RelayEndCell may be sent or received 112 | REASON_MISC = 1 113 | REASON_RESOLVEFAILED = 2 114 | REASON_CONNECTREFUSED = 3 115 | REASON_EXITPOLICY = 4 116 | REASON_DESTROY = 5 117 | REASON_DONE = 6 118 | REASON_TIMEOUT = 7 119 | REASON_NOROUTE = 8 120 | REASON_HIBERNATING = 9 121 | 
REASON_INTERNAL = 10 122 | REASON_RESOURCELIMIT = 11 123 | REASON_CONNRESET = 12 124 | REASON_TORPROTOCOL = 13 125 | REASON_NOTDIRECTORY = 14 126 | 127 | # Reasons a DestroyCell or a RelayTruncatedCell may be sent or received 128 | DESTROY_NONE = 0 129 | DESTROY_PROTOCOL = 1 130 | DESTROY_INTERNAL = 2 131 | DESTROY_REQUESTED = 3 132 | DESTROY_HIBERNATING = 4 133 | DESTROY_RESOURCELIMIT = 5 134 | DESTROY_CONNECTFAILED = 6 135 | DESTROY_OR_IDENTITY = 7 136 | DESTROY_OR_CONN_CLOSED = 8 137 | DESTROY_FINISHED = 9 138 | DESTROY_TIMEOUT = 10 139 | DESTROY_DESTROYED = 11 140 | DESTROY_NOSUCHSERVICE = 12 141 | 142 | DESTROY_TRUNCATE_REASONS = ( 143 | DESTROY_NONE, 144 | DESTROY_PROTOCOL, 145 | DESTROY_INTERNAL, 146 | DESTROY_REQUESTED, 147 | DESTROY_HIBERNATING, 148 | DESTROY_RESOURCELIMIT, 149 | DESTROY_CONNECTFAILED, 150 | DESTROY_OR_IDENTITY, 151 | DESTROY_OR_CONN_CLOSED, 152 | DESTROY_FINISHED, 153 | DESTROY_TIMEOUT, 154 | DESTROY_DESTROYED, 155 | DESTROY_NOSUCHSERVICE, 156 | ) 157 | 158 | FORWARD_CELLS = ( 159 | RELAY_BEGIN_CMD, 160 | RELAY_DATA_CMD, 161 | RELAY_END_CMD, 162 | RELAY_SENDME_CMD, 163 | RELAY_EXTEND_CMD, 164 | RELAY_TRUNCATE_CMD, 165 | RELAY_DROP_CMD, 166 | RELAY_RESOLVE_CMD, 167 | RELAY_BEGIN_DIR_CMD, 168 | RELAY_EXTEND2_CMD, 169 | ) 170 | 171 | BEGIN_FLAG_IPv6_OK = 1 172 | BEGIN_FLAG_IPv4_NOT_OK = 2 173 | BEGIN_FLAG_IPv6_PREFERRED = 3 174 | 175 | RELAY_BEGIN_FLAGS = ( 176 | BEGIN_FLAG_IPv6_OK, 177 | BEGIN_FLAG_IPv4_NOT_OK, 178 | BEGIN_FLAG_IPv6_PREFERRED, 179 | ) 180 | 181 | NTOR_HTYPE = 2 182 | NTOR_HLEN = 84 183 | 184 | LSTYPE_IPv4 = 0 185 | LSTYPE_IPv6 = 1 186 | LSTYPE_LEGACY = 2 187 | LSLEN_IPv4 = 6 188 | LSLEN_IPv6 = 18 189 | LSLEN_LEGACY = 20 190 | 191 | RECOGNIZED = "\x00\x00" 192 | EMPTY_DIGEST = "\x00\x00\x00\x00" 193 | -------------------------------------------------------------------------------- /oppy/cell/exceptions.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel and David Johnston 2 | # See LICENSE for licensing information 3 | 4 | class NotEnoughBytes(Exception): 5 | pass 6 | 7 | 8 | class UnknownCellCommand(Exception): 9 | pass 10 | 11 | 12 | class BadCellPayloadLength(Exception): 13 | pass 14 | 15 | 16 | class BadPayloadData(Exception): 17 | pass 18 | 19 | 20 | class BadLinkSpecifier(Exception): 21 | pass 22 | 23 | 24 | class BadCellHeader(Exception): 25 | pass 26 | 27 | 28 | class BadRelayCellHeader(Exception): 29 | pass 30 | -------------------------------------------------------------------------------- /oppy/cell/util.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel and David Johnston 2 | # See LICENSE for licensing information 3 | 4 | import ipaddress 5 | import struct 6 | 7 | import oppy.cell.definitions as DEF 8 | 9 | from oppy.cell.exceptions import BadLinkSpecifier, BadPayloadData 10 | from oppy.util.tools import decodeMicrodescriptorIdentifier 11 | 12 | 13 | class LinkSpecifier(object): 14 | '''.. note:: tor-spec, Section 5.1.2''' 15 | 16 | def __init__(self, path_node, legacy=False): 17 | ''' 18 | :param bool legacy: if **True**, make a legacy link specifier. 19 | make an IPv4 or IPv6 link specifier otherwise according to 20 | the relay's public IP address. 
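
        The byte layout produced by ``getBytes()`` is, roughly
        (tor-spec, Section 5.1.2)::

            LSTYPE (1 byte) | LSLEN (1 byte) | LSPEC (LSLEN bytes)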
21 | ''' 22 | if legacy is True: 23 | self.lstype = DEF.LSTYPE_LEGACY 24 | self.lslen = DEF.LSLEN_LEGACY 25 | self.lspec = decodeMicrodescriptorIdentifier( 26 | path_node.microdescriptor) 27 | else: 28 | address = path_node.router_status_entry.address 29 | addr = ipaddress.ip_address(unicode(address)) 30 | port = path_node.router_status_entry.or_port 31 | if isinstance(addr, ipaddress.IPv4Address): 32 | self.lstype = DEF.LSTYPE_IPv4 33 | self.lslen = DEF.LSLEN_IPv4 34 | else: 35 | self.lstype = DEF.LSTYPE_IPv6 36 | self.lslen = DEF.LSLEN_IPv6 37 | 38 | self.lspec = addr.packed + struct.pack('!H', port) 39 | 40 | if len(self.lspec) != self.lslen: 41 | raise BadLinkSpecifier() 42 | 43 | def getBytes(self): 44 | '''Build and construct the raw byte string represented by this 45 | link specifier. 46 | 47 | :returns: **str** raw byte string this link specifier represents 48 | ''' 49 | ret = struct.pack('!B', self.lstype) 50 | ret += struct.pack('!B', self.lslen) 51 | ret += self.lspec 52 | return ret 53 | 54 | def __len__(self): 55 | # lstype and lslen fields are one byte each 56 | return 1 + 1 + self.lslen 57 | 58 | 59 | TLV_ADDR_TYPE_LEN = 1 60 | TLV_ADDR_LEN_LEN = 1 61 | 62 | 63 | TLV_ERROR_TRANSIENT = 0xF0 64 | TLV_ERROR_NONTRANSIENT = 0xF1 65 | 66 | 67 | # XXX what about TTL? 68 | class TLVTriple(object): 69 | '''.. note:: tor-spec, Section 6.4 70 | 71 | .. todo:: Handle the hostname type properly. TLVTriple's currently 72 | don't know how to deal with a hostname type. Additionally, they 73 | don't know how to handle a TTL or the various errors that can 74 | occur. 75 | ''' 76 | 77 | def __init__(self, addr): 78 | ''' 79 | :param str addr: IP address for this TLVTriple 80 | ''' 81 | addr = ipaddress.ip_address(unicode(addr)) 82 | if isinstance(addr, ipaddress.IPv4Address): 83 | self.addr_type = DEF.IPv4_ADDR_TYPE 84 | self.addr_len = DEF.IPv4_ADDR_LEN 85 | elif isinstance(addr, ipaddress.IPv6Address): 86 | self.addr_type = DEF.IPv6_ADDR_TYPE 87 | self.addr_len = DEF.IPv6_ADDR_LEN 88 | else: 89 | msg = 'TLVTriple can only handle IPv4 and IPv6 type/length/value ' 90 | msg += 'triples for now.' 91 | raise ValueError(msg) 92 | 93 | self.value = addr.packed 94 | 95 | # XXX addr_len is currently ignored because we can only currently handle 96 | # IPv4 and IPv6 TLV's and not hostnames. When RELAY_RESOLVE/D cells 97 | # are implemented, this should be changed to handle hostnames of 98 | # different lengths 99 | @staticmethod 100 | def parse(data, offset): 101 | '''Parse and extract TLVTriple fields from a byte string. 
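
        The expected wire layout is, roughly (tor-spec, Section 6.4)::

            ADDR_TYPE (1 byte) | ADDR_LEN (1 byte) | VALUE (ADDR_LEN bytes)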
102 | 103 | :param str data: byte string to parse 104 | :param int offset: offset in str data where we should start 105 | reading 106 | :returns: :class:`~oppy.cell.util.TLVTriple` 107 | ''' 108 | addr_type = struct.unpack('!B', data[offset:offset + 109 | TLV_ADDR_TYPE_LEN])[0] 110 | offset += TLV_ADDR_TYPE_LEN 111 | 112 | # use addr_len for hostname types 113 | addr_len = struct.unpack('!B', data[offset:offset + 114 | TLV_ADDR_LEN_LEN])[0] 115 | offset += TLV_ADDR_LEN_LEN 116 | 117 | if addr_type == DEF.IPv4_ADDR_TYPE: 118 | value = data[offset:offset + DEF.IPv4_ADDR_LEN] 119 | offset += DEF.IPv4_ADDR_LEN 120 | value = ipaddress.ip_address(value).exploded 121 | elif addr_type == DEF.IPv6_ADDR_TYPE: 122 | value = data[offset:offset + DEF.IPv6_ADDR_LEN] 123 | offset += DEF.IPv6_ADDR_LEN 124 | value = ipaddress.ip_address(value).exploded 125 | else: 126 | msg = "TLVTriple can't parse type {0} yet.".format(addr_type) 127 | raise ValueError(msg) 128 | 129 | return TLVTriple(value) 130 | 131 | # XXX handle hostname types and errors properly 132 | def getBytes(self): 133 | '''Construct and return the raw byte string this TLVTriple 134 | represents. 135 | 136 | :returns: **str** raw byte string this TLVTriple represents. 137 | ''' 138 | ret = struct.pack('!B', self.addr_type) 139 | ret += struct.pack('!B', self.addr_len) 140 | ret += self.value 141 | return ret 142 | 143 | def __len__(self): 144 | return TLV_ADDR_TYPE_LEN + TLV_ADDR_LEN_LEN + len(self.value) 145 | 146 | def __repr__(self): 147 | fmt = "TLVTriple(addr={})" 148 | return fmt.format(repr(ipaddress.ip_address(self.value).exploded)) 149 | 150 | def __eq__(self, other): 151 | if type(other) is type(self): 152 | return self.__dict__ == other.__dict__ 153 | return False 154 | 155 | CERT_TYPE_LEN = 1 156 | CERT_LEN_LEN = 2 157 | SUPPORTED_CERT_TYPES = (1, 2, 3,) 158 | 159 | 160 | class CertsCellPayloadItem(object): 161 | 162 | __slots__ = ('cert_type', 'cert_len', 'cert') 163 | 164 | def __init__(self, cert_type, cert_len, cert): 165 | self.cert_type = cert_type 166 | self.cert_len = cert_len 167 | self.cert = cert 168 | 169 | def getBytes(self): 170 | ret = struct.pack('!B', self.cert_type) 171 | ret += struct.pack('!H', self.cert_len) 172 | ret += self.cert 173 | return ret 174 | 175 | @staticmethod 176 | def parse(data, offset): 177 | cert_type = struct.unpack('!B', 178 | data[offset: offset + CERT_TYPE_LEN])[0] 179 | offset += CERT_TYPE_LEN 180 | if cert_type not in SUPPORTED_CERT_TYPES: 181 | msg = "Got cert type {}, but oppy only supports cert types {}." 
182 | raise BadPayloadData(msg.format(cert_type, SUPPORTED_CERT_TYPES)) 183 | 184 | cert_len = struct.unpack('!H', data[offset: offset + CERT_LEN_LEN])[0] 185 | offset += CERT_LEN_LEN 186 | 187 | cert = data[offset: offset + cert_len] 188 | 189 | return CertsCellPayloadItem(cert_type, cert_len, cert) 190 | 191 | def __repr__(self): 192 | fmt = "CertsCellPayloadItem(cert_type={}, cert_len={}, cert={})" 193 | return fmt.format(repr(self.cert_type), repr(self.cert_len), 194 | repr(self.cert)) 195 | 196 | def __len__(self): 197 | return 1 + 2 + len(self.cert) 198 | 199 | def __eq__(self, other): 200 | if type(other) is type(self): 201 | equal = True 202 | equal &= (self.cert_type == other.cert_type) 203 | equal &= (self.cert_len == other.cert_len) 204 | equal &= (self.cert == other.cert) 205 | return equal 206 | return False 207 | -------------------------------------------------------------------------------- /oppy/circuit/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/circuit/__init__.py -------------------------------------------------------------------------------- /oppy/circuit/circuitbuildtask.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | # TODO: fix imports 5 | import logging 6 | 7 | from twisted.internet import defer 8 | from twisted.python.failure import Failure 9 | 10 | import oppy.crypto.util as crypto 11 | import oppy.path.path as path 12 | import oppy.crypto.ntor as ntor 13 | 14 | from oppy.cell.fixedlen import Create2Cell, Created2Cell, DestroyCell 15 | from oppy.cell.relay import RelayExtend2Cell, RelayExtended2Cell 16 | from oppy.cell.util import LinkSpecifier 17 | from oppy.circuit.circuit import Circuit 18 | from oppy.circuit.definitions import CircuitType 19 | 20 | 21 | # Major TODO's: 22 | # - catch/handle crypto exceptions explicitly 23 | # - catch/handle connection.send exceptions explicitly 24 | # - catch/handle specific getPath exceptions 25 | # - handle cells with unexpected origins 26 | # - docs 27 | # - figure out where alreadyCalledError is coming from when 28 | # building a path fails 29 | class CircuitBuildTask(object): 30 | 31 | def __init__(self, connection_manager, circuit_manager, netstatus, 32 | guard_manager, _id, circuit_type=None, request=None, 33 | autobuild=True): 34 | self._connection_manager = connection_manager 35 | self._circuit_manager = circuit_manager 36 | self._netstatus = netstatus 37 | self._guard_manager = guard_manager 38 | self.circuit_id = _id 39 | self.circuit_type = circuit_type 40 | self.request = request 41 | self._hs_state = None 42 | self._path = None 43 | self._conn = None 44 | self._crypt_path = [] 45 | self._read_queue = defer.DeferredQueue() 46 | self._autobuild = autobuild 47 | self._tasks = None 48 | self._building = False 49 | self._current_task = None 50 | 51 | if autobuild is True: 52 | self.build() 53 | 54 | def build(self): 55 | if self._building is True: 56 | msg = "Circuit {} already started build process." 
57 | raise RuntimeError(msg.format(self.circuit_id)) 58 | 59 | try: 60 | # TODO: update for stable/fast flags based on circuit_type 61 | self._tasks = path.getPath(self._netstatus, self._guard_manager, 62 | exit_request=self.request) 63 | except Exception as e: 64 | self._buildFailed(e) 65 | return 66 | 67 | self._current_task = self._tasks 68 | self._tasks.addCallback(self._build) 69 | self._tasks.addCallback(self._buildSucceeded) 70 | self._tasks.addErrback(self._buildFailed) 71 | self._building = True 72 | 73 | def canHandleRequest(self, request): 74 | if self._path is None: 75 | if request.is_host: 76 | return True 77 | elif request.is_ipv4: 78 | return self.circuit_type == CircuitType.IPv4 79 | else: 80 | return self.circuit_type == CircuitType.IPv6 81 | else: 82 | return self._path.exit.microdescriptor.exit_policy.can_exit_to(port=request.port) 83 | 84 | def recv(self, cell): 85 | self._read_queue.put(cell) 86 | 87 | def destroyCircuitFromManager(self): 88 | msg = "CircuitBuildTask {} destroyed from manager." 89 | msg = msg.format(self.circuit_id) 90 | self._current_task.errback(Failure(Exception(msg))) 91 | 92 | def destroyCircuitFromConnection(self): 93 | msg = "CircuitBuildTask {} destroyed from connection." 94 | msg = msg.format(self.circuit_id) 95 | self._current_task.errback(Failure(Exception(msg))) 96 | 97 | def _recvCell(self, _): 98 | self._current_task = self._read_queue.get() 99 | return self._current_task 100 | 101 | # NOTE: no errbacks are added because exceptions thrown in this inner 102 | # deferred will fire the errback added to the outer deferred 103 | def _build(self, cpath): 104 | self._path = cpath 105 | d = self._getConnection(self._path.entry) 106 | self._current_task = d 107 | d.addCallback(self._sendCreate2Cell, self._path.entry) 108 | d.addCallback(self._recvCell) 109 | d.addCallback(self._deriveCreate2CellSecrets, self._path.entry) 110 | for path_node in self._path[1:]: 111 | d.addCallback(self._sendExtend2Cell, path_node) 112 | d.addCallback(self._recvCell) 113 | d.addCallback(self._deriveExtend2CellSecrets, path_node) 114 | return d 115 | 116 | def _getConnection(self, path_node): 117 | 118 | d = self._connection_manager.getConnection(path_node.router_status_entry) 119 | self._current_task = d 120 | def addCirc(res): 121 | self._conn = res 122 | self._conn.addCircuit(self) 123 | return res 124 | d.addCallback(addCirc) 125 | return d 126 | 127 | def _sendCreate2Cell(self, _, path_node): 128 | self._hs_state = ntor.NTorState(path_node.microdescriptor) 129 | onion_skin = ntor.createOnionSkin(self._hs_state) 130 | create2 = Create2Cell.make(self.circuit_id, hdata=onion_skin) 131 | self._conn.send(create2) 132 | 133 | def _deriveCreate2CellSecrets(self, response, path_node): 134 | if isinstance(response, DestroyCell): 135 | msg = ("DestroyCell received from {}." 136 | .format(path_node.router_status_entry.fingerprint)) 137 | raise ValueError(msg) 138 | if not isinstance(response, Created2Cell): 139 | msg = ("Unexpected cell {} received from {}." 
140 | .format(response, 141 | path_node.router_status_entry.fingerprint)) 142 | destroy = DestroyCell.make(self.circuit_id) 143 | self._conn.send(destroy) 144 | raise ValueError(msg) 145 | 146 | self._crypt_path.append(ntor.deriveRelayCrypto(self._hs_state, 147 | response)) 148 | # TODO: implement this 149 | #self._hs_state.memwipe() 150 | self._hs_state = None 151 | 152 | def _sendExtend2Cell(self, _, path_node): 153 | lspecs = [LinkSpecifier(path_node), 154 | LinkSpecifier(path_node, legacy=True)] 155 | self._hs_state = ntor.NTorState(path_node.microdescriptor) 156 | onion_skin = ntor.createOnionSkin(self._hs_state) 157 | extend2 = RelayExtend2Cell.make(self.circuit_id, nspec=len(lspecs), 158 | lspecs=lspecs, hdata=onion_skin) 159 | crypt_cell = crypto.encryptCell(extend2, self._crypt_path, 160 | early=True) 161 | self._conn.send(crypt_cell) 162 | 163 | def _deriveExtend2CellSecrets(self, response, path_node): 164 | if isinstance(response, DestroyCell): 165 | msg = ("Destroy cell received from {} on pending circuit {}." 166 | .format(path_node.router_status_entry.fingerprint, 167 | self.circuit_id)) 168 | raise ValueError(msg) 169 | 170 | cell, _ = crypto.decryptCell(response, self._crypt_path) 171 | 172 | if not isinstance(cell, RelayExtended2Cell): 173 | msg = ("CircuitBuildTask {} received an unexpected cell: {}. " 174 | "Destroying the circuit." 175 | .format(self.circuit_id, type(cell))) 176 | destroy = DestroyCell.make(self.circuit_id) 177 | self._conn.send(destroy) 178 | raise ValueError(msg) 179 | 180 | self._crypt_path.append(ntor.deriveRelayCrypto(self._hs_state, cell)) 181 | # TODO: implement this 182 | #self._hs_state.memwipe() 183 | self._hs = None 184 | 185 | def _buildSucceeded(self, _): 186 | circuit = Circuit(self._circuit_manager, self.circuit_id, self._conn, 187 | self.circuit_type, self._path, self._crypt_path) 188 | self._conn.addCircuit(circuit) 189 | self._circuit_manager.circuitOpened(circuit) 190 | 191 | def _buildFailed(self, reason): 192 | msg = ("Pending circuit {} failed. Reason: {}." 
193 | .format(self.circuit_id, reason)) 194 | logging.debug(msg) 195 | if self._conn is not None: 196 | self._conn.removeCircuit(self.circuit_id) 197 | self._circuit_manager.circuitDestroyed(self) 198 | -------------------------------------------------------------------------------- /oppy/circuit/definitions.py: -------------------------------------------------------------------------------- 1 | from oppy.cell.fixedlen import DestroyCell, EncryptedCell 2 | from oppy.cell.relay import ( 3 | RelayDataCell, 4 | RelayEndCell, 5 | RelayConnectedCell, 6 | RelaySendMeCell, 7 | RelayExtendedCell, 8 | RelayExtended2Cell, 9 | RelayTruncatedCell, 10 | RelayDropCell, 11 | RelayResolvedCell, 12 | RelayExtended2Cell, 13 | ) 14 | from oppy.util.tools import enum 15 | 16 | 17 | CIRCUIT_WINDOW_THRESHOLD_INIT = 1000 18 | SENDME_THRESHOLD = 900 19 | WINDOW_SIZE = 100 20 | 21 | 22 | CState = enum( 23 | OPEN=0, 24 | BUFFERING=1, 25 | ) 26 | 27 | 28 | CircuitType = enum( 29 | IPv4=0, 30 | IPv6=1, 31 | ) 32 | 33 | 34 | BACKWARD_CELL_TYPES = ( 35 | DestroyCell, 36 | EncryptedCell, 37 | ) 38 | 39 | 40 | BACKWARD_RELAY_CELL_TYPES = ( 41 | RelayDataCell, 42 | RelayEndCell, 43 | RelayConnectedCell, 44 | RelaySendMeCell, 45 | RelayExtendedCell, 46 | RelayExtended2Cell, 47 | RelayTruncatedCell, 48 | RelayDropCell, 49 | RelayResolvedCell, 50 | RelayExtended2Cell, 51 | ) 52 | 53 | 54 | DEFAULT_OPEN_IPv4 = 4 55 | DEFAULT_OPEN_IPv6 = 1 56 | 57 | 58 | MAX_STREAMS_V3 = 65535 59 | -------------------------------------------------------------------------------- /oppy/connection/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/connection/__init__.py -------------------------------------------------------------------------------- /oppy/connection/connection.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | ''' 4 | .. topic:: Details 5 | 6 | Connection objects represent TLS connections to Tor entry nodes. Connection 7 | objects have a few important jobs: 8 | 9 | - Do the initial handshake to authenticate the entry node and negotiate 10 | a Link Protocl version (currently oppy only supports Link Protocol 11 | Version 3) 12 | - Extract cells from incoming data streams and pass them to the 13 | appropriate circuit (based on circuit ID) 14 | - Write cells from circuits to entry nodes 15 | - Notify all associated circuits when the connection goes down 16 | 17 | ''' 18 | import logging 19 | 20 | from twisted.internet.protocol import Protocol 21 | 22 | from oppy.cell.cell import Cell 23 | from oppy.cell.definitions import PADDING_CMD_IDS 24 | from oppy.cell.exceptions import NotEnoughBytes 25 | 26 | 27 | class Connection(Protocol): 28 | '''A TLS connection to an entry node.''' 29 | 30 | def __init__(self, connection_manager, connection_task): 31 | ''' 32 | ''' 33 | self._connection_manager = connection_manager 34 | self.micro_status_entry = connection_task.micro_status_entry 35 | self._circuit_dict = {} 36 | self._buffer = '' 37 | self._closed = False 38 | 39 | def send(self, cell): 40 | self.transport.write(cell.getBytes()) 41 | 42 | def dataReceived(self, data): 43 | '''We received data from the remote connection. 44 | 45 | Extract cells from the data stream and send them along to be 46 | processed. 
47 | 48 | :param str data: data received from remote end 49 | ''' 50 | self._buffer += data 51 | 52 | while Cell.enoughDataForCell(self._buffer): 53 | try: 54 | cell = Cell.parse(self._buffer, encrypted=True) 55 | self._recv(cell) 56 | self._buffer = self._buffer[len(cell):] 57 | except NotEnoughBytes as e: 58 | logging.debug(e) 59 | break 60 | # TODO: remove len(unimplemented cell bytes) from buffer 61 | except NotImplementedError: 62 | logging.debug("Received a cell we can't handle yet.") 63 | logging.debug('buffer contents:\n') 64 | logging.debug([ord(i) for i in self._buffer]) 65 | raise 66 | 67 | def _recv(self, cell): 68 | # just drop padding cells 69 | if cell.header.cmd in PADDING_CMD_IDS: 70 | return 71 | 72 | try: 73 | self._circuit_dict[cell.header.circ_id].recv(cell) 74 | except KeyError: 75 | msg = ("Connection to {} received a {} cell for nonexistent " 76 | "circuit {}. Dropping cell." 77 | .format(self.micro_status_entry.fingerprint, type(cell), 78 | cell.header.circ_id)) 79 | logging.debug(msg) 80 | 81 | def addCircuit(self, circuit): 82 | '''Add new a new circuit to the circuit map for this connection. 83 | 84 | :param oppy.circuit.circuit.Circuit circuit: circuit to add to this 85 | connection's circuit map 86 | ''' 87 | self._circuit_dict[circuit.circuit_id] = circuit 88 | 89 | def closeConnection(self): 90 | '''Close this connection and all associated circuits; notify the 91 | connection manager. 92 | ''' 93 | msg = ("Closing connection to {}." 94 | .format(self.micro_status_entry.address)) 95 | logging.debug(msg) 96 | self._closed = True 97 | self._destroyAllCircuits() 98 | self._connection_manager.removeConnection(self) 99 | self.transport.loseConnection() 100 | 101 | def connectionLost(self, reason): 102 | '''Connection to relay has been lost; close this connection and 103 | all associated circuits; notify connection pool. 104 | 105 | :param reason reason: reason this connection was lost 106 | ''' 107 | if self._closed is True: 108 | return 109 | 110 | self._closed = True 111 | msg = ("Connection to {} lost. Reason: {}." 112 | .format(self.micro_status_entry.fingerprint, reason)) 113 | logging.debug(msg) 114 | self._destroyAllCircuits() 115 | self._connection_manager.removeConnection(self) 116 | 117 | def _destroyAllCircuits(self): 118 | '''Destroy all circuits associated with this connection. 119 | ''' 120 | for circuit in self._circuit_dict.values(): 121 | circuit.destroyCircuitFromConnection() 122 | 123 | def removeCircuit(self, circuit): 124 | '''The circuit with *circuit_id* has been destroyed. 125 | 126 | Remove this circuit from this connection's circuit map if we know 127 | about it. If there are no remaining circuit's using this connection, 128 | ask the connection pool if this connection should be closed and, if 129 | so, close this connection. 130 | 131 | :param int circuit_id: id of the circuit that was destroyed 132 | ''' 133 | try: 134 | del self._circuit_dict[circuit.circuit_id] 135 | except KeyError: 136 | msg = ("Connection to {} notified circuit {} was destroyed, but " 137 | "the connection has no reference to that circuit." 
138 | .format(self.micro_status_entry.fingerprint, 139 | circuit.circuit_id)) 140 | logging.debug(msg) 141 | return 142 | 143 | if len(self._circuit_dict) == 0: 144 | if self._connection_manager.shouldDestroyConnection(self) is True: 145 | self.closeConnection() 146 | -------------------------------------------------------------------------------- /oppy/connection/connectionmanager.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | ''' 5 | .. topic:: Details 6 | 7 | The ConnectionPool manages a pool of TLS connections to entry nodes. The 8 | main job of the ConnectionPool is to hand out TLS connections to 9 | requesting circuits and keep track of all the open connections. TLS 10 | connections to the same entry nodes are shared among circuits. 11 | 12 | ''' 13 | import logging 14 | 15 | from twisted.internet import defer, endpoints 16 | from twisted.internet.ssl import ClientContextFactory 17 | 18 | from OpenSSL import SSL 19 | 20 | from oppy.connection.connection import Connection 21 | from oppy.connection.connectionbuildtask import ConnectionBuildTask 22 | from oppy.connection.definitions import V3_CIPHER_STRING 23 | 24 | 25 | class TLSClientContextFactory(ClientContextFactory): 26 | 27 | isClient = 1 28 | method = SSL.TLSv1_METHOD 29 | _contextFactory = SSL.Context 30 | 31 | def getContext(self): 32 | context = self._contextFactory(self.method) 33 | context.set_cipher_list(V3_CIPHER_STRING) 34 | return context 35 | 36 | 37 | # TODO: when things are shut down from CTRL-C, it's ugly and should be fixed 38 | class ConnectionManager(object): 39 | '''A pool of TLS connections to entry nodes.''' 40 | 41 | def __init__(self): 42 | logging.debug('Connection manager created.') 43 | self._connection_dict = {} 44 | self._pending_request_dict = {} 45 | 46 | # TODO: fix docs (change relay to router_status_entry) 47 | def getConnection(self, micro_status_entry): 48 | '''Return a deferred which will fire (if connection attempt is 49 | successful) with a Connection Protocol made to *relay*. 50 | 51 | There are three general cases to handle for incoming connection 52 | requests: 53 | 54 | 1. We already have an open TLS connection to the requested relay. 55 | In this case, immediately callback the deferred with the 56 | open connection. 57 | 2. We're currently trying to connect to this relay. In this case, 58 | add the request to a pending request list for this relay. 59 | When the connection is made successfully, callback all 60 | pending request deferreds with the Connection, or errback 61 | all pending request deferreds on failure. 62 | 3. We have no open or pending connections to this relay (i.e. 63 | this is the first connection request to this relay). In this 64 | case, create a new list of pending requests for this connection 65 | and add the current request. Create an SSL endpoint and add an 66 | appropriate callback and errback. If the request is successful, 67 | callback all pending requests with the open connection when 68 | it opens; errback all pending requests on failure. 
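
        A hedged usage sketch (the caller-side names here are illustrative)::

            d = connection_manager.getConnection(micro_status_entry)
            d.addCallback(lambda connection: connection.send(cell))
            d.addErrback(handleConnectionFailure)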
69 | 70 | :param stem.descriptor.server_descriptor.RelayDescriptor relay: 71 | relay to make a TLS connection to 72 | :returns: **twisted.internet.defer.Deferred** which, on success, will 73 | callback with an oppy.connection.connection.Connection Protocol 74 | object 75 | 76 | XXX raises: 77 | ''' 78 | from twisted.internet import reactor 79 | 80 | m = micro_status_entry 81 | 82 | d = defer.Deferred() 83 | 84 | if m.fingerprint in self._connection_dict: 85 | d.callback(self._connection_dict[m.fingerprint]) 86 | elif m.fingerprint in self._pending_request_dict: 87 | self._pending_request_dict[m.fingerprint].append(d) 88 | else: 89 | try: 90 | f = endpoints.connectProtocol(endpoints.SSL4ClientEndpoint( 91 | reactor, m.address, m.or_port, TLSClientContextFactory()), 92 | ConnectionBuildTask(self, m)) 93 | f.addErrback(self._initialConnectionFailed, m.fingerprint) 94 | self._pending_request_dict[m.fingerprint] = [d] 95 | except Exception as e: 96 | self._initialConnectionFailed(e, m.fingerprint) 97 | 98 | return d 99 | 100 | # called when the initial connection fails 101 | # TODO: track these and timeout if we get too many failures 102 | def _initialConnectionFailed(self, reason, fingerprint): 103 | self.connectionTaskFailed(None, reason, fingerprint) 104 | 105 | def connectionTaskSucceeded(self, connection_task): 106 | '''For every pending request for this connection, callback the request 107 | deferred with this open connection, then remove this connection 108 | from the pending map and add to the connection map. 109 | 110 | Called when the TLS connection to the IP of relay with 111 | *fingerprint* opens successfully. 112 | 113 | :param oppy.connection.connection.Connection result: the successfully 114 | opened connection 115 | :param str fingerprint: fingerprint of relay we have connected to 116 | ''' 117 | fprint = connection_task.micro_status_entry.fingerprint 118 | 119 | if fprint not in self._pending_request_dict: 120 | msg = ("ConnectionManager notified that a connection to {} " 121 | "was made successfully, but ConnectionManager has no " 122 | "reference to this connection. Dropping.".format(fprint)) 123 | logging.debug(msg) 124 | return 125 | 126 | connection = Connection(self, connection_task) 127 | self._connection_dict[fprint] = connection 128 | # We need to re-assign the transport from the ConnectionBuildTask 129 | # to the new Connection. This is fragile and should be updated after 130 | # Twisted bug #3204 is fixed: http://twistedmatrix.com/trac/ticket/3204 131 | connection.transport = connection_task.transport 132 | connection_task.transport = None 133 | connection.transport.wrappedProtocol = connection 134 | 135 | for request in self._pending_request_dict[fprint]: 136 | request.callback(connection) 137 | del self._pending_request_dict[fprint] 138 | 139 | def connectionTaskFailed(self, connection_task, reason, fprint=None): 140 | '''For every pending request for this connection, errback the request 141 | deferred. Remove this connection from the pending map. 142 | 143 | Called when the TLS connection to the IP of relay with 144 | *fingerprint* fails. 
145 | 146 | :param reason reason: reason this connection failed 147 | :param str fingerprint: fingerprint of the relay this connection 148 | failed to 149 | ''' 150 | # XXX update what args we're calling the errback here with 151 | fprint = fprint or connection_task.micro_status_entry.fingerprint 152 | try: 153 | for request in self._pending_request_dict[fprint]: 154 | request.errback(reason) 155 | del self._pending_request_dict[fprint] 156 | except KeyError: 157 | msg = ("ConnectionManager notified that a connection to {} " 158 | "failed, but ConnectionManager has no reference to this " 159 | "connection. Dropping.".format(fprint)) 160 | logging.debug(msg) 161 | 162 | def removeConnection(self, connection): 163 | '''Remove the connection to relay with *fingerprint* from the 164 | connection pool. 165 | 166 | :param str fingerprint: fingerprint of connection to remove 167 | ''' 168 | fprint = connection.micro_status_entry.fingerprint 169 | try: 170 | del self._connection_dict[connection.micro_status_entry.fingerprint] 171 | msg = "ConnectionManager removed a connection to {}".format(fprint) 172 | logging.debug(msg) 173 | except KeyError: 174 | msg = ("ConnectionManager received a request to remove connection " 175 | "to {}, but CircuitManager has no reference to that " 176 | "connection.".format(fprint)) 177 | logging.debug(msg) 178 | 179 | def shouldDestroyConnection(self, connection): 180 | '''Return **True** if ConnectionPool thinks we should destroy the 181 | TLS connection to relay with *fingerprint*. 182 | 183 | Called when the number of circuits on a connection drops to zero. 184 | 185 | .. note:: For now, we always return True. Eventually, we may 186 | want to maintain a connection to any guards, even if there are 187 | no currently open circuits. 
188 | 189 | :param str fingerprint: fingerprint of connection to check 190 | :returns: **bool** **True** if we think this connection should be 191 | destroyed 192 | ''' 193 | return True 194 | -------------------------------------------------------------------------------- /oppy/connection/definitions.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | V3_CIPHER_STRING = 'ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-CAMELLIA256-SHA:DHE-DSS-CAMELLIA256-SHA:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:ECDH-RSA-AES256-SHA:ECDH-ECDSA-AES256-SHA:CAMELLIA256-SHA:AES256-SHA:ECDHE-ECDSA-RC4-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-RC4-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-DSS-CAMELLIA128-SHA:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:ECDH-RSA-RC4-SHA:ECDH-RSA-AES128-SHA:ECDH-ECDSA-RC4-SHA:ECDH-ECDSA-AES128-SHA:SEED-SHA:CAMELLIA128-SHA:RC4-MD5:RC4-SHA:AES128-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:DHE-RSA-DES-CBC3-SHA:DHE-DSS-DES-CBC3-SHA:ECDH-RSA-DES-CBC3-SHA:ECDH-ECDSA-DES-CBC3-SHA:DES-CBC3-SHA' 5 | 6 | CONNECTION_TIMEOUT = 30 7 | 8 | V3_KEY_BITS = 1024 9 | 10 | OPENSSL_RSA_KEY_TYPE = 6 11 | 12 | KNOWN_LINK_PROTOCOLS = (1, 2, 3, 4) 13 | SUPPORTED_LINK_PROTOCOLS = (3,) 14 | 15 | LINK_CERT_TYPE = 1 16 | ID_CERT_TYPE = 2 17 | -------------------------------------------------------------------------------- /oppy/crypto/.gitignore: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/crypto/.gitignore -------------------------------------------------------------------------------- /oppy/crypto/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/crypto/__init__.py -------------------------------------------------------------------------------- /oppy/crypto/ntor.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | ''' 5 | .. topic:: Details 6 | 7 | NTorHandsake objects provide the methods for doing the ntor handshake 8 | key derivations and crypto operations. NTorHandshakes objects do the 9 | following jobs: 10 | 11 | - Create temporary public/private Curve 25519 keys 12 | - Create the initial onion skin 13 | - Derive key material from a Created2 or Extended2 cell 14 | - Create and initialize a RelayCrypto object, ready for use by 15 | a circuit (RelayCrypto objects are just wrappers around AES128-CTR 16 | ciphers and SHA-1 running digests, initialized with the derived key 17 | material) 18 | 19 | 20 | .. warning:: NTorHandshakes do not safely erase/clear memory of private keys. 
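
    A rough sketch of how these functions fit together when creating or
    extending a circuit (simplified; the real flow lives in
    oppy/circuit/circuitbuildtask.py, and ``created2_cell`` below stands in
    for the cell received back from the relay)::

        state = NTorState(path_node.microdescriptor)
        onion_skin = createOnionSkin(state)
        # ... put onion_skin in a Create2/Extend2 cell, send it, and wait
        # for the corresponding Created2/Extended2 cell ...
        relay_crypto = deriveRelayCrypto(state, created2_cell)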
21 | 22 | ''' 23 | import base64 24 | import hashlib 25 | import hkdf 26 | 27 | from collections import namedtuple 28 | 29 | from nacl import bindings 30 | from nacl.public import PrivateKey, PublicKey 31 | 32 | from oppy.crypto import util 33 | from oppy.util.tools import decodeMicrodescriptorIdentifier 34 | 35 | 36 | PROTOID = "ntor-curve25519-sha256-1" 37 | T_MAC = PROTOID + ":mac" 38 | T_KEY = PROTOID + ":key_extract" 39 | T_VERIFY = PROTOID + ":verify" 40 | M_EXPAND = PROTOID + ":key_expand" 41 | SERVER_STR = "Server" 42 | 43 | CURVE25519_PUBKEY_LEN = 32 44 | DIGEST_LEN = 20 45 | KEY_LEN = 16 46 | 47 | 48 | class KeyDerivationFailed(Exception): 49 | pass 50 | 51 | 52 | class NTorState(namedtuple('NTorState', 53 | ('relay_identity', 'relay_ntor_onion_key', 'secret_key', 'public_key'))): 54 | 55 | __slots__ = () 56 | 57 | def __new__(cls, microdescriptor): 58 | relay_identity = decodeMicrodescriptorIdentifier(microdescriptor) 59 | relay_ntor_onion_key = base64.b64decode(microdescriptor.ntor_onion_key) 60 | secret_key = PrivateKey.generate() 61 | public_key = secret_key.public_key 62 | return super(NTorState, cls).__new__( 63 | cls, relay_identity, relay_ntor_onion_key, secret_key, public_key) 64 | 65 | # TODO: everything 66 | def memwipe(self): 67 | raise NotImplementedError() 68 | 69 | 70 | def createOnionSkin(ntorstate): 71 | '''Build and return an *onion skin* to this handshake's relay. 72 | 73 | .. note:: See tor-spec Section 5.1.4 for more information. 74 | 75 | :returns: **str** raw byte string for this *onion skin* 76 | ''' 77 | ret = ntorstate.relay_identity 78 | ret += ntorstate.relay_ntor_onion_key 79 | ret += bytes(ntorstate.public_key) 80 | return ret 81 | 82 | 83 | def deriveRelayCrypto(ntorstate, cell): 84 | '''Derive shared key material for this ntor handshake; create and 85 | return actual cipher and hash instances inside a RelayCrypto object. 86 | 87 | .. note:: See tor-spec Section 5.1.4, 5.2.2 for more details. 88 | 89 | :param cell cell: Created2 cell or Extended2 cell used to derive 90 | shared keys 91 | :returns: **oppy.crypto.relaycrypto.RelayCrypto** object initialized 92 | with the derived key material. 93 | ''' 94 | is_bad = False 95 | hdata = cell.hdata 96 | 97 | relay_pubkey = cell.hdata[:CURVE25519_PUBKEY_LEN] 98 | AUTH = hdata[CURVE25519_PUBKEY_LEN:CURVE25519_PUBKEY_LEN+DIGEST_LEN] 99 | 100 | secret_input, bad = _buildSecretInput(ntorstate, relay_pubkey) 101 | is_bad |= bad 102 | 103 | verify = util.makeHMACSHA256(msg=secret_input, key=T_VERIFY) 104 | auth_input = _buildAuthInput(ntorstate, verify, relay_pubkey) 105 | auth_input = util.makeHMACSHA256(msg=auth_input, key=T_MAC) 106 | 107 | is_bad |= util.constantStrEqual(AUTH, auth_input) 108 | 109 | ret = _makeRelayCrypto(secret_input) 110 | # don't fail until the very end to avoid leaking timing information 111 | # (this might be unnecessary) 112 | if is_bad is True: 113 | raise KeyDerivationFailed() 114 | return ret 115 | 116 | 117 | def _buildSecretInput(ntorstate, relay_pubkey): 118 | '''Build and return secret input as a byte string. 119 | 120 | .. note:: See tor-spec Section 5.1.4 for more details. 
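
        The value assembled below follows the tor-spec layout::

            secret_input = EXP(Y, x) | EXP(B, x) | ID | B | X | Y | PROTOID

        where x/X are our ephemeral curve25519 keypair, Y is *relay_pubkey*,
        B is the relay's ntor onion key, and ID is the relay's identity
        digest.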
121 | 122 | :param relay_pubkey: the remote relay's CURVE_25519 public key 123 | received in the Created2/Extended2 cell 124 | :returns: **str** secret_input 125 | ''' 126 | is_bad = False 127 | v, bad = _EXP(ntorstate.secret_key, relay_pubkey) 128 | is_bad |= bad 129 | b = v 130 | 131 | v, bad = _EXP(ntorstate.secret_key, ntorstate.relay_ntor_onion_key) 132 | is_bad |= bad 133 | b += v 134 | 135 | b += ntorstate.relay_identity 136 | b += ntorstate.relay_ntor_onion_key 137 | b += bytes(ntorstate.public_key) 138 | b += relay_pubkey 139 | b += PROTOID 140 | return (b, is_bad) 141 | 142 | 143 | def _buildAuthInput(ntorstate, verify, relay_pubkey): 144 | '''Build and return auth input as a byte string. 145 | 146 | .. note:: See tor-spec Section 5.1.4 for more details. 147 | 148 | :param str verify: the verification data derived from secret_input 149 | :param str relay_pubkey: the remote relay's CURVE_25519 public 150 | key received in the Created2/Extended2 cell 151 | :returns: **str** auth_input 152 | ''' 153 | b = verify 154 | b += ntorstate.relay_identity 155 | b += ntorstate.relay_ntor_onion_key 156 | b += relay_pubkey 157 | b += bytes(ntorstate.public_key) 158 | b += PROTOID 159 | b += SERVER_STR 160 | return b 161 | 162 | 163 | def _makeRelayCrypto(secret_input): 164 | '''Derive shared key material using HKDF from secret_input. 165 | 166 | :returns: **oppy.crypto.relaycrypto.RelayCrypto** initialized with 167 | shared key data 168 | ''' 169 | prk = hkdf.hkdf_extract(salt=T_KEY, input_key_material=secret_input, 170 | hash=hashlib.sha256) 171 | km = hkdf.hkdf_expand(pseudo_random_key=prk, info=M_EXPAND, 172 | length=72, hash=hashlib.sha256) 173 | 174 | df = km[:DIGEST_LEN] 175 | db = km[DIGEST_LEN:DIGEST_LEN*2] 176 | kf = km[DIGEST_LEN*2:DIGEST_LEN*2+KEY_LEN] 177 | kb = km[DIGEST_LEN*2+KEY_LEN:DIGEST_LEN*2+KEY_LEN*2] 178 | 179 | f_digest = hashlib.sha1(df) 180 | b_digest = hashlib.sha1(db) 181 | f_cipher = util.makeAES128CTRCipher(kf) 182 | b_cipher = util.makeAES128CTRCipher(kb) 183 | 184 | ret = util.RelayCrypto(forward_digest=f_digest, backward_digest=b_digest, 185 | forward_cipher=f_cipher, backward_cipher=b_cipher) 186 | return ret 187 | 188 | 189 | def _EXP(n, p): 190 | ''' 191 | .. note:: See tor-spec Section 5.1.4 for why this is an adequate 192 | replacement for checking that none of the EXP() operations produced 193 | the point at infinity. 194 | 195 | :returns: **str** result 196 | ''' 197 | ret = bindings.crypto_scalarmult(bytes(n), bytes(PublicKey(p))) 198 | bad = util.constantStrAllZero(ret) 199 | return (ret, bad) 200 | -------------------------------------------------------------------------------- /oppy/crypto/util.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | ''' 5 | .. topic:: Details 6 | 7 | Crypto utility functions. 
Includes: 8 | 9 | - Constant time string comparisons 10 | - Wrappers around encryption/decryption operations 11 | - The "recognized" check for incoming cells 12 | - A couple methods for verifying TLS certificate properties (signatures 13 | and times) 14 | 15 | ''' 16 | import hashlib 17 | import hmac 18 | import struct 19 | 20 | from collections import namedtuple 21 | from datetime import datetime 22 | from itertools import izip 23 | 24 | import OpenSSL 25 | 26 | from Crypto.Cipher import AES 27 | from Crypto.Util import asn1, Counter 28 | 29 | from oppy.cell.cell import Cell 30 | from oppy.cell.definitions import RECOGNIZED, EMPTY_DIGEST 31 | from oppy.cell.fixedlen import EncryptedCell 32 | 33 | 34 | class UnrecognizedCell(Exception): 35 | pass 36 | 37 | 38 | RelayCrypto = namedtuple("RelayCrypto", ("forward_digest", 39 | "backward_digest", 40 | "forward_cipher", 41 | "backward_cipher")) 42 | 43 | 44 | def constantStrEqual(str1, str2): 45 | '''Do a constant-time comparison of str1 and str2, returning **True** 46 | if they are equal, **False** otherwise. 47 | 48 | :param str str1: first string to compare 49 | :param str str2: second string to compare 50 | :returns: **bool** **True** if str1 == str2, **False** otherwise 51 | ''' 52 | try: 53 | from hmac import compare_digest 54 | return compare_digest(str1, str2) 55 | except ImportError: 56 | pass 57 | 58 | if len(str1) != len(str2): 59 | # we've already failed at this point, but loop anyway 60 | res = 1 61 | comp1 = bytearray(str2) 62 | comp2 = bytearray(str2) 63 | else: 64 | res = 0 65 | comp1 = bytearray(str1) 66 | comp2 = bytearray(str2) 67 | 68 | for a, b in izip(comp1, comp2): 69 | res |= a ^ b 70 | 71 | return res == 0 72 | 73 | 74 | def constantStrAllZero(s): 75 | '''Check if *s* consists of all zero bytes. 76 | 77 | :param str s: string to check 78 | :returns: **bool** **True** if *s* contains all zero bytes, **False** 79 | otherwise 80 | ''' 81 | return constantStrEqual(s, '\x00' * len(s)) 82 | 83 | 84 | def makeAES128CTRCipher(key, initial_value=0): 85 | '''Create and return a new AES128-CTR cipher instance. 86 | 87 | :param str key: key to use for this cipher 88 | :param initial_value: initial_value to use 89 | :returns: **Crypto.Cipher.AES.AES** 90 | ''' 91 | ctr = Counter.new(128, initial_value=initial_value) 92 | return AES.new(key, AES.MODE_CTR, counter=ctr) 93 | 94 | 95 | def makeHMACSHA256(msg, key): 96 | '''Make a new HMAC-SHA256 with *msg* and *key* and return digest byte 97 | string. 98 | 99 | :param str msg: msg 100 | :param str key: key to use 101 | :returns: **str** HMAC digest 102 | ''' 103 | t = hmac.new(msg=msg, key=key, digestmod=hashlib.sha256) 104 | return t.digest() 105 | 106 | 107 | def _makePayloadWithDigest(payload, digest=EMPTY_DIGEST): 108 | '''Make a new payload with *digest* inserted in the correct position. 109 | 110 | :param str payload: payload in which to insert digest 111 | :param str digest: digest to insert 112 | :returns: **str** payload with digest inserted into correct position 113 | ''' 114 | assert len(payload) >= 9 and len(digest) == 4 115 | DIGEST_START = 5 116 | DIGEST_END = 9 117 | return payload[:DIGEST_START] + digest + payload[DIGEST_END:] 118 | 119 | 120 | # TODO: fix documentation 121 | def encryptCell(cell, crypt_path, early=False): 122 | '''Encrypt *cell* to the *target* relay in *crypt_path* and update 123 | the appropriate forward digest. 
124 | 125 | :param cell cell: cell to encrypt 126 | :param list crypt_path: list of RelayCrypto instances available for 127 | encryption 128 | :param int target: target node to encrypt to 129 | :param bool early: if **True**, use a RELAY_EARLY cmd instead of a 130 | RELAY cmd 131 | :returns: **oppy.cell.fixedlen.EncryptedCell** 132 | ''' 133 | assert cell.rheader.digest == EMPTY_DIGEST 134 | 135 | # 1) update f_digest with cell payload bytes 136 | crypt_path[-1].forward_digest.update(cell.getPayload()) 137 | # 2) insert first four bytes into new digest position 138 | cell.rheader.digest = crypt_path[-1].forward_digest.digest()[:4] 139 | # 3) encrypt payload 140 | payload = cell.getPayload() 141 | for node in reversed(crypt_path): 142 | payload = node.forward_cipher.encrypt(payload) 143 | # 4) return encrypted relay cell with new payload 144 | return EncryptedCell.make(cell.header.circ_id, payload, early=early) 145 | 146 | 147 | def _cellRecognized(payload, relay_crypto): 148 | '''Return **True** if this payload is *recognized*. 149 | 150 | .. note:: See tor-spec Section 6.1 for details about what it means for a 151 | cell to be *recognized*. 152 | 153 | :param str payload: payload to check if recognized 154 | :param oppy.crypto.relaycrypto.RelayCrypto relay_crypto: RelayCrypto 155 | instance to use for checking if payload is recognized 156 | :returns: **bool** **True** if this payload is recognized, **False** 157 | otherwise 158 | ''' 159 | if len(payload) < 9 or payload[2:4] != RECOGNIZED: 160 | return False 161 | digest = payload[5:9] 162 | test_payload = _makePayloadWithDigest(payload) 163 | test_digest = relay_crypto.backward_digest.copy() 164 | test_digest.update(test_payload) 165 | # no danger of timing attack here since we just 166 | # drop the cell if it's not recognized 167 | return test_digest.digest()[:4] == digest 168 | 169 | 170 | # TODO: fix documentation 171 | def decryptCell(cell, crypt_path): 172 | '''Decrypt *cell* until it is recognized or we've tried all RelayCrypto's 173 | in *crypt_path*. 174 | 175 | Attempt to decrypt the cell one hop at a time. Stop if the cell is 176 | recognized. Raise an exception if the cell is not recognized at all. 177 | 178 | :param cell cell: cell to decrypt 179 | :param list, oppy.crypto.relaycrypto.RelayCrypto crypt_path: list of 180 | RelayCrypto instances to use for decryption 181 | :param int origin: the originating hop we think this cell came from 182 | :returns: the concrete RelayCell type of this decrypted cell 183 | ''' 184 | origin = 0 185 | recognized = False 186 | payload = cell.getPayload() 187 | 188 | for node in crypt_path: 189 | payload = node.backward_cipher.decrypt(payload) 190 | if _cellRecognized(payload, node): 191 | recognized = True 192 | break 193 | origin += 1 194 | 195 | if not recognized: 196 | raise UnrecognizedCell() 197 | 198 | updated_payload = _makePayloadWithDigest(payload) 199 | crypt_path[origin].backward_digest.update(updated_payload) 200 | if cell.header.link_version < 4: 201 | cid = struct.pack('!H', cell.header.circ_id) 202 | else: 203 | cid = struct.pack('!I', cell.header.circ_id) 204 | cmd = struct.pack('!B', cell.header.cmd) 205 | 206 | dec = Cell.parse(cid + cmd + payload) 207 | return (dec, origin) 208 | 209 | 210 | def verifyCertSig(id_cert, cert_to_verify, algo='sha1'): 211 | '''Verify that the SSL certificate *id_cert* has signed the TLS cert 212 | *cert_to_verify*. 
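
    The DER-encoded certificate is split into its three top-level fields
    (tbsCertificate, signatureAlgorithm, signatureValue); the signature bit
    string is then checked against *id_cert* with ``OpenSSL.crypto.verify``.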
213 | 214 | :param id_cert: Identification Certificate 215 | :type id_cert: OpenSSL.crypto.X509 216 | :param cert_to_verify: certificate to verify signature on 217 | :type cert_to_verify: OpenSSL.crypto.X509 218 | :param algo: algorithm to use for certificate verification 219 | :type algo: str 220 | 221 | :returns: **bool** **True** if the signature of *cert_to_verify* can be 222 | verified from *id_cert*, **False** otherwise 223 | ''' 224 | cert_to_verify_ASN1 = OpenSSL.crypto.dump_certificate( 225 | OpenSSL.crypto.FILETYPE_ASN1, cert_to_verify) 226 | 227 | der = asn1.DerSequence() 228 | der.decode(cert_to_verify_ASN1) 229 | cert_to_verify_DER = der[0] 230 | cert_to_verify_ALGO = der[1] 231 | cert_to_verify_SIG = der[2] 232 | 233 | sig_DER = asn1.DerObject() 234 | sig_DER.decode(cert_to_verify_SIG) 235 | 236 | sig = sig_DER.payload 237 | 238 | # first byte is number of unused bytes. should be zero 239 | if sig[0] != '\x00': 240 | return False 241 | 242 | sig = sig[1:] 243 | 244 | try: 245 | OpenSSL.crypto.verify(id_cert, sig, cert_to_verify_DER, algo) 246 | return True 247 | except OpenSSL.crypto.Error: 248 | return False 249 | 250 | 251 | # XXX should we check that the time is not later than the current time? 252 | def validCertTime(cert): 253 | '''Verify that TLS certificate *cert*'s time is not earlier than 254 | cert.notBefore and not later than cert.notAfter. 255 | 256 | :param OpenSSL.crypto.X509 cert: TLS Certificate to verify times of 257 | :returns: **bool** **True** if cert.notBefore < now < cert.notAfter, 258 | **False** otherwise 259 | ''' 260 | now = datetime.now() 261 | try: 262 | validAfter = datetime.strptime(cert.get_notBefore(), '%Y%m%d%H%M%SZ') 263 | validUntil = datetime.strptime(cert.get_notAfter(), '%Y%m%d%H%M%SZ') 264 | return validAfter < now < validUntil 265 | except ValueError: 266 | return False 267 | -------------------------------------------------------------------------------- /oppy/history/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/history/__init__.py -------------------------------------------------------------------------------- /oppy/history/guards.py: -------------------------------------------------------------------------------- 1 | ''' 2 | 5. Guard nodes 3 | 4 | We use Guard nodes (also called "helper nodes" in the literature) to 5 | prevent certain profiling attacks. Here's the risk: if we choose entry and 6 | exit nodes at random, and an attacker controls C out of N relays 7 | (ignoring bandwidth), then the 8 | attacker will control the entry and exit node of any given circuit with 9 | probability (C/N)^2. But as we make many different circuits over time, 10 | then the probability that the attacker will see a sample of about (C/N)^2 11 | of our traffic goes to 1. Since statistical sampling works, the attacker 12 | can be sure of learning a profile of our behavior. 13 | 14 | If, on the other hand, we picked an entry node and held it fixed, we would 15 | have probability C/N of choosing a bad entry and being profiled, and 16 | probability (N-C)/N of choosing a good entry and not being profiled. 17 | 18 | When guard nodes are enabled, Tor maintains an ordered list of entry nodes 19 | as our chosen guards, and stores this list persistently to disk. If a Guard 20 | node becomes unusable, rather than replacing it, Tor adds new guards to the 21 | end of the list. 
When choosing the first hop of a circuit, Tor 22 | chooses at 23 | random from among the first NumEntryGuards (default 3) usable guards on the 24 | list. If there are not at least 2 usable guards on the list, Tor adds 25 | routers until there are, or until there are no more usable routers to add. 26 | 27 | A guard is unusable if any of the following hold: 28 | - it is not marked as a Guard by the networkstatuses, 29 | - it is not marked Valid (and the user hasn't set AllowInvalid entry) 30 | - it is not marked Running 31 | - Tor couldn't reach it the last time it tried to connect 32 | 33 | A guard is unusable for a particular circuit if any of the rules for path 34 | selection in 2.2 are not met. In particular, if the circuit is "fast" 35 | and the guard is not Fast, or if the circuit is "stable" and the guard is 36 | not Stable, or if the guard has already been chosen as the exit node in 37 | that circuit, Tor can't use it as a guard node for that circuit. 38 | 39 | If the guard is excluded because of its status in the networkstatuses for 40 | over 30 days, Tor removes it from the list entirely, preserving order. 41 | 42 | If Tor fails to connect to an otherwise usable guard, it retries 43 | periodically: every hour for six hours, every 4 hours for 3 days, every 44 | 18 hours for a week, and every 36 hours thereafter. Additionally, Tor 45 | retries unreachable guards the first time it adds a new guard to the list, 46 | since it is possible that the old guards were only marked as unreachable 47 | because the network was unreachable or down. 48 | 49 | Tor does not add a guard persistently to the list until the first time we 50 | have connected to it successfully. 51 | ''' 52 | import random 53 | 54 | from twisted.internet import defer 55 | 56 | from stem import Flag 57 | 58 | 59 | class GuardManager(object): 60 | 61 | def __init__(self, netstatus): 62 | self._netstatus = netstatus 63 | 64 | # this is a stub. just pick some random guards. 
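    # A fuller implementation, per the notes at the top of this module, would
    # keep an ordered guard list persistently on disk, choose from the first
    # NumEntryGuards (default 3) usable entries, and retry unreachable guards
    # on the documented schedule. The stub below only filters the consensus
    # for Guard/Running/Fast/Stable routers, shuffles them, and returns up to
    # 20 fingerprints. Illustrative use (names as defined in this module):
    #
    #     gm = GuardManager(netstatus)
    #     d = gm.getUsableGuards()   # Deferred firing with a fingerprint list
    #     d.addCallback(lambda fingerprints: fingerprints[0])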
65 | @defer.inlineCallbacks 66 | def getUsableGuards(self): 67 | microconsensus = yield self._netstatus.getMicroconsensus() 68 | 69 | guards = [n for n in microconsensus.routers.values() 70 | if (Flag.GUARD in n.flags and\ 71 | Flag.RUNNING in n.flags and\ 72 | Flag.FAST in n.flags and\ 73 | Flag.STABLE in n.flags)] 74 | random.shuffle(guards) 75 | defer.returnValue([g.fingerprint for g in guards[:20]]) 76 | -------------------------------------------------------------------------------- /oppy/netstatus/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/netstatus/__init__.py -------------------------------------------------------------------------------- /oppy/netstatus/microconsensusmanager.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | import logging 5 | import random 6 | import time 7 | import zlib 8 | 9 | from twisted.internet import defer 10 | from twisted.web.client import getPage 11 | 12 | from stem import Flag 13 | from stem.descriptor.networkstatus import NetworkStatusDocumentV3 14 | from stem.descriptor.remote import get_authorities 15 | 16 | from oppy import data_dir 17 | 18 | 19 | MICRO_CONSENSUS_PATH = '/tor/status-vote/current/consensus-microdesc.z' 20 | MICRO_CONSENSUS_CACHE_FILE = data_dir + 'cached-microdesc-consensus' 21 | SLEEP = 30 22 | 23 | 24 | # Major TODO's: 25 | # - check consensus signatures are valid before accepting 26 | # - better logging 27 | # - docs 28 | # - add download timeout 29 | # - check if cached consensus is still valid and don't download if it is 30 | 31 | class ConsensusDownloadFailed(Exception): 32 | pass 33 | 34 | 35 | class MicroconsensusManager(object): 36 | 37 | def __init__(self, autostart=True): 38 | logging.debug("MicroconsensusManager created.") 39 | self._consensus = None 40 | self._pending_consensus_requests = [] 41 | self._consensus_download_callbacks = set() 42 | 43 | if autostart is True: 44 | self.start() 45 | 46 | def start(self, initial=True): 47 | self._scheduledConsensusUpdate(initial) 48 | 49 | # TODO: add to callback list or something to be nicer 50 | def getMicroconsensus(self): 51 | d = defer.Deferred() 52 | if self._consensus: 53 | d.callback(self._consensus) 54 | else: 55 | self._pending_consensus_requests.append(d) 56 | 57 | return d 58 | 59 | def addMicroconsensusDownloadCallback(self, callback): 60 | self._consensus_download_callbacks.add(callback) 61 | 62 | def removeMicroconsensusDownloadCallback(self, callback): 63 | try: 64 | self._consensus_download_callbacks.remove(callback) 65 | except KeyError: 66 | msg = ("MicroconsensusManager got request to remove callback " 67 | "{} but has no reference to this function." 
68 | .format(callback)) 69 | logging.debug(msg) 70 | 71 | def _servePendingRequests(self): 72 | for request in self._pending_consensus_requests: 73 | request.callback(self._consensus) 74 | self._pending_consensus_requests = [] 75 | 76 | def _serveConsensusDownloadCallbacks(self): 77 | for callback in self._consensus_download_callbacks: 78 | callback() 79 | 80 | @defer.inlineCallbacks 81 | def _scheduledConsensusUpdate(self, initial=False): 82 | logging.debug("MicroconsensusManager running scheduled consensus " 83 | "update.") 84 | if initial is True or self._consensus is None: 85 | v2dirs = _readV2DirsFromCacheFile() or get_authorities().values() 86 | else: 87 | v2dirs = getV2DirsFromConsensus(self._consensus) 88 | 89 | try: 90 | self._consensus = yield self._downloadMicroconsensus(v2dirs) 91 | except ConsensusDownloadFailed as e: 92 | logging.debug(e) 93 | from twisted.internet import reactor 94 | reactor.callLater(SLEEP, self._scheduledConsensusUpdate, initial) 95 | return 96 | 97 | self._scheduleNextConsensusDownload() 98 | self._servePendingRequests() 99 | self._serveConsensusDownloadCallbacks() 100 | 101 | @defer.inlineCallbacks 102 | def _downloadMicroconsensus(self, v2dirs): 103 | random.shuffle(v2dirs) 104 | for dc in v2dirs: 105 | try: 106 | host = "http://" + str(dc.address) + ":" + str(dc.dir_port) 107 | raw = yield getPage(str(host+MICRO_CONSENSUS_PATH)) 108 | # TODO: validate signatures 109 | consensus = _processRawMicroconsensus(raw) 110 | defer.returnValue(consensus) 111 | except Exception as e: 112 | msg = "Error downloading consensus: {}. Retrying.".format(e) 113 | logging.debug(msg) 114 | 115 | raise ConsensusDownloadFailed("Failed to download fresh consensus.") 116 | 117 | def _scheduleNextConsensusDownload(self): 118 | from twisted.internet import reactor 119 | 120 | va = time.mktime(self._consensus.valid_after.utctimetuple()) 121 | fu = time.mktime(self._consensus.fresh_until.utctimetuple()) 122 | vu = time.mktime(self._consensus.valid_until.utctimetuple()) 123 | i1 = (fu - va) * (3.0/4.0) 124 | i2 = (vu - (fu +i1)) * (7.0/8.0) 125 | 126 | seconds = random.randrange(int(i1), int(i2)) 127 | 128 | reactor.callLater(seconds, self._scheduledConsensusUpdate) 129 | 130 | 131 | def getV2DirsFromConsensus(consensus): 132 | return [r for r in consensus.routers.values() if Flag.V2DIR in r.flags] 133 | 134 | 135 | def _readV2DirsFromCacheFile(): 136 | try: 137 | with open(MICRO_CONSENSUS_CACHE_FILE, 'rb') as f: 138 | data = f.read() 139 | consensus = NetworkStatusDocumentV3(data) 140 | return getV2DirsFromConsensus(consensus) 141 | except Exception as e: 142 | msg = ("Failed to read cached-consensus-microdesc. Reason: {}." 143 | .format(e)) 144 | logging.debug(msg) 145 | return None 146 | 147 | 148 | def _writeConsensusCacheFile(consensus): 149 | try: 150 | with open(MICRO_CONSENSUS_CACHE_FILE, 'wb') as f: 151 | f.write(str(consensus)) 152 | except Exception as e: 153 | msg = ("Failed to write cached-consensus-microdesc. Reason: {}." 
154 | .format(e)) 155 | logging.debug(msg) 156 | 157 | 158 | def _processRawMicroconsensus(raw): 159 | raw = zlib.decompress(raw) 160 | consensus = NetworkStatusDocumentV3(raw) 161 | _writeConsensusCacheFile(consensus) 162 | return consensus 163 | -------------------------------------------------------------------------------- /oppy/netstatus/microdescriptormanager.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | import re 5 | import logging 6 | import random 7 | import zlib 8 | 9 | from base64 import b64encode 10 | from hashlib import sha256 11 | 12 | from twisted.internet import defer 13 | from twisted.web.client import getPage 14 | 15 | from stem.descriptor import microdescriptor 16 | 17 | from oppy import data_dir 18 | import oppy.netstatus.microconsensusmanager as microconsensusmanager 19 | 20 | 21 | MICRO_DESC_PATH = '/tor/micro/d/' 22 | CIRCUIT_BUILD_THRESHOLD = 0.8 23 | REQUEST_MAX = 92 24 | REGEX = re.compile('^onion-key$', re.MULTILINE) 25 | REQUEST_DELIM = '-' 26 | MICRO_DESC_CACHE_FILE = data_dir + 'cached-microdescs.new' 27 | TIMEOUT = 20 28 | 29 | 30 | # Major TODO's: 31 | # - docs 32 | # - better logging 33 | # - do we reset whether or not we can we have enough descs to build a 34 | # circuit after getting a new consensus? 35 | # - write cached descriptors file 36 | # - discard descriptors not found in consensus 37 | # - set download timeout 38 | 39 | 40 | class MicrodescriptorManager(object): 41 | 42 | def __init__(self, microconsensus_manager, autostart=True): 43 | logging.debug('MicrodescriptorManager created.') 44 | self._microconsensus_manager = microconsensus_manager 45 | self._pending_requests_for_circuit = [] 46 | self._enough_for_circuit = False 47 | self._total_descriptors = 0 48 | self._microdescriptors = None 49 | 50 | if autostart is True: 51 | self.start() 52 | 53 | def start(self): 54 | self._microdescriptors = _getMicrodescriptorsFromCacheFile() 55 | self._downloadMicrodescriptors(initial=True) 56 | 57 | def getMicrodescriptorsForCircuit(self): 58 | d = defer.Deferred() 59 | if self._microdescriptors and self._enough_for_circuit: 60 | d.callback(self._microdescriptors) 61 | else: 62 | self._pending_requests_for_circuit.append(d) 63 | 64 | return d 65 | 66 | def _servePendingRequestsForCircuit(self): 67 | for request in self._pending_requests_for_circuit: 68 | request.callback(self._microdescriptors) 69 | self._pending_requests_for_circuit = [] 70 | 71 | @defer.inlineCallbacks 72 | def _downloadMicrodescriptors(self, initial=False): 73 | consensus = yield self._microconsensus_manager.getMicroconsensus() 74 | v2dirs = microconsensusmanager.getV2DirsFromConsensus(consensus) 75 | self._total_descriptors = len(consensus.routers) 76 | self._discardUnlistedMicrodescriptors(consensus) 77 | needed_digests = _getNeededDescriptorDigests( 78 | consensus, self._microdescriptors) 79 | # if we already have >= 80% of descriptors, we can build circuits 80 | # immediately 81 | self._checkIfReadyToBuildCircuit() 82 | # only request <= REQUEST_MAX descriptors from each dircache 83 | blocks = [needed_digests[i:i+REQUEST_MAX] 84 | for i in xrange(0, len(needed_digests), REQUEST_MAX)] 85 | 86 | if len(blocks) > 0: 87 | task_list = [self._downloadMicrodescriptorBlock(b, v2dirs) 88 | for b in blocks] 89 | d = defer.gatherResults(task_list) 90 | d.addCallback(self._writeMicrodescriptorCacheFile) 91 | 92 | if initial is True: 93 | 
self._microconsensus_manager.addMicroconsensusDownloadCallback( 94 | self._downloadMicrodescriptors) 95 | 96 | @defer.inlineCallbacks 97 | def _downloadMicrodescriptorBlock(self, block, v2dirs): 98 | descs = set() 99 | for d in block: 100 | try: 101 | tmp = b64encode(d.decode('hex')).rstrip('=') 102 | descs.add(tmp) 103 | except TypeError: 104 | msg = "Malformed descriptor {}. Discarding.".format(d) 105 | logging.debug(msg) 106 | 107 | dircaches = list(v2dirs) 108 | 109 | for _ in xrange(len(dircaches)): 110 | dircache = random.choice(dircaches) 111 | url = _makeDescDownloadURL(dircache, descs) 112 | try: 113 | result = yield getPage(url, timeout=TIMEOUT) 114 | # descs set to leftover descriptors that weren't received 115 | descs = self._processMicrodescriptorBlockResult(result, descs) 116 | if len(descs) == 0: 117 | break 118 | except Exception: 119 | # if a download fails, try again at a different dircache 120 | dircaches.remove(dircache) 121 | 122 | if len(descs) != 0: 123 | msg = ("Tried all V2Dir caches and failed to download the " 124 | "descriptors with digests: {}".format(' '.join(descs))) 125 | logging.debug(msg) 126 | 127 | defer.returnValue(None) 128 | 129 | def _processMicrodescriptorBlockResult(self, result, requested): 130 | try: 131 | micro_descs = _decompressAndSplitResult(result) 132 | except ValueError: 133 | return requested 134 | 135 | processed = {} 136 | 137 | for m in micro_descs: 138 | hashed = b64encode(sha256(m).digest()).rstrip('=') 139 | # discard any descriptors we didn't request 140 | if hashed not in requested: 141 | continue 142 | try: 143 | desc = microdescriptor.Microdescriptor(m) 144 | except Exception: 145 | # discard unparseable descriptors (shouldn't happen) 146 | continue 147 | 148 | processed[desc.digest] = desc 149 | requested.remove(hashed) 150 | 151 | self._saveProcessedMicrodescriptors(processed) 152 | # return any requested descriptors that weren't received/processed 153 | return requested 154 | 155 | def _saveProcessedMicrodescriptors(self, processed_descriptors): 156 | if self._microdescriptors is None: 157 | self._microdescriptors = processed_descriptors 158 | else: 159 | self._microdescriptors.update(processed_descriptors) 160 | 161 | if self._enough_for_circuit is False: 162 | self._checkIfReadyToBuildCircuit() 163 | 164 | def _checkIfReadyToBuildCircuit(self): 165 | if not self._microdescriptors or self._enough_for_circuit: 166 | return 167 | 168 | ml = float(len(self._microdescriptors)) 169 | cl = float(self._total_descriptors) 170 | 171 | if (ml / cl) >= CIRCUIT_BUILD_THRESHOLD: 172 | self._enough_for_circuit = True 173 | self._servePendingRequestsForCircuit() 174 | 175 | def _writeMicrodescriptorCacheFile(self, _): 176 | try: 177 | with open(MICRO_DESC_CACHE_FILE, 'w') as f: 178 | for desc in self._microdescriptors.values(): 179 | f.write(str(desc)) 180 | logging.debug("Wrote microdescriptor cache file.") 181 | except Exception as e: 182 | msg = ("Failed to write microdescriptor cache file. Reason: {}." 
183 | .format(e)) 184 | logging.debug(msg) 185 | 186 | def _discardUnlistedMicrodescriptors(self, consensus): 187 | if self._microdescriptors is None: 188 | return 189 | 190 | digests = set([r.digest for r in consensus.routers.values()]) 191 | old_digests = set(self._microdescriptors.keys()) 192 | unlisted = old_digests - digests 193 | for d in unlisted: 194 | del self._microdescriptors[d] 195 | 196 | 197 | # TODO: see if there's a way to parse_file() with stem 198 | def _getMicrodescriptorsFromCacheFile(fname=MICRO_DESC_CACHE_FILE): 199 | try: 200 | with open(fname, 'r') as f: 201 | data = f.read() 202 | except Exception as e: 203 | msg = ("Failed to read microdescriptor cache file. Reason: {}." 204 | .format(e)) 205 | logging.debug(msg) 206 | return None 207 | 208 | micro_descs = {} 209 | cached_descs = ['onion-key' + m for m in REGEX.split(data)] 210 | for m in cached_descs: 211 | try: 212 | desc = microdescriptor.Microdescriptor(m) 213 | # discard any malformed descriptors 214 | except Exception as e: 215 | continue 216 | 217 | micro_descs[desc.digest] = desc 218 | 219 | msg = ("Read {} cached microdescriptors.".format(len(micro_descs))) 220 | logging.debug(msg) 221 | return micro_descs if len(micro_descs) > 0 else None 222 | 223 | 224 | def _decompressAndSplitResult(result): 225 | try: 226 | result = zlib.decompress(result) 227 | micro_descs = ['onion-key' + m for m in REGEX.split(result)] 228 | micro_descs.pop(0) 229 | except Exception as e: 230 | raise ValueError(str(e)) 231 | 232 | return micro_descs 233 | 234 | def _makeDescDownloadURL(v2dir, digest_list): 235 | request = REQUEST_DELIM.join([digest for digest in digest_list]) 236 | host = 'http://' + v2dir.address + ':' + str(v2dir.dir_port) 237 | path = MICRO_DESC_PATH + request + '.z' 238 | return str(host+path) 239 | 240 | 241 | def _getNeededDescriptorDigests(consensus, descriptors): 242 | total_digests = set([r.digest for r in consensus.routers.values()]) 243 | if descriptors is None: 244 | return list(total_digests) 245 | return list(total_digests - set([r.digest for r in descriptors.values()])) 246 | -------------------------------------------------------------------------------- /oppy/netstatus/netstatus.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | import logging 5 | 6 | from oppy.netstatus.microconsensusmanager import MicroconsensusManager 7 | from oppy.netstatus.microdescriptormanager import MicrodescriptorManager 8 | 9 | 10 | class NetStatus(object): 11 | '''Download consensus and server descriptor documents.''' 12 | 13 | def __init__(self): 14 | logging.debug("Created NetStatus.") 15 | self._mcm = MicroconsensusManager() 16 | self._mdm = MicrodescriptorManager(self._mcm) 17 | 18 | def getMicrodescriptorsForCircuit(self): 19 | return self._mdm.getMicrodescriptorsForCircuit() 20 | 21 | def getMicroconsensus(self): 22 | return self._mcm.getMicroconsensus() 23 | -------------------------------------------------------------------------------- /oppy/oppy: -------------------------------------------------------------------------------- 1 | #! 
/usr/bin/env python 2 | 3 | # Copyright 2014, 2015, Nik Kinkel 4 | # See LICENSE for licensing information 5 | 6 | import argparse 7 | import logging 8 | import sys 9 | 10 | from twisted.internet import reactor 11 | from twisted.internet.endpoints import TCP4ServerEndpoint 12 | 13 | 14 | DEFAULT_SOCKS_PORT = 10050 15 | MIN_PORT = 1 16 | MAX_PORT = 65535 17 | PRIVILEGED_PORTS_MAX = 1023 18 | 19 | parser = argparse.ArgumentParser() 20 | parser.add_argument('-l', '--log-level', action='store', default=logging.INFO) 21 | parser.add_argument('-f', '--log-file', action='store') 22 | parser.add_argument('-p', '--SOCKS-port', action='store', 23 | default=DEFAULT_SOCKS_PORT) 24 | 25 | args = parser.parse_args() 26 | 27 | try: 28 | logLevel = getattr(logging, args.log_level.upper()) 29 | except AttributeError: 30 | logLevel = logging.INFO 31 | 32 | oppyLogger = logging.getLogger() 33 | oppyLogger.setLevel(logLevel) 34 | fmt = '%(asctime)s %(levelname)s %(message)s' 35 | datefmt = '%Y-%m-%d %H:%M:%S' 36 | formatter = logging.Formatter(fmt=fmt, datefmt=datefmt) 37 | 38 | # default logging to sys.stdout 39 | if args.log_file is not None: 40 | logger = logging.FileHandler(args.log_file) 41 | else: 42 | logger = logging.StreamHandler(sys.stdout) 43 | 44 | logger.setFormatter(formatter) 45 | oppyLogger.addHandler(logger) 46 | 47 | try: 48 | socks_port = int(args.SOCKS_port) 49 | if not MIN_PORT <= socks_port <= MAX_PORT: 50 | raise ValueError 51 | if socks_port <= PRIVILEGED_PORTS_MAX: 52 | msg = 'It is not recommended to run oppy as a privileged user or on a ' 53 | msg += 'privileged port.' 54 | logging.warning(msg) 55 | except ValueError: 56 | msg = ('Invalid SOCKS port {}. Using default {}.' 57 | .format(args.SOCKS_port, DEFAULT_SOCKS_PORT)) 58 | socks_port = DEFAULT_SOCKS_PORT 59 | 60 | msg = 'oppy will listen for connections on port {}.'.format(socks_port) 61 | logging.info(msg) 62 | logging.info('But we need to build some circuits first...') 63 | logging.info('Retrieving network status information.') 64 | 65 | from oppy.netstatus.netstatus import NetStatus 66 | netstatus = NetStatus() 67 | 68 | from oppy.connection.connectionmanager import ConnectionManager 69 | connection_manager = ConnectionManager() 70 | 71 | from oppy.circuit.circuitmanager import CircuitManager 72 | circuit_manager = CircuitManager(connection_manager, netstatus) 73 | 74 | # catch CTRL-C to shutdown properly 75 | from oppy.util.tools import shutdown 76 | reactor.addSystemEventTrigger('before', 'shutdown', shutdown, circuit_manager) 77 | 78 | from oppy.socks.socks import OppySOCKSProtocolFactory 79 | server_endpoint = TCP4ServerEndpoint(reactor, socks_port) 80 | server_endpoint.listen(OppySOCKSProtocolFactory(circuit_manager)) 81 | 82 | reactor.run() 83 | -------------------------------------------------------------------------------- /oppy/path/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/path/__init__.py -------------------------------------------------------------------------------- /oppy/path/exceptions.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | 5 | class NoUsableGuardsException(Exception): 6 | pass 7 | 8 | 9 | class PathSelectionFailedException(Exception): 10 | pass 11 | -------------------------------------------------------------------------------- 
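# Illustrative aside (not part of the oppy source): oppy/path/util.py, which
# follows, selects relays by cumulative bandwidth weight. A simplified,
# standalone sketch of its getWeightedNodes/selectWeightedNode pair might
# look like this (pick_weighted is a hypothetical name):
import random

def pick_weighted(nodes, weights):
    # weights maps node -> non-negative consensus weight
    total = float(sum(weights[n] for n in nodes))
    if total == 0:
        raise ValueError('Node list has total weight zero.')
    cum = 0.0
    weighted = []
    for node in nodes:
        cum += weights[node] / total
        weighted.append((node, cum))   # cumulative probability, ends at 1.0
    r = random.random()
    for node, c in weighted:           # linear scan; the real code bisects
        if r <= c:
            return node
    raise ValueError('Weights must sum to 1.')

# e.g. pick_weighted(['A', 'B', 'C'], {'A': 5, 'B': 3, 'C': 2}) returns 'A'
# about half the time, 'B' about 30% of the time, and 'C' about 20%.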
/oppy/path/util.py: -------------------------------------------------------------------------------- 1 | # NOTE: 2 | # Most of these functions are based off pathsim.py from the torps project, 3 | # but this code is neither reviewed nor endorsed by the torps authors. 4 | # Torps is a relatively straightforward Python port of tor's path selection 5 | # algorithm. The original torps code and licensing information can be 6 | # found at: https://github.com/torps/torps 7 | import random 8 | 9 | from stem import Flag 10 | 11 | # TODO: docs 12 | # TODO: mention in docs the assumptions made here 13 | # (i.e. primarily that node fprints are guaranteed to be in descriptors) 14 | 15 | def nodeUsableWithOther(desc1, status_entry1, desc2, status_entry2): 16 | # return True if test_node is usable in a circuit with node 17 | # check: 18 | # - nodes are not equal 19 | # - nodes are not in same family 20 | # - nodes are not in same /16 subnet 21 | if status_entry1.fingerprint == status_entry2.fingerprint: 22 | return False 23 | if inSameFamily(desc1, status_entry1, desc2, status_entry2): 24 | return False 25 | if inSame16Subnet(status_entry1, status_entry2): 26 | return False 27 | 28 | return True 29 | 30 | 31 | def selectWeightedNode(weighted_nodes): 32 | """Takes (node,cum_weight) pairs where non-negative cum_weight increases, 33 | ending at 1. Use cum_weights as cumulative probablity to select a node.""" 34 | r = random.random() 35 | begin = 0 36 | end = len(weighted_nodes)-1 37 | mid = int((end+begin)/2) 38 | while True: 39 | if r <= weighted_nodes[mid][1]: 40 | if mid == begin: 41 | return weighted_nodes[mid][0] 42 | else: 43 | end = mid 44 | mid = int((end+begin)/2) 45 | else: 46 | if mid == end: 47 | raise ValueError('Weights must sum to 1.') 48 | else: 49 | begin = mid+1 50 | mid = int((end+begin)/2) 51 | 52 | 53 | def getWeightedNodes(nodes, weights): 54 | """Takes list of nodes (rel_stats) and weights (as a dict) and outputs 55 | a list of (node, cum_weight) pairs, where cum_weight is the cumulative 56 | probability of the nodes weighted by weights. 57 | """ 58 | # compute total weight 59 | total_weight = sum([float(weights[n]) for n in nodes]) 60 | if total_weight == 0: 61 | raise ValueError('Error: Node list has total weight zero.') 62 | 63 | # create cumulative weights 64 | weighted_nodes = [] 65 | cum_weight = 0 66 | for node in nodes: 67 | cum_weight += weights[node] / total_weight 68 | weighted_nodes.append((node, cum_weight)) 69 | 70 | return weighted_nodes 71 | 72 | 73 | def getPositionWeights(nodes, cons_rel_stats, position, bw_weights, 74 | bwweightscale): 75 | """Computes the consensus "bandwidth" weighted by position weights.""" 76 | weights = {} 77 | bwweightscale = float(bwweightscale) 78 | for node in nodes: 79 | r = cons_rel_stats[node] 80 | bw = float(r.bandwidth) 81 | weight = float(getBwweight(r.flags, position, bw_weights)) 82 | weight_scaled = weight / bwweightscale 83 | weights[node] = bw * weight_scaled 84 | return weights 85 | 86 | 87 | def getBwweight(flags, position, bw_weights): 88 | """Returns weight to apply to relay's bandwidth for given position. 
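    The weight values (Wgg, Wgd, Wmd, Wee, ...) come from the consensus
    'bandwidth-weights' line; for example, a relay holding both the Guard and
    Exit flags that is being considered for the guard position is weighted
    by Wgd.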
89 | flags: list of Flag values for relay from a consensus 90 | position: position for which to find selection weight, 91 | one of 'g' for guard, 'm' for middle, and 'e' for exit 92 | bw_weights: bandwidth_weights from NetworkStatusDocumentV3 consensus 93 | """ 94 | if position == 'g': 95 | if (Flag.GUARD in flags) and (Flag.EXIT in flags): 96 | return bw_weights['Wgd'] 97 | elif Flag.GUARD in flags: 98 | return bw_weights['Wgg'] 99 | elif Flag.EXIT not in flags: 100 | return bw_weights['Wgm'] 101 | else: 102 | raise ValueError('Wge weight does not exist.') 103 | elif position == 'm': 104 | if (Flag.GUARD in flags) and (Flag.EXIT in flags): 105 | return bw_weights['Wmd'] 106 | elif Flag.GUARD in flags: 107 | return bw_weights['Wmg'] 108 | elif Flag.EXIT in flags: 109 | return bw_weights['Wme'] 110 | else: 111 | return bw_weights['Wmm'] 112 | elif position == 'e': 113 | if (Flag.GUARD in flags) and (Flag.EXIT in flags): 114 | return bw_weights['Wed'] 115 | elif Flag.GUARD in flags: 116 | return bw_weights['Weg'] 117 | elif Flag.EXIT in flags: 118 | return bw_weights['Wee'] 119 | else: 120 | return bw_weights['Wem'] 121 | else: 122 | raise ValueError('Unrecognized position: {}.'.format(position)) 123 | 124 | 125 | def inSameFamily(desc1, status_entry1, desc2, status_entry2): 126 | """Takes list of descriptors and two node fingerprints, 127 | checks if nodes list each other as in the same family.""" 128 | fprint1 = status_entry1.fingerprint 129 | fprint2 = status_entry2.fingerprint 130 | family1 = set([i.strip(u'$') for i in desc1.family]) 131 | family2 = set([i.strip(u'$') for i in desc2.family]) 132 | 133 | # True only if both nodes list each other 134 | return (fprint1 in family2) and (fprint2 in family1) 135 | 136 | 137 | # XXX: what do we do for IPv6? 138 | def inSame16Subnet(status_entry1, status_entry2): 139 | address1 = status_entry1.address 140 | address2 = status_entry2.address 141 | 142 | return address1.split('.')[:2] == address2.split('.')[:2] 143 | -------------------------------------------------------------------------------- /oppy/socks/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/socks/__init__.py -------------------------------------------------------------------------------- /oppy/stream/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/stream/__init__.py -------------------------------------------------------------------------------- /oppy/stream/stream.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | ''' 5 | .. topic:: Details 6 | 7 | Streams are the interface between local requests coming from 8 | OppySOCKSProtocol instances and circuits. Streams are responsible for: 9 | 10 | - Initiating a connection request (i.e. 
a RelayBeginCell) on behalf 11 | of a local application 12 | - Passing data from circuits to local applications and vice versa 13 | - Informing OppySOCKSProtocol instances (and thus, the client 14 | application) when a remote resource closes the stream 15 | - Informing the circuit when the local application closes the stream 16 | - Splitting up data to be written to the network into chunks that 17 | can fit into a RelayData cell 18 | - Doing some rudimentary flow-control 19 | 20 | ''' 21 | import logging 22 | 23 | from twisted.internet import defer 24 | 25 | from oppy.cell.definitions import MAX_RPAYLOAD_LEN 26 | 27 | 28 | SENDME_THRESHOLD = 450 29 | STREAM_WINDOW_INIT = 500 30 | STREAM_WINDOW_SIZE = 50 31 | 32 | 33 | class Stream(object): 34 | '''Represent a Tor Stream.''' 35 | 36 | def __init__(self, circuit_manager, request, socks): 37 | ''' 38 | :param oppy.util.exitrequest.ExitRequest request: connection request 39 | for this stream 40 | :param oppy.socks.socks.OppySOCKSProtocol socks: socks protocol 41 | instance this stream should relay data to and from 42 | ''' 43 | self.stream_id = None 44 | self._read_queue = defer.DeferredQueue() 45 | self._write_queue = defer.DeferredQueue() 46 | self._read_deferred = None 47 | self._write_deferred = None 48 | self.request = request 49 | self.socks = socks 50 | self._deliver_window = STREAM_WINDOW_INIT 51 | self._package_window = STREAM_WINDOW_INIT 52 | self.circuit = None 53 | # set this flag if SOCKS closes our connection before the circuit 54 | # is done building 55 | self._closed = False 56 | self._circuit_request = circuit_manager.getOpenCircuit(self) 57 | self._circuit_request.addCallback(self._registerNewStream) 58 | 59 | def recv(self, data): 60 | '''Put data received from the network on this stream's read queue. 61 | 62 | Called when the circuit attached to this stream passes data to this 63 | stream. 64 | 65 | :param str data: data passed in from circuit to write to this stream's 66 | attached SOCKS protocol 67 | ''' 68 | self._read_queue.put(data) 69 | 70 | def send(self, data): 71 | '''Split *data* into chunks that can fit in a RelayData cell, and put 72 | each chunk on this stream's write queue. 73 | 74 | Called when the local application attached to this stream sends data 75 | to the network. 76 | 77 | :param str data: data passed in from this stream's attached SOCKS 78 | protocol to write to this stream's circuit 79 | ''' 80 | chunks = _chunkRelayData(data) 81 | for chunk in chunks: 82 | self._write_queue.put(chunk) 83 | 84 | def incrementPackageWindow(self): 85 | '''Increment this stream's package window and, if the package window 86 | is now above zero and this stream was in a buffering state, begin 87 | listening for local data again. 88 | 89 | Called by the attached circuit when it receives a sendme cell 90 | for this stream. 91 | ''' 92 | self._package_window += STREAM_WINDOW_SIZE 93 | # if we were buffering, we're now free to send data again 94 | if self._write_deferred is None and self._package_window > 0: 95 | self._pollWriteQueue() 96 | 97 | def streamConnected(self): 98 | '''Begin listening for local data from the attached SOCKS protocol 99 | to write to this stream's circuit. 100 | 101 | Called when the attached circuit receives a RelayConnected cell for 102 | this stream's RelayBegin request. 103 | ''' 104 | self._pollWriteQueue() 105 | 106 | def closeFromCircuit(self): 107 | '''Called when this stream is closed by the circuit. 
108 | 109 | This can be caused by receiving a RelayEnd cell, the circuit being 110 | torn down, or the connection going down. We do not need to send a 111 | RelayEnd cell ourselves if the circuit closed this stream. 112 | 113 | Notify any associated SOCKS protocols and let circuit know this stream 114 | has closed. 115 | ''' 116 | msg = ("Stream {} closing from circuit {}." 117 | .format(self.stream_id, self.circuit.circuit_id)) 118 | logging.debug(msg) 119 | self._closed = True 120 | self.socks.closeFromStream() 121 | 122 | # TODO: fix docs 123 | def closeFromSOCKS(self): 124 | '''Called when the attached SOCKS protocol object is done with this 125 | stream. 126 | 127 | Request that circuit send a RelayEnd cell on our behalf and notify 128 | circuit we're now closed. 129 | ''' 130 | if self.circuit is not None: 131 | msg = ("Stream {} closing on circuit {} from SOCKS." 132 | .format(self.stream_id, self.circuit.circuit_id)) 133 | logging.debug(msg) 134 | self.circuit.removeStream(self) 135 | else: 136 | logging.debug("Stream closed before circuit build task completes.") 137 | 138 | self._closed = True 139 | 140 | def _registerNewStream(self, circuit): 141 | '''Register this stream with it's circuit, initiate a conenction 142 | request, and begin listening for data from the network. 143 | 144 | Called when this stream receives a suitable open circuit. 145 | 146 | :param oppy.circuit.circuit.Circuit circuit: open circuit suitable 147 | for use on this stream 148 | ''' 149 | # don't do anything if the client closed the connection before the 150 | # circuit was done building 151 | if self._closed is True: 152 | return 153 | 154 | self.circuit = circuit 155 | self._circuit_request = None 156 | # notify circuit it has a new stream 157 | # NOTE: circuit sets this stream's stream_id 158 | self.circuit.addStreamAndSetStreamID(self) 159 | # tell the circuit to setup this stream (i.e. send a RELAY_BEGIN cell) 160 | self.circuit.beginStream(self) 161 | # start listening for incoming cells from our circuit 162 | self._pollReadQueue() 163 | 164 | def _pollWriteQueue(self): 165 | '''Pull a chunk of data from this stream's write queue and, when the 166 | data is ready, write it to the attached circuit. 167 | ''' 168 | self._write_deferred = self._write_queue.get() 169 | self._write_deferred.addCallback(self._writeData) 170 | 171 | def _pollReadQueue(self): 172 | '''Pull a chunk of data from this stream's read queue and, when the 173 | data is ready, write it to the attached SOCKS protocol instance. 174 | ''' 175 | self._read_deferred = self._read_queue.get() 176 | self._read_deferred.addCallback(self._recvData) 177 | 178 | def _writeData(self, data): 179 | '''Write *data* to the circuit attached to this stream and decrement 180 | the packaging window. 181 | 182 | :param str data: data received from attached SOCKS protocol instance 183 | to be written to the attached circuit 184 | ''' 185 | self.circuit.send(data, self) 186 | self._decPackageWindow() 187 | 188 | def _recvData(self, data): 189 | '''Receive *data* from the attached circuit and hand off to the 190 | attached SOCKS protocol instance. Decrement this stream's deliver 191 | window. 192 | 193 | :param str data: data received from attached circuit, to be written 194 | to the attached SOCKS protocol instance 195 | ''' 196 | self.socks.recv(data) 197 | self._decDeliverWindow() 198 | 199 | def _decDeliverWindow(self): 200 | '''Decrement this stream's deliver window and initiate sending a 201 | sendme cell if the deliver window drops too low. 
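
        With the module defaults (STREAM_WINDOW_INIT = 500,
        SENDME_THRESHOLD = 450, STREAM_WINDOW_SIZE = 50), this amounts to
        requesting a stream-level sendme after every 50 chunks delivered to
        the SOCKS protocol.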
202 | 203 | If the deliver window is <= SENDME_THRESHOLD, tell the attached 204 | circuit to send a sendme cell on behalf of this stream. 205 | ''' 206 | # XXX we should be checking how many cells we have left to flush 207 | # here before just blindly writing a RELAY_SENDME 208 | self._deliver_window -= 1 209 | if self._deliver_window <= SENDME_THRESHOLD: 210 | self.circuit.sendStreamSendMe(self) 211 | self._deliver_window += STREAM_WINDOW_SIZE 212 | self._pollReadQueue() 213 | 214 | def _decPackageWindow(self): 215 | '''Decrement this stream's package window and, if we still can, 216 | listen for more data from the attached SOCKS protocol instance. 217 | 218 | If the package window <= 0, we need to wait until we receive a 219 | sendme cell before writing anymore local data from this stream to 220 | the attached circuit. 221 | ''' 222 | self._package_window -= 1 223 | if self._package_window > 0: 224 | self._pollWriteQueue() 225 | else: 226 | self._write_deferred = None 227 | 228 | 229 | def _chunkRelayData(data): 230 | '''Split *data* into chunks that can fit inside a RelayData cell. 231 | 232 | :param str data: data to split 233 | :returns **list, str** list of pieces of data split into sizes that 234 | fit into a RelayData cell 235 | ''' 236 | LEN = MAX_RPAYLOAD_LEN 237 | return [data[i:i + LEN] for i in xrange(0, len(data), LEN)] 238 | -------------------------------------------------------------------------------- /oppy/tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/__init__.py -------------------------------------------------------------------------------- /oppy/tests/integration/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/integration/__init__.py -------------------------------------------------------------------------------- /oppy/tests/integration/cell/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/integration/cell/__init__.py -------------------------------------------------------------------------------- /oppy/tests/integration/cell/cellbase.py: -------------------------------------------------------------------------------- 1 | from oppy.cell.cell import Cell 2 | from oppy.cell.exceptions import BadPayloadData 3 | 4 | # base class for cell tests. 5 | # a number of tests for cell methods differ only in what type of concrete 6 | # cell is instantiated, so we can get rid of a bunch of boiler-plate test 7 | # code with a bit of inheritance. 8 | # 9 | # tests for concrete types are mostly written declaratively by defining 10 | # the byte strings the cells should operate on and the attributes cells 11 | # should have as well as "bad" values that should raise exceptions. 12 | # 13 | # concrete test classes define the following fields: 14 | # 15 | # cell_constants (dict): 16 | # - 'cell-bytes-good': a valid byte string for this cell to test against 17 | # - 'cell-type': the type of this cell (e.g. 
Create2Cell) 18 | # - 'cell-bytes-good-nopadding': a valid byte string for this cell 19 | # without padding added (ignore or varlen cells) 20 | # 21 | # cell_header (OrderedDict): 22 | # - 'circ_id': circuit id for this cell 23 | # - 'cmd': command for this cell 24 | # - 'link_version': link version for this cell 25 | # 26 | # cell_attributes (OrderedDict): 27 | # - define all attributes here a cell should have using good/valid values 28 | # (e.g. for Create2Cell, you would define 'htype', 'hlen', and 'hdata') 29 | # 30 | # bad_parse_inputs (tuple): 31 | # - a tuple of byte strings to test against that are invalid in some way 32 | # each string listed here will be tested using Cell.parse, expecting to 33 | # raise a BadPayloadData exception 34 | # 35 | # bad_make_inputs (tuple): 36 | # - a tuple of tuples of arguments to `cell_type`.make() that should 37 | # raise a BadPayloadData exception (e.g. one of the arguments given 38 | # is an invalid value for the make() method for that cell) 39 | # 40 | # encrypted (bool): 41 | # - whether or not this cell should be parsed as if it was encrypted 42 | # (note that this is meaningless for VarLenCells and all FixedLenCells 43 | # except EncryptedCell) 44 | # 45 | # with those fields defined, all the following tests will be run for each 46 | # cell using the defined values 47 | 48 | CIRC_ID = 1 49 | 50 | 51 | class CellTestBase(object): 52 | 53 | def test_parse(self): 54 | '''Try to parse concrete cell type from a byte string and verify 55 | we read the correct cell attributes. 56 | ''' 57 | cell = Cell.parse(self.cell_constants['cell-bytes-good'], 58 | encrypted=self.encrypted) 59 | assert isinstance(cell, self.cell_constants['cell-type']) 60 | assert cell.header.__dict__ == self.cell_header 61 | for key in self.cell_attributes: 62 | assert getattr(cell, key) == self.cell_attributes[key] 63 | cell2 = Cell.parse(cell.getBytes(), encrypted=self.encrypted) 64 | assert cell.getBytes() == cell2.getBytes() 65 | assert cell == cell2 66 | 67 | def test_parse_bad(self): 68 | '''Try to parse each bad_input and check that they raise 69 | BadPayloadData.''' 70 | for bad_input in self.bad_parse_inputs: 71 | self.assertRaises(BadPayloadData, Cell.parse, bad_input, 72 | encrypted=self.encrypted) 73 | 74 | def test_getBytes(self): 75 | '''Verify that cell's 'getBytes()' method returns the correct byte 76 | string when cell is parsed from a string. 77 | ''' 78 | cell = Cell.parse(self.cell_constants['cell-bytes-good'], 79 | encrypted=self.encrypted) 80 | assert cell.getBytes() == self.cell_constants['cell-bytes-good'] 81 | 82 | def test_make(self): 83 | '''Verify that cell-building helper method 'make' can correctly 84 | assemble a cell. 85 | ''' 86 | cell = self.cell_constants['cell-type'].make( 87 | self.cell_header['circ_id'], 88 | *self.cell_attributes.values()) 89 | assert isinstance(cell, self.cell_constants['cell-type']) 90 | assert cell.getBytes() == self.cell_constants['cell-bytes-good'] 91 | assert cell.header.__dict__ == self.cell_header 92 | for key in self.cell_attributes: 93 | assert getattr(cell, key) == self.cell_attributes[key] 94 | 95 | def test_make_bad(self): 96 | '''Check that bad inputs to a cell's make() method raise a 97 | BadPayloadData exception.''' 98 | for bad_input in self.bad_make_inputs: 99 | self.assertRaises(BadPayloadData, 100 | self.cell_constants['cell-type'].make, 101 | *bad_input) 102 | 103 | def test_repr(self): 104 | '''Verify that a cell's repr can be used to create the same cell. 
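        (The cell is rebuilt with ``eval(repr(cell))`` and compared to the
        original by bytes, length, and equality.)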
105 | ''' 106 | from oppy.cell.fixedlen import ( 107 | FixedLenCell, 108 | Create2Cell, 109 | Created2Cell, 110 | DestroyCell, 111 | EncryptedCell, 112 | NetInfoCell, 113 | PaddingCell, 114 | ) 115 | from oppy.cell.varlen import ( 116 | VarLenCell, 117 | AuthChallengeCell, 118 | CertsCell, 119 | VersionsCell, 120 | VPaddingCell, 121 | ) 122 | from oppy.cell.util import ( 123 | TLVTriple, 124 | CertsCellPayloadItem, 125 | ) 126 | # XXX should realy just define eq method on cells... 127 | cell = Cell.parse(self.cell_constants['cell-bytes-good'], 128 | encrypted=self.encrypted) 129 | cell2 = eval(repr(cell)) 130 | assert cell.getBytes() == cell2.getBytes() 131 | assert len(cell) == len(cell2) 132 | assert cell == cell2 133 | 134 | 135 | class FixedLenTestBase(CellTestBase): 136 | 137 | def test_getBytes_trimmed(self): 138 | cell = Cell.parse(self.cell_constants['cell-bytes-good'], 139 | encrypted=self.encrypted) 140 | assert cell.getBytes(trimmed=True) == self.cell_constants['cell-bytes-good-nopadding'] 141 | 142 | def test_len(self): 143 | '''Verify that len(cell) works properly.''' 144 | cell = Cell.parse(self.cell_constants['cell-bytes-good'], 145 | encrypted=self.encrypted) 146 | cell_2 = self.cell_constants['cell-type'].make( 147 | self.cell_header['circ_id'], 148 | *self.cell_attributes.values()) 149 | assert len(cell) == len(cell_2) 150 | assert len(cell) == len(self.cell_constants['cell-bytes-good']) 151 | if cell.header.link_version < 4: 152 | assert len(cell) == 512 153 | else: 154 | assert len(cell) == 514 155 | 156 | 157 | class VarLenTestBase(CellTestBase): 158 | 159 | def test_len(self): 160 | '''Verify that len(cell) works properly.''' 161 | cell = Cell.parse(self.cell_constants['cell-bytes-good'], 162 | encrypted=self.encrypted) 163 | cell_2 = self.cell_constants['cell-type'].make( 164 | self.cell_header['circ_id'], 165 | *self.cell_attributes.values()) 166 | assert len(cell) == len(cell_2) 167 | assert len(cell) == len(self.cell_constants['cell-bytes-good']) 168 | -------------------------------------------------------------------------------- /oppy/tests/unit/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/circuit/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/circuit/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/connection/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/connection/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/connection/cert_der.py: -------------------------------------------------------------------------------- 1 | test_cert_der = 
'0\x82\x02\xae0\x82\x01\x96\xa0\x03\x02\x01\x02\x02\t\x00\xd8\x07\x1f\xf5\x16\x97\x8c\xea0\r\x06\t*\x86H\x86\xf7\r\x01\x01\x0b\x05\x000\x0f1\r0\x0b\x06\x03U\x04\x03\x0c\x04zbox0\x1e\x17\r150521153052Z\x17\r250518153052Z0\x0f1\r0\x0b\x06\x03U\x04\x03\x0c\x04zbox0\x82\x01"0\r\x06\t*\x86H\x86\xf7\r\x01\x01\x01\x05\x00\x03\x82\x01\x0f\x000\x82\x01\n\x02\x82\x01\x01\x00\xe5\xef7\xef\xfaRW\xe8 \xa9\xdeg\xdfF\xd18VkH\x00\x08s[2\xae\xdem\xb7\x121\x15\xb6\xe2O\x89k\x96\x12\x16D\xedd\xe5\xb6\xd1\xeb\x90\x95d\xa6w\x12!\\SK-o\xda\x87\xfd\xc4\xf7\xe1\x7fb\xba\x11\x06\xe4\xc7\xa5\x9dK\xb3~\x06\xfa\x0b0\xd1\xda\xd2\xf2\xa2\x91\x04\x96\xde(\xed\xd9\x94KZ\xea\x8b\xf9*\xde\xa95\xa6%\xa9\r\xe4\x84\xf9`\x1a\n\xda\r\xedR\xceI.tVj\xf8\xc3\x91\x18\x96\xef\xa6"\xbd\xdb!\'\x11\xc6I\xa8.j\xd0\xaa\xbd\xbd\xcc\x16p\xbe,\x1aE\xc4\x80K\x0c\xdf\x0b\r,\x12\x0b\x80\xcb\xaa\x1d\xf0H\xba#H*\x98\xc2\x01\\uG\xb9j)\x9e\x1ai#\x91\xa4;\xce\xc3\x91\xff\x1f\xfd\x16\xbdj`\x0cc\xaaD_0\xecu6\x0f\x03+\xa9I\x9a\xaa\xdb\x83\x93\x99\x95h\xbcj_?\x19+\xc9\xb2U\xd0\xb9\xa3\xad3\x04\x95\xfa\xbdg\t\xbcuJN\xb0\xa1\xa2[=\xf6\xb8\x00C\x98!s\x89\xa4\x13\xd8+\xf7{\xa5gf\xfa\xc2G:H\x93\xd9\xa8\xfc\x869\x95\\\x83\xcf:\xba\x98<\xae\rG\xc4\x99U\x12%\x87<\\f\x9ef\x860^\xb8\x1f=t\xca\x15w[\xd3\x1b\xf8\x8c\xd4l\x8e{j\xf46\xbe\xcegge\xe8[\xd2\x9b\xfd\x0e^\r1`"\xf0\x11\x04"\xc7\x81\xa2\xb9\xc9\xb2\xc0\xb2\x02\xa8y\xfd\'\xe2h\xcd\xce z' 2 | -------------------------------------------------------------------------------- /oppy/tests/unit/connection/test_connection.py: -------------------------------------------------------------------------------- 1 | import mock 2 | 3 | from twisted.trial import unittest 4 | 5 | from oppy.cell.definitions import ( 6 | CREATE2_CMD, 7 | PADDING_CMD, 8 | VPADDING_CMD, 9 | ) 10 | 11 | from oppy.connection import connection 12 | 13 | 14 | class ConnectionTest(unittest.TestCase): 15 | 16 | @mock.patch('oppy.connection.connectionbuildtask.ConnectionBuildTask', 17 | autospec=True) 18 | @mock.patch('oppy.connection.connectionmanager.ConnectionManager', 19 | autospec=True) 20 | def setUp(self, cm, cbt): 21 | cbt.micro_status_entry = mock.Mock() 22 | self.cbt = cbt 23 | self.cm = cm 24 | self.connection = connection.Connection(cm, cbt) 25 | 26 | def test_send(self): 27 | self.connection.transport = mock.Mock() 28 | self.connection.transport.write = mock.Mock() 29 | 30 | mock_cell = mock.Mock() 31 | mock_cell.get_bytes = mock.Mock() 32 | 33 | self.connection.send(mock_cell) 34 | 35 | self.connection.transport.write.assert_called_once_with( 36 | mock_cell.getBytes()) 37 | 38 | # TODO: test dataReceived 39 | 40 | def test_recv_no_padding_have_circuit(self): 41 | mock_cell = mock.Mock() 42 | mock_cell.header = mock.Mock() 43 | mock_cell.header.circ_id = 0 44 | mock_cell.header.cmd = CREATE2_CMD 45 | mock_circuit = mock.Mock() 46 | mock_circuit.recv = mock.Mock() 47 | 48 | self.connection._circuit_dict[0] = mock_circuit 49 | 50 | self.connection._recv(mock_cell) 51 | 52 | mock_circuit.recv.assert_called_once_with(mock_cell) 53 | 54 | def test_recvCell_padding_fixed(self): 55 | mock_cell = mock.Mock() 56 | mock_cell.header = mock.Mock() 57 | mock_cell.header.circ_id = 0 58 | mock_cell.header.cmd = PADDING_CMD 59 | mock_circuit = mock.Mock() 60 | mock_circuit.recv = mock.Mock() 61 | 62 | self.connection._circuit_dict[0] = mock_circuit 63 | 64 | self.connection._recv(mock_cell) 65 | 66 | self.assertEqual(mock_circuit.recv.call_count, 0) 67 | 68 | def test_recvCell_padding_varlen(self): 69 | mock_cell = mock.Mock() 70 | mock_cell.header = 
mock.Mock() 71 | mock_cell.header.circ_id = 0 72 | mock_cell.header.cmd = VPADDING_CMD 73 | mock_circuit = mock.Mock() 74 | mock_circuit.recv = mock.Mock() 75 | 76 | self.connection._circuit_dict[0] = mock_circuit 77 | 78 | self.connection._recv(mock_cell) 79 | 80 | self.assertEqual(mock_circuit.recv.call_count, 0) 81 | 82 | def test_recvCell_no_circuit(self): 83 | mock_cell = mock.Mock() 84 | mock_cell.header = mock.Mock() 85 | mock_cell.header.circ_id = 0 86 | mock_cell.header.cmd = VPADDING_CMD 87 | mock_circuit = mock.Mock() 88 | mock_circuit.recv = mock.Mock() 89 | 90 | self.connection._circuit_dict[1] = mock_circuit 91 | 92 | self.connection._recv(mock_cell) 93 | 94 | self.assertEqual(mock_circuit.recv.call_count, 0) 95 | 96 | def test_addCircuit(self): 97 | mock_circuit = mock.Mock() 98 | mock_circuit.circuit_id = 0 99 | 100 | self.connection.addCircuit(mock_circuit) 101 | 102 | self.assertEqual(self.connection._circuit_dict[0], mock_circuit) 103 | 104 | def test_closeConnection(self): 105 | self.connection._destroyAllCircuits = mock.Mock() 106 | self.connection._connection_manager.removeConnection = mock.Mock() 107 | self.connection.transport = mock.Mock() 108 | self.connection.transport.loseConnection = mock.Mock() 109 | 110 | self.connection.closeConnection() 111 | 112 | self.assertTrue(self.connection._closed) 113 | self.assertEqual(self.connection._destroyAllCircuits.call_count, 1) 114 | self.connection._connection_manager.removeConnection.\ 115 | assert_called_once_with(self.connection) 116 | self.assertEqual(self.connection.transport.loseConnection.call_count, 117 | 1) 118 | 119 | @mock.patch('oppy.connection.connection.logging') 120 | def test_connectionLost_not_closed(self, _): 121 | self.connection._destroyAllCircuits = mock.Mock() 122 | self.connection._connection_manager.removeConnection = mock.Mock() 123 | 124 | self.connection.connectionLost(None) 125 | 126 | self.assertTrue(self.connection._closed) 127 | self.assertEqual(self.connection._destroyAllCircuits.call_count, 1) 128 | self.connection._connection_manager.removeConnection.\ 129 | assert_called_once_with(self.connection) 130 | 131 | def test_connectionLost_closed(self): 132 | self.connection._closed = True 133 | self.connection._destroyAllCircuits = mock.Mock() 134 | self.connection._connection_manager.removeConnection = mock.Mock() 135 | 136 | self.connection.connectionLost(None) 137 | 138 | self.assertTrue(self.connection._closed) 139 | self.assertEqual(self.connection._destroyAllCircuits.call_count, 0) 140 | self.assertEqual( 141 | self.connection._connection_manager.removeConnection.call_count, 142 | 0) 143 | 144 | def test_destroyAllCircuits(self): 145 | mock_circuit_1 = mock.Mock() 146 | mock_circuit_1.destroyCircuitFromConnection = mock.Mock() 147 | mock_circuit_2 = mock.Mock() 148 | mock_circuit_2.destroyCircuitFromConnection = mock.Mock() 149 | 150 | self.connection._circuit_dict = { 151 | 1: mock_circuit_1, 152 | 2: mock_circuit_2, 153 | } 154 | 155 | self.connection._destroyAllCircuits() 156 | 157 | self.assertEqual( 158 | mock_circuit_1.destroyCircuitFromConnection.call_count, 159 | 1) 160 | self.assertEqual( 161 | mock_circuit_2.destroyCircuitFromConnection.call_count, 162 | 1) 163 | 164 | def test_removeCircuit_have_circuit_more_left(self): 165 | mock_circuit_1 = mock.Mock() 166 | mock_circuit_1.circuit_id = 1 167 | mock_circuit_2 = mock.Mock() 168 | mock_circuit_2.circuit_id = 2 169 | self.connection._circuit_dict = { 170 | 1: mock_circuit_1, 171 | 2: mock_circuit_2, 172 | } 173 | 
self.connection._connection_manager.shouldDestroyConnection = mock.Mock() 174 | 175 | self.connection.removeCircuit(mock_circuit_1) 176 | 177 | self.assertTrue(mock_circuit_1 not in self.connection._circuit_dict) 178 | self.assertEqual(self.connection._circuit_dict[2], mock_circuit_2) 179 | self.assertEqual( 180 | self.connection._connection_manager.shouldDestroyConnection.call_count, 181 | 0) 182 | 183 | def test_removeCircuit_have_circuit_none_left_dont_destroy(self): 184 | mock_circuit_1 = mock.Mock() 185 | mock_circuit_1.circuit_id = 1 186 | self.connection._circuit_dict = { 187 | 1: mock_circuit_1, 188 | } 189 | self.connection._connection_manager.shouldDestroyConnection = mock.Mock() 190 | self.connection._connection_manager.shouldDestroyConnection.return_value = False 191 | self.connection.closeConnection = mock.Mock() 192 | 193 | self.connection.removeCircuit(mock_circuit_1) 194 | 195 | self.assertTrue(mock_circuit_1 not in self.connection._circuit_dict) 196 | self.assertEqual(self.connection.closeConnection.call_count, 0) 197 | 198 | def test_removeCircuit_have_circuit_none_left_do_destroy(self): 199 | mock_circuit_1 = mock.Mock() 200 | mock_circuit_1.circuit_id = 1 201 | self.connection._circuit_dict = { 202 | 1: mock_circuit_1, 203 | } 204 | self.connection._connection_manager.shouldDestroyConnection = mock.Mock() 205 | self.connection._connection_manager.shouldDestroyConnection.return_value = True 206 | self.connection.closeConnection = mock.Mock() 207 | 208 | self.connection.removeCircuit(mock_circuit_1) 209 | 210 | self.assertTrue(mock_circuit_1 not in self.connection._circuit_dict) 211 | self.assertEqual(self.connection.closeConnection.call_count, 1) 212 | 213 | def test_removeCircuit_no_circuit(self): 214 | mock_circuit = mock.Mock() 215 | mock_circuit.circuit_id = 0 216 | self.connection._connection_manager.shouldDestroyConnection = mock.Mock() 217 | self.connection.closeConnection = mock.Mock() 218 | 219 | self.connection.removeCircuit(mock_circuit) 220 | 221 | self.assertEqual(self.connection._connection_manager.shouldDestroyConnection.call_count, 0) 222 | self.assertEqual(self.connection.closeConnection.call_count, 0) 223 | -------------------------------------------------------------------------------- /oppy/tests/unit/connection/test_connectionmanager.py: -------------------------------------------------------------------------------- 1 | import mock 2 | 3 | from twisted.trial import unittest 4 | 5 | from OpenSSL import SSL 6 | 7 | from oppy.connection import connectionmanager 8 | 9 | 10 | class ConnectionManagerTest(unittest.TestCase): 11 | 12 | def setUp(self): 13 | self.cm = connectionmanager.ConnectionManager() 14 | 15 | # TODO: test that cipher list is set properly for v3 protocol 16 | def test_TLSClientContextFactory_v3(self): 17 | t = connectionmanager.TLSClientContextFactory() 18 | 19 | self.assertEqual(t.isClient, 1) 20 | self.assertEqual(t.method, SSL.TLSv1_METHOD) 21 | self.assertEqual(t._contextFactory, SSL.Context) 22 | 23 | @mock.patch('twisted.internet.endpoints', autospec=True) 24 | def test_getConnection_have_connection(self, mock_endpoints): 25 | mock_relay = mock.Mock() 26 | mock_relay.fingerprint = 'test' 27 | mock_connection = mock.Mock() 28 | self.cm._connection_dict['test'] = mock_connection 29 | 30 | d = self.cm.getConnection(mock_relay) 31 | 32 | self.assertEqual(mock_connection, self.successResultOf(d)) 33 | self.assertEqual(mock_endpoints.connectProtocol.call_count, 0) 34 | 35 | @mock.patch('twisted.internet.endpoints', autospec=True) 36 | 
def test_getConnection_have_pending_connection(self, mock_endpoints): 37 | mock_relay = mock.Mock() 38 | mock_relay.fingerprint = 'test' 39 | self.cm._pending_request_dict['test'] = [] 40 | 41 | d = self.cm.getConnection(mock_relay) 42 | 43 | self.assertEqual(self.cm._pending_request_dict['test'], [d]) 44 | self.assertEqual(len(self.cm._pending_request_dict), 1) 45 | self.assertEqual(mock_endpoints.connectProtocol.call_count, 0) 46 | 47 | @mock.patch('twisted.internet.reactor') 48 | @mock.patch('oppy.connection.connectionmanager.endpoints', autospec=True) 49 | @mock.patch('oppy.connection.connectionmanager.ConnectionBuildTask', 50 | autospec=True) 51 | @mock.patch('oppy.connection.connectionmanager.TLSClientContextFactory') 52 | def test_getConnection_build_new_connection(self, mock_tls_ctx, mock_cbt, 53 | mock_endpoints, mock_reactor): 54 | mock_deferred = mock.Mock() 55 | mock_deferred.addErrback = mock.Mock() 56 | mock_endpoints.connectProtocol.return_value = mock_deferred 57 | mock_endpoints.SSL4ClientEndpoint.return_value = 'testval1' 58 | mock_cbt_instance = mock_cbt.return_value 59 | 60 | mock_relay = mock.Mock() 61 | mock_relay.fingerprint = 'test' 62 | mock_relay.or_port = 0 63 | mock_relay.address = 'address' 64 | 65 | mock_ctx = mock_tls_ctx.return_value 66 | 67 | d = self.cm.getConnection(mock_relay) 68 | 69 | self.assertEqual(mock_endpoints.connectProtocol.call_count, 1) 70 | self.assertEqual(mock_endpoints.connectProtocol.call_args_list, 71 | [mock.call('testval1', mock_cbt_instance)]) 72 | self.assertEqual(mock_endpoints.SSL4ClientEndpoint.call_count, 1) 73 | self.assertEqual(mock_endpoints.SSL4ClientEndpoint.call_args_list, 74 | [mock.call(mock_reactor, 'address', 0, mock_ctx)]) 75 | self.assertEqual(mock_cbt.call_count, 1) 76 | self.assertEqual(mock_cbt.call_args_list, 77 | [mock.call(self.cm, mock_relay)]) 78 | self.assertEqual(mock_deferred.addErrback.call_count, 1) 79 | self.assertEqual(mock_deferred.addErrback.call_args_list, 80 | [mock.call(self.cm._initialConnectionFailed, 'test')]) 81 | self.assertEqual(self.cm._pending_request_dict['test'], [d]) 82 | 83 | @mock.patch('twisted.internet.reactor') 84 | @mock.patch('oppy.connection.connectionmanager.endpoints', autospec=True) 85 | @mock.patch('oppy.connection.connectionmanager.ConnectionBuildTask', 86 | autospec=True) 87 | @mock.patch('oppy.connection.connectionmanager.TLSClientContextFactory') 88 | def test_getConnection_connection_failed(self, mock_tls_ctx, mock_cbt, 89 | mock_endpoints, mock_reactor): 90 | mock_deferred = mock.Mock() 91 | mock_deferred.addErrback = mock.Mock() 92 | mock_endpoints.connectProtocol.return_value = mock_deferred 93 | exc = Exception() 94 | mock_endpoints.SSL4ClientEndpoint.side_effect = exc 95 | mock_cbt_instance = mock_cbt.return_value 96 | 97 | mock_relay = mock.Mock() 98 | mock_relay.fingerprint = 'test' 99 | mock_relay.or_port = 0 100 | mock_relay.address = 'address' 101 | 102 | mock_ctx = mock_tls_ctx.return_value 103 | 104 | self.cm._initialConnectionFailed = mock.Mock() 105 | 106 | d = self.cm.getConnection(mock_relay) 107 | 108 | self.assertEqual(self.cm._initialConnectionFailed.call_count, 1) 109 | self.assertEqual(self.cm._initialConnectionFailed.call_args_list, 110 | [mock.call(exc, 'test')]) 111 | 112 | def test_initialConnectionFailed(self): 113 | self.cm.connectionTaskFailed = mock.Mock() 114 | 115 | self.cm._initialConnectionFailed('t1', 't2') 116 | 117 | self.assertEqual(self.cm.connectionTaskFailed.call_count, 1) 118 | 
self.assertEqual(self.cm.connectionTaskFailed.call_args_list, 119 | [mock.call(None, 't1', 't2')]) 120 | 121 | @mock.patch('oppy.connection.connectionmanager.Connection', autospec=True) 122 | def test_connectionTaskSucceeded(self, mock_connection): 123 | mock_cbt = mock.Mock() 124 | mock_transport = mock.Mock() 125 | mock_cbt.transport = mock_transport 126 | mock_cbt.micro_status_entry = mock.Mock() 127 | mock_cbt.micro_status_entry.fingerprint = 'test' 128 | mock_request_1 = mock.Mock() 129 | mock_request_1.callback = mock.Mock() 130 | mock_request_2 = mock.Mock() 131 | mock_request_2.callback = mock.Mock() 132 | self.cm._pending_request_dict['test'] = [mock_request_1, 133 | mock_request_2] 134 | 135 | mock_conn_instance = mock_connection.return_value 136 | 137 | self.cm.connectionTaskSucceeded(mock_cbt) 138 | 139 | self.assertEqual(self.cm._connection_dict['test'], mock_conn_instance) 140 | self.assertEqual(mock_conn_instance.transport, mock_transport) 141 | self.assertEqual(mock_conn_instance.transport.wrappedProtocol, 142 | mock_conn_instance) 143 | mock_request_1.callback.assert_called_once_with(mock_conn_instance) 144 | mock_request_2.callback.assert_called_once_with(mock_conn_instance) 145 | self.assertTrue(mock_cbt not in self.cm._pending_request_dict) 146 | self.assertTrue('test' not in self.cm._pending_request_dict.keys()) 147 | 148 | @mock.patch('oppy.connection.connectionmanager.logging', autospec=True) 149 | def test_connectionTaskSucceeded_no_reference(self, mock_logging): 150 | mock_cbt = mock.Mock() 151 | mock_cbt.micro_status_entry = mock.Mock() 152 | mock_cbt.micro_status_entry.fingerprint = 'test' 153 | 154 | self.cm.connectionTaskSucceeded(mock_cbt) 155 | 156 | self.assertTrue(mock_cbt not in self.cm._connection_dict) 157 | self.assertEqual(mock_logging.debug.call_count, 1) 158 | 159 | def test_connectionTaskFailed(self): 160 | mock_cbt = mock.Mock() 161 | mock_cbt.micro_status_entry = mock.Mock() 162 | mock_cbt.micro_status_entry.fingerprint = 'test' 163 | mock_request_1 = mock.Mock() 164 | mock_request_1.errback = mock.Mock() 165 | mock_request_2 = mock.Mock() 166 | mock_request_2.errback = mock.Mock() 167 | self.cm._pending_request_dict['test'] = [mock_request_1, 168 | mock_request_2] 169 | 170 | self.cm.connectionTaskFailed(mock_cbt, 'reason') 171 | 172 | self.assertTrue('test' not in self.cm._pending_request_dict) 173 | mock_request_1.errback.assert_called_once_with('reason') 174 | mock_request_2.errback.assert_called_once_with('reason') 175 | 176 | @mock.patch('oppy.connection.connectionmanager.logging', autospec=True) 177 | def test_connectionTaskFailed_no_reference(self, mock_logging): 178 | mock_cbt = mock.Mock() 179 | mock_cbt.micro_status_entry = mock.Mock() 180 | mock_cbt.micro_status_entry.fingerprint = 'test' 181 | 182 | self.cm.connectionTaskFailed(mock_cbt, 'reason') 183 | 184 | self.assertEqual(mock_logging.debug.call_count, 1) 185 | 186 | def test_removeConnection(self): 187 | mock_connection = mock.Mock() 188 | mock_connection.micro_status_entry = mock.Mock() 189 | mock_connection.micro_status_entry.fingerprint = 'test' 190 | 191 | self.cm._connection_dict['test'] = mock_connection 192 | 193 | self.cm.removeConnection(mock_connection) 194 | 195 | self.assertTrue(mock_connection not in self.cm._connection_dict) 196 | self.assertEqual(len(self.cm._connection_dict), 0) 197 | 198 | @mock.patch('oppy.connection.connectionmanager.logging', autospec=True) 199 | def test_removeConnection_no_reference(self, mock_logging): 200 | mock_connection = mock.Mock() 
201 | mock_connection.micro_status_entry = mock.Mock() 202 | mock_connection.micro_status_entry.fingerprint = 'test' 203 | 204 | self.cm.removeConnection(mock_connection) 205 | self.assertEqual(mock_logging.debug.call_count, 1) 206 | 207 | def test_shouldDestroyConnection(self): 208 | self.assertTrue(self.cm.shouldDestroyConnection(mock.Mock())) 209 | -------------------------------------------------------------------------------- /oppy/tests/unit/crypto/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/crypto/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/crypto/test_ntor.py: -------------------------------------------------------------------------------- 1 | import hashlib 2 | import mock 3 | 4 | from twisted.trial import unittest 5 | 6 | from oppy.crypto import ntor 7 | 8 | # NOTE: the test values here are borrowed from tor, specifically the 9 | # test/test_crypto.c file. 10 | 11 | 12 | class NTorTest(unittest.TestCase): 13 | 14 | @mock.patch('oppy.crypto.ntor.decodeMicrodescriptorIdentifier', 15 | return_value='relay_identity') 16 | @mock.patch('base64.b64decode', return_value='ntor_onion_key') 17 | @mock.patch('oppy.crypto.ntor.PrivateKey.generate') 18 | def test_NTorState(self, mock_sk, mock_b64, mock_dmi): 19 | mock_sk_g = mock.Mock() 20 | mock_sk_g.public_key = 'pk' 21 | mock_sk.return_value = mock_sk_g 22 | mock_sk.public_key = 'pk' 23 | mock_md = mock.Mock() 24 | mock_md.ntor_onion_key = 'test' 25 | 26 | n = ntor.NTorState(mock_md) 27 | 28 | mock_dmi.assert_called_once_with(mock_md) 29 | mock_b64.assert_called_once_with('test') 30 | self.assertEqual(mock_sk.call_count, 1) 31 | self.assertEqual(n.relay_identity, 'relay_identity') 32 | self.assertEqual(n.relay_ntor_onion_key, 'ntor_onion_key') 33 | self.assertEqual(n.secret_key, mock_sk_g) 34 | self.assertEqual(n.public_key, 'pk') 35 | 36 | def test_createOnionSkin(self): 37 | mock_nts = mock.Mock() 38 | mock_nts.relay_identity = 'ident' 39 | mock_nts.relay_ntor_onion_key = 'ntork' 40 | mock_nts.public_key = 'pk' 41 | 42 | self.assertEqual(ntor.createOnionSkin(mock_nts), 'identntorkpk') 43 | 44 | @mock.patch('oppy.crypto.ntor._buildSecretInput', 45 | return_value=('secret', True)) 46 | @mock.patch('oppy.crypto.util.makeHMACSHA256', return_value='hmac') 47 | @mock.patch('oppy.crypto.ntor._buildAuthInput', return_value='auth') 48 | @mock.patch('oppy.crypto.util.constantStrEqual', return_value=False) 49 | @mock.patch('oppy.crypto.ntor._makeRelayCrypto', return_value='ret') 50 | def test_deriveRelayCrypto_buildSecreInput_bad(self, mock_mrc, mock_cse, 51 | mock_bai, mock_mhs, mock_bsi): 52 | mock_nts = mock.Mock() 53 | mock_nts.relay_identity = 'ident' 54 | mock_nts.relay_ntor_onion_key = 'ntork' 55 | mock_nts.public_key = 'pk' 56 | mock_cell = mock.Mock() 57 | mock_cell.hdata = '\x00'*96 58 | 59 | self.assertRaises(ntor.KeyDerivationFailed, 60 | ntor.deriveRelayCrypto, 61 | mock_nts, 62 | mock_cell) 63 | 64 | @mock.patch('oppy.crypto.ntor._buildSecretInput', 65 | return_value=('secret', False)) 66 | @mock.patch('oppy.crypto.util.makeHMACSHA256', return_value='hmac') 67 | @mock.patch('oppy.crypto.ntor._buildAuthInput', return_value='auth') 68 | @mock.patch('oppy.crypto.util.constantStrEqual', return_value=True) 69 | @mock.patch('oppy.crypto.ntor._makeRelayCrypto', return_value='ret') 70 | def 
test_deriveRelayCrypto_auth_bad(self, mock_mrc, mock_cse, mock_bai, 71 | mock_mhs, mock_bsi): 72 | mock_nts = mock.Mock() 73 | mock_nts.relay_identity = 'ident' 74 | mock_nts.relay_ntor_onion_key = 'ntork' 75 | mock_nts.public_key = 'pk' 76 | mock_cell = mock.Mock() 77 | mock_cell.hdata = '\x00'*96 78 | 79 | self.assertRaises(ntor.KeyDerivationFailed, 80 | ntor.deriveRelayCrypto, 81 | mock_nts, 82 | mock_cell) 83 | 84 | 85 | @mock.patch('oppy.crypto.ntor._buildSecretInput', 86 | return_value=('secret', False)) 87 | @mock.patch('oppy.crypto.util.makeHMACSHA256', return_value='hmac') 88 | @mock.patch('oppy.crypto.ntor._buildAuthInput', return_value='auth') 89 | @mock.patch('oppy.crypto.util.constantStrEqual', return_value=False) 90 | @mock.patch('oppy.crypto.ntor._makeRelayCrypto', return_value='ret') 91 | def test_deriveRelayCrypto_ok(self, mock_mrc, mock_cse, mock_bai, mock_mhs, 92 | mock_bsi): 93 | mock_nts = mock.Mock() 94 | mock_nts.relay_identity = 'ident' 95 | mock_nts.relay_ntor_onion_key = 'ntork' 96 | mock_nts.public_key = 'pk' 97 | mock_cell = mock.Mock() 98 | mock_cell.hdata = [chr(i) for i in range(96)] 99 | hdata = mock_cell.hdata 100 | 101 | ret = ntor.deriveRelayCrypto(mock_nts, mock_cell) 102 | 103 | mock_bsi.assert_called_once_with(mock_nts, hdata[:32]) 104 | self.assertEqual(mock_mhs.call_count, 2) 105 | self.assertEqual(mock_mhs.call_args_list, 106 | [mock.call(msg='secret', key='ntor-curve25519-sha256-1:verify'), 107 | mock.call(msg='auth', key='ntor-curve25519-sha256-1:mac')]) 108 | mock_cse.assert_called_once_with(hdata[32:32+20], 'hmac') 109 | mock_mrc.assert_called_once_with('secret') 110 | self.assertEqual(ret, 'ret') 111 | 112 | @mock.patch('oppy.crypto.ntor._EXP', return_value=('\x00', True)) 113 | def test_buildSecretInput_bad_EXP(self, mock_exp): 114 | mock_nts = mock.Mock() 115 | mock_nts.relay_identity = 'ident' 116 | mock_nts.relay_ntor_onion_key = 'ntork' 117 | mock_nts.public_key = 'pk' 118 | relay_pk = 'relay_pk' 119 | 120 | v, b = ntor._buildSecretInput(mock_nts, relay_pk) 121 | self.assertTrue(b) 122 | 123 | @mock.patch('oppy.crypto.ntor._EXP', return_value=('exp', False)) 124 | def test_buildAuthInput(self, mock_exp): 125 | mock_nts = mock.Mock() 126 | mock_nts.relay_identity = 'ident' 127 | mock_nts.relay_ntor_onion_key = 'ntork' 128 | mock_nts.public_key = 'pk' 129 | relay_pk = 'relay_pk' 130 | 131 | v, b = ntor._buildSecretInput(mock_nts, relay_pk) 132 | 133 | self.assertEqual(mock_exp.call_count, 2) 134 | self.assertFalse(b) 135 | self.assertEqual(v, 136 | 'expexpidentntorkpkrelay_pkntor-curve25519-sha256-1') 137 | 138 | @mock.patch('hkdf.hkdf_extract', return_value='prk') 139 | @mock.patch('hkdf.hkdf_expand', return_value=[chr(i) for i in range(96)]) 140 | @mock.patch('hashlib.sha1', return_value='sha1') 141 | @mock.patch('oppy.crypto.util.makeAES128CTRCipher', return_value='cipher') 142 | @mock.patch('oppy.crypto.util.RelayCrypto', return_value='ret') 143 | def test_makeRelayCrypto(self, mock_rc, mock_maes, mock_sha1, mock_hexp, 144 | mock_hext): 145 | secret_input = 'secret input' 146 | km = [chr(i) for i in range(96)] 147 | 148 | ret = ntor._makeRelayCrypto(secret_input) 149 | 150 | mock_hext.assert_called_once_with( 151 | salt='ntor-curve25519-sha256-1:key_extract', 152 | input_key_material='secret input', 153 | hash=hashlib.sha256) 154 | 155 | mock_hexp.assert_called_once_with( 156 | pseudo_random_key='prk', 157 | info='ntor-curve25519-sha256-1:key_expand', 158 | length=72, 159 | hash=hashlib.sha256) 160 | 161 | 
self.assertEqual(mock_sha1.call_count, 2) 162 | self.assertEqual(mock_sha1.call_args_list, 163 | [mock.call(km[:20]), mock.call(km[20:40])]) 164 | self.assertEqual(mock_maes.call_count, 2) 165 | self.assertEqual(mock_maes.call_args_list, 166 | [mock.call(km[40:56]), mock.call(km[56:72])]) 167 | mock_rc.assert_called_once_with( 168 | forward_cipher='cipher', forward_digest='sha1', 169 | backward_cipher='cipher', backward_digest='sha1') 170 | self.assertEqual(ret, 'ret') 171 | 172 | @mock.patch('nacl.bindings.crypto_scalarmult', return_value='sm') 173 | @mock.patch('oppy.crypto.ntor.PublicKey', return_value='pk') 174 | @mock.patch('oppy.crypto.util.constantStrAllZero', return_value=True) 175 | def test_EXP_bad(self, mock_az, mock_pk, mock_csm): 176 | ret, bad = ntor._EXP('n', 'p') 177 | mock_pk.assert_called_once_with('p') 178 | mock_csm.assert_called_once_with('n', 'pk') 179 | self.assertTrue(bad) 180 | 181 | 182 | @mock.patch('nacl.bindings.crypto_scalarmult', return_value='sm') 183 | @mock.patch('oppy.crypto.ntor.PublicKey', return_value='pk') 184 | @mock.patch('oppy.crypto.util.constantStrAllZero', return_value=False) 185 | def test_EXP(self, mock_az, mock_pk, mock_csm): 186 | ret, bad = ntor._EXP('n', 'p') 187 | mock_pk.assert_called_once_with('p') 188 | mock_csm.assert_called_once_with('n', 'pk') 189 | mock_az.assert_called_once_with('sm') 190 | self.assertEqual(ret, 'sm') 191 | self.assertFalse(bad) 192 | -------------------------------------------------------------------------------- /oppy/tests/unit/netstatus/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/netstatus/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/netstatus/test_netstatus.py: -------------------------------------------------------------------------------- 1 | import mock 2 | 3 | from twisted.trial import unittest 4 | 5 | from oppy.netstatus import netstatus 6 | 7 | 8 | class NetstatusTest(unittest.TestCase): 9 | 10 | @mock.patch('oppy.netstatus.netstatus.MicroconsensusManager', autospec=True) 11 | @mock.patch('oppy.netstatus.netstatus.MicrodescriptorManager', autospec=True) 12 | def setUp(self, mock_md, mock_mc): 13 | self.mdm = mock_md 14 | self.mcm = mock_mc 15 | self.ns = netstatus.NetStatus() 16 | 17 | def test_getMicrodescriptorsForCircuit(self): 18 | _ = self.ns.getMicrodescriptorsForCircuit() 19 | self.assertEqual(self.ns._mdm.getMicrodescriptorsForCircuit.call_count, 1) 20 | 21 | def test_getMicroconsensus(self): 22 | _ = self.ns.getMicroconsensus() 23 | self.assertEqual(self.ns._mcm.getMicroconsensus.call_count, 1) 24 | -------------------------------------------------------------------------------- /oppy/tests/unit/path/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/path/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/socks/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/socks/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/stream/__init__.py: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/stream/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/stream/test_stream.py: -------------------------------------------------------------------------------- 1 | import mock 2 | 3 | from twisted.trial import unittest 4 | 5 | from oppy.stream import stream 6 | 7 | 8 | class StreamTest(unittest.TestCase): 9 | 10 | @mock.patch('oppy.circuit.circuitmanager.CircuitManager', autospec=True) 11 | @mock.patch('oppy.socks.socks.OppySOCKSProtocol', autospec=True) 12 | @mock.patch('oppy.util.exitrequest.ExitRequest', autospec=True) 13 | @mock.patch('twisted.internet.defer.DeferredQueue', autospec=True) 14 | def setUp(self, mock_dq, mock_er, mock_osp, mock_cm): 15 | self.mock_er = mock_er 16 | self.mock_osp = mock_osp 17 | self.mock_cm = mock_cm 18 | self.mock_dq = mock_dq 19 | self.stream = stream.Stream(self.mock_cm, self.mock_er, self.mock_osp) 20 | self.log_patch = mock.patch('oppy.stream.stream.logging') 21 | self.mock_log = self.log_patch.start() 22 | 23 | def test_recv(self): 24 | self.stream.recv('test') 25 | self.stream._read_queue.put.assert_called_once_with('test') 26 | 27 | @mock.patch('oppy.stream.stream._chunkRelayData', return_value=['c1', 'c2']) 28 | def test_send(self, mock_crd): 29 | self.stream.send('test') 30 | self.assertEqual(self.stream._write_queue.put.call_count, 2) 31 | self.assertEqual(self.stream._write_queue.put.call_args_list, 32 | [mock.call('c1'), mock.call('c2')]) 33 | 34 | def test_incrementPackageWindow_normal(self): 35 | self.stream._pollWriteQueue = mock.Mock() 36 | self.stream._write_deferred = 'wd' 37 | self.stream._package_window = 1 38 | self.stream.incrementPackageWindow() 39 | 40 | self.assertEqual(self.stream._package_window, 41 | stream.STREAM_WINDOW_SIZE+1) 42 | self.assertEqual(self.stream._pollWriteQueue.call_count, 0) 43 | 44 | def test_incrementPackageWindow_buffering(self): 45 | self.stream._pollWriteQueue = mock.Mock() 46 | self.stream._write_deferred = None 47 | self.stream._package_window = 0 48 | self.stream.incrementPackageWindow() 49 | 50 | self.assertEqual(self.stream._package_window, 51 | stream.STREAM_WINDOW_SIZE) 52 | self.assertEqual(self.stream._pollWriteQueue.call_count, 1) 53 | 54 | def test_streamConnected(self): 55 | self.stream._pollWriteQueue = mock.Mock() 56 | self.stream.streamConnected() 57 | self.assertEqual(self.stream._pollWriteQueue.call_count, 1) 58 | 59 | def test_closeFromCircuit(self): 60 | self.stream.circuit = mock.Mock() 61 | self.stream.circuit_id = 'test' 62 | self.stream.closeFromCircuit() 63 | self.assertEqual(self.stream.socks.closeFromStream.call_count, 1) 64 | self.assertTrue(self.stream._closed) 65 | 66 | def test_closeFromSOCKS_no_circuit(self): 67 | self.stream.circuit = None 68 | self.stream.closeFromSOCKS() 69 | self.assertTrue(self.stream._closed) 70 | 71 | def test_closeFromSOCKS_circuit(self): 72 | self.stream.circuit = mock.Mock() 73 | self.stream.circuit.removeStream = mock.Mock() 74 | 75 | self.stream.closeFromSOCKS() 76 | self.stream.circuit.removeStream.assert_called_once_with(self.stream) 77 | self.assertTrue(self.stream._closed) 78 | 79 | def test_registerNewStream_closed(self): 80 | mock_circuit = mock.Mock() 81 | mock_circuit.addStreamAndSetStreamID = mock.Mock() 82 | self.stream._closed = True 83 | 84 | self.stream._registerNewStream(mock_circuit) 85 | 
self.assertEqual(mock_circuit.addStreamAndSetStreamID.call_count, 0) 86 | 87 | def test_registerNewStream(self): 88 | mock_circuit = mock.Mock() 89 | mock_circuit.addStreamAndSetStreamID = mock.Mock() 90 | mock_circuit.beginStream = mock.Mock() 91 | self.stream._pollReadQueue = mock.Mock() 92 | self.stream._circuit_request = 'test' 93 | 94 | self.stream._registerNewStream(mock_circuit) 95 | self.assertEqual(self.stream.circuit, mock_circuit) 96 | self.assertEqual(self.stream._circuit_request, None) 97 | mock_circuit.addStreamAndSetStreamID.assert_called_once_with( 98 | self.stream) 99 | mock_circuit.beginStream.assert_called_once_with(self.stream) 100 | self.assertEqual(self.stream._pollReadQueue.call_count, 1) 101 | 102 | def test_pollWriteQueue(self): 103 | mock_wd = mock.Mock() 104 | mock_wd.addCallback = mock.Mock() 105 | self.stream._write_queue.get.return_value = mock_wd 106 | 107 | self.stream._pollWriteQueue() 108 | 109 | self.assertEqual(self.stream._write_deferred, mock_wd) 110 | mock_wd.addCallback.assert_called_once_with(self.stream._writeData) 111 | 112 | def test_pollReadQueue(self): 113 | mock_rd = mock.Mock() 114 | mock_rd.addCallback = mock.Mock() 115 | self.stream._read_queue.get.return_value = mock_rd 116 | 117 | self.stream._pollReadQueue() 118 | 119 | self.assertEqual(self.stream._read_deferred, mock_rd) 120 | mock_rd.addCallback.assert_called_once_with(self.stream._recvData) 121 | 122 | def test_writeData(self): 123 | self.stream._decPackageWindow = mock.Mock() 124 | self.stream.circuit = mock.Mock() 125 | self.stream.circuit.send = mock.Mock() 126 | self.stream._writeData('test') 127 | self.stream.circuit.send.assert_called_once_with('test', self.stream) 128 | self.assertEqual(self.stream._decPackageWindow.call_count, 1) 129 | 130 | def test_recvData(self): 131 | self.stream._decDeliverWindow = mock.Mock() 132 | self.stream._recvData('test') 133 | self.stream.socks.recv.assert_called_once_with('test') 134 | self.assertEqual(self.stream._decDeliverWindow.call_count, 1) 135 | 136 | def test_decDeliverWindow_above_threshold(self): 137 | self.stream._deliver_window = 500 138 | self.stream._pollReadQueue = mock.Mock() 139 | self.stream._decDeliverWindow() 140 | self.assertEqual(self.stream._deliver_window, 499) 141 | self.assertEqual(self.stream._pollReadQueue.call_count, 1) 142 | 143 | def test_decDeliverWindow_at_threshold(self): 144 | self.stream._deliver_window = 451 145 | self.stream.circuit = mock.Mock() 146 | self.stream.circuit.sendStreamSendMe = mock.Mock() 147 | self.stream._pollReadQueue = mock.Mock() 148 | self.stream._decDeliverWindow() 149 | self.assertEqual(self.stream._deliver_window, 500) 150 | self.stream.circuit.sendStreamSendMe.assert_called_once_with( 151 | self.stream) 152 | self.assertEqual(self.stream._pollReadQueue.call_count, 1) 153 | 154 | def test_decPackageWindow_above_threshold(self): 155 | self.stream._package_window = 2 156 | self.stream._pollWriteQueue = mock.Mock() 157 | self.stream._decPackageWindow() 158 | self.assertEqual(self.stream._package_window, 1) 159 | self.assertEqual(self.stream._pollWriteQueue.call_count, 1) 160 | 161 | def test_packageWindow_at_threshold(self): 162 | self.stream._package_window = 1 163 | self.stream._pollWriteQueue = mock.Mock() 164 | self.stream._decPackageWindow() 165 | self.assertEqual(self.stream._package_window, 0) 166 | self.assertEqual(self.stream._pollWriteQueue.call_count, 0) 167 | self.assertEqual(self.stream._write_deferred, None) 168 | 169 | def test_chunkRelayData(self): 170 | data = 
'\x00'*(stream.MAX_RPAYLOAD_LEN*2) 171 | data += '\x00'*(stream.MAX_RPAYLOAD_LEN-1) 172 | 173 | ret = stream._chunkRelayData(data) 174 | self.assertEqual(ret, 175 | ['\x00'*stream.MAX_RPAYLOAD_LEN, '\x00'*stream.MAX_RPAYLOAD_LEN, 176 | '\x00'*(stream.MAX_RPAYLOAD_LEN-1)]) 177 | 178 | def tearDown(self): 179 | self.log_patch.stop() 180 | -------------------------------------------------------------------------------- /oppy/tests/unit/util/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/tests/unit/util/__init__.py -------------------------------------------------------------------------------- /oppy/tests/unit/util/test_exitrequest.py: -------------------------------------------------------------------------------- 1 | import mock 2 | 3 | from twisted.trial import unittest 4 | 5 | from oppy.util import exitrequest 6 | 7 | 8 | class ExitRequestTest(unittest.TestCase): 9 | 10 | def test_ExitRequest_addr_ipv4(self): 11 | er = exitrequest.ExitRequest('\x00\x00', addr='\x7f\x00\x00\x01') 12 | self.assertTrue(er.is_ipv4) 13 | self.assertFalse(er.is_ipv6) 14 | self.assertFalse(er.is_host) 15 | self.assertEqual(er.port, 0) 16 | self.assertEqual(er.addr, '127.0.0.1') 17 | self.assertEqual(er.host, None) 18 | 19 | def test_ExitRequest_addr_ipv6(self): 20 | er = exitrequest.ExitRequest('\x00\x00', 21 | addr=' \x01\r\xb8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00') 22 | self.assertFalse(er.is_ipv4) 23 | self.assertTrue(er.is_ipv6) 24 | self.assertFalse(er.is_host) 25 | self.assertEqual(er.port, 0) 26 | self.assertEqual(er.addr, '2001:0db8:0000:0000:0000:0000:0000:1000') 27 | self.assertEqual(er.host, None) 28 | 29 | def test_ExitRequest_host(self): 30 | er = exitrequest.ExitRequest('\x00\x00', host='https://example.com') 31 | self.assertFalse(er.is_ipv4) 32 | self.assertFalse(er.is_ipv6) 33 | self.assertTrue(er.is_host) 34 | self.assertEqual(er.port, 0) 35 | self.assertEqual(er.addr, None) 36 | self.assertEqual(er.host, 'https://example.com') 37 | 38 | def test_ExitRequest_not_addr_or_port(self): 39 | self.assertRaises(AssertionError, 40 | exitrequest.ExitRequest, 41 | '\x00\x00') 42 | 43 | def test_ExitRequest_both(self): 44 | self.assertRaises(AssertionError, 45 | exitrequest.ExitRequest, 46 | 0, 47 | addr='\x7f\x00\x00\x01', 48 | host='https://example.com') 49 | 50 | def test_str_ipv4(self): 51 | er = exitrequest.ExitRequest('\x00\x00', addr='\x7f\x00\x00\x01') 52 | self.assertEqual(str(er), '127.0.0.1:0\x00') 53 | 54 | def test_str_ipv6(self): 55 | er = exitrequest.ExitRequest('\x00\x00', 56 | addr=' \x01\r\xb8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x10\x00') 57 | self.assertEqual(str(er), 58 | '[2001:0db8:0000:0000:0000:0000:0000:1000]:0\x00') 59 | 60 | def test_str_host(self): 61 | er = exitrequest.ExitRequest('\x00\x00', host='https://example.com') 62 | self.assertEqual(str(er), 'https://example.com:0\x00') 63 | -------------------------------------------------------------------------------- /oppy/tests/unit/util/test_tools.py: -------------------------------------------------------------------------------- 1 | import mock 2 | 3 | from twisted.trial import unittest 4 | 5 | from oppy.util import tools 6 | 7 | 8 | class ToolsTest(unittest.TestCase): 9 | 10 | @mock.patch('base64.b64decode', return_value='ret') 11 | def test_decodeMicrodescriptorIdentifier_pad4(self, mock_b64d): 12 | md = mock.Mock() 13 | md.identifier = 'test' 14 | ret = 
tools.decodeMicrodescriptorIdentifier(md) 15 | self.assertEqual(ret, 'ret') 16 | mock_b64d.assert_called_once_with('test====') 17 | 18 | @mock.patch('base64.b64decode', return_value='ret') 19 | def test_decodeMicrodescriptorIdentifier_pad3(self, mock_b64d): 20 | md = mock.Mock() 21 | md.identifier = 't' 22 | ret = tools.decodeMicrodescriptorIdentifier(md) 23 | self.assertEqual(ret, 'ret') 24 | mock_b64d.assert_called_once_with('t===') 25 | 26 | @mock.patch('base64.b64decode', return_value='ret') 27 | def test_decodeMicrodescriptorIdentifier_pad2(self, mock_b64d): 28 | md = mock.Mock() 29 | md.identifier = 'te' 30 | ret = tools.decodeMicrodescriptorIdentifier(md) 31 | self.assertEqual(ret, 'ret') 32 | mock_b64d.assert_called_once_with('te==') 33 | 34 | @mock.patch('base64.b64decode', return_value='ret') 35 | def test_decodeMicrodescriptorIdentifier_pad1(self, mock_b64d): 36 | md = mock.Mock() 37 | md.identifier = 'tes' 38 | ret = tools.decodeMicrodescriptorIdentifier(md) 39 | self.assertEqual(ret, 'ret') 40 | mock_b64d.assert_called_once_with('tes=') 41 | 42 | def test_enum(self): 43 | e = tools.enum(OPEN=0, CLOSED=1) 44 | self.assertEqual(e.OPEN, 0) 45 | self.assertEqual(e.CLOSED, 1) 46 | 47 | def test_shutdown(self): 48 | mock_cm = mock.Mock() 49 | mock_cm.destroyAllCircuits = mock.Mock() 50 | tools.shutdown(mock_cm) 51 | self.assertEqual(mock_cm.destroyAllCircuits.call_count, 1) 52 | 53 | def test_ctr(self): 54 | c = tools.ctr(10) 55 | for i in range(1, 10): 56 | self.assertEqual(i, next(c)) 57 | self.assertEqual(1, next(c)) 58 | -------------------------------------------------------------------------------- /oppy/util/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/nskinkel/oppy/68c84146da5d7d994fd55ead5df747c93c95a2f5/oppy/util/__init__.py -------------------------------------------------------------------------------- /oppy/util/exitrequest.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | import ipaddress 5 | import struct 6 | 7 | 8 | class ExitRequest(object): 9 | '''Represent a connection request.''' 10 | 11 | __slots__ = ('port', 'addr', 'host', 'is_ipv4', 'is_ipv6', 'is_host') 12 | 13 | def __init__(self, port, addr=None, host=None): 14 | ''' 15 | :param str port: port to connect to 16 | :param str addr: IP address to connect to 17 | :param str host: hostname to connect to 18 | ''' 19 | # either address or host must be set, but not both 20 | assert bool(addr) ^ bool(host) 21 | 22 | self.port = struct.unpack("!H", port)[0] 23 | self.addr = addr 24 | self.host = host 25 | self.is_ipv4 = False 26 | self.is_ipv6 = False 27 | self.is_host = False 28 | 29 | if addr: 30 | addr = ipaddress.ip_address(addr) 31 | if isinstance(addr, ipaddress.IPv4Address): 32 | self.is_ipv4 = True 33 | else: 34 | self.is_ipv6 = True 35 | self.addr = bytes(addr.exploded) 36 | else: 37 | self.is_host = True 38 | 39 | def __str__(self): 40 | # this is the format that a request should appear in in a RelayBegin 41 | # cell. 
overriding __str__ here allows us to just stick this 42 | # directly in a RelayBegin cell with str(request) 43 | if self.is_ipv4: 44 | ret = "{}:{}".format(self.addr, self.port) 45 | elif self.is_ipv6: 46 | ret = "[{}]:{}".format(self.addr, self.port) 47 | else: 48 | ret = "{}:{}".format(self.host, self.port) 49 | ret += "\x00" 50 | return ret 51 | -------------------------------------------------------------------------------- /oppy/util/tools.py: -------------------------------------------------------------------------------- 1 | # Copyright 2014, 2015, Nik Kinkel 2 | # See LICENSE for licensing information 3 | 4 | import base64 5 | import itertools 6 | 7 | 8 | def decodeMicrodescriptorIdentifier(microdescriptor): 9 | ident = microdescriptor.identifier 10 | short = 4-len(ident)%4 11 | if short: 12 | ident += '='*short 13 | return base64.b64decode(ident).rstrip('=') 14 | 15 | 16 | def enum(**enums): 17 | return type('Enum', (), enums) 18 | 19 | 20 | # TODO: fix docs 21 | def shutdown(circuit_manager): 22 | '''Destroy all connections, circuits, and streams. 23 | 24 | Called right before a shutdown event (e.g. CTRL-C). 25 | ''' 26 | circuit_manager.destroyAllCircuits() 27 | 28 | 29 | def ctr(upper): 30 | """Return a generator for a rollover counter. 31 | 32 | :param int upper: Upper bound of counter. 33 | """ 34 | return (i for _ in itertools.count() for i in range(1, upper)) 35 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | pynacl == 0.3.0 2 | twisted == 14.0 3 | stem == 1.3.0 4 | ipaddress == 1.0.6 5 | hkdf == 0.0.1 6 | pycrypto == 2.6.1 7 | pyopenssl == 0.14 8 | service-identity == 14.0 9 | mock == 1.0 10 | -------------------------------------------------------------------------------- /simplifications.md: -------------------------------------------------------------------------------- 1 | Simplifications and Unimplemented Functionality 2 | =============================================== 3 | 4 | Here we aim to document the major simplifications oppy makes and OP 5 | functionality that oppy does not yet implement. Some of the items listed here 6 | are **required** for Tor OPs, and some just reflect the behavior tor itself has 7 | when running as an OP. 8 | 9 | This is a living document, subject to (possibly frequent) change. It is not 10 | comprehensive; oppy still makes some simplifications that are not listed here. 11 | 12 | Major user-facing simplifications 13 | --------------------------------- 14 | 15 | These are the major "noticeable" things that are simplified/not implemented. 16 | These may also appear below in their appropriate section. 17 | 18 | - circuits and streams don't know how to recover from RelayEnd cells sent 19 | because of reasons other than CONNECTION_DONE. If we get a RelayEnd 20 | due to, say, reason EXIT_POLICY, oppy will just not be able to handle 21 | this request. oppy doesn't know how to try this request on a different 22 | circuit yet. 23 | - circuit build timeouts are not calculated. This means that sometimes 24 | circuits are slooooow to be built or may not be built at all. This also 25 | means that oppy sometimes gets slower circuits than it should. 26 | - oppy doesn't know how to tear down a slow/broken circuit yet. If a 27 | circuit is just too slow to be usable or, at some point, just stops 28 | responding, oppy doesn't yet know that it should tear it down and 29 | build a new one.
30 | - oppy doesn't set a timeout on network status downloads, so sometimes 31 | these will just hang if we choose a bad V2Dir cache. 32 | 33 | Cells 34 | ----- 35 | 36 | - oppy does not implement all types of cells - only (most of) the kinds 37 | that an OP needs. 38 | - RELAY_RESOLVE(D) cells are not implemented 39 | - oppy does not implement the "make()" helper method for all types of 40 | implemented cells - only those that we currently need to build (e.g. 41 | for backward-only cells, oppy doesn't implement the helper method) 42 | 43 | Circuits 44 | -------- 45 | 46 | - oppy does not rotate circuits 47 | - oppy does not attempt to recover from RelayEnd cells received due to 48 | reasons other than CONNECTION_DONE. For instance, if oppy receives a 49 | RelayEnd cell with reason EXIT_POLICY, oppy doesn't know how to try 50 | this connection on another circuit and just closes the stream. 51 | - oppy does not calculate circuit build timeouts 52 | - oppy does not tear down slow circuits. Sometimes circuits may be really 53 | slow or stop working properly. oppy doesn't know how to recover from this 54 | yet. 55 | - oppy does not support the TAP handshake. 56 | - oppy doesn't know how to build internal circuits and/or access hidden 57 | services. 58 | - oppy doesn't know how to rebuild circuits. If oppy receives a 59 | RelayTruncated cell, the circuit is just immediately destroyed. 60 | - oppy does not cannibalize circuits. 61 | - oppy does not take into account bandwidth usage/history when assigning 62 | new streams to open circuits. 63 | - oppy doesn't currently mark circuits as "clean" or "dirty". Circuits 64 | are either "PENDING" (i.e. being built and currently trying to extend), 65 | "OPEN" (i.e. accepting new streams and forwarding traffic), or 66 | "BUFFERING" (waiting for a RelaySendMeCell), and that's the only real 67 | state information circuits have. 68 | - oppy does not know how to use RELAY_RESOLVE cells and, consequently, 69 | does not make any *resolve* circuits 70 | - oppy doesn't know how to build directory circuits 71 | 72 | Connections 73 | ----------- 74 | 75 | - oppy only knows how to talk Link Protocol Version 3 (although 76 | functionality for version 4 is mostly there, at least in cells, just not 77 | tested yet) 78 | - oppy does not use the "this_or_addresses" field in a received NetInfoCell 79 | to verify we've connected to the correct OR address 80 | 81 | Crypto 82 | ------ 83 | 84 | - oppy doesn't handle clearing/wiping private keys properly (really, crypto 85 | should be handled in C modules) 86 | 87 | Path Selection 88 | -------------- 89 | 90 | - oppy does not take bandwidth into account when choosing paths 91 | - oppy always uses a default set of required flags for each node position 92 | in a path. These flags are probably not the correct flags to be using. 93 | - oppy only chooses relays that support the ntor handshake. 94 | - oppy does not use entry guards. 95 | - oppy does not mark relays as *down* if they are unreachable. 96 | 97 | Network Status Documents 98 | ------------------------ 99 | 100 | - oppy doesn't know how to build or use directory circuits, so all 101 | network status document requests are just HTTP requests to V2Dir caches 102 | or directory authorities 103 | - oppy just downloads all server descriptors at once instead of splitting 104 | up the downloads between multiple V2Dir caches 105 | - oppy does not check whether or not we have the "best" server descriptor 106 | before downloading new descriptors.
Currently, oppy just downloads all 107 | server descriptors every time it grabs a fresh consensus. 108 | - oppy does not schedule new consensus downloads at the correct time 109 | interval. Currently oppy just downloads new network status documents 110 | every hour. 111 | 112 | SOCKS 113 | ----- 114 | 115 | - oppy only supports the NO_AUTH method 116 | - oppy does not yet implement the tor SOCKS extensions (e.g. for the 117 | RESOLVE command) 118 | - oppy does not implement the "HTTP-Resistance" that tor does 119 | - oppy does not support optimistic data 120 | 121 | Streams 122 | ------- 123 | 124 | - streams do not check how many cells still need to be flushed before 125 | sending a RelaySendMeCell. Streams just send a SendMe cell as soon as 126 | their window reaches the SendMe threshold (see the sketch appended below). 127 | 128 | --------------------------------------------------------------------------------
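The stream flow-control simplification noted in the "Streams" section above is easiest to see in miniature. The sketch below is illustrative only and is not oppy's implementation: it assumes the standard Tor stream-window values (an initial window of 500 cells and a SendMe credit of 50, so the threshold sits at 450, matching the behaviour exercised in oppy/tests/unit/stream/test_stream.py), and the class and callback names (DemoDeliverWindow, send_sendme_cb) are invented for this example.

# A minimal, self-contained sketch of per-stream deliver-window bookkeeping.
# Assumptions: the window starts at 500, a RelaySendMe is fired as soon as
# the window drops to 450, and no check is made of how much data still
# needs to be flushed -- which is exactly the simplification noted above.

STREAM_WINDOW_SIZE = 500    # initial per-stream deliver window
SENDME_INCREMENT = 50       # credit granted back when a SendMe is sent
SENDME_THRESHOLD = STREAM_WINDOW_SIZE - SENDME_INCREMENT    # == 450


class DemoDeliverWindow(object):
    '''Track the deliver window for a single (hypothetical) stream.'''

    def __init__(self, send_sendme_cb):
        self._deliver_window = STREAM_WINDOW_SIZE
        self._send_sendme_cb = send_sendme_cb

    def cell_delivered(self):
        '''Call once for every RelayData cell handed up to the application.'''
        self._deliver_window -= 1
        if self._deliver_window <= SENDME_THRESHOLD:
            # the SendMe goes out the moment the threshold is reached
            self._send_sendme_cb()
            self._deliver_window += SENDME_INCREMENT


if __name__ == '__main__':
    sent = []
    w = DemoDeliverWindow(lambda: sent.append('sendme'))
    for _ in range(50):
        w.cell_delivered()
    # after 50 delivered cells exactly one SendMe has gone out and the
    # window is back at its initial value
    assert sent == ['sendme']
    assert w._deliver_window == STREAM_WINDOW_SIZE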