├── .gitignore ├── .travis.yml ├── CHANGES.rst ├── CONTRIBUTORS ├── LICENSE ├── MANIFEST.in ├── README.rst ├── Vagrantfile ├── compactor ├── __init__.py ├── bin │ └── http_example.py ├── context.py ├── httpd.py ├── pid.py ├── process.py ├── request.py └── testing.py ├── docs ├── Makefile ├── api.rst ├── conf.py ├── index.rst └── rtd │ └── requirements.txt ├── setup.cfg ├── setup.py ├── tests ├── test_httpd.py ├── test_process.py └── test_protobuf_process.py ├── tox.ini └── vagrant ├── provision.sh └── test_integration.py /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | *~ 3 | *.egg-info 4 | .coverage 5 | /build 6 | /dist 7 | /htmlcov 8 | /.tox 9 | /.vagrant 10 | /docs/_build 11 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 2.7 3 | env: 4 | - TOXENV=py26 5 | - TOXENV=py27 6 | - TOXENV=pypy 7 | install: 8 | - pip install tox 9 | script: 10 | - tox -v 11 | -------------------------------------------------------------------------------- /CHANGES.rst: -------------------------------------------------------------------------------- 1 | ======= 2 | CHANGES 3 | ======= 4 | 5 | ----- 6 | 0.3.0 7 | ----- 8 | 9 | * Add API documentation and register a `readthedocs `_ site. 10 | 11 | * Fix bug that would cause compactor to crash if you subclassed ``Process`` and implemented any 12 | name-mangled methods. 13 | 14 | ----- 15 | 0.2.2 16 | ----- 17 | 18 | * Fix the condition where multiple ``send``/``link`` calls to the same pid could race in 19 | ``Context.maybe_connect``. 20 | 21 | * Fix the issue where Process HTTP routes were not bound until after ``initialize``. This could 22 | result in races whereby you'd receive calls from remote processes before ``initialize`` exited, 23 | causing flaky behavior especially in tests. 
24 | 25 | * Ensure that ``send`` and ``link`` take place on the event loop to prevent known non-threadsafe 26 | conditions on connection establishment. 27 | 28 | ----- 29 | 0.2.1 30 | ----- 31 | 32 | * Restores local dispatch so that you do not need to install methods intended for local 33 | dispatching only. 34 | 35 | * Fixes a race condition on ``Context.stop`` that could cause the event loop to raise an 36 | uncaught exception on teardown. 37 | 38 | ----- 39 | 0.2.0 40 | ----- 41 | 42 | * Adds vagrant-based integration test to test compactor against reference libprocess. 43 | 44 | * Fixes Python 3 support, pinning to protobuf >= 2.6.1 < 2.7 which has correct support. 45 | 46 | ----- 47 | 0.1.3 48 | ----- 49 | 50 | * ``Context.singleton()`` now calls ``Thread.start`` on construction. 51 | 52 | * Pins compactor to ``tornado==4.1.dev1`` which forces you to use a 53 | master-built tornado distribution. 54 | 55 | ----- 56 | 0.1.2 57 | ----- 58 | 59 | * Temporarily removes local dispatch so that local sending works with protobuf processes. 60 | 61 | ----- 62 | 0.1.1 63 | ----- 64 | 65 | * Initial functioning release. 66 | -------------------------------------------------------------------------------- /CONTRIBUTORS: -------------------------------------------------------------------------------- 1 | Brian Wickman 2 | Tom Arnfeld 3 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 
14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 
47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. 
Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include *.py 2 | include CHANGES.rst 3 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | compactor 2 | ========= 3 | .. image:: https://travis-ci.org/wickman/compactor.svg?branch=master 4 | :target: https://travis-ci.org/wickman/compactor 5 | 6 | compactor is a pure python implementation of libprocess, the actor library 7 | underpinning `mesos `_. 8 | 9 | 10 | documentation 11 | ============= 12 | 13 | compactor documentation is available at `readthedocs `_. 14 | an example framework built using compactor is `pesos `_, 15 | a pure python implementation of the mesos framework api. 16 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | # Vagrantfile API/syntax version. 
Don't touch unless you know what you're 2 | # doing! 3 | VAGRANTFILE_API_VERSION = "2" 4 | 5 | # 1.5.0 is required to use vagrant cloud images. 6 | # https://www.vagrantup.com/blog/vagrant-1-5-and-vagrant-cloud.html 7 | Vagrant.require_version ">= 1.5.0" 8 | 9 | Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| 10 | config.vm.box = "ubuntu/trusty64" 11 | 12 | config.vm.define "testcluster" do |dev| 13 | dev.vm.network :private_network, ip: "192.168.33.2" 14 | # dev.vm.network "forwarded_port", guest: 31337, host: 41337 15 | dev.vm.provider :virtualbox do |vb| 16 | vb.customize ["modifyvm", :id, "--memory", "1536"] 17 | end 18 | dev.vm.provision "shell", path: 19 | "vagrant/provision.sh" 20 | end 21 | end 22 | -------------------------------------------------------------------------------- /compactor/__init__.py: -------------------------------------------------------------------------------- 1 | from functools import wraps 2 | 3 | from .context import Context 4 | from .process import Process 5 | 6 | 7 | _ROOT_CONTEXT = None 8 | 9 | 10 | def initialize(delegate="", **kw): 11 | global _ROOT_CONTEXT 12 | _ROOT_CONTEXT = Context.singleton(delegate=delegate, **kw) 13 | if not _ROOT_CONTEXT.is_alive(): 14 | _ROOT_CONTEXT.start() 15 | 16 | 17 | def join(): 18 | """Join against the global context -- blocking until the context has been stopped.""" 19 | _ROOT_CONTEXT.join() 20 | 21 | 22 | def after_init(fn): 23 | @wraps(fn) 24 | def wrapper_fn(*args, **kw): 25 | initialize() 26 | return fn(*args, **kw) 27 | return wrapper_fn 28 | 29 | 30 | @after_init 31 | def spawn(process): 32 | """Spawn a process on the global context and return its pid. 33 | 34 | :param process: The process to bind to the global context. 35 | :type process: :class:`Process` 36 | :returns pid: The pid of the spawned process. 
37 | :rtype: :class:`PID` 38 | """ 39 | return _ROOT_CONTEXT.spawn(process) 40 | 41 | 42 | route = Process.route 43 | install = Process.install 44 | 45 | 46 | __all__ = ( 47 | 'initialize', 48 | 'install', 49 | 'join', 50 | 'route', 51 | 'spawn', 52 | ) 53 | 54 | 55 | del after_init 56 | -------------------------------------------------------------------------------- /compactor/bin/http_example.py: -------------------------------------------------------------------------------- 1 | import time 2 | from compactor.process import Process 3 | from compactor.context import Context 4 | 5 | 6 | import logging 7 | logging.basicConfig() 8 | 9 | log = logging.getLogger(__name__) 10 | log.setLevel(logging.INFO) 11 | 12 | 13 | class WebProcess(Process): 14 | 15 | @Process.install('ping') 16 | def ping(self, from_pid, body): 17 | log.info("Received ping") 18 | 19 | def respond(): 20 | time.sleep(0.5) 21 | self.send(from_pid, "pong") 22 | self.context.loop.add_callback(respond) 23 | 24 | @Process.install('pong') 25 | def pong(self, from_pid, body): 26 | log.info("Received pong") 27 | 28 | def respond(): 29 | time.sleep(0.5) 30 | self.send(from_pid, "ping") 31 | self.context.loop.add_callback(respond) 32 | 33 | 34 | def listen(identifier): 35 | """ 36 | Launch a listener and return the compactor context. 
37 | """ 38 | 39 | context = Context() 40 | process = WebProcess(identifier) 41 | 42 | context.spawn(process) 43 | 44 | log.info("Launching PID %s", process.pid) 45 | 46 | return process, context 47 | 48 | 49 | if __name__ == '__main__': 50 | 51 | a, a_context = listen("web(1)") 52 | b, b_context = listen("web(2)") 53 | 54 | a_context.start() 55 | b_context.start() 56 | 57 | # Kick off the game of ping/pong by sending a message to B from A 58 | a.send(b.pid, "ping") 59 | 60 | while a_context.isAlive() or b_context.isAlive(): 61 | time.sleep(0.5) 62 | -------------------------------------------------------------------------------- /compactor/context.py: -------------------------------------------------------------------------------- 1 | """Context controls the routing and handling of messages between processes.""" 2 | 3 | import logging 4 | import socket 5 | import threading 6 | import os 7 | try: 8 | import asyncio 9 | except ImportError: 10 | import trollius as asyncio 11 | 12 | from collections import defaultdict 13 | from functools import partial 14 | 15 | from .httpd import HTTPD 16 | from .request import encode_request 17 | 18 | from tornado import stack_context 19 | from tornado.iostream import IOStream 20 | from tornado.netutil import bind_sockets 21 | from tornado.platform.asyncio import BaseAsyncIOLoop 22 | 23 | log = logging.getLogger(__name__) 24 | 25 | 26 | class Context(threading.Thread): 27 | """A compactor context. 28 | 29 | Compactor contexts control the routing and handling of messages between 30 | processes. At its most basic level, a context is a listening (ip, port) 31 | pair and an event loop. 32 | """ 33 | 34 | class Error(Exception): pass 35 | class SocketError(Error): pass 36 | class InvalidProcess(Error): pass 37 | class InvalidMethod(Error): pass 38 | 39 | _SINGLETON = None 40 | _LOCK = threading.Lock() 41 | 42 | CONNECT_TIMEOUT_SECS = 5 43 | 44 | @classmethod 45 | def _make_socket(cls, ip, port): 46 | """Bind to a new socket. 
47 | 48 | If LIBPROCESS_PORT or LIBPROCESS_IP are configured in the environment, 49 | these will be used for socket connectivity. 50 | """ 51 | bound_socket = bind_sockets(port, address=ip)[0] 52 | ip, port = bound_socket.getsockname() 53 | 54 | if not ip or ip == '0.0.0.0': 55 | ip = socket.gethostbyname(socket.gethostname()) 56 | 57 | return bound_socket, ip, port 58 | 59 | @classmethod 60 | def get_ip_port(cls, ip=None, port=None): 61 | ip = ip or os.environ.get('LIBPROCESS_IP', '0.0.0.0') 62 | try: 63 | port = int(port or os.environ.get('LIBPROCESS_PORT', 0)) 64 | except ValueError: 65 | raise cls.Error('Invalid ip/port provided') 66 | return ip, port 67 | 68 | @classmethod 69 | def singleton(cls, delegate='', **kw): 70 | with cls._LOCK: 71 | if cls._SINGLETON: 72 | if cls._SINGLETON.delegate != delegate: 73 | raise RuntimeError('Attempting to construct different singleton context.') 74 | else: 75 | cls._SINGLETON = cls(delegate=delegate, **kw) 76 | cls._SINGLETON.start() 77 | return cls._SINGLETON 78 | 79 | def __init__(self, delegate='', loop=None, ip=None, port=None): 80 | """Construct a compactor context. 81 | 82 | Before any useful work can be done with a context, you must call 83 | ``start`` on the context. 84 | 85 | :keyword ip: The ip port of the interface on which the Context should listen. 86 | If none is specified, the context will attempt to bind to the ip specified by 87 | the ``LIBPROCESS_IP`` environment variable. If this variable is not set, 88 | it will bind on all interfaces. 89 | :type ip: ``str`` or None 90 | :keyword port: The port on which the Context should listen. If none is specified, 91 | the context will attempt to bind to the port specified by the ``LIBPROCESS_PORT`` 92 | environment variable. If this variable is not set, it will bind to an ephemeral 93 | port. 
94 | :type port: ``int`` or None 95 | """ 96 | self._processes = {} 97 | self._links = defaultdict(set) 98 | self.delegate = delegate 99 | self.__loop = self.http = None 100 | self.__event_loop = loop 101 | self._ip = None 102 | ip, port = self.get_ip_port(ip, port) 103 | self.__sock, self.ip, self.port = self._make_socket(ip, port) 104 | self._connections = {} 105 | self._connection_callbacks = defaultdict(list) 106 | self._connection_callbacks_lock = threading.Lock() 107 | self.__context_name = 'CompactorContext(%s:%d)' % (self.ip, self.port) 108 | super(Context, self).__init__(name=self.__context_name) 109 | self.daemon = True 110 | self.lock = threading.Lock() 111 | self.__id = 1 112 | self.__loop_started = threading.Event() 113 | 114 | def _assert_started(self): 115 | assert self.__loop_started.is_set() 116 | 117 | def start(self): 118 | """Start the context. This method must be called before calls to ``send`` and ``spawn``. 119 | 120 | This method is non-blocking. 121 | """ 122 | super(Context, self).start() 123 | self.__loop_started.wait() 124 | 125 | def __debug(self, msg): 126 | log.debug('%s: %s' % (self.__context_name, msg)) 127 | 128 | def run(self): 129 | # The entry point of the Context thread. This should not be called directly. 130 | loop = self.__event_loop or asyncio.new_event_loop() 131 | 132 | class CustomIOLoop(BaseAsyncIOLoop): 133 | def initialize(self): 134 | super(CustomIOLoop, self).initialize(loop, close_loop=False) 135 | 136 | self.__loop = CustomIOLoop() 137 | self.http = HTTPD(self.__sock, self.__loop) 138 | 139 | self.__loop_started.set() 140 | 141 | self.__loop.start() 142 | self.__loop.close() 143 | 144 | def _is_local(self, pid): 145 | return pid in self._processes 146 | 147 | def _assert_local_pid(self, pid): 148 | if not self._is_local(pid): 149 | raise self.InvalidProcess('Operation only valid for local processes!') 150 | 151 | def stop(self): 152 | """Stops the context. 
This terminates all PIDs and closes all connections.""" 153 | 154 | log.info('Stopping %s' % self) 155 | 156 | pids = list(self._processes) 157 | 158 | # Clean up the context 159 | for pid in pids: 160 | self.terminate(pid) 161 | 162 | while self._connections: 163 | pid = next(iter(self._connections)) 164 | conn = self._connections.pop(pid, None) 165 | if conn: 166 | conn.close() 167 | 168 | self.__loop.stop() 169 | 170 | def spawn(self, process): 171 | """Spawn a process. 172 | 173 | Spawning a process binds it to this context and assigns the process a 174 | pid which is returned. The process' ``initialize`` method is called. 175 | 176 | Note: A process cannot send messages until it is bound to a context. 177 | 178 | :param process: The process to bind to this context. 179 | :type process: :class:`Process` 180 | :return: The pid of the process. 181 | :rtype: :class:`PID` 182 | """ 183 | self._assert_started() 184 | process.bind(self) 185 | self.http.mount_process(process) 186 | self._processes[process.pid] = process 187 | process.initialize() 188 | return process.pid 189 | 190 | def _get_dispatch_method(self, pid, method): 191 | try: 192 | return getattr(self._processes[pid], method) 193 | except KeyError: 194 | raise self.InvalidProcess('Unknown process %s' % pid) 195 | except AttributeError: 196 | raise self.InvalidMethod('Unknown method %s on %s' % (method, pid)) 197 | 198 | def dispatch(self, pid, method, *args): 199 | """Call a method on another process by its pid. 200 | 201 | The method on the other process does not need to be installed with 202 | ``Process.install``. The call is serialized with all other calls on the 203 | context's event loop. The pid must be bound to this context. 204 | 205 | This function returns immediately. 206 | 207 | :param pid: The pid of the process to be called. 208 | :type pid: :class:`PID` 209 | :param method: The name of the method to be called. 
210 | :type method: ``str`` 211 | :return: Nothing 212 | """ 213 | self._assert_started() 214 | self._assert_local_pid(pid) 215 | function = self._get_dispatch_method(pid, method) 216 | self.__loop.add_callback(function, *args) 217 | 218 | def delay(self, amount, pid, method, *args): 219 | """Call a method on another process after a specified delay. 220 | 221 | This is equivalent to ``dispatch`` except with an additional amount of 222 | time to wait prior to invoking the call. 223 | 224 | This function returns immediately. 225 | 226 | :param amount: The amount of time to wait in seconds before making the call. 227 | :type amount: ``float`` or ``int`` 228 | :param pid: The pid of the process to be called. 229 | :type pid: :class:`PID` 230 | :param method: The name of the method to be called. 231 | :type method: ``str`` 232 | :return: Nothing 233 | """ 234 | self._assert_started() 235 | self._assert_local_pid(pid) 236 | function = self._get_dispatch_method(pid, method) 237 | self.__loop.add_timeout(self.__loop.time() + amount, function, *args) 238 | 239 | def __dispatch_on_connect_callbacks(self, to_pid, stream): 240 | with self._connection_callbacks_lock: 241 | callbacks = self._connection_callbacks.pop(to_pid, []) 242 | for callback in callbacks: 243 | log.debug('Dispatching connection callback %s for %s:%s -> %s' % ( 244 | callback, self.ip, self.port, to_pid)) 245 | self.__loop.add_callback(callback, stream) 246 | 247 | def _maybe_connect(self, to_pid, callback=None): 248 | """Asynchronously establish a connection to the remote pid.""" 249 | 250 | callback = stack_context.wrap(callback or (lambda stream: None)) 251 | 252 | def streaming_callback(data): 253 | # we are not guaranteed to get an acknowledgment, but log and discard bytes if we do. 254 | log.info('Received %d bytes from %s, discarding.' 
% (len(data), to_pid)) 255 | log.debug(' data: %r' % (data,)) 256 | 257 | def on_connect(exit_cb, stream): 258 | log.info('Connection to %s established' % to_pid) 259 | with self._connection_callbacks_lock: 260 | self._connections[to_pid] = stream 261 | self.__dispatch_on_connect_callbacks(to_pid, stream) 262 | self.__loop.add_callback( 263 | stream.read_until_close, 264 | exit_cb, 265 | streaming_callback=streaming_callback) 266 | 267 | create = False 268 | with self._connection_callbacks_lock: 269 | stream = self._connections.get(to_pid) 270 | callbacks = self._connection_callbacks.get(to_pid) 271 | 272 | if not stream: 273 | self._connection_callbacks[to_pid].append(callback) 274 | 275 | if not callbacks: 276 | create = True 277 | 278 | if stream: 279 | self.__loop.add_callback(callback, stream) 280 | return 281 | 282 | if not create: 283 | return 284 | 285 | sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, 0) 286 | if not sock: 287 | raise self.SocketError('Failed opening socket') 288 | 289 | stream = IOStream(sock, io_loop=self.__loop) 290 | stream.set_nodelay(True) 291 | stream.set_close_callback(partial(self.__on_exit, to_pid, b'reached end of stream')) 292 | 293 | connect_callback = partial(on_connect, partial(self.__on_exit, to_pid), stream) 294 | 295 | log.info('Establishing connection to %s' % to_pid) 296 | 297 | stream.connect((to_pid.ip, to_pid.port), callback=connect_callback) 298 | 299 | if stream.closed(): 300 | raise self.SocketError('Failed to initiate stream connection') 301 | 302 | log.info('Maybe connected to %s' % to_pid) 303 | 304 | def _get_local_mailbox(self, pid, method): 305 | for mailbox, callable in self._processes[pid].iter_handlers(): 306 | if method == mailbox: 307 | return callable 308 | 309 | def send(self, from_pid, to_pid, method, body=None): 310 | """Send a message method from one pid to another with an optional body. 
311 | 312 | Note: It is more idiomatic to send directly from a bound process rather than 313 | calling send on the context. 314 | 315 | If the destination pid is on the same context, the Context may skip the 316 | wire and route directly to the process itself. ``from_pid`` must be bound 317 | to this context. 318 | 319 | This method returns immediately. 320 | 321 | :param from_pid: The pid of the sending process. 322 | :type from_pid: :class:`PID` 323 | :param to_pid: The pid of the destination process. 324 | :type to_pid: :class:`PID` 325 | :param method: The method name of the destination process. 326 | :type method: ``str`` 327 | :keyword body: Optional content to send along with the message. 328 | :type body: ``bytes`` or None 329 | :return: Nothing 330 | """ 331 | 332 | self._assert_started() 333 | self._assert_local_pid(from_pid) 334 | 335 | if self._is_local(to_pid): 336 | local_method = self._get_local_mailbox(to_pid, method) 337 | if local_method: 338 | log.info('Doing local dispatch of %s => %s (method: %s)' % (from_pid, to_pid, local_method)) 339 | self.__loop.add_callback(local_method, from_pid, body or b'') 340 | return 341 | else: 342 | # TODO(wickman) Consider failing hard if no local method is detected, otherwise we're 343 | # just going to do a POST and have it dropped on the floor. 
344 | pass 345 | 346 | request_data = encode_request(from_pid, to_pid, method, body=body) 347 | 348 | log.info('Sending POST %s => %s (payload: %d bytes)' % ( 349 | from_pid, to_pid.as_url(method), len(request_data))) 350 | 351 | def on_connect(stream): 352 | log.info('Writing %s from %s to %s' % (len(request_data), from_pid, to_pid)) 353 | stream.write(request_data) 354 | log.info('Wrote %s from %s to %s' % (len(request_data), from_pid, to_pid)) 355 | 356 | self.__loop.add_callback(self._maybe_connect, to_pid, on_connect) 357 | 358 | def __erase_link(self, to_pid): 359 | for pid, links in self._links.items(): 360 | try: 361 | links.remove(to_pid) 362 | log.debug('PID link from %s <- %s exited.' % (pid, to_pid)) 363 | self._processes[pid].exited(to_pid) 364 | except KeyError: 365 | continue 366 | 367 | def __on_exit(self, to_pid, body): 368 | log.info('Disconnected from %s (%s)', to_pid, body) 369 | stream = self._connections.pop(to_pid, None) 370 | if stream is None: 371 | log.error('Received disconnection from %s but no stream found.' % to_pid) 372 | self.__erase_link(to_pid) 373 | 374 | def link(self, pid, to): 375 | """Link a local process to a possibly remote process. 376 | 377 | Note: It is more idiomatic to call ``link`` directly on the bound Process 378 | object instead. 379 | 380 | When ``pid`` is linked to ``to``, the termination of the ``to`` process 381 | (or the severing of its connection from the Process ``pid``) will result 382 | in the local process's ``exited`` method being called with ``to``. 383 | 384 | This method returns immediately. 385 | 386 | :param pid: The pid of the linking process. 387 | :type pid: :class:`PID` 388 | :param to: The pid of the linked process. 
389 |     :type to: :class:`PID`
390 |     :returns: Nothing
391 |     """
392 | 
393 |     self._assert_started()
394 | 
395 |     def really_link():
396 |       self._links[pid].add(to)
397 |       log.info('Added link from %s to %s' % (pid, to))
398 | 
399 |     def on_connect(stream):
400 |       really_link()
401 | 
402 |     if self._is_local(to):
403 |       really_link()
404 |     else:
405 |       self.__loop.add_callback(self._maybe_connect, to, on_connect)
406 | 
407 |   def terminate(self, pid):
408 |     """Terminate a process bound to this context.
409 | 
410 |     When a process is terminated, all the processes to which it is linked
411 |     will have their ``exited`` methods called. Messages to this process
412 |     will no longer be delivered.
413 | 
414 |     This method returns immediately.
415 | 
416 |     :param pid: The pid of the process to terminate.
417 |     :type pid: :class:`PID`
418 |     :returns: Nothing
419 |     """
420 |     self._assert_started()
421 | 
422 |     log.info('Terminating %s' % pid)
423 |     process = self._processes.pop(pid, None)
424 |     if process:
425 |       log.info('Unmounting %s' % process)
426 |       self.http.unmount_process(process)
427 |     self.__erase_link(pid)
428 | 
429 |   def __str__(self):
430 |     return 'Context(%s:%s)' % (self.ip, self.port)
431 | 
-------------------------------------------------------------------------------- /compactor/httpd.py: --------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 | 
3 | import logging
4 | import re
5 | import types
6 | import time
7 | 
8 | from .pid import PID
9 | 
10 | from tornado import gen
11 | from tornado import httputil
12 | from tornado.httpserver import HTTPServer
13 | from tornado.web import RequestHandler, Application, HTTPError
14 | 
15 | log = logging.getLogger(__name__)
16 | 
17 | 
18 | class ProcessBaseHandler(RequestHandler):
19 |   def initialize(self, process=None):
20 |     self.process = process
21 | 
22 | 
23 | class WireProtocolMessageHandler(ProcessBaseHandler):
24 |   """Tornado request handler for libprocess
internal messages."""
25 | 
26 |   @classmethod
27 |   def detect_process(cls, headers):
28 |     """Return a ``(process, legacy)`` tuple, or ``(None, None)`` if the request is not process-originating."""
29 | 
30 |     try:
31 |       if 'Libprocess-From' in headers:
32 |         return PID.from_string(headers['Libprocess-From']), False
33 |       elif 'User-Agent' in headers and headers['User-Agent'].startswith('libprocess/'):
34 |         return PID.from_string(headers['User-Agent'][len('libprocess/'):]), True
35 |     except ValueError as e:
36 |       log.error('Failed to detect process: %r' % e)
37 |       pass
38 | 
39 |     return None, None
40 | 
41 |   def initialize(self, **kw):
42 |     self.__name = kw.pop('name')
43 |     super(WireProtocolMessageHandler, self).initialize(**kw)
44 | 
45 |   def set_default_headers(self):
46 |     self._headers = httputil.HTTPHeaders({
47 |         "Date": httputil.format_timestamp(time.time())
48 |     })
49 | 
50 |   def post(self, *args, **kw):
51 |     log.info('Handling %s for %s' % (self.__name, self.process.pid))
52 | 
53 |     process, legacy = self.detect_process(self.request.headers)
54 | 
55 |     if process is None:
56 |       self.set_status(404)
57 |       return
58 | 
59 |     log.debug('Delivering %s to %s from %s' % (self.__name, self.process.pid, process))
60 |     log.debug('Request body length: %s' % len(self.request.body))
61 | 
62 |     # Handle the message
63 |     self.process.handle_message(self.__name, process, self.request.body)
64 | 
65 |     self.set_status(202)
66 |     self.finish()
67 | 
68 | 
69 | class RoutedRequestHandler(ProcessBaseHandler):
70 |   """Tornado request handler for routed HTTP requests."""
71 | 
72 |   def initialize(self, **kw):
73 |     self.__path = kw.pop('path')
74 |     super(RoutedRequestHandler, self).initialize(**kw)
75 | 
76 |   @gen.engine
77 |   def get(self, *args, **kw):
78 |     log.info('Handling %s for %s' % (self.__path, self.process.pid))
79 |     handle = self.process.handle_http(self.__path, self, *args, **kw)
80 |     if isinstance(handle, types.GeneratorType):
81 |       for stuff in handle:
82 |         yield stuff
83 |     self.finish()
84 | 
85 | 
86 | class
Blackhole(RequestHandler):
87 |   def get(self):
88 |     log.debug("Sending request to the black hole")
89 |     raise HTTPError(404)
90 | 
91 | 
92 | class HTTPD(object): # noqa
93 |   """
94 |   HTTP server implementation that attaches to an event loop and socket, and
95 |   is capable of handling Mesos wire protocol messages.
96 |   """
97 | 
98 |   def __init__(self, sock, loop):
99 |     """
100 |     Construct an HTTP server on a socket given an ioloop.
101 |     """
102 | 
103 |     self.loop = loop
104 |     self.sock = sock
105 | 
106 |     self.app = Application(handlers=[(r'/.*$', Blackhole)])
107 |     self.server = HTTPServer(self.app, io_loop=self.loop)
108 |     self.server.add_sockets([sock])
109 | 
110 |     self.sock.listen(1024)
111 | 
112 |   def terminate(self):
113 |     log.info('Terminating HTTP server and all connections')
114 | 
115 |     self.server.close_all_connections()
116 |     self.sock.close()
117 | 
118 |   def mount_process(self, process):
119 |     """
120 |     Mount a Process onto the HTTP server to receive message callbacks.
121 |     """
122 | 
123 |     for route_path in process.route_paths:
124 |       route = '/%s%s' % (process.pid.id, route_path)
125 |       log.info('Mounting route %s' % route)
126 |       self.app.add_handlers('.*$', [(
127 |           re.escape(route),
128 |           RoutedRequestHandler,
129 |           dict(process=process, path=route_path)
130 |       )])
131 | 
132 |     for message_name in process.message_names:
133 |       route = '/%s/%s' % (process.pid.id, message_name)
134 |       log.info('Mounting message handler %s' % route)
135 |       self.app.add_handlers('.*$', [(
136 |           re.escape(route),
137 |           WireProtocolMessageHandler,
138 |           dict(process=process, name=message_name)
139 |       )])
140 | 
141 |   def unmount_process(self, process):
142 |     """
143 |     Unmount a process from the HTTP server to stop receiving message
144 |     callbacks.
145 |     """
146 | 
147 |     # There is no remove_handlers, but .handlers is public so why not. app.handlers is a list of
148 |     # 2-tuples of the form (host_pattern, [list of RequestHandler]) objects.
We filter out all 149 | # handlers matching our process from the RequestHandler list for each host pattern. 150 | def nonmatching(handler): 151 | return 'process' not in handler.kwargs or handler.kwargs['process'] != process 152 | 153 | def filter_handlers(handlers): 154 | host_pattern, handlers = handlers 155 | return (host_pattern, list(filter(nonmatching, handlers))) 156 | 157 | self.app.handlers = [filter_handlers(handlers) for handlers in self.app.handlers] 158 | -------------------------------------------------------------------------------- /compactor/pid.py: -------------------------------------------------------------------------------- 1 | class PID(object): # noqa 2 | __slots__ = ('ip', 'port', 'id') 3 | 4 | @classmethod 5 | def from_string(cls, pid): 6 | """Parse a PID from its string representation. 7 | 8 | PIDs may be represented as name@ip:port, e.g. 9 | 10 | .. code-block:: python 11 | 12 | pid = PID.from_string('master(1)@192.168.33.2:5051') 13 | 14 | :param pid: A string representation of a pid. 15 | :type pid: ``str`` 16 | :return: The parsed pid. 17 | :rtype: :class:`PID` 18 | :raises: ``ValueError`` should the string not be of the correct syntax. 19 | """ 20 | try: 21 | id_, ip_port = pid.split('@') 22 | ip, port = ip_port.split(':') 23 | port = int(port) 24 | except ValueError: 25 | raise ValueError('Invalid PID: %s' % pid) 26 | return cls(ip, port, id_) 27 | 28 | def __init__(self, ip, port, id_): 29 | """Construct a pid. 30 | 31 | :param ip: An IP address in string form. 32 | :type ip: ``str`` 33 | :param port: The port of this pid. 34 | :type port: ``int`` 35 | :param id_: The name of the process. 
36 | :type id_: ``str`` 37 | """ 38 | self.ip = ip 39 | self.port = port 40 | self.id = id_ 41 | 42 | def __hash__(self): 43 | return hash((self.ip, self.port, self.id)) 44 | 45 | def __eq__(self, other): 46 | return isinstance(other, PID) and ( 47 | self.ip == other.ip and 48 | self.port == other.port and 49 | self.id == other.id 50 | ) 51 | 52 | def __ne__(self, other): 53 | return not (self == other) 54 | 55 | def as_url(self, endpoint=None): 56 | url = 'http://%s:%s/%s' % (self.ip, self.port, self.id) 57 | if endpoint: 58 | url += '/%s' % endpoint 59 | return url 60 | 61 | def __str__(self): 62 | return '%s@%s:%d' % (self.id, self.ip, self.port) 63 | 64 | def __repr__(self): 65 | return 'PID(%s, %d, %s)' % (self.ip, self.port, self.id) 66 | -------------------------------------------------------------------------------- /compactor/process.py: -------------------------------------------------------------------------------- 1 | import functools 2 | 3 | from .context import Context 4 | from .pid import PID 5 | 6 | 7 | class Process(object): 8 | class Error(Exception): pass 9 | class UnboundProcess(Error): pass 10 | 11 | ROUTE_ATTRIBUTE = '__route__' 12 | INSTALL_ATTRIBUTE = '__mailbox__' 13 | 14 | @classmethod 15 | def route(cls, path): 16 | """A decorator to indicate that a method should be a routable HTTP endpoint. 17 | 18 | .. code-block:: python 19 | 20 | from compactor.process import Process 21 | 22 | class WebProcess(Process): 23 | @Process.route('/hello/world') 24 | def hello_world(self, handler): 25 | return handler.write('hello world') 26 | 27 | The handler passed to the method is a tornado RequestHandler. 28 | 29 | WARNING: This interface is alpha and may change in the future if or when 30 | we remove tornado as a compactor dependency. 31 | 32 | :param path: The endpoint to route to this method. 
33 | :type path: ``str`` 34 | """ 35 | 36 | if not path.startswith('/'): 37 | raise ValueError('Routes must start with "/"') 38 | 39 | def wrap(fn): 40 | setattr(fn, cls.ROUTE_ATTRIBUTE, path) 41 | return fn 42 | 43 | return wrap 44 | 45 | # TODO(wickman) Make mbox optional, defaulting to function.__name__. 46 | # TODO(wickman) Make INSTALL_ATTRIBUTE a defaultdict(list) so that we can 47 | # route multiple endpoints to a single method. 48 | @classmethod 49 | def install(cls, mbox): 50 | """A decorator to indicate a remotely callable method on a process. 51 | 52 | .. code-block:: python 53 | 54 | from compactor.process import Process 55 | 56 | class PingProcess(Process): 57 | @Process.install('ping') 58 | def ping(self, from_pid, body): 59 | # do something 60 | 61 | The installed method should take ``from_pid`` and ``body`` parameters. 62 | ``from_pid`` is the process calling the method. ``body`` is a ``bytes`` 63 | stream that was delivered with the message, possibly empty. 64 | 65 | :param mbox: Incoming messages to this "mailbox" will be dispatched to this method. 66 | :type mbox: ``str`` 67 | """ 68 | def wrap(fn): 69 | setattr(fn, cls.INSTALL_ATTRIBUTE, mbox) 70 | return fn 71 | return wrap 72 | 73 | def __init__(self, name): 74 | """Create a process with a given name. 75 | 76 | The process must still be bound to a context before it can send messages 77 | or link to other processes. 78 | 79 | :param name: The name of this process. 
80 |     :type name: ``str``
81 |     """
82 | 
83 |     self.name = name
84 |     self._delegates = {}
85 |     self._http_handlers = dict(self.iter_routes())
86 |     self._message_handlers = dict(self.iter_handlers())
87 |     self._context = None
88 | 
89 |   def __iter_callables(self):
90 |     # iterate over the methods in a way where we can differentiate methods from descriptors
91 |     for method in type(self).__dict__.values():
92 |       if callable(method):
93 |         # 'method' is the unbound method on the class -- we want to return the bound instancemethod
94 |         try:
95 |           yield getattr(self, method.__name__)
96 |         except AttributeError:
97 |           # This is possible for name-mangled (double-underscore) attributes.
98 |           continue
99 | 
100 |   def iter_routes(self):
101 |     for function in self.__iter_callables():
102 |       if hasattr(function, self.ROUTE_ATTRIBUTE):
103 |         yield getattr(function, self.ROUTE_ATTRIBUTE), function
104 | 
105 |   def iter_handlers(self):
106 |     for function in self.__iter_callables():
107 |       if hasattr(function, self.INSTALL_ATTRIBUTE):
108 |         yield getattr(function, self.INSTALL_ATTRIBUTE), function
109 | 
110 |   def _assert_bound(self):
111 |     if not self._context:
112 |       raise self.UnboundProcess('Process is not bound to a context.')
113 | 
114 |   def bind(self, context):
115 |     if not isinstance(context, Context):
116 |       raise TypeError('Can only bind to a Context, got %s' % type(context))
117 |     self._context = context
118 | 
119 |   @property
120 |   def pid(self):
121 |     """The pid of this process.
122 | 
123 |     :raises: Will raise a ``Process.UnboundProcess`` exception if the
124 |       process is not bound to a context.
125 |     """
126 |     self._assert_bound()
127 |     return PID(self._context.ip, self._context.port, self.name)
128 | 
129 |   @property
130 |   def context(self):
131 |     """The context that this process is bound to.
132 | 
133 |     :raises: Will raise a ``Process.UnboundProcess`` exception if the
134 |       process is not bound to a context.
135 | """ 136 | self._assert_bound() 137 | return self._context 138 | 139 | @property 140 | def route_paths(self): 141 | return self._http_handlers.keys() 142 | 143 | @property 144 | def message_names(self): 145 | return self._message_handlers.keys() 146 | 147 | def delegate(self, name, pid): 148 | self._delegates[name] = pid 149 | 150 | def handle_message(self, name, from_pid, body): 151 | if name in self._message_handlers: 152 | self._message_handlers[name](from_pid, body) 153 | elif name in self._delegates: 154 | to = self._delegates[name] 155 | self._context.transport(to, name, body, from_pid) 156 | 157 | def handle_http(self, route, handler, *args, **kw): 158 | return self._http_handlers[route](handler, *args, **kw) 159 | 160 | def initialize(self): 161 | """Called when this process is spawned. 162 | 163 | Once this is called, it means a process is now routable. Subclasses 164 | should implement this to initialize state or possibly initiate 165 | connections to remote processes. 166 | """ 167 | 168 | def exited(self, pid): 169 | """Called when a linked process terminates or its connection is severed. 170 | 171 | :param pid: The pid of the linked process. 172 | :type pid: :class:`PID` 173 | """ 174 | 175 | def send(self, to, method, body=None): 176 | """Send a message to another process. 177 | 178 | Sending messages is done asynchronously and is not guaranteed to succeed. 179 | 180 | Returns immediately. 181 | 182 | :param to: The pid of the process to send a message. 183 | :type to: :class:`PID` 184 | :param method: The method/mailbox name of the remote method. 185 | :type method: ``str`` 186 | :keyword body: The optional content to send with the message. 187 | :type body: ``bytes`` or None 188 | :raises: Will raise a ``Process.UnboundProcess`` exception if the 189 | process is not bound to a context. 
190 |     :return: Nothing
191 |     """
192 |     self._assert_bound()
193 |     self._context.send(self.pid, to, method, body)
194 | 
195 |   def link(self, to):
196 |     """Link to another process.
197 | 
198 |     The ``link`` operation is not guaranteed to succeed. If it does, when
199 |     the other process terminates, the ``exited`` method will be called with
200 |     its pid.
201 | 
202 |     Returns immediately.
203 | 
204 |     :param to: The pid of the process to link to.
205 |     :type to: :class:`PID`
206 |     :raises: Will raise a ``Process.UnboundProcess`` exception if the
207 |       process is not bound to a context.
208 |     :return: Nothing
209 |     """
210 |     self._assert_bound()
211 |     self._context.link(self.pid, to)
212 | 
213 |   def terminate(self):
214 |     """Terminate this process.
215 | 
216 |     This unbinds it from the context to which it is bound.
217 | 
218 |     :raises: Will raise a ``Process.UnboundProcess`` exception if the
219 |       process is not bound to a context.
220 |     """
221 |     self._assert_bound()
222 |     self._context.terminate(self.pid)
223 | 
224 | 
225 | class ProtobufProcess(Process):
226 |   @classmethod
227 |   def install(cls, message_type):
228 |     """A decorator to indicate a remotely callable method on a process using protocol buffers.
229 | 
230 |     .. code-block:: python
231 | 
232 |         from compactor.process import ProtobufProcess
233 |         from messages_pb2 import RequestMessage, ResponseMessage
234 | 
235 |         class PingProcess(ProtobufProcess):
236 |           @ProtobufProcess.install(RequestMessage)
237 |           def ping(self, from_pid, message):
238 |             # do something with message, a RequestMessage
239 |             response = ResponseMessage(...)
240 |             # send a protocol buffer which will get serialized on the wire.
241 |             self.send(from_pid, response)
242 | 
243 |     The installed method should take ``from_pid`` and ``message`` parameters.
244 |     ``from_pid`` is the process calling the method. ``message`` is a protocol
245 |     buffer of the installed type.
246 | 
247 |     :param message_type: Incoming messages of this type will be dispatched to this method.
248 |     :type message_type: A generated protocol buffer stub
249 |     """
250 |     def wrap(fn):
251 |       @functools.wraps(fn)
252 |       def wrapped_fn(self, from_pid, message_str):
253 |         message = message_type()
254 |         message.MergeFromString(message_str)
255 |         return fn(self, from_pid, message)
256 |       return Process.install(message_type.DESCRIPTOR.full_name)(wrapped_fn)
257 |     return wrap
258 | 
259 |   def send(self, to, message):
260 |     """Send a message to another process.
261 | 
262 |     Same as ``Process.send`` except that ``message`` is a protocol buffer.
263 | 
264 |     Returns immediately.
265 | 
266 |     :param to: The pid of the process to which the message is sent.
267 |     :type to: :class:`PID`
268 |     :param message: The message to send.
269 |     :type message: A protocol buffer instance.
270 |     :raises: Will raise a ``Process.UnboundProcess`` exception if the
271 |       process is not bound to a context.
272 |     :return: Nothing
273 |     """
274 |     super(ProtobufProcess, self).send(to, message.DESCRIPTOR.full_name, message.SerializeToString())
275 | 
-------------------------------------------------------------------------------- /compactor/request.py: --------------------------------------------------------------------------------
1 | 
2 | CRLF = b'\r\n'
3 | 
4 | 
5 | def encode_request(from_pid, to_pid, method, body=None, content_type=None, legacy=False):
6 |   """
7 |   Encode a request into a raw HTTP request. This function returns a string
8 |   of bytes that represents a valid HTTP/1.0 request, including any libprocess
9 |   headers required for communication.
10 | 
11 |   Use the `legacy` option (set to True) to use the legacy User-Agent based
12 |   libprocess identification.
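As a rough sketch of the wire format this produces, the encoding can be exercised standalone. ``MiniPID`` below is a hypothetical stand-in for ``compactor.pid.PID`` (only ``__str__`` and ``.id`` are needed), and the pids and payload are made up for illustration:

```python
# Self-contained sketch of the libprocess request encoding.
CRLF = b'\r\n'


class MiniPID(object):
    # Hypothetical stand-in for compactor.pid.PID.
    def __init__(self, ip, port, id_):
        self.ip, self.port, self.id = ip, port, id_

    def __str__(self):
        return '%s@%s:%d' % (self.id, self.ip, self.port)


def encode_request(from_pid, to_pid, method, body=None, content_type=None, legacy=False):
    body = body or b''
    headers = [
        'POST /{process}/{method} HTTP/1.0'.format(process=to_pid.id, method=method),
        'Connection: Keep-Alive',
        'Content-Length: %d' % len(body),
    ]
    if legacy:
        headers.append('User-Agent: libprocess/{pid}'.format(pid=from_pid))
    else:
        headers.append('Libprocess-From: {pid}'.format(pid=from_pid))
    if content_type is not None:
        headers.append('Content-Type: {content_type}'.format(content_type=content_type))
    # Each header line is terminated by CRLF; a blank line separates headers from the body.
    return CRLF.join(h.encode('utf8') for h in headers) + CRLF + CRLF + body


request = encode_request(
    MiniPID('192.168.33.2', 5051, 'master(1)'),
    MiniPID('192.168.33.3', 5051, 'slave(1)'),
    'ping',
    body=b'hello')
print(request.decode('utf8'))
```

With the inputs above, the request line is ``POST /slave(1)/ping HTTP/1.0`` and the sender is identified via the ``Libprocess-From`` header (or ``User-Agent: libprocess/...`` in legacy mode).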
13 | """ 14 | 15 | if body is None: 16 | body = b'' 17 | 18 | if not isinstance(body, (bytes, bytearray)): 19 | raise TypeError('Body must be a sequence of bytes.') 20 | 21 | headers = [ 22 | 'POST /{process}/{method} HTTP/1.0'.format(process=to_pid.id, method=method), 23 | 'Connection: Keep-Alive', 24 | 'Content-Length: %d' % len(body) 25 | ] 26 | 27 | if legacy: 28 | headers.append('User-Agent: libprocess/{pid}'.format(pid=from_pid)) 29 | else: 30 | headers.append('Libprocess-From: {pid}'.format(pid=from_pid)) 31 | 32 | if content_type is not None: 33 | headers.append('Content-Type: {content_type}'.format(content_type=content_type)) 34 | 35 | headers = [header.encode('utf8') for header in headers] 36 | 37 | def iter_fragments(): 38 | for fragment in headers: 39 | yield fragment 40 | yield CRLF 41 | yield CRLF 42 | if body: 43 | yield body 44 | 45 | return b''.join(iter_fragments()) 46 | -------------------------------------------------------------------------------- /compactor/testing.py: -------------------------------------------------------------------------------- 1 | from contextlib import contextmanager 2 | import logging 3 | import unittest 4 | 5 | from .context import Context 6 | 7 | log = logging.getLogger(__name__) 8 | 9 | 10 | class EphemeralContextTestCase(unittest.TestCase): 11 | def setUp(self): 12 | self.context = Context() 13 | log.debug('XXX Starting context') 14 | self.context.start() 15 | 16 | def tearDown(self): 17 | log.debug('XXX Stopping context') 18 | self.context.stop() 19 | 20 | 21 | @contextmanager 22 | def ephemeral_context(**kw): 23 | context = Context(**kw) 24 | context.start() 25 | yield context 26 | context.stop() 27 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 
5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 21 | 22 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext 23 | 24 | help: 25 | @echo "Please use \`make ' where is one of" 26 | @echo " html to make standalone HTML files" 27 | @echo " dirhtml to make HTML files named index.html in directories" 28 | @echo " singlehtml to make a single large HTML file" 29 | @echo " pickle to make pickle files" 30 | @echo " json to make JSON files" 31 | @echo " htmlhelp to make HTML files and a HTML help project" 32 | @echo " qthelp to make HTML files and a qthelp project" 33 | @echo " applehelp to make an Apple Help Book" 34 | @echo " devhelp to make HTML files and a Devhelp project" 35 | @echo " epub to make an epub" 36 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 37 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 38 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 39 | @echo " text to make text files" 40 | @echo " man to make manual pages" 41 | @echo " 
texinfo to make Texinfo files" 42 | @echo " info to make Texinfo files and run them through makeinfo" 43 | @echo " gettext to make PO message catalogs" 44 | @echo " changes to make an overview of all changed/added/deprecated items" 45 | @echo " xml to make Docutils-native XML files" 46 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 47 | @echo " linkcheck to check all external links for integrity" 48 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 49 | @echo " coverage to run coverage check of the documentation (if enabled)" 50 | 51 | clean: 52 | rm -rf $(BUILDDIR)/* 53 | 54 | html: 55 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 56 | @echo 57 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 58 | 59 | dirhtml: 60 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 61 | @echo 62 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 63 | 64 | singlehtml: 65 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 66 | @echo 67 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 68 | 69 | pickle: 70 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 71 | @echo 72 | @echo "Build finished; now you can process the pickle files." 73 | 74 | json: 75 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 76 | @echo 77 | @echo "Build finished; now you can process the JSON files." 78 | 79 | htmlhelp: 80 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 81 | @echo 82 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 83 | ".hhp project file in $(BUILDDIR)/htmlhelp." 
84 | 85 | qthelp: 86 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 87 | @echo 88 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 89 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 90 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/compactor.qhcp" 91 | @echo "To view the help file:" 92 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/compactor.qhc" 93 | 94 | applehelp: 95 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp 96 | @echo 97 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." 98 | @echo "N.B. You won't be able to view it unless you put it in" \ 99 | "~/Library/Documentation/Help or install it in your application" \ 100 | "bundle." 101 | 102 | devhelp: 103 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 104 | @echo 105 | @echo "Build finished." 106 | @echo "To view the help file:" 107 | @echo "# mkdir -p $$HOME/.local/share/devhelp/compactor" 108 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/compactor" 109 | @echo "# devhelp" 110 | 111 | epub: 112 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 113 | @echo 114 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 115 | 116 | latex: 117 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 118 | @echo 119 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 120 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 121 | "(use \`make latexpdf' here to do that automatically)." 122 | 123 | latexpdf: 124 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 125 | @echo "Running LaTeX files through pdflatex..." 126 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 127 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 128 | 129 | latexpdfja: 130 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 131 | @echo "Running LaTeX files through platex and dvipdfmx..." 
132 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 133 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 134 | 135 | text: 136 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 137 | @echo 138 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 139 | 140 | man: 141 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 142 | @echo 143 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 144 | 145 | texinfo: 146 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 147 | @echo 148 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 149 | @echo "Run \`make' in that directory to run these through makeinfo" \ 150 | "(use \`make info' here to do that automatically)." 151 | 152 | info: 153 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 154 | @echo "Running Texinfo files through makeinfo..." 155 | make -C $(BUILDDIR)/texinfo info 156 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 157 | 158 | gettext: 159 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 160 | @echo 161 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 162 | 163 | changes: 164 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 165 | @echo 166 | @echo "The overview file is in $(BUILDDIR)/changes." 167 | 168 | linkcheck: 169 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 170 | @echo 171 | @echo "Link check complete; look for any errors in the above output " \ 172 | "or in $(BUILDDIR)/linkcheck/output.txt." 173 | 174 | doctest: 175 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 176 | @echo "Testing of doctests in the sources finished, look at the " \ 177 | "results in $(BUILDDIR)/doctest/output.txt." 
178 | 179 | coverage: 180 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage 181 | @echo "Testing of coverage in the sources finished, look at the " \ 182 | "results in $(BUILDDIR)/coverage/python.txt." 183 | 184 | xml: 185 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 186 | @echo 187 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 188 | 189 | pseudoxml: 190 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 191 | @echo 192 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 193 | -------------------------------------------------------------------------------- /docs/api.rst: -------------------------------------------------------------------------------- 1 | compactor 2 | ========= 3 | 4 | Global methods 5 | -------------- 6 | 7 | Some methods are proxied to the global singleton Context in order to make 8 | simple programs simpler to write. These methods do the Right Thing™ for 9 | most use-cases. 10 | 11 | .. automodule:: compactor 12 | :members: 13 | :show-inheritance: 14 | 15 | PIDs 16 | ---- 17 | 18 | .. code-block:: python 19 | 20 | from compactor.pid import PID 21 | pid = PID.from_string('slave(1)@192.168.33.2:5051') 22 | 23 | .. autoclass:: compactor.pid.PID 24 | :members: 25 | 26 | .. automethod:: compactor.pid.PID.__init__ 27 | 28 | 29 | Processes 30 | --------- 31 | 32 | .. code-block:: python 33 | 34 | from compactor.process import Process 35 | 36 | class PingProcess(Process): 37 | def initialize(self): 38 | super(PingProcess, self).initialize() 39 | self.pinged = threading.Event() 40 | 41 | @Process.install('ping') 42 | def ping(self, from_pid, body): 43 | self.pinged.set() 44 | 45 | .. autoclass:: compactor.process.Process 46 | :members: 47 | 48 | .. automethod:: compactor.process.Process.__init__ 49 | 50 | .. autoclass:: compactor.process.ProtobufProcess 51 | :members: 52 | :show-inheritance: 53 | 54 | Contexts 55 | -------- 56 | .. 
code-block:: python 57 | 58 | from compactor.context import Context 59 | 60 | context = Context(ip='127.0.0.1', port=8081) 61 | context.start() 62 | 63 | ping_process = PingProcess('ping') 64 | ping_pid = context.spawn(ping_process) 65 | 66 | context.join() 67 | 68 | .. autoclass:: compactor.context.Context 69 | :members: 70 | 71 | .. automethod:: compactor.context.Context.__init__ -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # compactor documentation build configuration file, created by 4 | # sphinx-quickstart on Tue Mar 24 15:33:57 2015. 5 | # 6 | # This file is execfile()d with the current directory set to its 7 | # containing dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 14 | 15 | import sys 16 | import os 17 | import shlex 18 | 19 | # If extensions (or modules to document with autodoc) are in another directory, 20 | # add these directories to sys.path here. If the directory is relative to the 21 | # documentation root, use os.path.abspath to make it absolute, like shown here. 22 | sys.path.insert(0, os.path.abspath('..')) 23 | 24 | # -- General configuration ------------------------------------------------ 25 | 26 | # If your documentation needs a minimal Sphinx version, state it here. 27 | #needs_sphinx = '1.0' 28 | 29 | # Add any Sphinx extension module names here, as strings. They can be 30 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 31 | # ones. 32 | extensions = [ 33 | 'sphinx.ext.autodoc', 34 | 'sphinx.ext.viewcode', 35 | ] 36 | 37 | # Add any paths that contain templates here, relative to this directory. 
38 | templates_path = ['_templates'] 39 | 40 | # The suffix(es) of source filenames. 41 | # You can specify multiple suffix as a list of string: 42 | # source_suffix = ['.rst', '.md'] 43 | source_suffix = '.rst' 44 | 45 | # The encoding of source files. 46 | #source_encoding = 'utf-8-sig' 47 | 48 | # The master toctree document. 49 | master_doc = 'index' 50 | 51 | # General information about the project. 52 | project = u'compactor' 53 | copyright = u'2015, Brian Wickman' 54 | author = u'Brian Wickman' 55 | 56 | # The version info for the project you're documenting, acts as replacement for 57 | # |version| and |release|, also used in various other places throughout the 58 | # built documents. 59 | # 60 | # The short X.Y version. 61 | version = '0.2.2' 62 | # The full version, including alpha/beta/rc tags. 63 | release = '0.2.2' 64 | 65 | # The language for content autogenerated by Sphinx. Refer to documentation 66 | # for a list of supported languages. 67 | # 68 | # This is also used if you do content translation via gettext catalogs. 69 | # Usually you set "language" from the command line for these cases. 70 | language = None 71 | 72 | # There are two options for replacing |today|: either, you set today to some 73 | # non-false value, then it is used: 74 | #today = '' 75 | # Else, today_fmt is used as the format for a strftime call. 76 | #today_fmt = '%B %d, %Y' 77 | 78 | # List of patterns, relative to source directory, that match files and 79 | # directories to ignore when looking for source files. 80 | exclude_patterns = ['_build'] 81 | 82 | # The reST default role (used for this markup: `text`) to use for all 83 | # documents. 84 | #default_role = None 85 | 86 | # If true, '()' will be appended to :func: etc. cross-reference text. 87 | #add_function_parentheses = True 88 | 89 | # If true, the current module name will be prepended to all description 90 | # unit titles (such as .. function::). 
91 | #add_module_names = True 92 | 93 | # If true, sectionauthor and moduleauthor directives will be shown in the 94 | # output. They are ignored by default. 95 | #show_authors = False 96 | 97 | # The name of the Pygments (syntax highlighting) style to use. 98 | pygments_style = 'sphinx' 99 | 100 | # A list of ignored prefixes for module index sorting. 101 | #modindex_common_prefix = [] 102 | 103 | # If true, keep warnings as "system message" paragraphs in the built documents. 104 | #keep_warnings = False 105 | 106 | # If true, `todo` and `todoList` produce output, else they produce nothing. 107 | todo_include_todos = False 108 | 109 | 110 | # -- Options for HTML output ---------------------------------------------- 111 | 112 | try: 113 | import sphinx_rtd_theme 114 | html_theme = 'sphinx_rtd_theme' 115 | html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] 116 | except ImportError: 117 | html_theme = 'default' 118 | sys.stderr.write('Failed to import sphinx_rtd_theme!') 119 | 120 | 121 | 122 | # Theme options are theme-specific and customize the look and feel of a theme 123 | # further. For a list of options available for each theme, see the 124 | # documentation. 125 | #html_theme_options = {} 126 | 127 | # Add any paths that contain custom themes here, relative to this directory. 128 | #html_theme_path = [] 129 | 130 | # The name for this set of Sphinx documents. If None, it defaults to 131 | # " v documentation". 132 | #html_title = None 133 | 134 | # A shorter title for the navigation bar. Default is the same as html_title. 135 | #html_short_title = None 136 | 137 | # The name of an image file (relative to this directory) to place at the top 138 | # of the sidebar. 139 | #html_logo = None 140 | 141 | # The name of an image file (within the static path) to use as favicon of the 142 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 143 | # pixels large. 
144 | #html_favicon = None 145 | 146 | # Add any paths that contain custom static files (such as style sheets) here, 147 | # relative to this directory. They are copied after the builtin static files, 148 | # so a file named "default.css" will overwrite the builtin "default.css". 149 | html_static_path = ['_static'] 150 | 151 | # Add any extra paths that contain custom files (such as robots.txt or 152 | # .htaccess) here, relative to this directory. These files are copied 153 | # directly to the root of the documentation. 154 | #html_extra_path = [] 155 | 156 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 157 | # using the given strftime format. 158 | #html_last_updated_fmt = '%b %d, %Y' 159 | 160 | # If true, SmartyPants will be used to convert quotes and dashes to 161 | # typographically correct entities. 162 | #html_use_smartypants = True 163 | 164 | # Custom sidebar templates, maps document names to template names. 165 | #html_sidebars = {} 166 | 167 | # Additional templates that should be rendered to pages, maps page names to 168 | # template names. 169 | #html_additional_pages = {} 170 | 171 | # If false, no module index is generated. 172 | #html_domain_indices = True 173 | 174 | # If false, no index is generated. 175 | #html_use_index = True 176 | 177 | # If true, the index is split into individual pages for each letter. 178 | #html_split_index = False 179 | 180 | # If true, links to the reST sources are added to the pages. 181 | #html_show_sourcelink = True 182 | 183 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 184 | #html_show_sphinx = True 185 | 186 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 187 | #html_show_copyright = True 188 | 189 | # If true, an OpenSearch description file will be output, and all pages will 190 | # contain a tag referring to it. The value of this option must be the 191 | # base URL from which the finished HTML is served. 
192 | #html_use_opensearch = '' 193 | 194 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 195 | #html_file_suffix = None 196 | 197 | # Language to be used for generating the HTML full-text search index. 198 | # Sphinx supports the following languages: 199 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' 200 | # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr' 201 | #html_search_language = 'en' 202 | 203 | # A dictionary with options for the search language support, empty by default. 204 | # Now only 'ja' uses this config value 205 | #html_search_options = {'type': 'default'} 206 | 207 | # The name of a javascript file (relative to the configuration directory) that 208 | # implements a search results scorer. If empty, the default will be used. 209 | #html_search_scorer = 'scorer.js' 210 | 211 | # Output file base name for HTML help builder. 212 | htmlhelp_basename = 'compactordoc' 213 | 214 | # -- Options for LaTeX output --------------------------------------------- 215 | 216 | latex_elements = { 217 | # The paper size ('letterpaper' or 'a4paper'). 218 | #'papersize': 'letterpaper', 219 | 220 | # The font size ('10pt', '11pt' or '12pt'). 221 | #'pointsize': '10pt', 222 | 223 | # Additional stuff for the LaTeX preamble. 224 | #'preamble': '', 225 | 226 | # Latex figure (float) alignment 227 | #'figure_align': 'htbp', 228 | } 229 | 230 | # Grouping the document tree into LaTeX files. List of tuples 231 | # (source start file, target name, title, 232 | # author, documentclass [howto, manual, or own class]). 233 | latex_documents = [ 234 | (master_doc, 'compactor.tex', u'compactor Documentation', 235 | u'Brian Wickman', 'manual'), 236 | ] 237 | 238 | # The name of an image file (relative to this directory) to place at the top of 239 | # the title page. 240 | #latex_logo = None 241 | 242 | # For "manual" documents, if this is true, then toplevel headings are parts, 243 | # not chapters. 
244 | #latex_use_parts = False 245 | 246 | # If true, show page references after internal links. 247 | #latex_show_pagerefs = False 248 | 249 | # If true, show URL addresses after external links. 250 | #latex_show_urls = False 251 | 252 | # Documents to append as an appendix to all manuals. 253 | #latex_appendices = [] 254 | 255 | # If false, no module index is generated. 256 | #latex_domain_indices = True 257 | 258 | 259 | # -- Options for manual page output --------------------------------------- 260 | 261 | # One entry per manual page. List of tuples 262 | # (source start file, name, description, authors, manual section). 263 | man_pages = [ 264 | (master_doc, 'compactor', u'compactor Documentation', 265 | [author], 1) 266 | ] 267 | 268 | # If true, show URL addresses after external links. 269 | #man_show_urls = False 270 | 271 | 272 | # -- Options for Texinfo output ------------------------------------------- 273 | 274 | # Grouping the document tree into Texinfo files. List of tuples 275 | # (source start file, target name, title, author, 276 | # dir menu entry, description, category) 277 | texinfo_documents = [ 278 | (master_doc, 'compactor', u'compactor Documentation', 279 | author, 'compactor', 'One line description of project.', 280 | 'Miscellaneous'), 281 | ] 282 | 283 | # Documents to append as an appendix to all manuals. 284 | #texinfo_appendices = [] 285 | 286 | # If false, no module index is generated. 287 | #texinfo_domain_indices = True 288 | 289 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 290 | #texinfo_show_urls = 'footnote' 291 | 292 | # If true, do not generate a @detailmenu in the "Top" node's menu. 293 | #texinfo_no_detailmenu = False 294 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. compactor documentation master file, created by 2 | sphinx-quickstart on Tue Mar 24 15:27:46 2015. 
3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | compactor 7 | ========= 8 | .. image:: https://travis-ci.org/wickman/compactor.svg?branch=master 9 | :target: https://travis-ci.org/wickman/compactor 10 | 11 | compactor is a pure python implementation of libprocess, the actor library 12 | underpinning `mesos `_. 13 | 14 | .. toctree:: 15 | :maxdepth: 2 16 | 17 | api 18 | 19 | getting started 20 | =============== 21 | 22 | implementing a process is a matter of subclassing ``compactor.Process``. 23 | you can "install" methods on processes using the ``install`` decorator. 24 | this makes them remotely callable. 25 | 26 | .. code-block:: python 27 | 28 | import threading 29 | 30 | from compactor import install, spawn, Process 31 | 32 | class PingProcess(Process): 33 | def initialize(self): 34 | self.pinged = threading.Event() 35 | 36 | @install('ping') 37 | def ping(self, from_pid, body): 38 | self.pinged.set() 39 | 40 | # construct the process 41 | ping_process = PingProcess('ping_process') 42 | 43 | # spawn the process, binding it to the current global context 44 | spawn(ping_process) 45 | 46 | # send a message to the process 47 | client = Process('client') 48 | spawn(client) 49 | client.send(ping_process.pid, 'ping') 50 | 51 | # ensure the message was delivered 52 | ping_process.pinged.wait() 53 | 54 | each context is, in essence, a listening (ip, port) pair. 55 | 56 | by default there is a global, singleton context. use ``compactor.spawn`` to 57 | spawn a process on it. by default it will bind to ``0.0.0.0`` on an 58 | arbitrary port. this can be overridden using the ``LIBPROCESS_IP`` and 59 | ``LIBPROCESS_PORT`` environment variables. 60 | 61 | alternatively, you can create an instance of a ``compactor.Context``, 62 | explicitly passing it ``port=`` and ``ip=`` keywords. you can then call the 63 | ``spawn`` method on it to bind processes.
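for illustration, the environment-variable defaulting described above can be sketched in a few lines. note that ``resolve_bind_address`` is a hypothetical helper, not part of compactor's API; it only makes the defaulting rules concrete.

```python
import os

# hypothetical sketch (not compactor's API) of how a context might pick
# its bind address: LIBPROCESS_IP / LIBPROCESS_PORT override the
# bind-all defaults.
def resolve_bind_address(default_ip='0.0.0.0', default_port=0):
    ip = os.environ.get('LIBPROCESS_IP', default_ip)
    port = int(os.environ.get('LIBPROCESS_PORT', default_port))
    return ip, port
```

a port of ``0`` stands in for "an arbitrary port": when the listening socket is bound, the operating system assigns a free one.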
64 | 65 | spawning a process does two things: it binds the process to the context, 66 | creating a pid, and initializes the process. the pid is a unique identifier 67 | used for routing purposes. in practice, it consists of an (ip, port, name) 68 | tuple, where the ip and port are those of the context, and the name is the 69 | name of the process. 70 | 71 | when a process is spawned, its ``initialize`` method is called. this can be 72 | used to initialize state or initiate connections to other services, as 73 | illustrated in the following example. 74 | 75 | leader/follower pattern 76 | ======================= 77 | 78 | .. code-block:: python 79 | 80 | import threading 81 | import uuid 82 | from compactor import install 83 | from compactor.process import Process 84 | 85 | class Leader(Process): 86 | def __init__(self): 87 | super(Leader, self).__init__('leader') 88 | self.followers = set() 89 | 90 | @install('register') 91 | def register(self, from_pid, uuid): 92 | self.send(from_pid, 'registered', uuid) 93 | 94 | class Follower(Process): 95 | def __init__(self, name, leader_pid): 96 | super(Follower, self).__init__(name) 97 | self.leader_pid = leader_pid 98 | self.uuid = uuid.uuid4().bytes 99 | self.registered = threading.Event() 100 | 101 | def initialize(self): 102 | super(Follower, self).initialize() 103 | self.send(self.leader_pid, 'register', self.uuid) 104 | 105 | def exited(self, from_pid): 106 | self.registered.clear() 107 | 108 | @install('registered') 109 | def registered(self, from_pid, uuid): 110 | if uuid == self.uuid: 111 | self.link(from_pid) 112 | self.registered.set() 113 | 114 | with this, you can create two separate contexts: 115 | 116 | .. 
code-block:: python 117 | 118 | from compactor import Context 119 | 120 | leader_context = Context(port=5051) 121 | leader = Leader() 122 | leader_context.spawn(leader) 123 | 124 | # at this point, leader.pid is a unique identifier for this leader process 125 | # and can be disseminated via service discovery or passed explicitly to other services, 126 | # e.g. 'leader@192.168.33.2:5051'. the follower can be spawned in the same process, 127 | # in a separate process, or on a separate machine. 128 | 129 | follower_context = Context() 130 | follower = Follower('follower1', leader.pid) 131 | follower_context.spawn(follower) 132 | 133 | follower.registered.wait() 134 | 135 | this effectively initiates a handshake between the leader and follower processes, a common 136 | pattern when building distributed systems using the actor model. 137 | 138 | the ``link`` method links the two processes together. should the connection be severed, 139 | the ``exited`` method on the process will be called. 140 | 141 | protocol buffer processes 142 | ========================= 143 | 144 | mesos uses protocol buffers over the wire to support RPC. compactor supports this natively. 145 | simply subclass ``ProtobufProcess`` instead and use ``ProtobufProcess.install``: 146 | 147 | .. code-block:: python 148 | 149 | from compactor.process import ProtobufProcess 150 | from service_pb2 import ServiceRequestMessage, ServiceResponseMessage 151 | 152 | class Service(ProtobufProcess): 153 | @ProtobufProcess.install(ServiceRequestMessage) 154 | def request(self, from_pid, message): 155 | # message is a deserialized protobuf ServiceRequestMessage 156 | response = ServiceResponseMessage(...) 157 | # self.send automatically serializes the response, a protocol buffer, over the wire.
158 | self.send(from_pid, response) 159 | -------------------------------------------------------------------------------- /docs/rtd/requirements.txt: -------------------------------------------------------------------------------- 1 | sphinx 2 | sphinx_rtd_theme 3 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [wheel] 2 | universal = 1 3 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | import os 2 | from setuptools import setup 3 | 4 | __version__ = '0.3.0' 5 | 6 | 7 | with open(os.path.join(os.path.dirname(__file__), 'CHANGES.rst')) as fp: 8 | LONG_DESCRIPTION = fp.read() 9 | 10 | 11 | setup( 12 | name='compactor', 13 | version=__version__, 14 | description='Pure python implementation of libprocess actors', 15 | long_description=LONG_DESCRIPTION, 16 | url='http://github.com/wickman/compactor', 17 | author='Brian Wickman', 18 | author_email='wickman@gmail.com', 19 | license='Apache License 2.0', 20 | packages=['compactor'], 21 | install_requires=[ 22 | 'trollius', 23 | 'tornado==4.1', 24 | 'twitter.common.lang', 25 | ], 26 | extras_require={ 27 | 'pb': ['protobuf>=2.6.1,<2.7'], 28 | }, 29 | zip_safe=True, 30 | ) 31 | -------------------------------------------------------------------------------- /tests/test_httpd.py: -------------------------------------------------------------------------------- 1 | import logging 2 | import threading 3 | 4 | from compactor.context import Context 5 | from compactor.process import Process 6 | from compactor.testing import ephemeral_context, EphemeralContextTestCase 7 | 8 | import requests 9 | import pytest 10 | from tornado import gen 11 | 12 | logging.basicConfig(level=logging.DEBUG) 13 | log = logging.getLogger(__name__) 14 | 15 | 16 | class PingPongProcess(Process): 17 
| def __init__(self, **kw): 18 | self.ping_event = threading.Event() 19 | super(PingPongProcess, self).__init__('pingpong', **kw) 20 | 21 | @Process.route('/ping') 22 | def ping(self, handler): 23 | self.ping_event.set() 24 | handler.write('pong') 25 | 26 | 27 | class Web(Process): 28 | def __init__(self, name, **kw): 29 | super(Web, self).__init__(name, **kw) 30 | 31 | def write_pong(self, handler, callback=None): 32 | handler.write('pong') 33 | if callback: 34 | callback() 35 | 36 | @Process.route('/ping') 37 | def ping(self, handler): 38 | yield gen.Task(self.write_pong, handler) 39 | 40 | 41 | class ScatterProcess(Process): 42 | def __init__(self, name): 43 | self.acks = [] 44 | self.condition = threading.Condition() 45 | super(ScatterProcess, self).__init__(name) 46 | 47 | @Process.install('ack') 48 | def ack(self, from_pid, body): 49 | with self.condition: 50 | self.acks.append(body) 51 | self.condition.notify_all() 52 | 53 | 54 | class GatherProcess(Process): 55 | def __init__(self): 56 | self.messages = [] 57 | super(GatherProcess, self).__init__('gather') 58 | 59 | @Process.install('syn') 60 | def syn(self, from_pid, body): 61 | self.messages.append(body) 62 | self.send(from_pid, 'ack', body) 63 | 64 | 65 | class ScatterThread(threading.Thread): 66 | SUFFIX_ID = 1 67 | SUFFIX_LOCK = threading.Lock() 68 | 69 | def __init__(self, to_pid, iterations, context): 70 | self.success = False 71 | self.context = context 72 | self.to_pid = to_pid 73 | self.iterations = iterations 74 | super(ScatterThread, self).__init__() 75 | 76 | def run(self): 77 | with ScatterThread.SUFFIX_LOCK: 78 | suffix = ScatterThread.SUFFIX_ID 79 | ScatterThread.SUFFIX_ID += 1 80 | scatter = ScatterProcess('scatter(%d)' % suffix) 81 | self.context.spawn(scatter) 82 | 83 | expected_acks = set(('syn%d' % k).encode('utf8') for k in range(self.iterations)) 84 | 85 | for k in range(self.iterations): 86 | scatter.send(self.to_pid, 'syn', ('syn%d' % k).encode('utf8')) 87 | 88 | while True: 89 | 
with scatter.condition: 90 | if set(scatter.acks) == expected_acks: 91 | break 92 | 93 | self.success = True 94 | 95 | 96 | def startjoin(context, scatters): 97 | gather = GatherProcess() 98 | context.spawn(gather) 99 | 100 | for scatter in scatters: 101 | scatter.start() 102 | 103 | for scatter in scatters: 104 | scatter.join() 105 | 106 | for scatter in scatters: 107 | assert scatter.success 108 | 109 | 110 | def test_multi_thread_multi_scatter(): 111 | with ephemeral_context() as context: 112 | gather = GatherProcess() 113 | context.spawn(gather) 114 | scatters = [ScatterThread(gather.pid, 3, Context()) for k in range(5)] 115 | for scatter in scatters: 116 | scatter.context.start() 117 | try: 118 | startjoin(context, scatters) 119 | finally: 120 | for scatter in scatters: 121 | scatter.context.stop() 122 | 123 | 124 | def test_single_thread_multi_scatter(): 125 | with ephemeral_context() as context: 126 | gather = GatherProcess() 127 | context.spawn(gather) 128 | scatters = [ScatterThread(gather.pid, 3, context) for k in range(5)] 129 | startjoin(context, scatters) 130 | 131 | 132 | class ChildProcess(Process): 133 | def __init__(self): 134 | self.exit_event = threading.Event() 135 | self.link_event = threading.Event() 136 | self.parent_pid = None 137 | super(ChildProcess, self).__init__('child') 138 | 139 | @Process.install('link_me') 140 | def link_me(self, from_pid, body): 141 | log.info('Got link request') 142 | self.parent_pid = from_pid 143 | log.info('Sending link') 144 | self.link(from_pid) 145 | log.info('Sent link') 146 | self.link_event.set() 147 | log.info('Set link event') 148 | 149 | def exited(self, pid): 150 | log.info('ChildProcess got exited event for %s' % pid) 151 | if pid == self.parent_pid: 152 | self.exit_event.set() 153 | 154 | 155 | class ParentProcess(Process): 156 | def __init__(self): 157 | super(ParentProcess, self).__init__('parent') 158 | 159 | def exited(self, pid): 160 | log.info('ParentProcess got exited event for %s' % pid) 161 
| 162 | 163 | class TestHttpd(EphemeralContextTestCase): 164 | def test_simple_routed_process(self): 165 | ping = PingPongProcess() 166 | pid = self.context.spawn(ping) 167 | 168 | url = 'http://%s:%s/pingpong/ping' % (pid.ip, pid.port) 169 | content = requests.get(url).text 170 | assert content == 'pong' 171 | assert ping.ping_event.is_set() 172 | 173 | def test_mount_unmount(self): 174 | # no mounts 175 | url = 'http://%s:%s' % (self.context.ip, self.context.port) 176 | response = requests.get(url) 177 | assert response.status_code == 404 178 | 179 | # unmounted process 180 | url = 'http://%s:%s/pingpong/ping' % (self.context.ip, self.context.port) 181 | response = requests.get(url) 182 | assert response.status_code == 404 183 | 184 | # mount 185 | ping = PingPongProcess() 186 | pid = self.context.spawn(ping) 187 | 188 | response = requests.get(url) 189 | assert response.status_code == 200 190 | assert response.text == 'pong' 191 | 192 | # unmount 193 | self.context.terminate(pid) 194 | response = requests.get(url) 195 | assert response.status_code == 404 196 | 197 | def test_async_route(self): 198 | web = Web('web') 199 | pid = self.context.spawn(web) 200 | 201 | url = 'http://%s:%s/web/ping' % (pid.ip, pid.port) 202 | content = requests.get(url).text 203 | assert content == 'pong' 204 | 205 | def test_simple_message(self): 206 | class PingPongProcess(Process): 207 | def __init__(self, name, **kw): 208 | self.ping_event = threading.Event() 209 | self.ping_body = None 210 | self.pong_event = threading.Event() 211 | self.pong_body = None 212 | super(PingPongProcess, self).__init__(name, **kw) 213 | 214 | @Process.install('ping') 215 | def ping(self, from_pid, body): 216 | self.ping_body = body 217 | self.ping_event.set() 218 | log.info('%s got ping' % self.pid) 219 | self.send(from_pid, 'pong', body=body) 220 | 221 | @Process.install('pong') 222 | def pong(self, from_pid, body): 223 | log.info('%s got pong' % self.pid) 224 | self.pong_body = body 225 | 
self.pong_event.set() 226 | 227 | proc1 = PingPongProcess('proc1') 228 | proc2 = PingPongProcess('proc2') 229 | self.context.spawn(proc1) 230 | pid2 = self.context.spawn(proc2) 231 | 232 | # ping with body 233 | proc1.send(pid2, 'ping', b'with_body') 234 | proc1.pong_event.wait(timeout=1) 235 | assert proc1.pong_event.is_set() 236 | assert proc2.ping_event.is_set() 237 | assert proc1.pong_body == b'with_body' 238 | assert proc2.ping_body == b'with_body' 239 | 240 | proc1.pong_event.clear() 241 | proc2.ping_event.clear() 242 | 243 | # ping without body 244 | proc1.send(pid2, 'ping') 245 | proc1.pong_event.wait(timeout=1) 246 | assert proc1.pong_event.is_set() 247 | assert proc2.ping_event.is_set() 248 | assert proc1.pong_body == b'' 249 | assert proc2.ping_body == b'' 250 | 251 | # Not sure why this doesn't work. 252 | @pytest.mark.xfail 253 | def test_link_exit_remote(self): 254 | parent_context = Context() 255 | parent_context.start() 256 | parent = ParentProcess() 257 | parent_context.spawn(parent) 258 | 259 | child = ChildProcess() 260 | self.context.spawn(child) 261 | 262 | parent.send(child.pid, 'link_me') 263 | 264 | child.link_event.wait(timeout=1.0) 265 | assert child.link_event.is_set() 266 | assert not child.exit_event.is_set() 267 | 268 | parent_context.terminate(parent.pid) 269 | parent_context.stop() 270 | 271 | child.send(parent.pid, 'this_will_break') 272 | child.exit_event.wait(timeout=1) 273 | assert child.exit_event.is_set() 274 | 275 | def test_link_exit_local(self): 276 | parent = ParentProcess() 277 | self.context.spawn(parent) 278 | child = ChildProcess() 279 | self.context.spawn(child) 280 | 281 | parent.send(child.pid, 'link_me') 282 | child.link_event.wait(timeout=1.0) 283 | assert child.link_event.is_set() 284 | assert not child.exit_event.is_set() 285 | 286 | log.info('*** Terminating parent.pid') 287 | self.context.terminate(parent.pid) 288 | child.exit_event.wait(timeout=1) 289 | assert child.exit_event.is_set() 290 | 
-------------------------------------------------------------------------------- /tests/test_process.py: -------------------------------------------------------------------------------- 1 | import uuid 2 | import threading 3 | 4 | from compactor.context import Context 5 | from compactor.process import Process 6 | 7 | 8 | import logging 9 | logging.basicConfig(level=logging.DEBUG) 10 | log = logging.getLogger(__name__) 11 | 12 | 13 | def test_simple_process(): 14 | parameter = [] 15 | event = threading.Event() 16 | 17 | class DerpProcess(Process): 18 | def __init__(self, **kw): 19 | super(DerpProcess, self).__init__('derp', **kw) 20 | 21 | @Process.install('ping') 22 | def ping(self, value): 23 | parameter.append(value) 24 | event.set() 25 | 26 | context = Context() 27 | context.start() 28 | 29 | derp = DerpProcess() 30 | pid = context.spawn(derp) 31 | context.dispatch(pid, 'ping', 42) 32 | 33 | event.wait(timeout=1.0) 34 | assert event.is_set() 35 | assert parameter == [42] 36 | 37 | context.stop() 38 | 39 | 40 | MAX_TIMEOUT = 10 41 | 42 | 43 | def test_link_race_condition(): 44 | context1 = Context() 45 | context1.start() 46 | 47 | context2 = Context() 48 | context2.start() 49 | 50 | class Leader(Process): 51 | def __init__(self): 52 | super(Leader, self).__init__('leader') 53 | self.uuid = None 54 | 55 | @Process.install('register') 56 | def register(self, from_pid, uuid): 57 | log.debug('Leader::register(%s, %s)' % (from_pid, uuid)) 58 | self.send(from_pid, 'registered', uuid) 59 | 60 | class Follower(Process): 61 | def __init__(self, leader): 62 | super(Follower, self).__init__('follower') 63 | self.leader = leader 64 | self.uuid = uuid.uuid4().bytes 65 | self.registered = threading.Event() 66 | 67 | def initialize(self): 68 | super(Follower, self).initialize() 69 | self.link(self.leader.pid) 70 | self.send(self.leader.pid, 'register', self.uuid) 71 | 72 | @Process.install('registered') 73 | def registered(self, from_pid, uuid): 74 | 
log.debug('Follower::registered(%s, %s)' % (from_pid, uuid)) 75 | assert uuid == self.uuid 76 | assert from_pid == self.leader.pid 77 | self.registered.set() 78 | 79 | leader = Leader() 80 | context1.spawn(leader) 81 | 82 | follower = Follower(leader) 83 | context2.spawn(follower) 84 | 85 | follower.registered.wait(timeout=MAX_TIMEOUT) 86 | assert follower.registered.is_set() 87 | 88 | context1.stop() 89 | context2.stop() 90 | -------------------------------------------------------------------------------- /tests/test_protobuf_process.py: -------------------------------------------------------------------------------- 1 | import threading 2 | 3 | from compactor.context import Context 4 | from compactor.process import ProtobufProcess 5 | 6 | import pytest 7 | 8 | try: 9 | from google.protobuf import descriptor_pb2 10 | HAS_PROTOBUF = True 11 | except ImportError: 12 | HAS_PROTOBUF = False 13 | 14 | import logging 15 | logging.basicConfig() 16 | 17 | 18 | # Send from one to another, swap out contexts to test local vs remote dispatch. 
19 | def ping_pong(context1, context2): 20 | ping_calls = [] 21 | event = threading.Event() 22 | 23 | class Pinger(ProtobufProcess): 24 | @ProtobufProcess.install(descriptor_pb2.DescriptorProto) 25 | def ping(self, from_pid, message): 26 | ping_calls.append((from_pid, message)) 27 | event.set() 28 | 29 | class Ponger(ProtobufProcess): 30 | pass 31 | 32 | pinger = Pinger('pinger') 33 | ponger = Ponger('ponger') 34 | 35 | ping_pid = context1.spawn(pinger) 36 | pong_pid = context2.spawn(ponger) 37 | 38 | send_msg = descriptor_pb2.DescriptorProto() 39 | send_msg.name = 'ping' 40 | 41 | ponger.send(ping_pid, send_msg) 42 | 43 | event.wait(timeout=1) 44 | assert event.is_set() 45 | assert len(ping_calls) == 1 46 | from_pid, message = ping_calls[0] 47 | assert from_pid == pong_pid 48 | assert message == send_msg 49 | 50 | 51 | @pytest.mark.skipif('not HAS_PROTOBUF') 52 | def test_protobuf_process_remote_dispatch(): 53 | context1 = Context() 54 | context1.start() 55 | 56 | context2 = Context() 57 | context2.start() 58 | 59 | try: 60 | ping_pong(context1, context2) 61 | finally: 62 | context1.stop() 63 | context2.stop() 64 | 65 | 66 | @pytest.mark.skipif('not HAS_PROTOBUF') 67 | def test_protobuf_process_local_dispatch(): 68 | context = Context() 69 | context.start() 70 | 71 | try: 72 | ping_pong(context, context) 73 | finally: 74 | context.stop() 75 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | skip_missing_interpreters = True 3 | minversion = 1.8 4 | envlist = 5 | py27,py27-pb,py34,pypy 6 | 7 | [testenv] 8 | commands = py.test tests {posargs:} 9 | setenv = 10 | LIBPROCESS_IP = 127.0.0.1 11 | deps = 12 | pytest 13 | requests 14 | py27: mock 15 | pypy: mock 16 | coverage: coverage 17 | pb: protobuf>=2.6.1,<2.7 18 | 19 | [testenv:py27] 20 | [testenv:py27-pb] 21 | [testenv:pypy] 22 | 23 | [testenv:py27-integration] 24 | commands = 25 | 
vagrant up 26 | py.test vagrant 27 | 28 | [testenv:py27-coverage] 29 | commands = 30 | coverage run --source compactor -m pytest -- tests 31 | coverage report 32 | coverage html 33 | 34 | [testenv:py34] 35 | [testenv:py34-pb] 36 | 37 | [testenv:style] 38 | basepython = python2.7 39 | deps = 40 | twitter.checkstyle 41 | commands = 42 | twitterstyle -n ImportOrder compactor tests 43 | -------------------------------------------------------------------------------- /vagrant/provision.sh: -------------------------------------------------------------------------------- 1 | apt-get update 2 | apt-get -y install \ 3 | autoconf \ 4 | git \ 5 | libapr1 \ 6 | libapr1-dev \ 7 | libaprutil1 \ 8 | libaprutil1-dev \ 9 | libcurl4-openssl-dev \ 10 | libsasl2-dev \ 11 | libsvn-dev \ 12 | libtool \ 13 | maven \ 14 | openjdk-7-jdk \ 15 | python-dev \ 16 | python-pip \ 17 | zookeeper 18 | 19 | # Ensure java 7 is the default java. 20 | update-alternatives --set java /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java 21 | 22 | # Set the hostname to the IP address. This simplifies things for components 23 | # that want to advertise the hostname to the user, or other components. 24 | hostname 192.168.33.2 25 | 26 | MESOS_VERSION=0.20.1 27 | 28 | function build_mesos { 29 | # wget -q -c http://downloads.mesosphere.io/master/ubuntu/12.04/mesos_${MESOS_VERSION}-1.0.ubuntu1204_amd64.deb 30 | # dpkg --install mesos_${MESOS_VERSION}-1.0.ubuntu1204_amd64.deb 31 | 32 | git clone https://github.com/wickman/mesos mesos-fork 33 | pushd mesos-fork 34 | git checkout wickman/pong_example 35 | ./bootstrap 36 | popd 37 | mkdir -p mesos-build 38 | pushd mesos-build 39 | ../mesos-fork/configure 40 | pushd 3rdparty 41 | make 42 | popd 43 | pushd src 44 | make pong-process 45 | popd 46 | popd 47 | ln -s mesos-build/src/pong-process pong 48 | } 49 | 50 | function install_ssh_config { 51 | cat >> /etc/ssh/ssh_config <