├── .gitignore ├── LICENSE ├── README.md ├── conc_test.py ├── deco ├── __init__.py ├── astutil.py └── conc.py ├── examples ├── climate_model.py ├── free.py ├── hash_test.py ├── nbody.py ├── simple_loop.py ├── simple_loop2.py └── threads.py ├── setup.py └── test ├── testast.py └── testconc.py /.gitignore: -------------------------------------------------------------------------------- 1 | MANIFEST 2 | 3 | # Byte-compiled / optimized / DLL files 4 | __pycache__/ 5 | *.py[cod] 6 | *$py.class 7 | 8 | # C extensions 9 | *.so 10 | 11 | # Distribution / packaging 12 | .Python 13 | env/ 14 | build/ 15 | develop-eggs/ 16 | dist/ 17 | downloads/ 18 | eggs/ 19 | .eggs/ 20 | lib/ 21 | lib64/ 22 | parts/ 23 | sdist/ 24 | var/ 25 | *.egg-info/ 26 | .installed.cfg 27 | *.egg 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .coverage 43 | .coverage.* 44 | .cache 45 | nosetests.xml 46 | coverage.xml 47 | *,cover 48 | .hypothesis/ 49 | 50 | # Translations 51 | *.mo 52 | *.pot 53 | 54 | # Django stuff: 55 | *.log 56 | 57 | # Sphinx documentation 58 | docs/_build/ 59 | 60 | # PyBuilder 61 | target/ 62 | 63 | #Ipython Notebook 64 | .ipynb_checkpoints 65 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2016 Alex Sherman 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, 
distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | Decorated Concurrency 2 | =========== 3 | 4 | A simplified parallel computing model for Python. 5 | DECO automatically parallelizes Python programs, and requires minimal modifications to existing serial programs. 6 | 7 | Install using pip: 8 | 9 | ``` 10 | pip install deco 11 | ``` 12 | 13 | Documentation 14 | -------------- 15 | You can reference the [Wiki on Github](https://github.com/alex-sherman/deco/wiki) for slightly more in-depth documentation. 16 | 17 | General Usage 18 | --------------- 19 | 20 | Using DECO is as simple as finding, or creating, two functions in your Python program. 21 | The first function is the one we want to run in parallel, and is decorated with `@concurrent`. 22 | The second function is the function which calls the `@concurrent` function and is decorated with `@synchronized`. 23 | Decorating the second function is optional, but provides some very cool benefits. 24 | Let's take a look at an example. 
25 | 26 | 27 | ```python 28 | @concurrent # We add this for the concurrent function 29 | def process_lat_lon(lat, lon, data): 30 | # Does some work which takes a while 31 | return result 32 | 33 | @synchronized # And we add this for the function which calls the concurrent function 34 | def process_data_set(data): 35 | results = defaultdict(dict) 36 | for lat in range(...): 37 | for lon in range(...): 38 | results[lat][lon] = process_lat_lon(lat, lon, data) 39 | return results 40 | ``` 41 | 42 | That's it: two added lines are all we need to parallelize this program. 43 | Now this program will make use of all the cores on the machine it's running on, allowing it to run significantly faster. 44 | 45 | What it does 46 | ------------- 47 | 48 | - The `@concurrent` decorator uses multiprocessing.pool to parallelize calls to the target function 49 | - Index-based mutation of function arguments is handled automatically, which pool cannot do 50 | - The `@synchronized` decorator automatically inserts synchronization events 51 | - It also automatically refactors assignments of the results of `@concurrent` function calls to happen during synchronization events 52 | 53 | Limitations 54 | ------------- 55 | - The `@concurrent` decorator will only speed up functions that take longer than ~1ms 56 | - If they take less time, your code will run slower! 57 | - By default, `@concurrent` function arguments/return values must be pickleable for use with `multiprocessing` 58 | - The `@synchronized` decorator only works on 'simple' functions; make sure the function meets the following criteria: 59 | - Only calls `@concurrent` functions, or assigns their results to indexable objects, such as: 60 | - concurrent(...) 61 | - result[key] = concurrent(...) 
62 | - Never indirectly reads objects that are assigned to by calls of the `@concurrent` function 63 | 64 | How it works 65 | ------------- 66 | 67 | For an in-depth discussion of the mechanisms at work, we wrote a paper for a class 68 | which [can be found here](https://drive.google.com/file/d/0B_olmC0u8E3gWTBmN3pydGxHdEE/view?usp=sharing&resourcekey=0-9aUctXy9Hn5g9SIul4kbVw). 69 | 70 | As an overview, DECO is mainly just a smart wrapper for Python's multiprocessing.pool. 71 | When `@concurrent` is applied to a function, DECO replaces it with calls to pool.apply_async. 72 | Additionally, when arguments are passed to pool.apply_async, DECO replaces any index-mutable objects with proxies, allowing it to detect and synchronize mutations of these objects. 73 | The results of these calls can then be obtained by calling wait() on the concurrent function, invoking a synchronization event. 74 | These events can be placed automatically in your code by using the `@synchronized` decorator on functions that call `@concurrent` functions. 75 | Additionally, while using `@synchronized`, you can directly assign the results of concurrent function calls to index-mutable objects. 76 | These assignments are refactored by DECO to occur automatically during the next synchronization event. 77 | All of this means that in many cases, parallel programming with DECO appears exactly the same as simpler serial programming. 
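To make the wrapping concrete, the dispatch-then-wait pattern described above can be sketched by hand with multiprocessing.pool directly. This is only a minimal illustration of the shape DECO generates, not DECO's actual implementation; the function names are made up for the example, and it uses a thread pool (as `concurrent.threaded` does) rather than a process pool:

```python
# A hand-written equivalent of the pattern DECO generates: queue every
# call through apply_async, then gather all results at a single
# synchronization point. Illustrative sketch only, not DECO's internals.
from multiprocessing.pool import ThreadPool

def process_item(value):
    return value * value  # stand-in for a long-running computation

def process_all(values):
    pool = ThreadPool(4)
    # The "@concurrent" phase: dispatch every call without blocking.
    pending = {i: pool.apply_async(process_item, (v,))
               for i, v in enumerate(values)}
    # The "@synchronized" phase: a wait()-style event collects results.
    results = {i: r.get() for i, r in pending.items()}
    pool.close()
    pool.join()
    return results

print(process_all([1, 2, 3]))  # {0: 1, 1: 4, 2: 9}
```

DECO's value is that this bookkeeping (the pending results, the collection loop, and the proxying of mutated arguments) is generated for you from the plain serial version of the code.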
78 | 79 | 80 | -------------------------------------------------------------------------------- /conc_test.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | 3 | import deco 4 | import time 5 | 6 | 7 | @deco.concurrent 8 | def test(sleep_time, i): 9 | time.sleep(sleep_time) 10 | return i 11 | 12 | 13 | @deco.synchronized 14 | def test_size(size, sleep_time): 15 | results = {} 16 | for i in range(size): 17 | results[i] = test(sleep_time, i) 18 | print(results) 19 | 20 | 21 | SIZE = 100 22 | if __name__ == "__main__": 23 | processes = [1, 2, 3, 4] 24 | times = [0, 0.005, 0.01, 0.05, 0.1] # 0.25] 25 | for process_count in processes: 26 | for time_duration in times: 27 | test.conc_args = (process_count,) # Pool(process_count) on next call 28 | test.concurrency = None # force the pool to be rebuilt 29 | print(process_count, time_duration) 30 | test_size(SIZE, time_duration) 31 | -------------------------------------------------------------------------------- /deco/__init__.py: -------------------------------------------------------------------------------- 1 | from . 
import conc 2 | 3 | 4 | concurrent = conc.concurrent 5 | synchronized = conc.synchronized 6 | -------------------------------------------------------------------------------- /deco/astutil.py: -------------------------------------------------------------------------------- 1 | import ast 2 | from ast import NodeTransformer, copy_location 3 | import sys 4 | 5 | def unindent(source_lines): 6 | for i, line in enumerate(source_lines): 7 | source_lines[i] = line.lstrip() 8 | if source_lines[i][:3] == "def": 9 | break 10 | 11 | def Call(func, args=None, kwargs=None): 12 | if args is None: 13 | args = [] 14 | if kwargs is None: 15 | kwargs = [] 16 | if sys.version_info >= (3, 5): 17 | return ast.Call(func, args, kwargs) 18 | else: 19 | return ast.Call(func, args, kwargs, None, None) 20 | 21 | class SchedulerRewriter(NodeTransformer): 22 | def __init__(self, concurrent_funcs, frameinfo): 23 | self.arguments = set() 24 | self.concurrent_funcs = concurrent_funcs 25 | self.encountered_funcs = set() 26 | self.line_offset = frameinfo.lineno - 1 27 | self.filename = frameinfo.filename 28 | 29 | def references_arg(self, node): 30 | if not isinstance(node, ast.AST): 31 | return False 32 | if type(node) is ast.Name: 33 | return type(node.ctx) is ast.Load and node.id in self.arguments 34 | for field in node._fields: 35 | if field == "body": continue 36 | value = getattr(node, field) 37 | if not hasattr(value, "__iter__"): 38 | value = [value] 39 | if any([self.references_arg(child) for child in value]): 40 | return True 41 | return False 42 | 43 | def not_implemented_error(self, node, message): 44 | return NotImplementedError(self.filename + "(" + str(node.lineno + self.line_offset) + ") " + message) 45 | 46 | @staticmethod 47 | def top_level_name(node): 48 | if type(node) is ast.Name: 49 | return node.id 50 | elif type(node) is ast.Subscript or type(node) is ast.Attribute: 51 | return SchedulerRewriter.top_level_name(node.value) 52 | return None 53 | 54 | def 
is_concurrent_call(self, node): 55 | return type(node) is ast.Call and type(node.func) is ast.Name and node.func.id in self.concurrent_funcs 56 | 57 | def encounter_call(self, call): 58 | self.encountered_funcs.add(call.func.id) 59 | for arg in call.args: 60 | arg_name = SchedulerRewriter.top_level_name(arg) 61 | if arg_name is not None: 62 | self.arguments.add(arg_name) 63 | 64 | def get_waits(self): 65 | return [ast.Expr(Call(ast.Attribute(ast.Name(fname, ast.Load()), 'wait', ast.Load()))) for fname in self.encountered_funcs] 66 | 67 | def visit_Call(self, node): 68 | if self.is_concurrent_call(node): 69 | raise self.not_implemented_error(node, "The usage of the @concurrent function is unsupported") 70 | node = self.generic_visit(node) 71 | return node 72 | 73 | def generic_visit(self, node): 74 | if (isinstance(node, ast.stmt) and self.references_arg(node)) or isinstance(node, ast.Return): 75 | return self.get_waits() + [node] 76 | return NodeTransformer.generic_visit(self, node) 77 | 78 | def makeCall(self, func, args = [], keywords = []): 79 | return ast.Call(func = func, args = args, keywords = keywords) 80 | 81 | def makeLambda(self, args, call): 82 | return ast.Lambda(ast.arguments(posonlyargs = [], args = args, defaults = [], kwonlyargs = [], kw_defaults = []), call) 83 | 84 | def visit_Expr(self, node): 85 | if type(node.value) is ast.Call: 86 | call = node.value 87 | if self.is_concurrent_call(call): 88 | self.encounter_call(call) 89 | return node 90 | elif any([self.is_concurrent_call(arg) for arg in call.args]): 91 | conc_args = [(i, arg) for i, arg in enumerate(call.args) if self.is_concurrent_call(arg)] 92 | if len(conc_args) > 1: 93 | raise self.not_implemented_error(call, "Functions with multiple @concurrent parameters are unsupported") 94 | conc_call = conc_args[0][1] 95 | if isinstance(call.func, ast.Attribute): 96 | self.arguments.add(SchedulerRewriter.top_level_name(call.func.value)) 97 | self.encounter_call(conc_call) 98 | 
call.args[conc_args[0][0]] = ast.Name("__value__", ast.Load()) 99 | if sys.version_info >= (3, 0): 100 | args = [ast.arg("__value__", None)] 101 | else: 102 | args = [ast.Name("__value__", ast.Param())] 103 | call_lambda = self.makeLambda(args, call) 104 | copy_location_kwargs = { 105 | "func": ast.Attribute(conc_call.func, 'call', ast.Load()), 106 | "args": [call_lambda] + conc_call.args, 107 | "keywords": conc_call.keywords 108 | } 109 | if(sys.version_info < (3, 0)): 110 | copy_location_kwargs["kwargs"] = conc_call.kwargs 111 | return copy_location(ast.Expr(ast.Call(**copy_location_kwargs)), node) 112 | return self.generic_visit(node) 113 | 114 | # List comprehensions are self contained, so no need to add to self.arguments 115 | def visit_ListComp(self, node): 116 | if self.is_concurrent_call(node.elt): 117 | self.encounter_call(node.elt) 118 | wrapper = self.makeCall(func = ast.Name('list', ast.Load()), 119 | args = [self.makeCall(func = ast.Name('map', ast.Load()), 120 | args = [ 121 | self.makeLambda([ast.arg(arg='r')], self.makeCall(func = ast.Attribute(ast.Name('r', ast.Load()), 'result', ast.Load()))), 122 | node 123 | ])]) 124 | return wrapper 125 | return self.generic_visit(node) 126 | 127 | def is_valid_assignment(self, node): 128 | if not (type(node) is ast.Assign and self.is_concurrent_call(node.value)): 129 | return False 130 | if len(node.targets) != 1: 131 | raise self.not_implemented_error(node, "Concurrent assignment does not support multiple assignment targets") 132 | if not type(node.targets[0]) is ast.Subscript: 133 | raise self.not_implemented_error(node, "Concurrent assignment only implemented for index based objects") 134 | return True 135 | 136 | def visit_Assign(self, node): 137 | if self.is_valid_assignment(node): 138 | call = node.value 139 | self.encounter_call(call) 140 | name = node.targets[0].value 141 | self.arguments.add(SchedulerRewriter.top_level_name(name)) 142 | # Check ast.slice compatibility 143 | if 
hasattr(node.targets[0].slice, "value"): 144 | # For Python <= 3.8 145 | index = node.targets[0].slice.value 146 | else: 147 | # For Python >= 3.9 148 | index = node.targets[0].slice 149 | call.func = ast.Attribute(call.func, 'assign', ast.Load()) 150 | call.args = [ast.Tuple([name, index], ast.Load())] + call.args 151 | return copy_location(ast.Expr(call), node) 152 | return self.generic_visit(node) 153 | 154 | def visit_FunctionDef(self, node): 155 | node.decorator_list = [] 156 | node = self.generic_visit(node) 157 | node.body += self.get_waits() 158 | return node 159 | -------------------------------------------------------------------------------- /deco/conc.py: -------------------------------------------------------------------------------- 1 | from multiprocessing import Pool 2 | from multiprocessing.pool import ThreadPool 3 | import inspect 4 | import ast 5 | from . import astutil 6 | import types 7 | 8 | 9 | def concWrapper(f, args, kwargs): 10 | result = concurrent.functions[f](*args, **kwargs) 11 | operations = [inner for outer in args + list(kwargs.values()) if type(outer) is argProxy for inner in outer.operations] 12 | return result, operations 13 | 14 | 15 | class argProxy(object): 16 | def __init__(self, arg_id, value): 17 | self.arg_id = arg_id 18 | self.operations = [] 19 | self.value = value 20 | 21 | def __getattr__(self, name): 22 | if name in ["__getstate__", "__setstate__"]: 23 | raise AttributeError 24 | if hasattr(self, 'value') and hasattr(self.value, name): 25 | return getattr(self.value, name) 26 | raise AttributeError 27 | 28 | def __setitem__(self, key, value): 29 | self.value.__setitem__(key, value) 30 | self.operations.append((self.arg_id, key, value)) 31 | 32 | def __getitem__(self, key): 33 | return self.value.__getitem__(key) 34 | 35 | 36 | class synchronized(object): 37 | def __init__(self, f): 38 | callerframerecord = inspect.stack()[1][0] 39 | info = inspect.getframeinfo(callerframerecord) 40 | self.frame_info = info 41 | 
self.orig_f = f 42 | self.f = None 43 | self.ast = None 44 | self.__name__ = f.__name__ 45 | 46 | def __get__(self, *args): 47 | raise NotImplementedError("Decorators from deco cannot be used on class methods") 48 | 49 | def __call__(self, *args, **kwargs): 50 | if self.f is None: 51 | source = inspect.getsourcelines(self.orig_f)[0] 52 | astutil.unindent(source) 53 | source = "".join(source) 54 | self.ast = ast.parse(source) 55 | rewriter = astutil.SchedulerRewriter(concurrent.functions.keys(), self.frame_info) 56 | rewriter.visit(self.ast.body[0]) 57 | ast.fix_missing_locations(self.ast) 58 | out = compile(self.ast, "", "exec") 59 | scope = dict(self.orig_f.__globals__) 60 | exec(out, scope) 61 | self.f = scope[self.orig_f.__name__] 62 | return self.f(*args, **kwargs) 63 | 64 | 65 | class concurrent(object): 66 | functions = {} 67 | 68 | @staticmethod 69 | def custom(constructor = None, apply_async = None): 70 | @staticmethod 71 | def _custom_concurrent(*args, **kwargs): 72 | conc = concurrent(*args, **kwargs) 73 | if constructor is not None: conc.conc_constructor = constructor 74 | if apply_async is not None: conc.apply_async = apply_async 75 | return conc 76 | return _custom_concurrent 77 | 78 | def __init__(self, *args, **kwargs): 79 | self.in_progress = False 80 | self.conc_args = [] 81 | self.conc_kwargs = {} 82 | if len(args) > 0 and hasattr(args[0], "__call__") and hasattr(args[0], "__name__"): 83 | self.setFunction(args[0]) 84 | else: 85 | self.conc_args = args 86 | self.conc_kwargs = kwargs 87 | self.results = [] 88 | self.assigns = [] 89 | self.calls = [] 90 | self.arg_proxies = {} 91 | self.conc_constructor = Pool 92 | self.apply_async = lambda self, function, args: self.concurrency.apply_async(function, args) 93 | self.concurrency = None 94 | 95 | def __get__(self, *args): 96 | raise NotImplementedError("Decorators from deco cannot be used on class methods") 97 | 98 | def replaceWithProxies(self, args): 99 | args_iter = args.items() if type(args) is 
dict else enumerate(args) 100 | for i, arg in args_iter: 101 | if type(arg) is dict or type(arg) is list: 102 | if not id(arg) in self.arg_proxies: 103 | self.arg_proxies[id(arg)] = argProxy(id(arg), arg) 104 | args[i] = self.arg_proxies[id(arg)] 105 | 106 | def setFunction(self, f): 107 | concurrent.functions[f.__name__] = f 108 | self.f_name = f.__name__ 109 | self.__doc__ = f.__doc__ 110 | self.__module__ = f.__module__ 111 | 112 | def assign(self, target, *args, **kwargs): 113 | self.assigns.append((target, self(*args, **kwargs))) 114 | 115 | def call(self, target, *args, **kwargs): 116 | self.calls.append((target, self(*args, **kwargs))) 117 | 118 | def __call__(self, *args, **kwargs): 119 | if len(args) > 0 and isinstance(args[0], types.FunctionType): 120 | self.setFunction(args[0]) 121 | return self 122 | self.in_progress = True 123 | if self.concurrency is None: 124 | self.concurrency = self.conc_constructor(*self.conc_args, **self.conc_kwargs) 125 | args = list(args) 126 | self.replaceWithProxies(args) 127 | self.replaceWithProxies(kwargs) 128 | result = ConcurrentResult(self.apply_async(self, concWrapper, [self.f_name, args, kwargs])) 129 | self.results.append(result) 130 | return result 131 | 132 | def apply_operations(self, ops): 133 | for arg_id, key, value in ops: 134 | self.arg_proxies[arg_id].value.__setitem__(key, value) 135 | 136 | def wait(self): 137 | results = [] 138 | while self.results: 139 | result, operations = self.results.pop().get() 140 | self.apply_operations(operations) 141 | results.append(result) 142 | for assign in self.assigns: 143 | assign[0][0][assign[0][1]] = assign[1].result() 144 | self.assigns = [] 145 | for call in self.calls: 146 | call[0](call[1].result()) 147 | self.calls = [] 148 | self.arg_proxies = {} 149 | self.in_progress = False 150 | return results 151 | 152 | concurrent.threaded = concurrent.custom(ThreadPool) 153 | 154 | class ConcurrentResult(object): 155 | def __init__(self, async_result): 156 | 
self.async_result = async_result 157 | 158 | def get(self): 159 | return self.async_result.get(3e+6) 160 | 161 | def result(self): 162 | return self.get()[0] 163 | -------------------------------------------------------------------------------- /examples/climate_model.py: -------------------------------------------------------------------------------- 1 | from deco import * 2 | import time 3 | import random 4 | from collections import defaultdict 5 | 6 | 7 | @concurrent # We add this for the concurrent function 8 | def process_lat_lon(lat, lon, data): 9 | time.sleep(0.1) 10 | return data[lat + lon] 11 | 12 | 13 | @synchronized # And we add this for the function which calls the concurrent function 14 | def process_data_set(data): 15 | results = defaultdict(dict) 16 | for lat in range(5): 17 | for lon in range(5): 18 | results[lat][lon] = process_lat_lon(lat, lon, data) 19 | return dict(results) 20 | 21 | if __name__ == "__main__": 22 | random.seed(0) 23 | data = [random.random() for _ in range(200)] 24 | start = time.time() 25 | print(process_data_set(data)) 26 | print(time.time() - start) 27 | -------------------------------------------------------------------------------- /examples/free.py: -------------------------------------------------------------------------------- 1 | from deco import concurrent 2 | 3 | BODIES = [90] 4 | 5 | 6 | def run(): 7 | BODIES.append(210) 8 | simulate() 9 | simulate.wait() 10 | 11 | 12 | @concurrent 13 | def simulate(): 14 | print(BODIES) 15 | 16 | if __name__ == "__main__": 17 | run() 18 | -------------------------------------------------------------------------------- /examples/hash_test.py: -------------------------------------------------------------------------------- 1 | import deco 2 | import time 3 | import md5 4 | 5 | 6 | def test_synchronous(sleep_time): 7 | starttime = time.time() 8 | m = md5.new("Nobody inspects the spammish repetition") 9 | count = 0 10 | while((time.time() - starttime) < sleep_time/1000.0): 11 | 
m.update("p") 12 | a = m.hexdigest() 13 | count += 1 14 | return count 15 | 16 | 17 | @deco.concurrent 18 | def test(sleep_time): 19 | starttime = time.time() 20 | m = md5.new("Nobody inspects the spammish repetition") 21 | count = 0 22 | while((time.time() - starttime) < sleep_time/1000.0): 23 | m.update(" repetition") 24 | a = m.hexdigest() 25 | count += 1 26 | return count 27 | 28 | test_time = 2 29 | if __name__ == "__main__": 30 | times = [0.001, 0.005, 0.01, 0.1, 0.5, 1, 2, 5, 10, 20, 30] 31 | print("Measuring hashes/sec with single threaded synchronous calls") 32 | for time_duration in times: 33 | hashes = 0 34 | start = time.time() 35 | iterations = int(test_time/(time_duration/1000.0)) 36 | if iterations > 20000: 37 | iterations = 20000 38 | for _ in range(iterations): 39 | hashes += test_synchronous(time_duration) 40 | print(time_duration, hashes / (time.time() - start)) 41 | print("Measuring hashes/sec with deco multiprocess calls (3 workers)") 42 | for time_duration in times: 43 | hashes = 0 44 | start = time.time() 45 | iterations = int(test_time/(time_duration/1000.0)) 46 | if iterations > 20000: 47 | iterations = 20000 48 | for _ in range(iterations): 49 | test(time_duration) 50 | result = test.wait() 51 | hashes = sum(result) 52 | print(hashes / (time.time() - start)) 53 | -------------------------------------------------------------------------------- /examples/nbody.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | 3 | from __future__ import print_function 4 | import pygame 5 | import random 6 | import math 7 | import time 8 | from deco import * 9 | 10 | 11 | iterations = 50 12 | 13 | 14 | class Body(object): 15 | def __init__(self, x, y, vx, vy, mass): 16 | self.x = x 17 | self.y = y 18 | self.vx = vx 19 | self.vy = vy 20 | self.mass = mass 21 | 22 | def update(self, fx, fy, dt): 23 | self.vx += fx / self.mass * dt 24 | self.vy += fy / self.mass * dt 25 | self.x += self.vx * dt 26 | 
self.y += self.vy * dt 27 | 28 | def distanceSquared(self, other): 29 | return ((self.x - other.x) ** 2 + (self.y - other.y) ** 2) 30 | 31 | 32 | @synchronized 33 | def Simulate(body_list, dt): 34 | next_body_list = {} 35 | for i in body_list.keys(): 36 | SimulateBody(body_list, next_body_list, i, dt) 37 | body_list.update(next_body_list) 38 | 39 | 40 | @concurrent 41 | def SimulateBody(body_list, next_body_list, index, dt): 42 | simulated_body = body_list[index] 43 | for _ in range(iterations): 44 | fx = 0 45 | fy = 0 46 | for key in body_list.keys(): 47 | if key == index: 48 | continue 49 | body = body_list[key] 50 | distanceSquared = body.distanceSquared(simulated_body) 51 | f = body.mass * simulated_body.mass / distanceSquared 52 | d = distanceSquared ** 0.5 53 | fx += (body.x - simulated_body.x) / d * f 54 | fy += (body.y - simulated_body.y) / d * f 55 | simulated_body.update(fx, fy, dt / iterations) 56 | next_body_list[index] = simulated_body 57 | 58 | 59 | if __name__ == "__main__": 60 | frame_limit = 10 61 | random.seed(0) 62 | 63 | pygame.init() 64 | screen_size = 700 65 | screen = pygame.display.set_mode((screen_size, screen_size)) 66 | points = {} 67 | point_count = 10 68 | cs = math.cos(math.pi/3) 69 | sn = math.sin(math.pi/3) 70 | key = 0 71 | for x in range(point_count): 72 | for y in range(point_count): 73 | dx = point_count / 2 - x 74 | dy = point_count / 2 - y 75 | d_len = math.pow(math.pow(dx, 2) + math.pow(dy, 2), .5) 76 | if d_len == 0: 77 | m = 1000 78 | vx = 0 79 | vy = 0 80 | else: 81 | dx /= d_len 82 | dy /= d_len 83 | m = 10000 * random.random() 84 | vx = (dx * cs - dy * sn) * 30 + (random.random() - .5) / 700 85 | vy = (dx * sn + dy * cs) * 30 + (random.random() - .5) / 700 86 | 87 | points[key] = Body( 88 | 1.0 * screen_size * x / point_count, 1.0 * screen_size * y / point_count, 89 | vx, vy, m) 90 | key += 1 91 | 92 | body_count = len(points) 93 | 94 | i = 0 95 | start = time.time() 96 | while True: 97 | screen.fill((0, 0, 0)) 98 | for 
point in points.values(): 99 | pygame.draw.circle( 100 | screen, (128, 128, 128), 101 | (int(point.x), int(point.y)), 102 | int(math.pow(point.mass/200, .5)), 0) 103 | 104 | pygame.display.update() 105 | Simulate(points, .1) 106 | if i < frame_limit: 107 | i += 1 108 | if i == frame_limit: 109 | print("Time:", time.time() - start) 110 | 111 | for event in pygame.event.get(): 112 | if event.type == pygame.QUIT: 113 | pygame.quit() 114 | exit() 115 | -------------------------------------------------------------------------------- /examples/simple_loop.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from deco import * 3 | import time 4 | 5 | 6 | @concurrent 7 | def work(): 8 | time.sleep(0.1) 9 | 10 | 11 | @synchronized 12 | def run(): 13 | for _ in range(100): 14 | work() 15 | 16 | 17 | if __name__ == "__main__": 18 | start = time.time() 19 | run() 20 | print("Executing in serial should take 10 seconds") 21 | print("Executing in parallel took:", time.time() - start, "seconds") -------------------------------------------------------------------------------- /examples/simple_loop2.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from deco import * 3 | import time 4 | 5 | 6 | @concurrent 7 | def work(i): 8 | time.sleep(0.1) 9 | return i 10 | 11 | 12 | @synchronized 13 | def run(): 14 | output = [] 15 | for i in range(100): 16 | output.append(work(i)) 17 | return output 18 | 19 | 20 | if __name__ == "__main__": 21 | start = time.time() 22 | print(run()) 23 | print("Executing in serial should take 10 seconds") 24 | print("Executing in parallel took:", time.time() - start, "seconds") -------------------------------------------------------------------------------- /examples/threads.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | 
from deco import * 3 | import time 4 | 5 | @concurrent.threaded 6 | def threaded_func(): 7 | time.sleep(0.1) 8 | 9 | @concurrent 10 | def mp_func(): 11 | time.sleep(0.1) 12 | 13 | def sync(func): 14 | for _ in range(10): 15 | func() 16 | func.wait() 17 | 18 | if __name__ == "__main__": 19 | start = time.time() 20 | sync(mp_func) 21 | print("Multiprocess duration:", time.time() - start) 22 | start = time.time() 23 | sync(threaded_func) 24 | print("Threaded duration:", time.time() - start) 25 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from distutils.core import setup 4 | 5 | setup( 6 | name = "deco", 7 | version = "0.6.3", 8 | description = "A decorator for concurrency", 9 | packages = ["deco"], 10 | author='Alex Sherman', 11 | author_email='asherman1024@gmail.com', 12 | url='https://github.com/alex-sherman/deco') 13 | -------------------------------------------------------------------------------- /test/testast.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | import ast 3 | import inspect 4 | import time 5 | from deco import * 6 | 7 | @concurrent 8 | def conc_func(*args, **kwargs): 9 | time.sleep(0.1) 10 | return kwargs 11 | 12 | @synchronized 13 | def body_cases(): 14 | conc_func() 15 | a = True if False else False 16 | b = (lambda : True)() 17 | 18 | @synchronized 19 | def tainted_return(): 20 | data = [] 21 | data.append(conc_func(data)) 22 | return data 23 | 24 | @synchronized 25 | def len_of_append(): 26 | data = [] 27 | data.append(conc_func([])) 28 | derp = len(data) 29 | return derp 30 | 31 | def indented(): 32 | @synchronized 33 | def _indented(): 34 | conc_func() 35 | 36 | return _indented() 37 | 38 | @synchronized 39 | def kwarged_sync(**kwargs): 40 | data = [] 41 | data.append(conc_func(**kwargs)) 42 | return data[0] 43 | 44 | 
@synchronized 45 | def subscript_args(): 46 | d = type('', (object,), {"items": {(0,0): 0}})() 47 | conc_func(d.items[0, 0]) 48 | #Read d to force a synchronization event 49 | d = d 50 | output = conc_func.in_progress 51 | return output 52 | 53 | @synchronized 54 | def list_comp(): 55 | result = [conc_func(i = i) for i in range(10)] 56 | return result 57 | 58 | class TestAST(unittest.TestCase): 59 | 60 | #This just shouldn't throw any exceptions 61 | def test_body_cases(self): 62 | body_cases() 63 | 64 | #This just shouldn't throw any exceptions 65 | def test_indent_cases(self): 66 | indented() 67 | 68 | #This just shouldn't throw any exceptions 69 | def test_tainted_return(self): 70 | tainted_return() 71 | 72 | def test_subscript_args(self): 73 | self.assertFalse(subscript_args()) 74 | 75 | def test_kwarged_sync(self): 76 | self.assertTrue(kwarged_sync(test = "test")["test"] == "test") 77 | 78 | def test_wait_after_append(self): 79 | self.assertEqual(len_of_append(), 1) 80 | 81 | def test_list_comp(self): 82 | self.assertEqual(list_comp(),[{'i': i} for i in range(10)]) 83 | 84 | if __name__ == "__main__": 85 | unittest.main() 86 | -------------------------------------------------------------------------------- /test/testconc.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | from deco import * 3 | 4 | @concurrent 5 | def kwarg_func(kwarg = None): 6 | kwarg[0] = "kwarged" 7 | return kwarg 8 | 9 | @concurrent 10 | def add_one(value): 11 | return value + 1 12 | 13 | @synchronized 14 | def for_loop(values): 15 | output = [] 16 | for i in values: 17 | output.append(add_one(i)) 18 | return [i - 1 for i in output] 19 | 20 | class TestCONC(unittest.TestCase): 21 | 22 | def test_kwargs(self): 23 | list_ = [0] 24 | kwarg_func(kwarg = list_) 25 | kwarg_func.wait() 26 | self.assertEqual(list_[0], "kwarged") 27 | 28 | def test_for_loop(self): 29 | values = range(30) 30 | self.assertEqual(list(values), for_loop(values)) 
31 | 32 | if __name__ == "__main__": 33 | unittest.main() 34 | --------------------------------------------------------------------------------