├── .gitignore
├── .travis.yml
├── CHANGES.rst
├── COPYRIGHT.txt
├── LICENSE.txt
├── MANIFEST.in
├── README.rst
├── docs
│   ├── Makefile
│   ├── api.rst
│   ├── api
│   │   ├── cache.rst
│   │   ├── exc.rst
│   │   ├── limiter.rst
│   │   ├── lock.rst
│   │   └── queue.rst
│   ├── changelog.rst
│   ├── conf.py
│   └── index.rst
├── retools
│   ├── __init__.py
│   ├── cache.py
│   ├── event.py
│   ├── exc.py
│   ├── jobs.py
│   ├── limiter.py
│   ├── lock.py
│   ├── queue.py
│   ├── tests
│   │   ├── __init__.py
│   │   ├── jobs.py
│   │   ├── test_cache.py
│   │   ├── test_limiter.py
│   │   ├── test_lock.py
│   │   ├── test_queue.py
│   │   └── test_util.py
│   └── util.py
├── setup.cfg
└── setup.py
/.gitignore:
--------------------------------------------------------------------------------
1 | *.egg
2 | *.egg-info
3 | *.pyc
4 | *$py.class
5 | *.pt.py
6 | *.txt.py
7 | *~
8 | .coverage
9 | .tox/
10 | nosetests.xml
11 | pyramid/coverage.xml
12 | tutorial.db
13 | env26/
14 | env26-debug/
15 | bookenv/
16 | env24/
17 | env27/
18 | jyenv/
19 | pypyenv/
20 | build/
21 | dist/
22 | docs/_build
23 | bin/
24 | lib/
25 | include/
26 | .idea/
27 | distribute-*.tar.gz
28 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: python
2 | python:
3 | - "2.6"
4 | - "2.7"
5 | install:
6 | - sudo apt-get install redis-server
7 | - redis-server &
8 | - python setup.py develop
9 | script: python setup.py nosetests
10 |
--------------------------------------------------------------------------------
/CHANGES.rst:
--------------------------------------------------------------------------------
1 | =========
2 | Changelog
3 | =========
4 |
5 | 0.4.1 (02/19/2014)
6 | ==================
7 |
8 | Bug Fixes
9 | ---------
10 |
11 | - Properly support StrictRedis with ZADD (used in the limiter). Patch by
12 | Bernardo Heynemann.
13 |
14 | 0.4 (01/27/2014)
15 | ================
16 |
17 | Features
18 | --------
19 |
20 | - Added limiter functionality. Pull request #22, by Bernardo Heynemann.
21 |
22 | 0.3 (08/13/2012)
23 | ================
24 |
25 | Bug Fixes
26 | ---------
27 |
28 | - Call redis.expire with proper expires value for RedisLock. Patch by
29 | Mike McCabe.
30 | - Use functools.wraps to preserve doc strings for cache_region. Patch by
31 | Daniel Holth.
32 |
33 | API Changes
34 | -----------
35 |
36 | - Added get_job/get_jobs methods to QueueManager class to get information
37 | on a job or get a list of jobs for a queue.
38 |
39 | 0.2 (02/01/2012)
40 | ================
41 |
42 | Bug Fixes
43 | ---------
44 |
45 | - Critical fix for caching that prevents old values from being displayed
46 | forever. Thanks to Daniel Holth for tracking down the problem.
47 | - Actually sets the Redis expiration for a value when setting the cached
48 | value in Redis. This defaults to 1 week.
49 |
50 | Features
51 | --------
52 |
53 | - Statistics for the cache are now optional and can be disabled to slightly
54 | reduce the Redis queries used to store/retrieve cache data.
55 | - Added first revision of worker/job Queue system, with event support.
56 |
57 | Internals
58 | ---------
59 |
60 | - Heavily refactored ``Connection`` to not be a class singleton, instead
61 | a global_connection instance is created and used by default.
62 | - Increased conditional coverage to 100% (via instrumental_).
63 |
64 | Backwards Incompatibilities
65 | ---------------------------
66 |
67 | - Changing the default global Redis connection has changed semantics: instead
68 | of using ``Connection.set_default``, you should set the global_connection's
69 | redis property directly::
70 |
71 | import redis
72 | from retools import global_connection
73 |
74 | global_connection.redis = redis.Redis(host='myhost')
75 |
76 |
77 | Incompatibilities
78 | -----------------
79 |
80 | - Removed clear argument from invalidate_region, as removing keys from the
81 | set but not removing the hit statistics can lead to data accumulating in
82 | Redis that has no easy removal other than .keys() which should not be run
83 | in production environments.
84 |
85 | - Removed deco_args from invalidate_callable (invalidate_function) as it's
86 | not actually needed since the namespace is already on the callable to
87 | invalidate.
88 |
89 |
90 | 0.1 (07/08/2011)
91 | ================
92 |
93 | Features
94 | --------
95 |
96 | - Caching in a similar style to Beaker, with hit/miss statistics, backed by
97 | a Redis global write-lock with old values served to prevent the dogpile
98 | effect
99 | - Redis global lock
100 |
101 | .. _instrumental: http://pypi.python.org/pypi/instrumental
102 |
--------------------------------------------------------------------------------
/COPYRIGHT.txt:
--------------------------------------------------------------------------------
1 | Copyright (c) 2011 Ben Bangert, All Rights Reserved
2 |
--------------------------------------------------------------------------------
/LICENSE.txt:
--------------------------------------------------------------------------------
1 | Copyright (C) 2011 by Ben Bangert
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy
4 | of this software and associated documentation files (the "Software"), to deal
5 | in the Software without restriction, including without limitation the rights
6 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 | copies of the Software, and to permit persons to whom the Software is
8 | furnished to do so, subject to the following conditions:
9 |
10 | The above copyright notice and this permission notice shall be included in
11 | all copies or substantial portions of the Software.
12 |
13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 | THE SOFTWARE.
20 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | recursive-include docs *
2 | include CHANGES.rst
3 | include LICENSE.txt
4 |
5 | global-exclude .DS_Store *.hgignore *.hgtags
6 |
--------------------------------------------------------------------------------
/README.rst:
--------------------------------------------------------------------------------
1 | =====================
2 | Redis Tools (retools)
3 | =====================
4 |
5 | ``retools`` is a package of `Redis <http://redis.io/>`_ tools. Its aim is to
6 | provide a variety of Redis-backed Python tools that are always 100% unit
7 | tested, fast, efficient, and utilize the capabilities of Redis.
8 |
9 | `retools` is available on PyPI at https://pypi.python.org/pypi/retools
10 |
11 | .. image:: https://badge.fury.io/py/retools.svg
12 | :target: http://badge.fury.io/py/retools
13 |
14 | Current tools in ``retools``:
15 |
16 | * Caching
17 | * Global Lock
18 | * Queues - A worker/job processing system similar to `Celery
19 | <http://celeryproject.org/>`_ but based on how Ruby's `Resque
20 | <https://github.com/resque/resque>`_ system works.
21 |
22 | .. image:: https://secure.travis-ci.org/bbangert/retools.png?branch=master
23 | :alt: Build Status
24 | :target: https://secure.travis-ci.org/bbangert/retools
25 |
26 |
27 | Caching
28 | =======
29 |
30 | A high performance caching system that can act as a drop-in replacement for
31 | Beaker's caching. Unlike Beaker's caching, this utilizes Redis for distributed
32 | write-locking dogpile prevention. It also collects hit/miss cache statistics
33 | along with recording what regions are used by which functions and arguments.
34 |
35 | Example:
36 |
37 | .. code-block:: python
38 |
39 | from retools.cache import CacheRegion, cache_region, invalidate_function
40 |
41 | CacheRegion.add_region('short_term', expires=3600)
42 |
43 | @cache_region('short_term')
44 | def slow_function(*search_terms):
45 | # Do a bunch of work
46 | return results
47 |
48 | my_results = slow_function('bunny')
49 |
50 | # Invalidate the cache for 'bunny'
51 | invalidate_function(slow_function, 'bunny')
52 |
53 |
54 | Differences from Beaker
55 | -----------------------
56 |
57 | Unlike Beaker's caching system, this is built strictly for Redis. As such, it
58 | adds several features that Beaker doesn't possess:
59 |
60 | * A distributed write-lock so that only one writer updates the cache at a time
61 | across a cluster.
62 | * Hit/Miss cache statistics to give you insight into which caches are less
63 | effectively utilized (and may need higher expiration times, or may not be
64 | worth caching at all).
65 | * Very small, compact code-base with 100% unit test coverage.
66 |
67 |
68 | Locking
69 | =======
70 |
71 | A Redis based lock implemented as a Python context manager, based on `Chris
72 | Lamb's example
73 | <http://chris-lamb.co.uk/2010/06/07/distributing-locking-python-and-redis/>`_.
74 |
75 | Example:
76 |
77 | .. code-block:: python
78 |
79 | from retools.lock import Lock
80 |
81 | with Lock('a_key', expires=60, timeout=10):
82 | # do something that should only be done one at a time
83 |
84 |
85 | License
86 | =======
87 |
88 | ``retools`` is offered under the MIT license.
89 |
90 |
91 | Authors
92 | =======
93 |
94 | ``retools`` is made available by `Ben Bangert`.
95 |
--------------------------------------------------------------------------------
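A minimal queue sketch to round out the README examples above, mirroring the
``QueueManager`` API documented in ``retools/queue.py`` below (the
``mypackage.jobs:important`` job path is hypothetical, and a running worker is
assumed)::

    from retools.queue import QueueManager

    qm = QueueManager()  # uses the default global Redis connection
    # Optional: record failures through an event handler (path hypothetical)
    qm.subscriber('job_failure', handler='mypackage.jobs:save_error')
    # Enqueue the job; a job id is returned for later introspection
    job_id = qm.enqueue('mypackage.jobs:important', somearg='fred')
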
/docs/Makefile:
--------------------------------------------------------------------------------
1 | # Makefile for Sphinx documentation
2 | #
3 |
4 | # You can set these variables from the command line.
5 | SPHINXOPTS =
6 | SPHINXBUILD = sphinx-build
7 | PAPER =
8 | BUILDDIR = _build
9 |
10 | # Internal variables.
11 | PAPEROPT_a4 = -D latex_paper_size=a4
12 | PAPEROPT_letter = -D latex_paper_size=letter
13 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
14 |
15 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
16 |
17 | help:
18 | @echo "Please use \`make ' where is one of"
19 | @echo " html to make standalone HTML files"
20 | @echo " dirhtml to make HTML files named index.html in directories"
21 | @echo " singlehtml to make a single large HTML file"
22 | @echo " pickle to make pickle files"
23 | @echo " json to make JSON files"
24 | @echo " htmlhelp to make HTML files and a HTML help project"
25 | @echo " qthelp to make HTML files and a qthelp project"
26 | @echo " devhelp to make HTML files and a Devhelp project"
27 | @echo " epub to make an epub"
28 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
29 | @echo " latexpdf to make LaTeX files and run them through pdflatex"
30 | @echo " text to make text files"
31 | @echo " man to make manual pages"
32 | @echo " changes to make an overview of all changed/added/deprecated items"
33 | @echo " linkcheck to check all external links for integrity"
34 | @echo " doctest to run all doctests embedded in the documentation (if enabled)"
35 |
36 | clean:
37 | -rm -rf $(BUILDDIR)/*
38 |
39 | html:
40 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
41 | @echo
42 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
43 |
44 | dirhtml:
45 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
46 | @echo
47 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
48 |
49 | singlehtml:
50 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
51 | @echo
52 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
53 |
54 | pickle:
55 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
56 | @echo
57 | @echo "Build finished; now you can process the pickle files."
58 |
59 | json:
60 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
61 | @echo
62 | @echo "Build finished; now you can process the JSON files."
63 |
64 | htmlhelp:
65 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
66 | @echo
67 | @echo "Build finished; now you can run HTML Help Workshop with the" \
68 | ".hhp project file in $(BUILDDIR)/htmlhelp."
69 |
70 | qthelp:
71 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
72 | @echo
73 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \
74 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
75 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/retools.qhcp"
76 | @echo "To view the help file:"
77 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/retools.qhc"
78 |
79 | devhelp:
80 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
81 | @echo
82 | @echo "Build finished."
83 | @echo "To view the help file:"
84 | @echo "# mkdir -p $$HOME/.local/share/devhelp/retools"
85 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/retools"
86 | @echo "# devhelp"
87 |
88 | epub:
89 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
90 | @echo
91 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
92 |
93 | latex:
94 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
95 | @echo
96 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
97 | @echo "Run \`make' in that directory to run these through (pdf)latex" \
98 | "(use \`make latexpdf' here to do that automatically)."
99 |
100 | latexpdf:
101 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
102 | @echo "Running LaTeX files through pdflatex..."
103 | make -C $(BUILDDIR)/latex all-pdf
104 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
105 |
106 | text:
107 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
108 | @echo
109 | @echo "Build finished. The text files are in $(BUILDDIR)/text."
110 |
111 | man:
112 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
113 | @echo
114 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man."
115 |
116 | changes:
117 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
118 | @echo
119 | @echo "The overview file is in $(BUILDDIR)/changes."
120 |
121 | linkcheck:
122 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
123 | @echo
124 | @echo "Link check complete; look for any errors in the above output " \
125 | "or in $(BUILDDIR)/linkcheck/output.txt."
126 |
127 | doctest:
128 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
129 | @echo "Testing of doctests in the sources finished, look at the " \
130 | "results in $(BUILDDIR)/doctest/output.txt."
131 |
--------------------------------------------------------------------------------
/docs/api.rst:
--------------------------------------------------------------------------------
1 | API Documentation
2 | =================
3 |
4 | Comprehensive reference material for every public API exposed by
5 | :py:mod:`retools` is available within this chapter. The API documentation is
6 | organized alphabetically by module name.
7 |
8 | .. toctree::
9 | :maxdepth: 1
10 |
11 | api/cache
12 | api/exc
13 | api/limiter
14 | api/lock
15 | api/queue
16 |
--------------------------------------------------------------------------------
/docs/api/cache.rst:
--------------------------------------------------------------------------------
1 | .. _cache_module:
2 |
3 | :mod:`retools.cache`
4 | --------------------
5 |
6 | .. automodule:: retools.cache
7 |
8 | Constants
9 | ~~~~~~~~~
10 |
11 | .. py:data:: NoneMarker
12 |
13 | A module global returned to indicate no value is present in Redis
14 | rather than a ``None`` object.
15 |
16 | Functions
17 | ~~~~~~~~~
18 |
19 | .. autofunction:: cache_region
20 | .. autofunction:: invalidate_region
21 | .. autofunction:: invalidate_function
22 |
23 |
24 | Classes
25 | ~~~~~~~
26 |
27 | .. autoclass:: CacheKey
28 | :members:
29 | :inherited-members:
30 |
31 | .. autoclass:: CacheRegion
32 | :members:
33 | :inherited-members:
34 |
--------------------------------------------------------------------------------
/docs/api/exc.rst:
--------------------------------------------------------------------------------
1 | .. _exc_module:
2 |
3 | :mod:`retools.exc`
4 | -------------------
5 |
6 | .. automodule:: retools.exc
7 |
8 |
9 | Exceptions
10 | ~~~~~~~~~~
11 |
12 | .. autoexception:: RetoolsException
13 | .. autoexception:: ConfigurationError
14 | .. autoexception:: CacheConfigurationError
15 | .. autoexception:: QueueError
16 | .. autoexception:: AbortJob
17 |
--------------------------------------------------------------------------------
/docs/api/limiter.rst:
--------------------------------------------------------------------------------
1 | .. _limiter_module:
2 |
3 | :mod:`retools.limiter`
4 | ----------------------
5 |
6 | .. automodule:: retools.limiter
7 |
8 |
9 | Public API Classes
10 | ~~~~~~~~~~~~~~~~~~
11 |
12 | .. autoclass:: Limiter
13 | :members: __init__, acquire_limit, release_limit
14 |
--------------------------------------------------------------------------------
/docs/api/lock.rst:
--------------------------------------------------------------------------------
1 | .. _lock_module:
2 |
3 | :mod:`retools.lock`
4 | -------------------
5 |
6 | .. automodule:: retools.lock
7 |
8 |
9 | Classes
10 | ~~~~~~~
11 |
12 | .. autoclass:: Lock
13 | :members: __init__
14 |
15 | Exceptions
16 | ~~~~~~~~~~
17 |
18 | .. autoexception:: LockTimeout
19 |
--------------------------------------------------------------------------------
/docs/api/queue.rst:
--------------------------------------------------------------------------------
1 | .. _queue_module:
2 |
3 | :mod:`retools.queue`
4 | --------------------
5 |
6 | .. automodule:: retools.queue
7 |
8 |
9 | Public API Classes
10 | ~~~~~~~~~~~~~~~~~~
11 |
12 | .. autoclass:: QueueManager
13 | :members: __init__, set_queue_for_job, subscriber, enqueue
14 |
15 | Private API Classes
16 | ~~~~~~~~~~~~~~~~~~~
17 |
18 | .. autoclass:: Job
19 | :members: __init__, load_events, perform, enqueue, run_event
20 |
21 | .. autoclass:: Worker
22 | :members: __init__, worker_id, queue_names, work, reserve, set_proc_title, register_worker, unregister_worker, startup, trigger_shutdown, immediate_shutdown, kill_child, pause_processing, resume_processing, prune_dead_workers, register_signal_handlers, working_on, done_working, worker_pids, perform
23 |
--------------------------------------------------------------------------------
/docs/changelog.rst:
--------------------------------------------------------------------------------
1 | .. include:: ../CHANGES.rst
2 |
--------------------------------------------------------------------------------
/docs/conf.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | #
3 | # retools documentation build configuration file, created by
4 | # sphinx-quickstart on Sun Jul 10 12:01:31 2011.
5 | #
6 | # This file is execfile()d with the current directory set to its containing dir.
7 | #
8 | # Note that not all possible configuration values are present in this
9 | # autogenerated file.
10 | #
11 | # All configuration values have a default; values that are commented out
12 | # serve to show the default.
13 |
14 | import sys, os
15 |
16 | # If extensions (or modules to document with autodoc) are in another directory,
17 | # add these directories to sys.path here. If the directory is relative to the
18 | # documentation root, use os.path.abspath to make it absolute, like shown here.
19 | #sys.path.insert(0, os.path.abspath('.'))
20 |
21 | # -- General configuration -----------------------------------------------------
22 |
23 | # If your documentation needs a minimal Sphinx version, state it here.
24 | #needs_sphinx = '1.0'
25 |
26 | # Add any Sphinx extension module names here, as strings. They can be extensions
27 | # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
28 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.viewcode']
29 |
30 | # Add any paths that contain templates here, relative to this directory.
31 | templates_path = ['_templates']
32 |
33 | # The suffix of source filenames.
34 | source_suffix = '.rst'
35 |
36 | # The encoding of source files.
37 | #source_encoding = 'utf-8-sig'
38 |
39 | # The master toctree document.
40 | master_doc = 'index'
41 |
42 | # General information about the project.
43 | project = u'retools'
44 | copyright = u'2011-2012, Ben Bangert'
45 |
46 | # The version info for the project you're documenting, acts as replacement for
47 | # |version| and |release|, also used in various other places throughout the
48 | # built documents.
49 | #
50 | # The short X.Y version.
51 | version = '0.4.1'
52 | # The full version, including alpha/beta/rc tags.
53 | release = '0.4.1'
54 |
55 | # The language for content autogenerated by Sphinx. Refer to documentation
56 | # for a list of supported languages.
57 | #language = None
58 |
59 | # There are two options for replacing |today|: either, you set today to some
60 | # non-false value, then it is used:
61 | #today = ''
62 | # Else, today_fmt is used as the format for a strftime call.
63 | #today_fmt = '%B %d, %Y'
64 |
65 | # List of patterns, relative to source directory, that match files and
66 | # directories to ignore when looking for source files.
67 | exclude_patterns = ['_build']
68 |
69 | # The reST default role (used for this markup: `text`) to use for all documents.
70 | #default_role = None
71 |
72 | # If true, '()' will be appended to :func: etc. cross-reference text.
73 | #add_function_parentheses = True
74 |
75 | # If true, the current module name will be prepended to all description
76 | # unit titles (such as .. function::).
77 | #add_module_names = True
78 |
79 | # If true, sectionauthor and moduleauthor directives will be shown in the
80 | # output. They are ignored by default.
81 | #show_authors = False
82 |
83 | # The name of the Pygments (syntax highlighting) style to use.
84 | pygments_style = 'sphinx'
85 |
86 | # A list of ignored prefixes for module index sorting.
87 | #modindex_common_prefix = []
88 |
89 |
90 | # -- Options for HTML output ---------------------------------------------------
91 |
92 | # The theme to use for HTML and HTML Help pages. See the documentation for
93 | # a list of builtin themes.
94 | html_theme = 'default'
95 |
96 | # Theme options are theme-specific and customize the look and feel of a theme
97 | # further. For a list of options available for each theme, see the
98 | # documentation.
99 | #html_theme_options = {}
100 |
101 | # Add any paths that contain custom themes here, relative to this directory.
102 | #html_theme_path = []
103 |
104 | # The name for this set of Sphinx documents. If None, it defaults to
105 | # " v documentation".
106 | #html_title = None
107 |
108 | # A shorter title for the navigation bar. Default is the same as html_title.
109 | #html_short_title = None
110 |
111 | # The name of an image file (relative to this directory) to place at the top
112 | # of the sidebar.
113 | #html_logo = None
114 |
115 | # The name of an image file (within the static path) to use as favicon of the
116 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
117 | # pixels large.
118 | #html_favicon = None
119 |
120 | # Add any paths that contain custom static files (such as style sheets) here,
121 | # relative to this directory. They are copied after the builtin static files,
122 | # so a file named "default.css" will overwrite the builtin "default.css".
123 | #html_static_path = ['_static']
124 |
125 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
126 | # using the given strftime format.
127 | #html_last_updated_fmt = '%b %d, %Y'
128 |
129 | # If true, SmartyPants will be used to convert quotes and dashes to
130 | # typographically correct entities.
131 | #html_use_smartypants = True
132 |
133 | # Custom sidebar templates, maps document names to template names.
134 | #html_sidebars = {}
135 |
136 | # Additional templates that should be rendered to pages, maps page names to
137 | # template names.
138 | #html_additional_pages = {}
139 |
140 | # If false, no module index is generated.
141 | #html_domain_indices = True
142 |
143 | # If false, no index is generated.
144 | #html_use_index = True
145 |
146 | # If true, the index is split into individual pages for each letter.
147 | #html_split_index = False
148 |
149 | # If true, links to the reST sources are added to the pages.
150 | #html_show_sourcelink = True
151 |
152 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
153 | #html_show_sphinx = True
154 |
155 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
156 | #html_show_copyright = True
157 |
158 | # If true, an OpenSearch description file will be output, and all pages will
159 | # contain a <link> tag referring to it. The value of this option must be the
160 | # base URL from which the finished HTML is served.
161 | #html_use_opensearch = ''
162 |
163 | # This is the file name suffix for HTML files (e.g. ".xhtml").
164 | #html_file_suffix = None
165 |
166 | # Output file base name for HTML help builder.
167 | htmlhelp_basename = 'retoolsdoc'
168 |
169 |
170 | # -- Options for LaTeX output --------------------------------------------------
171 |
172 | # The paper size ('letter' or 'a4').
173 | #latex_paper_size = 'letter'
174 |
175 | # The font size ('10pt', '11pt' or '12pt').
176 | #latex_font_size = '10pt'
177 |
178 | # Grouping the document tree into LaTeX files. List of tuples
179 | # (source start file, target name, title, author, documentclass [howto/manual]).
180 | latex_documents = [
181 | ('index', 'retools.tex', u'retools Documentation',
182 | u'Ben Bangert', 'manual'),
183 | ]
184 |
185 | # The name of an image file (relative to this directory) to place at the top of
186 | # the title page.
187 | #latex_logo = None
188 |
189 | # For "manual" documents, if this is true, then toplevel headings are parts,
190 | # not chapters.
191 | #latex_use_parts = False
192 |
193 | # If true, show page references after internal links.
194 | #latex_show_pagerefs = False
195 |
196 | # If true, show URL addresses after external links.
197 | #latex_show_urls = False
198 |
199 | # Additional stuff for the LaTeX preamble.
200 | #latex_preamble = ''
201 |
202 | # Documents to append as an appendix to all manuals.
203 | #latex_appendices = []
204 |
205 | # If false, no module index is generated.
206 | #latex_domain_indices = True
207 |
208 |
209 | # -- Options for manual page output --------------------------------------------
210 |
211 | # One entry per manual page. List of tuples
212 | # (source start file, name, description, authors, manual section).
213 | man_pages = [
214 | ('index', 'retools', u'retools Documentation',
215 | [u'Ben Bangert'], 1)
216 | ]
217 |
218 |
219 | # -- Options for Epub output ---------------------------------------------------
220 |
221 | # Bibliographic Dublin Core info.
222 | epub_title = u'retools'
223 | epub_author = u'Ben Bangert'
224 | epub_publisher = u'Ben Bangert'
225 | epub_copyright = u'2011-2012, Ben Bangert'
226 |
227 | # The language of the text. It defaults to the language option
228 | # or en if the language is not set.
229 | #epub_language = ''
230 |
231 | # The scheme of the identifier. Typical schemes are ISBN or URL.
232 | #epub_scheme = ''
233 |
234 | # The unique identifier of the text. This can be a ISBN number
235 | # or the project homepage.
236 | #epub_identifier = ''
237 |
238 | # A unique identification for the text.
239 | #epub_uid = ''
240 |
241 | # HTML files that should be inserted before the pages created by sphinx.
242 | # The format is a list of tuples containing the path and title.
243 | #epub_pre_files = []
244 |
245 | # HTML files that should be inserted after the pages created by sphinx.
246 | # The format is a list of tuples containing the path and title.
247 | #epub_post_files = []
248 |
249 | # A list of files that should not be packed into the epub file.
250 | #epub_exclude_files = []
251 |
252 | # The depth of the table of contents in toc.ncx.
253 | #epub_tocdepth = 3
254 |
255 | # Allow duplicate toc entries.
256 | #epub_tocdup = True
257 |
--------------------------------------------------------------------------------
/docs/index.rst:
--------------------------------------------------------------------------------
1 | .. _index:
2 |
3 | ================================
4 | retools - A Python Redis Toolset
5 | ================================
6 |
7 | `retools` is a concise set of well-tested extensible Python Redis tools.
8 |
9 | `retools` is available on PyPI at https://pypi.python.org/pypi/retools
10 |
11 | .. image:: https://badge.fury.io/py/retools.svg
12 | :target: http://badge.fury.io/py/retools
13 |
14 | - :mod:`Caching <retools.cache>`
15 | - Hit/Miss Statistics
16 | - Regions for common expiration periods and invalidating batches of
17 | functions at once.
18 | - Write-lock to prevent the `Thundering Herd`_
19 | - :mod:`Distributed Locking <retools.lock>`
20 | - Python context-manager with lock timeouts and retries
21 | - :mod:`Queuing <retools.queue>`
22 | - Simple :ref:`forking worker ` based on `Resque`_
23 | - Jobs stored as JSON in Redis for easy introspection
24 | - `setproctitle`_ used by workers for easy worker introspection on
25 | the command line
26 | - :ref:`Rich event system <queue_events>` for extending job processing behavior
27 | - :mod:`Limiter <retools.limiter>`
28 | - Useful for making sure that only N operations for a given process happen at the same time
29 | - Well Tested [1]_
30 | - 100% statement coverage
31 | - 100% condition coverage (via instrumental_)
32 |
33 |
34 | Reference Material
35 | ==================
36 |
37 | Reference material includes documentation for every `retools` API.
38 |
39 | .. toctree::
40 | :maxdepth: 1
41 |
42 | api
43 | Changelog <changelog>
44 |
45 |
46 | Indices and tables
47 | ==================
48 |
49 | * :ref:`genindex`
50 | * :ref:`modindex`
51 |
52 | .. [1] queuing not up to 100% testing yet
53 |
54 | .. _`Thundering Herd`: http://en.wikipedia.org/wiki/Thundering_herd_problem
55 | .. _instrumental: http://pypi.python.org/pypi/instrumental
56 | .. _Resque: https://github.com/resque/resque
57 | .. _setproctitle: http://pypi.python.org/pypi/setproctitle
58 |
--------------------------------------------------------------------------------
/retools/__init__.py:
--------------------------------------------------------------------------------
1 | """retools
2 |
3 | This module holds a default Redis instance, which can be configured
4 | process-wide::
5 |
6 | from redis import Redis
7 | from retools import global_connection
8 |
9 | global_connection.redis = Redis(host='192.168.1.1', db=2)
10 |
11 | Alternatively, many parts of retools accept Redis instances that may be passed
12 | directly.
13 |
14 | """
15 | from redis import Redis
16 |
17 | __all__ = ['Connection']
18 |
19 |
20 | class Connection(object):
21 | """The default Redis Connection
22 |
23 | A :obj:`retools.global_connection` object is created using this
24 | during import. The ``.redis`` property can be set on it to change
25 | the connection used globally by retools, or individual ``retools``
26 | functions can be called with a custom ``Redis`` object.
27 |
28 | """
29 | def __init__(self):
30 | self._redis = None
31 |
32 | @property
33 | def redis(self):
34 | if not self._redis:
35 | self._redis = Redis()
36 | return self._redis
37 |
38 | @redis.setter
39 | def redis(self, conn):
40 | self._redis = conn
41 |
42 | global_connection = Connection()
43 |
--------------------------------------------------------------------------------
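Both configuration styles the docstring above describes can be sketched
briefly; ``Lock`` is one consumer that accepts an explicit ``redis`` argument
(the hosts shown are placeholders)::

    from redis import Redis
    from retools import global_connection
    from retools.lock import Lock

    # Process-wide default: used by any retools call without an explicit
    # connection
    global_connection.redis = Redis(host='192.168.1.1', db=2)

    # Per-call override: hand a dedicated connection to a single consumer
    with Lock('report-lock', redis=Redis(host='10.0.0.5')):
        pass  # critical section
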
/retools/cache.py:
--------------------------------------------------------------------------------
1 | """Caching
2 |
3 | Cache regions are used to simplify common expirations and group function
4 | caches.
5 |
6 | To indicate functions should use cache regions, apply the decorator::
7 |
8 | from retools.cache import cache_region
9 |
10 | @cache_region('short_term')
11 | def myfunction(arg1):
12 | return arg1
13 |
14 | To configure the cache regions, setup the :class:`~retools.cache.CacheRegion`
15 | object::
16 |
17 | from retools.cache import CacheRegion
18 |
19 | CacheRegion.add_region("short_term", expires=60)
20 |
21 | """
22 | import cPickle
23 | import time
24 | from datetime import date
25 |
26 | from retools import global_connection
27 | from retools.exc import CacheConfigurationError
28 | from retools.lock import Lock
29 | from retools.lock import LockTimeout
30 | from retools.util import func_namespace
31 | from retools.util import has_self_arg
32 |
33 | from functools import wraps
34 |
35 | class _NoneMarker(object):
36 | pass
37 | NoneMarker = _NoneMarker()
38 |
39 |
40 | class CacheKey(object):
41 | """Cache Key object
42 |
43 | Generator of cache keys for a variety of purposes once
44 | provided with a region, namespace, and key (args).
45 |
46 | """
47 | def __init__(self, region, namespace, key, today=None):
48 | """Setup a CacheKey object
49 |
50 | The CacheKey object creates the key-names used to store and
51 | retrieve values from Redis.
52 |
53 | :param region: Name of the region
54 | :type region: string
55 | :param namespace: Namespace to use
56 | :type namespace: string
57 | :param key: Key of the cached data, to differentiate various
58 | arguments to the same callable
59 |
60 | """
61 | if not today:
62 | today = str(date.today())
63 | self.lock_key = 'retools:lock:%s:%s:%s' % (region, namespace, key)
64 | self.redis_key = 'retools:%s:%s:%s' % (region, namespace, key)
65 | self.redis_hit_key = 'retools:hits:%s:%s:%s:%s' % (
66 | today, region, namespace, key)
67 | self.redis_miss_key = 'retools:misses:%s:%s:%s:%s' % (
68 | today, region, namespace, key)
69 | self.redis_keyset = 'retools:%s:%s:keys' % (region, namespace)
70 |
71 |
72 | class CacheRegion(object):
73 | """CacheRegion manager and configuration object
74 |
75 | For organization sake, the CacheRegion object is used to configure
76 | the available cache regions, query regions for currently cached
77 | keys, and set batches of keys by region for immediate expiration.
78 |
79 | Caching can be turned off globally by setting enabled to False::
80 |
81 | CacheRegion.enabled = False
82 |
83 | Statistics should also be turned on or off globally::
84 |
85 | CacheRegion.statistics = False
86 |
87 | However, if only some namespaces should have statistics recorded,
88 | then this should be used directly.
89 |
90 | """
91 | regions = {}
92 | enabled = True
93 | statistics = True
94 |
95 | @classmethod
96 | def add_region(cls, name, expires, redis_expiration=60 * 60 * 24 * 7):
97 | """Add a cache region to the current configuration
98 |
99 | :param name: The name of the cache region
100 | :type name: string
101 | :param expires: The expiration in seconds.
102 | :type expires: integer
103 | :param redis_expiration: How long the Redis key expiration is
104 | set for. Defaults to 1 week.
105 | :type redis_expiration: integer
106 |
107 | """
108 | cls.regions[name] = dict(expires=expires,
109 | redis_expiration=redis_expiration)
110 |
111 | @classmethod
112 | def _add_tracking(cls, pipeline, region, namespace, key):
113 | """Add's basic set members for tracking
114 |
115 | This is added to a Redis pipeline for a single round-trip to
116 | Redis.
117 |
118 | """
119 | pipeline.sadd('retools:regions', region)
120 | pipeline.sadd('retools:%s:namespaces' % region, namespace)
121 | pipeline.sadd('retools:%s:%s:keys' % (region, namespace), key)
122 |
123 | @classmethod
124 | def invalidate(cls, region):
125 | """Invalidate an entire region
126 |
127 | .. note::
128 |
129 | This does not actually *clear* the region of data, but
130 | just sets the value to expire on next access.
131 |
132 | :param region: Region name
133 | :type region: string
134 |
135 | """
136 | redis = global_connection.redis
137 | namespaces = redis.smembers('retools:%s:namespaces' % region)
138 | if not namespaces:
139 | return None
140 |
141 | # Locate the longest expiration of a region, so we can set
142 | # the created value far enough back to force a refresh
143 | longest_expire = max(
144 | [x['expires'] for x in CacheRegion.regions.values()])
145 | new_created = time.time() - longest_expire - 3600
146 |
147 | for ns in namespaces:
148 | cache_keyset_key = 'retools:%s:%s:keys' % (region, ns)
149 | keys = set(['']) | redis.smembers(cache_keyset_key)
150 | for key in keys:
151 | cache_key = 'retools:%s:%s:%s' % (region, ns, key)
152 | if not redis.exists(cache_key):
153 | redis.srem(cache_keyset_key, key)
154 | else:
155 | redis.hset(cache_key, 'created', new_created)
156 |
157 | @classmethod
158 | def load(cls, region, namespace, key, regenerate=True, callable=None,
159 | statistics=None):
160 | """Load a value from Redis, and possibly recreate it
161 |
162 | This method is used to load a value from Redis, and usually
163 | regenerates the value using the callable when provided.
164 |
165 | If ``regenerate`` is ``False`` and a ``callable`` is not passed
166 | in, then :obj:`~retools.cache.NoneMarker` will be returned.
167 |
168 | :param region: Region name
169 | :type region: string
170 | :param namespace: Namespace for the value
171 | :type namespace: string
172 | :param key: Key for this value under the namespace
173 | :type key: string
174 | :param regenerate: If False, then existing keys will always be
175 | returned regardless of cache expiration. In the
176 | event that there is no existing key and no
177 | callable was provided, then a NoneMarker will
178 | be returned.
179 | :type regenerate: bool
180 | :param callable: A callable to use when the cached value needs to be
181 | created
182 | :param statistics: Whether or not hit/miss statistics should be
183 | updated
184 | :type statistics: bool
185 |
186 | """
187 | if statistics is None:
188 | statistics = cls.statistics
189 | redis = global_connection.redis
190 | now = time.time()
191 | region_settings = cls.regions[region]
192 | expires = region_settings['expires']
193 | redis_expiration = region_settings['redis_expiration']
194 |
195 | keys = CacheKey(region=region, namespace=namespace, key=key)
196 |
197 | # Create a transaction to update our hit counter for today and
198 | # retrieve the current value.
199 | if statistics:
200 | p = redis.pipeline(transaction=True)
201 | p.hgetall(keys.redis_key)
202 | p.get(keys.redis_hit_key)
203 | p.incr(keys.redis_hit_key)
204 | results = p.execute()
205 | result, existing_hits = results[0], results[1]
206 | if existing_hits is None:
207 | existing_hits = 0
208 | else:
209 | existing_hits = int(existing_hits)
210 | else:
211 | result = redis.hgetall(keys.redis_key)
212 |
213 | expired = True
214 | if result and now - float(result['created']) < expires:
215 | expired = False
216 |
217 | if (result and not regenerate) or not expired:
218 | # We have a result and were told not to regenerate so
219 | # we always return it immediately regardless of expiration,
220 | # or it's not expired
221 | return cPickle.loads(result['value'])
222 |
223 | if not result and not regenerate:
224 | # No existing value, but we were told not to regenerate it and
225 | # there's no callable, so we return a NoneMarker
226 | return NoneMarker
227 |
228 | # Don't wait for the lock if we have an old value
229 | if result and 'value' in result:
230 | timeout = 0
231 | else:
232 | timeout = 60 * 60
233 |
234 | try:
235 | with Lock(keys.lock_key, expires=expires, timeout=timeout):
236 | # Did someone else already create it?
237 | result = redis.hgetall(keys.redis_key)
238 | now = time.time()
239 | if result and 'value' in result and \
240 | now - float(result['created']) < expires:
241 | return cPickle.loads(result['value'])
242 |
243 | value = callable()
244 |
245 | p = redis.pipeline(transaction=True)
246 | p.hmset(keys.redis_key, {'created': now,
247 | 'value': cPickle.dumps(value)})
248 | p.expire(keys.redis_key, redis_expiration)
249 | cls._add_tracking(p, region, namespace, key)
250 | if statistics:
251 | p.getset(keys.redis_hit_key, 0)
252 | new_hits = int(p.execute()[0])
253 | else:
254 | p.execute()
255 | except LockTimeout:
256 | if result:
257 | return cPickle.loads(result['value'])
258 | else:
259 | # log some sort of error?
260 | return NoneMarker
261 |
262 | # Nothing else to do if not recording stats
263 | if not statistics:
264 | return value
265 |
266 | misses = new_hits - existing_hits
267 | if misses:
268 | p = redis.pipeline(transaction=True)
269 | p.incr(keys.redis_hit_key, amount=existing_hits)
270 | p.incr(keys.redis_miss_key, amount=misses)
271 | p.execute()
272 | else:
273 | redis.incr(keys.redis_hit_key, amount=existing_hits)
274 | return value
275 |
276 |
277 | def invalidate_region(region):
278 | """Invalidate all the namespace's in a given region
279 |
280 | .. note::
281 |
282 | This does not actually *clear* the region of data, but
283 | just sets the value to expire on next access.
284 |
285 | :param region: Region name
286 | :type region: string
287 |
288 | """
289 | CacheRegion.invalidate(region)
290 |
291 |
292 | def invalidate_callable(callable, *args):
293 | """Invalidate the cache for a callable
294 |
295 | :param callable: The callable that was cached
296 | :type callable: callable object
297 | :param \*args: Arguments the function was called with that
298 | should be invalidated. If the args is just the
299 | differentiator for the function, or not present, then all
300 | values for the function will be invalidated.
301 |
302 | Example::
303 |
304 | @cache_region('short_term', 'small_engine')
305 | def local_search(search_term):
306 | # do search and return it
307 |
308 | @cache_region('long_term')
309 | def lookup_folks():
310 | # look them up and return them
311 |
312 | # To clear local_search for search_term = 'fred'
313 | invalidate_function(local_search, 'fred')
314 |
315 | # To clear all cached variations of the local_search function
316 | invalidate_function(local_search)
317 |
318 | # To clear out lookup_folks
319 | invalidate_function(lookup_folks)
320 |
321 | """
322 | redis = global_connection.redis
323 | region = callable._region
324 | namespace = callable._namespace
325 |
326 | # Get the expiration for this region
327 | new_created = time.time() - CacheRegion.regions[region]['expires'] - 3600
328 |
329 | if args:
330 | try:
331 | cache_key = " ".join(map(str, args))
332 | except UnicodeEncodeError:
333 | cache_key = " ".join(map(unicode, args))
334 | redis.hset('retools:%s:%s:%s' % (region, namespace, cache_key),
335 | 'created', new_created)
336 | else:
337 | cache_keyset_key = 'retools:%s:%s:keys' % (region, namespace)
338 | keys = set(['']) | redis.smembers(cache_keyset_key)
339 | p = redis.pipeline(transaction=True)
340 | for key in keys:
341 | p.hset('retools:%s:%s:%s' % (region, namespace, key), 'created',
342 | new_created)
343 | p.execute()
344 | return None
345 | invalidate_function = invalidate_callable
346 |
347 |
348 | def cache_region(region, *deco_args, **kwargs):
349 | """Decorate a function such that its return result is cached,
350 | using a "region" to indicate the cache arguments.
351 |
352 | :param region: Name of the region to cache to
353 | :type region: string
354 | :param \*deco_args: Optional ``str()``-compatible arguments which will
355 | uniquely identify the key used by this decorated function, in addition
356 | to the positional arguments passed to the function itself at call time.
357 | This is recommended as it is needed to distinguish between any two
358 | functions or methods that have the same name (regardless of parent
359 | class or not).
360 | :type deco_args: list
361 |
362 | .. note::
363 |
364 | The function being decorated must only be called with
365 | positional arguments, and the arguments must support
366 | being stringified with ``str()``. The concatenation
367 | of the ``str()`` version of each argument, combined
368 | with that of the ``*args`` sent to the decorator,
369 | forms the unique cache key.
370 |
371 | Example::
372 |
373 | from retools.cache import cache_region
374 |
375 | @cache_region('short_term', 'load_things')
376 | def load(search_term, limit, offset):
377 | '''Load from a database given a search term, limit, offset.'''
378 | return database.query(search_term)[offset:offset + limit]
379 |
380 | The decorator can also be used with object methods. The ``self``
381 | argument is not part of the cache key. This is based on the
382 | actual string name ``self`` being in the first argument
383 | position::
384 |
385 | class MyThing(object):
386 | @cache_region('short_term', 'load_things')
387 | def load(self, search_term, limit, offset):
388 | '''Load from a database given a search term, limit, offset.'''
389 | return database.query(search_term)[offset:offset + limit]
390 |
391 | Classmethods work as well - use ``cls`` as the name of the class argument,
392 | and place the decorator around the function underneath ``@classmethod``::
393 |
394 | class MyThing(object):
395 | @classmethod
396 | @cache_region('short_term', 'load_things')
397 | def load(cls, search_term, limit, offset):
398 | '''Load from a database given a search term, limit, offset.'''
399 | return database.query(search_term)[offset:offset + limit]
400 |
401 | .. note::
402 |
403 | When a method on a class is decorated, the ``self`` or ``cls``
404 | argument in the first position is
405 | not included in the "key" used for caching.
406 |
407 | """
408 | def decorate(func):
409 | namespace = func_namespace(func, deco_args)
410 | skip_self = has_self_arg(func)
411 | regenerate = kwargs.get('regenerate', True)
412 |
413 | @wraps(func)
414 | def cached(*args):
415 | if region not in CacheRegion.regions:
416 | raise CacheConfigurationError(
417 | 'Cache region not configured: %s' % region)
418 | if not CacheRegion.enabled:
419 | return func(*args)
420 |
421 | if skip_self:
422 | try:
423 | cache_key = " ".join(map(str, args[1:]))
424 | except UnicodeEncodeError:
425 | cache_key = " ".join(map(unicode, args[1:]))
426 | else:
427 | try:
428 | cache_key = " ".join(map(str, args))
429 | except UnicodeEncodeError:
430 | cache_key = " ".join(map(unicode, args))
431 |
432 | def go():
433 | return func(*args)
434 | return CacheRegion.load(region, namespace, cache_key,
435 | regenerate=regenerate, callable=go)
436 | cached._region = region
437 | cached._namespace = namespace
438 | return cached
439 | return decorate
440 |
--------------------------------------------------------------------------------
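Pulling the pieces of ``retools.cache`` together, an end-to-end sketch based
on the docstrings above (``lookup_user`` and ``expensive_database_call`` are
hypothetical)::

    from retools.cache import CacheRegion, cache_region, invalidate_function

    CacheRegion.add_region('short_term', expires=60)
    CacheRegion.statistics = False   # skip hit/miss bookkeeping if unwanted

    @cache_region('short_term', 'users')
    def lookup_user(username):
        return expensive_database_call(username)

    lookup_user('fred')   # miss: value computed under the write-lock, stored
    lookup_user('fred')   # hit: served from Redis until the region expires
    invalidate_function(lookup_user, 'fred')   # regenerate on next call
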
/retools/event.py:
--------------------------------------------------------------------------------
1 | """Queue Events
2 |
3 | Queue events allow for customization of what occurs when a worker runs, and
4 | adds extension points for job execution.
5 |
6 | Job Events
7 | ==========
8 |
9 | job_prerun
10 | ----------
11 |
12 | Runs in the child process immediately before the job is performed.
13 |
14 | If a :obj:`job_prerun` function raises :exc:`~retools.exc.AbortJob` then the
15 | job will be aborted gracefully and :obj:`job_failure` will not be called.
16 |
17 | The signal handler will receive the job function as the sender with the keyword
18 | argument ``job``, which is a :class:`~retools.queue.Job` instance.
19 |
20 |
21 | job_postrun
22 | -----------
23 |
24 | Runs in the child process after the job is performed.
25 |
26 | These will be skipped if the job segfaults or raises an exception.
27 |
28 | The signal handler will receive the job function as the sender with the keyword
29 | arguments ``job`` and ``result``, which is the :class:`~retools.queue.Job`
30 | instance and the result of the function.
31 |
32 |
33 | job_wrapper
34 | -----------
35 |
36 | Runs in the child process and wraps the job execution
37 |
38 | Objects configured for this signal must be context managers, which guarantees
39 | they get the opportunity to run code both before and after the job. This is
40 | commonly used for locking and other events that require cleanup of a resource
41 | after the job is called, regardless of the outcome.
42 |
43 | The signal handler will be called with the job function, the
44 | :class:`~retools.queue.Job` instance, and the keyword arguments for the job.
45 |
46 |
47 | job_failure
48 | -----------
49 |
50 | Runs in the child process when a job throws an exception
51 |
52 | The signal handler will be called with the job function, the
53 | :class:`~retools.queue.Job` instance, and the exception object. The signal
54 | handler **should not raise an exception**.
55 |
56 | """
57 |
--------------------------------------------------------------------------------
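A sketch of a ``job_wrapper`` handler under the contract above: it is called
with the job function, the :class:`~retools.queue.Job` instance, and the
job's keyword arguments, and must act as a context manager (the class and the
registration path below are hypothetical)::

    import time

    class TimingWrapper(object):
        """Times a job; the __exit__ runs even when the job raises."""
        def __init__(self, job_func, job, **kwargs):
            self.job = job

        def __enter__(self):
            self.start = time.time()

        def __exit__(self, exc_type, exc_value, tb):
            print('job %s took %.2fs' % (self.job.job_id,
                                         time.time() - self.start))
            return False  # never swallow the job's exception

It would be registered like any other event, e.g.
``qm.subscriber('job_wrapper', 'mypackage.jobs:important', handler='mypackage.events:TimingWrapper')``.
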
/retools/exc.py:
--------------------------------------------------------------------------------
1 | """retools exceptions"""
2 |
3 |
4 | class RetoolsException(BaseException):
5 | """retools package base exception"""
6 |
7 |
8 | class ConfigurationError(RetoolsException):
9 | """Raised for general configuration errors"""
10 |
11 |
12 | class CacheConfigurationError(RetoolsException):
13 | """Raised when there's a cache configuration error"""
14 |
15 |
16 | class QueueError(RetoolsException):
17 | """Raised when there's an error in the queue code"""
18 |
19 |
20 | class AbortJob(RetoolsException):
21 | """Raised to abort execution of a job"""
22 |
--------------------------------------------------------------------------------
/retools/jobs.py:
--------------------------------------------------------------------------------
1 | import json
2 |
3 | def simplemath(arg1=0, arg2=2):
4 | return arg1 + arg2
5 |
6 |
7 | # Return the result after it runs
8 | def return_result(job=None, result=None):
9 | pl = job.redis.pipeline()
10 | result = json.dumps({'data': result})
11 | pl.lpush('retools:result:%s' % job.job_id, result)
12 | pl.expire('retools:result:%s' % job.job_id, 3600)
13 | pl.execute()
14 |
15 | # If it fails, return an 'ERROR' payload
16 | def return_failure(job=None, exc=None):
17 | pl = job.redis.pipeline()
18 | exc = json.dumps({'data': 'ERROR: %s' % exc})
19 | pl.lpush('retools:result:%s' % job.job_id, exc)
20 | pl.expire('retools:result:%s' % job.job_id, 3600)
21 | pl.execute()
22 |
23 |
24 | def add_events(qm):
25 | qm.subscriber('job_postrun', 'retools.jobs:simplemath',
26 | handler='retools.jobs:return_result')
27 | qm.subscriber('job_failure', 'retools.jobs:simplemath',
28 | handler='retools.jobs:return_failure')
29 |
30 | def wait_for_result(qm, job, **kwargs):
31 | job_id = qm.enqueue(job, **kwargs)
32 | result = qm.redis.blpop('retools:result:%s' % job_id)[1]
33 | return json.loads(result)['data']
34 |
--------------------------------------------------------------------------------
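The helpers above compose into a simple synchronous call pattern; a sketch,
assuming a retools worker is consuming the queue::

    from retools.queue import QueueManager
    from retools.jobs import add_events, wait_for_result

    qm = QueueManager()
    add_events(qm)   # wire up the result/failure handlers for simplemath
    # Blocks until the worker pushes onto the job's result list
    total = wait_for_result(qm, 'retools.jobs:simplemath', arg1=3, arg2=4)
    assert total == 7
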
/retools/limiter.py:
--------------------------------------------------------------------------------
1 | """Generic Limiter to ensure N parallel operations
2 |
3 | .. note::
4 |
5 | The limiter functionality is new.
6 | Please report any issues found on `the retools Github issue
7 | tracker `_.
8 |
9 | The limiter is useful when you want to make sure that only N operations for a given process happen at the same time,
10 | e.g. concurrent requests to the same domain.
11 |
12 | The limiter works by acquiring and releasing limits.
13 |
14 | Creating a limiter::
15 |
16 | from retools.limiter import Limiter
17 |
18 | def do_something():
19 | limiter = Limiter(limit=10, prefix='my-operation') # using default redis connection
20 |
21 | for i in range(100):
22 | if limiter.acquire_limit('operation-%d' % i):
23 | execute_my_operation()
24 | limiter.release_limit('operation-%d' % i) # since we are releasing it synchronously
25 | # all the 100 operations will be performed with
26 | # one of them locked at a time
27 |
28 | Specifying a default expiration in seconds::
29 |
30 | def do_something():
31 | limiter = Limiter(limit=10, expiration_in_seconds=45) # using default redis connection
32 |
33 | Specifying a redis connection::
34 |
35 | def do_something():
36 | limiter = Limiter(limit=10, redis=my_redis_connection)
37 |
38 | Every time you try to acquire a limit, the expired limits you previously acquired get removed from the set.
39 |
40 | This way, if your process dies in the middle of an operation, the keys will eventually expire.
41 | """
42 |
43 | import time
44 |
45 | import redis
46 |
47 | from retools import global_connection
48 | from retools.util import flip_pairs
49 |
50 |
51 | class Limiter(object):
52 | '''Configures and limits operations'''
53 | def __init__(self, limit, redis=None, prefix='retools_limiter', expiration_in_seconds=10):
54 | """Initializes a Limiter.
55 |
56 | :param limit: An integer that describes the limit on the number of items
57 | :param redis: A Redis instance. Defaults to the redis instance
58 | on the global_connection.
59 | :param prefix: The default limit set name. Defaults to 'retools_limiter'.
60 | :param expiration_in_seconds: The number in seconds that keys should be locked if not
61 | explicitly released.
62 | """
63 |
64 | self.limit = limit
65 | self.redis = redis or global_connection.redis
66 | self.prefix = prefix
67 | self.expiration_in_seconds = expiration_in_seconds
68 |
69 | def acquire_limit(self, key, expiration_in_seconds=None, retry=True):
70 | """Tries to acquire a limit for a given key. Returns True if the limit can be acquired.
71 |
72 | :param key: A string with the key to acquire the limit for.
73 | This key should be used when releasing.
74 | :param expiration_in_seconds: The number in seconds that this key should be locked if not
75 | explicitly released. If this is not passed, the default is used.
76 | :param retry: Internal parameter that specifies if the operation should be retried.
77 | Defaults to True.
78 | """
79 |
80 | limit_available = self.redis.zcard(self.prefix) < self.limit
81 |
82 | if limit_available:
83 | self.__lock_limit(key, expiration_in_seconds)
84 | return True
85 |
86 | if retry:
87 | self.redis.zremrangebyscore(self.prefix, '-inf', time.time())
88 | return self.acquire_limit(key, expiration_in_seconds, retry=False)
89 |
90 | return False
91 |
92 | def release_limit(self, key):
93 | """Releases a limit for a given key.
94 |
95 | :param key: A string with the key to release the limit on.
96 | """
97 |
98 | self.redis.zrem(self.prefix, key)
99 |
100 | def __lock_limit(self, key, expiration_in_seconds=None):
101 | expiration = expiration_in_seconds or self.expiration_in_seconds
102 | self.__zadd(self.prefix, key, time.time() + expiration)
103 |
104 | def __zadd(self, set_name, *args, **kwargs):
105 | """
106 | Custom ZADD interface that adapts to match the argument order of the currently
107 | used backend. Using this method makes it transparent whether you use a Redis
108 | or a StrictRedis connection.
109 |
110 | Found this code at https://github.com/ui/rq-scheduler/pull/17.
111 | """
112 | conn = self.redis
113 |
114 | # If we're dealing with StrictRedis, flip each pair of imaginary
115 | # (name, score) tuples in the args list
116 | if conn.__class__ is redis.StrictRedis: # StrictPipeline is a subclass of StrictRedis, too
117 | args = tuple(flip_pairs(args))
118 |
119 | return conn.zadd(set_name, *args)
120 |
--------------------------------------------------------------------------------
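Since ``release_limit`` is never called automatically, pairing it with
``try``/``finally`` guarantees the slot is freed even when the operation
raises (``do_work`` is a hypothetical function)::

    from retools.limiter import Limiter

    limiter = Limiter(limit=5, prefix='crawler')
    if limiter.acquire_limit('domain-example.com'):
        try:
            do_work()
        finally:
            limiter.release_limit('domain-example.com')
    # Should the process die before the finally block runs, the entry still
    # drops out of the sorted set after expiration_in_seconds (10 by default).
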
/retools/lock.py:
--------------------------------------------------------------------------------
1 | """A Redis backed distributed global lock
2 |
3 | This code uses the formula here:
4 | https://github.com/jeffomatic/redis-exp-lock-js
5 |
6 | It provides several improvements over the original version based on:
7 | http://chris-lamb.co.uk/2010/06/07/distributing-locking-python-and-redis/
8 |
9 | It also improves on the lock present in the Python redis library: since it
10 | utilizes Redis' Lua scripting, it no longer requires every client to have
11 | synchronized time.
12 |
13 | """
14 | # Copyright 2010,2011 Chris Lamb
15 |
16 | import time
17 | import uuid
18 | import random
19 |
20 | from retools import global_connection
21 |
22 |
23 | acquire_lua = """
24 | local result = redis.call('SETNX', KEYS[1], ARGV[1])
25 | if result == 1 then
26 | redis.call('EXPIRE', KEYS[1], ARGV[2])
27 | end
28 | return result"""
29 |
30 |
31 | release_lua = """
32 | if redis.call('GET', KEYS[1]) == ARGV[1] then
33 | return redis.call('DEL', KEYS[1])
34 | end
35 | return 0
36 | """
37 |
38 |
39 | class Lock(object):
40 | def __init__(self, key, expires=60, timeout=10, redis=None):
41 | """Distributed locking using Redis Lua scripting for CAS operations.
42 |
43 | Usage::
44 |
45 | with Lock('my_lock'):
46 | print "Critical section"
47 |
48 |         :param expires: We consider any existing lock older than
49 |             ``expires`` seconds to be invalid in order to
50 |             detect crashed clients. This value must be longer than the
51 |             time the critical section takes to execute.
52 |         :param timeout: If another client has already obtained the lock,
53 |             retry for at most ``timeout`` attempts, sleeping briefly with
54 |             exponential backoff between attempts. A value of 0 means we never wait.
55 | :param redis: The redis instance to use if the default global
56 | redis connection is not desired.
57 |
58 | """
59 | self.key = key
60 | self.timeout = timeout
61 | self.expires = expires
62 | if not redis:
63 | redis = global_connection.redis
64 | self.redis = redis
65 | self._acquire_lua = redis.register_script(acquire_lua)
66 | self._release_lua = redis.register_script(release_lua)
67 | self.lock_key = None
68 |
69 | def __enter__(self):
70 | self.acquire()
71 |
72 | def __exit__(self, exc_type, exc_value, traceback):
73 | self.release()
74 |
75 | def acquire(self):
76 | """Acquire the lock
77 |
78 |         :returns: True once the lock is acquired
79 |         :rtype: bool
80 |         :raises LockTimeout: if the lock cannot be acquired within ``timeout`` retries
81 | """
82 | self.lock_key = uuid.uuid4().hex
83 | timeout = self.timeout
84 | retry_sleep = 0.005
85 | while timeout >= 0:
86 | if self._acquire_lua(keys=[self.key],
87 | args=[self.lock_key, self.expires]):
88 |                 return True
89 | timeout -= 1
90 | if timeout >= 0:
91 | time.sleep(random.uniform(0, retry_sleep))
92 | retry_sleep = min(retry_sleep*2, 1)
93 | raise LockTimeout("Timeout while waiting for lock")
94 |
95 | def release(self):
96 | """Release the lock
97 |
98 | This only releases the lock if it matches the UUID we think it
99 | should have, to prevent deleting someone else's lock if we
100 | lagged.
101 |
102 | """
103 | if self.lock_key:
104 | self._release_lua(keys=[self.key], args=[self.lock_key])
105 | self.lock_key = None
106 |
107 |
108 | class LockTimeout(BaseException):
109 | """Raised in the event a timeout occurs while waiting for a lock"""
110 |
--------------------------------------------------------------------------------
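A short sketch of using the lock, assuming a local Redis server and a
hypothetical ``generate_report`` function. Note that ``LockTimeout`` derives
from ``BaseException``, so it must be caught explicitly rather than by a bare
``except Exception``:

.. code-block:: python

    from retools.lock import Lock, LockTimeout

    try:
        with Lock('reports:nightly', expires=120, timeout=10):
            generate_report()  # hypothetical critical section
    except LockTimeout:
        # another client held the lock for all of our retry attempts
        pass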
/retools/queue.py:
--------------------------------------------------------------------------------
1 | """Queue worker and manager
2 |
3 | .. note::
4 |
5 | The queueing functionality is new, and has gone through some preliminary
6 | testing. Please report any issues found on `the retools Github issue
7 |     tracker <https://github.com/bbangert/retools/issues>`_.
8 |
9 | Any function that takes keyword arguments can be a ``job`` that a worker runs.
10 | The :class:`~retools.queue.QueueManager` handles configuration and enqueuing jobs
11 | to be run.
12 |
13 | Declaring jobs:
14 |
15 | .. code-block:: python
16 |
17 | # mypackage/jobs.py
18 |
19 | # jobs
20 |
21 |     def default_job():
22 |         pass  # do some basic thing
23 |
24 |     def important(somearg=None):
25 |         pass  # do an important thing
26 |
27 | # event handlers
28 |
29 |     def my_event_handler(sender, **kwargs):
30 |         pass  # do something
31 |
32 |     def save_error(sender, **kwargs):
33 |         pass  # record error
34 |
35 |
36 | Running Jobs::
37 |
38 | from retools.queue import QueueManager
39 |
40 | qm = QueueManager()
41 | qm.subscriber('job_failure', handler='mypackage.jobs:save_error')
42 | qm.subscriber('job_postrun', 'mypackage.jobs:important',
43 | handler='mypackage.jobs:my_event_handler')
44 | qm.enqueue('mypackage.jobs:important', somearg='fred')
45 |
46 |
47 | .. note::
48 |
49 | The events for a job are registered with the :class:`QueueManager` and are
50 | encoded in the job's JSON blob. Updating events for a job will therefore
51 | only take effect for new jobs queued, and not existing ones on the queue.
52 |
53 | .. _queue_events:
54 |
55 | Events
56 | ======
57 |
58 | The retools queue has events available for additional functionality without
59 | having to subclass or directly extend retools. These functions will be run by
60 | the worker when the job is handled.
61 |
62 | Available events to register for:
63 |
64 | * **job_prerun**: Runs immediately before the job is run.
65 | * **job_wrapper**: Wraps the execution of the job; these should be context
66 |   managers.
67 | * **job_postrun**: Runs after the job completes *successfully*; this will not
68 |   be run if the job throws an exception.
69 | * **job_failure**: Runs when a job throws an exception.
70 |
71 | Event Function Signatures
72 | -------------------------
73 |
74 | Event functions have different call semantics; the following list shows how
75 | each event's functions will be called:
76 |
77 | * **job_prerun**: (job=job_instance)
78 | * **job_wrapper**: (job_function, job_instance, \*\*job_keyword_arguments)
79 | * **job_postrun**: (job=job_instance, result=job_function_result)
80 | * **job_failure**: (job=job_instance, exc=job_exception)
81 |
82 | Attributes of interest on the job instance are documented in the
83 | :meth:`Job.__init__` method.
84 |
85 | .. _queue_worker:
86 |
87 | Running the Worker
88 | ==================
89 |
90 | After installing ``retools``, a ``retools-worker`` command will be available
91 | that can spawn a worker. Queues to watch can be listed in priority order, in
92 | which case the worker will try each queue in that order when looking for jobs
93 | to process.
94 |
95 | Example invocation:
96 |
97 | .. code-block:: bash
98 |
99 | $ retools-worker high,medium,main
100 |
101 | """
102 | import os
103 | import signal
104 | import socket
105 | import subprocess
106 | import sys
107 | import time
108 | import uuid
109 | from datetime import datetime
110 | from optparse import OptionParser
111 |
112 | import pkg_resources
113 |
114 | try:
115 | import json
116 | except ImportError: # pragma: nocover
117 | import simplejson as json
118 |
119 | from setproctitle import setproctitle
120 |
121 | from retools import global_connection
122 | from retools.exc import ConfigurationError
123 | from retools.util import with_nested_contexts
124 |
125 |
126 | class QueueManager(object):
127 | """Configures and enqueues jobs"""
128 | def __init__(self, redis=None, default_queue_name='main',
129 | serializer=json.dumps, deserializer=json.loads):
130 | """Initialize a QueueManager
131 |
132 | :param redis: A Redis instance. Defaults to the redis instance
133 | on the global_connection.
134 | :param default_queue_name: The default queue name. Defaults to 'main'.
135 | :param serializer: A callable to serialize json data, defaults
136 | to json.dumps().
137 | :param deserializer: A callable to deserialize json data, defaults
138 | to json.loads().
139 |
140 | """
141 | self.default_queue_name = default_queue_name
142 | self.redis = redis or global_connection.redis
143 | self.names = {} # cache name lookups
144 | self.job_config = {}
145 | self.job_events = {}
146 | self.global_events = {}
147 | self.serializer = serializer
148 | self.deserializer = deserializer
149 |
150 | def set_queue_for_job(self, job_name, queue_name):
151 | """Set the queue that a given job name will go to
152 |
153 |         :param job_name: The pkg_resource name of the job function, e.g.
154 |             ``retools.jobs:my_function``
155 |         :param queue_name: Name of the queue on Redis that this job's
156 |             payloads should go to
157 |
158 | """
159 | self.job_config[job_name] = queue_name
160 |
161 | def get_job(self, job_id, queue_name=None, full_job=True):
162 | if queue_name is None:
163 | queue_name = self.default_queue_name
164 |
165 | full_queue_name = 'retools:queue:' + queue_name
166 | current_len = self.redis.llen(full_queue_name)
167 |
168 |         # this scan is O(n); we should do better
169 |         for i in range(current_len):
170 |             # the list can change while we iterate; lindex returns None
171 |             # for an index that no longer exists, so skip those entries
172 |             job = self.redis.lindex(full_queue_name, i)
173 |             job_data = self.deserializer(job) if job is not None else None
174 |
175 |             if job_data is not None and job_data['job_id'] == job_id:
176 | if not full_job:
177 | return job_data['job_id']
178 |
179 | return Job(full_queue_name, job, self.redis,
180 | serializer=self.serializer,
181 | deserializer=self.deserializer)
182 |
183 | raise IndexError(job_id)
184 |
185 | def get_jobs(self, queue_name=None, full_job=True):
186 | if queue_name is None:
187 | queue_name = self.default_queue_name
188 |
189 | full_queue_name = 'retools:queue:' + queue_name
190 | current_len = self.redis.llen(full_queue_name)
191 |
192 |         for i in range(current_len):
193 |             # the list can change while we iterate; lindex returns None
194 |             # for an index that no longer exists, so skip those entries
195 |             job = self.redis.lindex(full_queue_name, i)
196 |             if job is None:
197 |                 continue
198 |             if not full_job:
199 |                 yield self.deserializer(job)['job_id']
200 |                 continue
201 |             yield Job(full_queue_name, job, self.redis,
202 |                       serializer=self.serializer, deserializer=self.deserializer)
203 |
204 | def subscriber(self, event, job=None, handler=None):
205 | """Set events for a specific job or for all jobs
206 |
207 | :param event: The name of the event to subscribe to.
208 | :param job: Optional, a specific job to bind to.
209 | :param handler: The location of the handler to call.
210 |
211 | """
212 | if job:
213 | job_events = self.job_events.setdefault(job, {})
214 | job_events.setdefault(event, []).append(handler)
215 | else:
216 | self.global_events.setdefault(event, []).append(handler)
217 |
218 | def enqueue(self, job, **kwargs):
219 | """Enqueue a job
220 |
221 |         :param job: The pkg_resource name of the function, e.g.
222 |             ``retools.jobs:my_function``
223 |         :param kwargs: Keyword arguments the job should be called with.
224 |             These arguments must be serializable as JSON.
225 | :returns: The job id that was queued.
226 |
227 | """
228 | if job not in self.names:
229 | job_func = pkg_resources.EntryPoint.parse('x=%s' % job).load(False)
230 | self.names[job] = job_func
231 |
232 | queue_name = kwargs.pop('queue_name', None)
233 | if not queue_name:
234 |             queue_name = self.job_config.get(job, self.default_queue_name)
235 |
236 | metadata = kwargs.pop('metadata', None)
237 | if metadata is None:
238 | metadata = {}
239 |
240 | full_queue_name = 'retools:queue:' + queue_name
241 | job_id = uuid.uuid4().hex
242 | events = self.global_events.copy()
243 | if job in self.job_events:
244 | for k, v in self.job_events[job].items():
245 | events.setdefault(k, []).extend(v)
246 |
247 | job_dct = {
248 | 'job_id': job_id,
249 | 'job': job,
250 | 'kwargs': kwargs,
251 | 'events': events,
252 | 'metadata': metadata,
253 | 'state': {}
254 | }
255 | pipeline = self.redis.pipeline()
256 | pipeline.rpush(full_queue_name, self.serializer(job_dct))
257 | pipeline.sadd('retools:queues', queue_name)
258 | pipeline.execute()
259 | return job_id
260 |
261 |
262 | class Job(object):
263 | def __init__(self, queue_name, job_payload, redis,
264 | serializer=json.dumps, deserializer=json.loads):
265 | """Create a job instance given a JSON job payload
266 |
267 | :param job_payload: A JSON string representing a job.
268 | :param queue_name: The queue this job was pulled off of.
269 | :param redis: The redis instance used to pull this job.
270 |
271 | A :class:`Job` instance is created when the Worker pulls a
272 | job payload off the queue. The ``current_job`` global is set
273 | upon creation to indicate the current job being processed.
274 |
275 | Attributes of interest for event functions:
276 |
277 | * **job_id**: The Job's ID
278 |         * **job_name**: The Job's name (its package + function name)
279 | * **queue_name**: The queue this job came from
280 | * **kwargs**: The keyword arguments the job is called with
281 | * **state**: The state dict, this can be used by events to retain
282 | additional arguments. I.e. for a retry extension, retry information
283 | can be stored in the ``state`` dict.
284 | * **func**: A reference to the job function
285 | * **redis**: A :class:`redis.Redis` instance.
286 | * **serializer**: A callable to serialize json data, defaults
287 | to :func:`json.dumps`.
288 | * **deserializer**: A callable to deserialize json data, defaults
289 | to :func:`json.loads`.
290 | """
291 | global current_job
292 | current_job = self
293 |
294 | self.deserializer = deserializer
295 | self.serializer = serializer
296 | self.payload = payload = deserializer(job_payload)
297 | self.job_id = payload['job_id']
298 | self.job_name = payload['job']
299 | self.queue_name = queue_name
300 | self.kwargs = payload['kwargs']
301 | self.state = payload['state']
302 | self.metadata = payload.get('metadata', {})
303 | self.events = {}
304 | self.redis = redis
305 | self.func = None
306 | self.events = self.load_events(event_dict=payload['events'])
307 |
308 | def __repr__(self):
309 | """Display representation of self"""
310 | res = '<%s object at %s: ' % (self.__class__.__name__, hex(id(self)))
311 | res += 'Events: %s, ' % self.events
312 | res += 'State: %s, ' % self.state
313 | res += 'Job ID: %s, ' % self.job_id
314 | res += 'Job Name: %s, ' % self.job_name
315 | res += 'Queue: %s' % self.queue_name
316 | res += '>'
317 | return res
318 |
319 | @staticmethod
320 | def load_events(event_dict):
321 | """Load all the events given the references
322 |
323 | :param event_dict: A dictionary of events keyed by event name
324 | to a list of handlers for the event.
325 |
326 | """
327 | events = {}
328 | for k, v in event_dict.items():
329 | funcs = []
330 | for name in v:
331 | mod_name, func_name = name.split(':')
332 | try:
333 | mod = sys.modules[mod_name]
334 | except KeyError:
335 | __import__(mod_name)
336 | mod = sys.modules[mod_name]
337 | funcs.append(getattr(mod, func_name))
338 | events[k] = funcs
339 | return events
340 |
341 | def perform(self):
342 | """Runs the job calling all the job signals as appropriate"""
343 | self.run_event('job_prerun')
344 | try:
345 | if 'job_wrapper' in self.events:
346 | result = with_nested_contexts(self.events['job_wrapper'],
347 | self.func, [self], self.kwargs)
348 | else:
349 | result = self.func(**self.kwargs)
350 | self.run_event('job_postrun', result=result)
351 | return True
352 | except Exception, exc:
353 | self.run_event('job_failure', exc=exc)
354 | return False
355 |
356 | def to_dict(self):
357 | return {
358 | 'job_id': self.job_id,
359 | 'job': self.job_name,
360 | 'kwargs': self.kwargs,
361 | 'events': self.payload['events'],
362 | 'state': self.state,
363 | 'metadata': self.metadata}
364 |
365 | def to_json(self):
366 | return self.serializer(self.to_dict())
367 |
368 | def enqueue(self):
369 | """Queue this job in Redis"""
370 | full_queue_name = self.queue_name
371 | queue_name = full_queue_name[len('retools:queue:'):]
372 | pipeline = self.redis.pipeline()
373 | pipeline.rpush(full_queue_name, self.to_json())
374 | pipeline.sadd('retools:queues', queue_name)
375 | pipeline.execute()
376 | return self.job_id
377 |
378 | def run_event(self, event, **kwargs):
379 | """Run all registered events for this job"""
380 | for event_func in self.events.get(event, []):
381 | event_func(job=self, **kwargs)
382 |
383 |
384 | class Worker(object):
385 | """A Worker works on jobs"""
386 | def __init__(self, queues, redis=None, serializer=json.dumps, deserializer=json.loads):
387 | """Create a worker
388 |
389 | :param queues: List of queues to process
390 | :type queues: list
391 | :param redis: Redis instance to use, defaults to the global_connection.
392 |
393 |         Whether Redis blocking list pops are used for lower-latency job
394 |         processing is controlled by the ``blocking`` argument to
395 |         :meth:`work`.
396 |
397 | """
398 | self.redis = redis or global_connection.redis
399 | self.serializer = serializer
400 | self.deserializer = deserializer
401 | if not queues:
402 | raise ConfigurationError(
403 | "No queues were configured for this worker")
404 | self.queues = ['retools:queue:%s' % x for x in queues]
405 | self.paused = self.shutdown = False
406 | self.job = None
407 | self.child_id = None
408 | self.jobs = {} # job function import cache
409 |
410 | @classmethod
411 | def get_workers(cls, redis=None):
412 | redis = redis or global_connection.redis
413 | for worker_id in redis.smembers('retools:workers'):
414 | yield cls.from_id(worker_id)
415 |
416 | @classmethod
417 | def get_worker_ids(cls, redis=None):
418 | redis = redis or global_connection.redis
419 | return redis.smembers('retools:workers')
420 |
421 | @classmethod
422 | def from_id(cls, worker_id, redis=None):
423 | redis = redis or global_connection.redis
424 | if not redis.sismember("retools:workers", worker_id):
425 | raise IndexError(worker_id)
426 | queues = redis.get("retools:worker:%s:queues" % worker_id)
427 | queues = queues.split(',')
428 | return Worker(queues, redis)
429 |
430 | @property
431 | def worker_id(self):
432 | """Returns this workers id based on hostname, pid, queues"""
433 | return '%s:%s:%s' % (socket.gethostname(), os.getpid(),
434 | self.queue_names)
435 |
436 | @property
437 | def queue_names(self):
438 | names = [x[len('retools:queue:'):] for x in self.queues]
439 | return ','.join(names)
440 |
441 | def work(self, interval=5, blocking=False):
442 | """Work on jobs
443 |
444 | This is the main method of the Worker, and will register itself
445 | with Redis as a Worker, wait for jobs, then process them.
446 |
447 | :param interval: Time in seconds between polling.
448 | :type interval: int
449 | :param blocking: Whether or not blocking pop should be used. If the
450 | blocking pop is used, then the worker will block for
451 | ``interval`` seconds at a time waiting for a new
452 | job. This affects how often the worker can respond to
453 | signals.
454 | :type blocking: bool
455 |
456 | """
457 | self.set_proc_title('Starting')
458 | self.startup()
459 |
460 | try:
461 | while 1:
462 | if self.shutdown:
463 | break
464 |
465 |                 # Set this first since reserve may block for a while
466 | self.set_proc_title("Waiting for %s" % self.queue_names)
467 |
468 | if not self.paused and self.reserve(interval, blocking):
469 | self.working_on()
470 | self.child_id = os.fork()
471 | if self.child_id:
472 | self.set_proc_title("Forked %s at %s" % (
473 | self.child_id, datetime.now()))
474 | try:
475 | os.wait()
476 | except OSError:
477 | # got killed
478 | pass
479 | else:
480 | self.set_proc_title("Processing %s since %s" % (
481 | self.job.queue_name, datetime.now()))
482 | self.perform()
483 | sys.exit()
484 | self.done_working()
485 | self.child_id = None
486 | self.job = None
487 | else:
488 | if self.paused:
489 | self.set_proc_title("Paused")
490 | elif not blocking:
491 | self.set_proc_title(
492 | "Waiting for %s" % self.queue_names)
493 | time.sleep(interval)
494 | finally:
495 | self.unregister_worker()
496 |
497 | def reserve(self, interval, blocking):
498 | """Attempts to pull a job off the queue(s)"""
499 | queue_name = None
500 | if blocking:
501 | result = self.redis.blpop(self.queues, timeout=interval)
502 | if result:
503 | queue_name, job_payload = result
504 | else:
505 | for queue in self.queues:
506 | job_payload = self.redis.lpop(queue)
507 | if job_payload:
508 | queue_name = queue
509 | break
510 | if not queue_name:
511 | return False
512 |
513 | self.job = job = Job(queue_name=queue_name, job_payload=job_payload,
514 | redis=self.redis, serializer=self.serializer,
515 | deserializer=self.deserializer)
516 | try:
517 | job.func = self.jobs[job.job_name]
518 | except KeyError:
519 | mod_name, func_name = job.job_name.split(':')
520 | __import__(mod_name)
521 | mod = sys.modules[mod_name]
522 | job.func = self.jobs[job.job_name] = getattr(mod, func_name)
523 | return True
524 |
525 | def set_proc_title(self, title):
526 | """Sets the active process title, retains the retools prefic"""
527 | setproctitle('retools: ' + title)
528 |
529 | def register_worker(self):
530 | """Register this worker with Redis"""
531 | pipeline = self.redis.pipeline()
532 | pipeline.sadd("retools:workers", self.worker_id)
533 | pipeline.set("retools:worker:%s:started" % self.worker_id, time.time())
534 | pipeline.set("retools:worker:%s:queues" % self.worker_id,
535 | self.queue_names)
536 | pipeline.execute()
537 |
538 | def unregister_worker(self, worker_id=None):
539 | """Unregister this worker with Redis"""
540 | worker_id = worker_id or self.worker_id
541 | pipeline = self.redis.pipeline()
542 | pipeline.srem("retools:workers", worker_id)
543 | pipeline.delete("retools:worker:%s" % worker_id)
544 | pipeline.delete("retools:worker:%s:started" % worker_id)
545 | pipeline.delete("retools:worker:%s:queues" % worker_id)
546 | pipeline.execute()
547 |
548 | def startup(self):
549 | """Runs basic startup tasks"""
550 | self.register_signal_handlers()
551 | self.prune_dead_workers()
552 | self.register_worker()
553 |
554 | def trigger_shutdown(self, *args):
555 | """Graceful shutdown of the worker"""
556 | self.shutdown = True
557 |
558 | def immediate_shutdown(self, *args):
559 | """Immediately shutdown the worker, kill child process if needed"""
560 | self.shutdown = True
561 | self.kill_child()
562 |
563 | def kill_child(self, *args):
564 | """Kill the child process immediately"""
565 | if self.child_id:
566 | os.kill(self.child_id, signal.SIGTERM)
567 |
568 | def pause_processing(self, *args):
569 | """Cease pulling jobs off the queue for processing"""
570 | self.paused = True
571 |
572 | def resume_processing(self, *args):
573 | """Resume pulling jobs for processing off the queue"""
574 | self.paused = False
575 |
576 | def prune_dead_workers(self):
577 | """Prune dead workers from Redis"""
578 | all_workers = self.redis.smembers("retools:workers")
579 | known_workers = self.worker_pids()
580 | hostname = socket.gethostname()
581 | for worker in all_workers:
582 | host, pid, queues = worker.split(':')
583 | if host != hostname or pid in known_workers:
584 | continue
585 | self.unregister_worker(worker)
586 |
587 | def register_signal_handlers(self):
588 | """Setup all the signal handlers"""
589 | signal.signal(signal.SIGTERM, self.immediate_shutdown)
590 | signal.signal(signal.SIGINT, self.immediate_shutdown)
591 | signal.signal(signal.SIGQUIT, self.trigger_shutdown)
592 | signal.signal(signal.SIGUSR1, self.kill_child)
593 | signal.signal(signal.SIGUSR2, self.pause_processing)
594 | signal.signal(signal.SIGCONT, self.resume_processing)
595 |
596 | def working_on(self):
597 | """Indicate with Redis what we're working on"""
598 | data = {
599 | 'queue': self.job.queue_name,
600 | 'run_at': time.time(),
601 | 'payload': self.job.payload
602 | }
603 | self.redis.set("retools:worker:%s" % self.worker_id,
604 | self.serializer(data))
605 |
606 | def done_working(self):
607 | """Called when we're done working on a job"""
608 | self.redis.delete("retools:worker:%s" % self.worker_id)
609 |
610 | def worker_pids(self):
611 | """Returns a list of all the worker processes"""
612 | ps = subprocess.Popen("ps -U 0 -A | grep 'retools:'", shell=True,
613 | stdout=subprocess.PIPE)
614 | data = ps.stdout.read()
615 | ps.stdout.close()
616 | ps.wait()
617 | return [x.split()[0] for x in data.split('\n') if x]
618 |
619 | def perform(self):
620 | """Run the job and call the appropriate signal handlers"""
621 | self.job.perform()
622 |
623 |
624 | def run_worker():
625 | usage = "usage: %prog queues"
626 | parser = OptionParser(usage=usage)
627 | parser.add_option("--interval", dest="interval", type="int", default=5,
628 | help="Polling interval")
629 | parser.add_option("-b", dest="blocking", action="store_true",
630 | default=False,
631 | help="Whether to use blocking queue semantics")
632 | (options, args) = parser.parse_args()
633 |
634 | if len(args) < 1:
635 |         sys.exit("Error: Failed to provide a comma-separated list of queues")
636 |
637 | worker = Worker(queues=args[0].split(','))
638 | worker.work(interval=options.interval, blocking=options.blocking)
639 | sys.exit()
640 |
--------------------------------------------------------------------------------
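Tying the queue pieces together, a minimal end-to-end sketch, assuming a
``mypackage.jobs`` module like the one shown in the module docstring and a
``retools-worker high,main`` process running elsewhere:

.. code-block:: python

    from retools.queue import QueueManager

    qm = QueueManager(default_queue_name='main')
    qm.set_queue_for_job('mypackage.jobs:important', 'high')

    # handlers are stored as dotted names in the job's JSON payload and
    # imported by the worker, so they must be importable on the worker side
    qm.subscriber('job_failure', handler='mypackage.jobs:save_error')

    job_id = qm.enqueue('mypackage.jobs:important', somearg='fred')

    # inspect the jobs still waiting on the 'high' queue
    for job in qm.get_jobs(queue_name='high'):
        print job.job_id, job.job_name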
/retools/tests/__init__.py:
--------------------------------------------------------------------------------
1 | #
2 |
--------------------------------------------------------------------------------
/retools/tests/jobs.py:
--------------------------------------------------------------------------------
1 | def echo_default(default='hello'): # pragma: nocover
2 | return default
3 |
4 |
5 | def echo_back(): # pragma: nocover
6 | return 'howdy all'
7 |
--------------------------------------------------------------------------------
/retools/tests/test_cache.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import unittest
3 | import time
4 | import cPickle
5 | from contextlib import nested
6 |
7 | import redis
8 | import redis.client
9 |
10 | from nose.tools import raises
11 | from nose.tools import eq_
12 | from mock import Mock
13 | from mock import patch
14 |
15 |
16 | class TestCacheKey(unittest.TestCase):
17 | def _makeOne(self):
18 | from retools.cache import CacheKey
19 | return CacheKey
20 |
21 | def test_key_config(self):
22 | CK = self._makeOne()('home', 'my_func', '1 2 3', today='2004-02-02')
23 | eq_(CK.redis_hit_key, 'retools:hits:2004-02-02:home:my_func:1 2 3')
24 |
25 |
26 | class TestCacheRegion(unittest.TestCase):
27 | def _makeOne(self):
28 | from retools.cache import CacheRegion
29 | CacheRegion.enabled = True
30 | return CacheRegion
31 |
32 | def _marker(self):
33 | from retools.cache import NoneMarker
34 | return NoneMarker
35 |
36 | def test_add_region(self):
37 | CR = self._makeOne()
38 | CR.add_region('short_term', 60)
39 | eq_(CR.regions['short_term']['expires'], 60)
40 |
41 | def test_generate_value(self):
42 | mock_redis = Mock(spec=redis.Redis)
43 | mock_pipeline = Mock(spec=redis.client.Pipeline)
44 | results = ['0', (None, '0')]
45 |
46 | def side_effect(*args, **kwargs):
47 | return results.pop()
48 |
49 | mock_redis.pipeline.return_value = mock_pipeline
50 | mock_pipeline.execute.side_effect = side_effect
51 | mock_redis.hgetall.return_value = {}
52 | with patch('retools.global_connection._redis', mock_redis):
53 | CR = self._makeOne()
54 | CR.add_region('short_term', 60)
55 |
56 | def a_func():
57 | return "This is a value: %s" % time.time()
58 | value = CR.load('short_term', 'my_func', '1 2 3', callable=a_func)
59 | assert 'This is a value' in value
60 | exec_calls = [x for x in mock_pipeline.method_calls \
61 | if x[0] == 'execute']
62 | eq_(len(mock_pipeline.method_calls), 11)
63 | eq_(len(exec_calls), 2)
64 |
65 | def test_existing_value_no_regen(self):
66 | mock_redis = Mock(spec=redis.client.Redis)
67 | mock_pipeline = Mock(spec=redis.client.Pipeline)
68 | results = ['0', ({'created': '111',
69 | 'value': "S'This is a value: 1311702429.28'\n."},
70 | '0')]
71 |
72 | def side_effect(*args, **kwargs):
73 | return results.pop()
74 |
75 | mock_redis.pipeline.return_value = mock_pipeline
76 | mock_pipeline.execute.side_effect = side_effect
77 | mock_redis.hgetall.return_value = {}
78 | with patch('retools.global_connection._redis', mock_redis):
79 | CR = self._makeOne()
80 | CR.add_region('short_term', 60)
81 | value = CR.load('short_term', 'my_func', '1 2 3', regenerate=False)
82 | assert 'This is a value' in value
83 | exec_calls = [x for x in mock_pipeline.method_calls \
84 | if x[0] == 'execute']
85 | eq_(len(mock_pipeline.method_calls), 4)
86 | eq_(len(exec_calls), 1)
87 |
88 | def test_value_created_after_check_but_expired(self):
89 | mock_redis = Mock(spec=redis.client.Redis)
90 | mock_pipeline = Mock(spec=redis.client.Pipeline)
91 | results = ['0', (None, '0')]
92 |
93 | def side_effect(*args, **kwargs):
94 | return results.pop()
95 |
96 | mock_redis.pipeline.return_value = mock_pipeline
97 | mock_pipeline.execute.side_effect = side_effect
98 | mock_redis.hgetall.return_value = {'created': '1',
99 | 'value': "S'This is a value: 1311702429.28'\n."}
100 | with patch('retools.global_connection._redis', mock_redis):
101 | CR = self._makeOne()
102 | CR.add_region('short_term', 60)
103 |
104 | def a_func():
105 | return "This is a value: %s" % time.time()
106 |
107 | value = CR.load('short_term', 'my_func', '1 2 3', callable=a_func)
108 | assert 'This is a value' in value
109 | exec_calls = [x for x in mock_pipeline.method_calls \
110 | if x[0] == 'execute']
111 | eq_(len(mock_pipeline.method_calls), 11)
112 | eq_(len(exec_calls), 2)
113 |
114 | def test_value_expired_and_no_lock(self):
115 | mock_redis = Mock(spec=redis.client.Redis)
116 | mock_pipeline = Mock(spec=redis.client.Pipeline)
117 | results = ['0', ({'created': '111',
118 | 'value': "S'This is a value: 1311702429.28'\n."},
119 | '0')]
120 |
121 | def side_effect(*args, **kwargs):
122 | return results.pop()
123 |
124 | mock_redis.pipeline.return_value = mock_pipeline
125 | mock_pipeline.execute.side_effect = side_effect
126 | mock_redis.hgetall.return_value = {}
127 | mock_redis.exists.return_value = False
128 | with patch('retools.global_connection._redis', mock_redis):
129 | CR = self._makeOne()
130 | CR.add_region('short_term', 60)
131 |
132 | def a_func():
133 | return "This is a value: %s" % time.time()
134 |
135 | value = CR.load('short_term', 'my_func', '1 2 3', callable=a_func)
136 | assert 'This is a value' in value
137 | exec_calls = [x for x in mock_pipeline.method_calls \
138 | if x[0] == 'execute']
139 | eq_(len(mock_pipeline.method_calls), 11)
140 | eq_(len(exec_calls), 2)
141 |
142 | def test_generate_value_no_stats(self):
143 | mock_redis = Mock(spec=redis.client.Redis)
144 | mock_pipeline = Mock(spec=redis.client.Pipeline)
145 | results = ['0', (None, '0')]
146 |
147 | def side_effect(*args, **kwargs):
148 | return results.pop()
149 |
150 | mock_redis.pipeline.return_value = mock_pipeline
151 | mock_pipeline.execute.side_effect = side_effect
152 | mock_redis.hgetall.return_value = {}
153 | with patch('retools.global_connection._redis', mock_redis):
154 | CR = self._makeOne()
155 | CR.add_region('short_term', 60)
156 |
157 | now = time.time()
158 |
159 | def a_func():
160 | return "This is a value: %s" % now
161 | value = CR.load('short_term', 'my_func', '1 2 3', callable=a_func,
162 | statistics=False)
163 | assert 'This is a value' in value
164 | assert str(now) in value
165 | exec_calls = [x for x in mock_pipeline.method_calls \
166 | if x[0] == 'execute']
167 | eq_(len(mock_pipeline.method_calls), 6)
168 | eq_(len(exec_calls), 1)
169 |
170 | def test_generate_value_other_creator(self):
171 | mock_redis = Mock(spec=redis.client.Redis)
172 | mock_pipeline = Mock(spec=redis.client.Pipeline)
173 | now = time.time()
174 | results = ['0', (None, None)]
175 |
176 | def side_effect(*args, **kwargs):
177 | return results.pop()
178 |
179 | mock_redis.pipeline.return_value = mock_pipeline
180 | mock_pipeline.execute.side_effect = side_effect
181 | mock_redis.hgetall.return_value = {'created': now,
182 | 'value': cPickle.dumps("This is a NEW value")}
183 | with patch('retools.global_connection._redis', mock_redis):
184 | CR = self._makeOne()
185 | CR.add_region('short_term', 60)
186 |
187 | def a_func(): # pragma: nocover
188 | return "This is a value: %s" % time.time()
189 | value = CR.load('short_term', 'my_func', '1 2 3', callable=a_func)
190 | assert 'This is a NEW value' in value
191 | exec_calls = [x for x in mock_pipeline.method_calls \
192 | if x[0] == 'execute']
193 | eq_(len(exec_calls), 1)
194 |
195 | def test_existing_value(self):
196 | mock_redis = Mock(spec=redis.client.Redis)
197 | mock_pipeline = Mock(spec=redis.client.Pipeline)
198 | now = time.time()
199 | mock_redis.pipeline.return_value = mock_pipeline
200 | mock_pipeline.execute.return_value = ({'created': now,
201 | 'value': cPickle.dumps("This is a value")}, '0')
202 | with patch('retools.global_connection._redis', mock_redis):
203 | CR = self._makeOne()
204 | CR.add_region('short_term', 60)
205 |
206 | called = []
207 |
208 | def a_func(): # pragma: nocover
209 | called.append(1)
210 | return "This is a value: %s" % time.time()
211 | value = CR.load('short_term', 'my_func', '1 2 3', callable=a_func)
212 | assert 'This is a value' in value
213 | eq_(called, [])
214 | exec_calls = [x for x in mock_pipeline.method_calls \
215 | if x[0] == 'execute']
216 | eq_(len(exec_calls), 1)
217 |
218 | def test_new_value_and_misses(self):
219 | mock_redis = Mock(spec=redis.client.Redis)
220 | mock_pipeline = Mock(spec=redis.client.Pipeline)
221 | results = [None, ['30'], (None, '0')]
222 |
223 | def side_effect(*args, **kwargs):
224 | return results.pop()
225 |
226 | mock_redis.pipeline.return_value = mock_pipeline
227 | mock_pipeline.execute.side_effect = side_effect
228 | mock_redis.hgetall.return_value = {}
229 | with patch('retools.global_connection._redis', mock_redis):
230 | CR = self._makeOne()
231 | CR.add_region('short_term', 60)
232 |
233 | called = []
234 |
235 | def a_func(): # pragma: nocover
236 | called.append(1)
237 | return "This is a value: %s" % time.time()
238 | value = CR.load('short_term', 'my_func', '1 2 3', callable=a_func)
239 | assert 'This is a value' in value
240 | exec_calls = [x for x in mock_pipeline.method_calls \
241 | if x[0] == 'execute']
242 | eq_(len(exec_calls), 3)
243 |
244 | # Check that we increment the miss counter by 30
245 | last_incr_call = filter(lambda x: x[0] == 'incr',
246 | mock_pipeline.method_calls)[-1]
247 | eq_(last_incr_call[2], {'amount': 30})
248 |
249 | def test_return_marker(self):
250 | mock_redis = Mock(spec=redis.client.Redis)
251 | mock_pipeline = Mock(spec=redis.client.Pipeline)
252 | mock_redis.pipeline.return_value = mock_pipeline
253 | mock_pipeline.execute.return_value = (None, '0')
254 | with patch('retools.global_connection._redis', mock_redis):
255 | CR = self._makeOne()
256 | CR.add_region('short_term', 60)
257 |
258 | value = CR.load('short_term', 'my_func', '1 2 3', regenerate=False)
259 | eq_(value, self._marker())
260 | exec_calls = [x for x in mock_pipeline.method_calls \
261 | if x[0] == 'execute']
262 | eq_(len(exec_calls), 1)
263 |
264 |
265 | class TestInvalidateRegion(unittest.TestCase):
266 | def _makeOne(self):
267 | from retools.cache import invalidate_region
268 | return invalidate_region
269 |
270 | def _makeCR(self):
271 | from retools.cache import CacheRegion
272 | CacheRegion.regions = {}
273 | return CacheRegion
274 |
275 | def test_invalidate_region_empty(self):
276 | mock_redis = Mock(spec=redis.client.Redis)
277 | mock_redis.smembers.return_value = set([])
278 |
279 | invalidate_region = self._makeOne()
280 | with patch('retools.global_connection._redis', mock_redis):
281 | CR = self._makeCR()
282 | CR.add_region('short_term', expires=600)
283 |
284 | invalidate_region('short_term')
285 | eq_(len(mock_redis.method_calls), 1)
286 |
287 | def test_invalidate_small_region(self):
288 | mock_redis = Mock(spec=redis.client.Redis)
289 | results = [set(['keyspace']), set(['a_func'])]
290 |
291 | def side_effect(*args):
292 | return results.pop()
293 |
294 | mock_redis.smembers.side_effect = side_effect
295 |
296 | invalidate_region = self._makeOne()
297 | with patch('retools.global_connection._redis', mock_redis):
298 | CR = self._makeCR()
299 | CR.add_region('short_term', expires=600)
300 |
301 | invalidate_region('short_term')
302 | calls = mock_redis.method_calls
303 | eq_(calls[0][1], ('retools:short_term:namespaces',))
304 | eq_(len(calls), 6)
305 |
306 | def test_remove_nonexistent_key(self):
307 | mock_redis = Mock(spec=redis.client.Redis)
308 | results = [set(['keyspace']), set(['a_func'])]
309 |
310 | def side_effect(*args):
311 | return results.pop()
312 |
313 | mock_redis.smembers.side_effect = side_effect
314 | mock_redis.exists.return_value = False
315 |
316 | invalidate_region = self._makeOne()
317 | with patch('retools.global_connection._redis', mock_redis):
318 | CR = self._makeCR()
319 | CR.add_region('short_term', expires=600)
320 |
321 | invalidate_region('short_term')
322 | calls = mock_redis.method_calls
323 | eq_(calls[0][1], ('retools:short_term:namespaces',))
324 | eq_(len(calls), 6)
325 |
326 |
327 | class TestInvalidFunction(unittest.TestCase):
328 | def _makeOne(self):
329 | from retools.cache import invalidate_function
330 | return invalidate_function
331 |
332 | def _makeCR(self):
333 | from retools.cache import CacheRegion
334 | CacheRegion.regions = {}
335 | return CacheRegion
336 |
337 | def test_invalidate_function_without_args(self):
338 |
339 | def my_func(): # pragma: nocover
340 | return "Hello"
341 | my_func._region = 'short_term'
342 | my_func._namespace = 'retools:a_key'
343 |
344 | mock_redis = Mock(spec=redis.client.Redis)
345 | mock_redis.smembers.return_value = set(['1'])
346 |
347 | mock_pipeline = Mock(spec=redis.client.Pipeline)
348 | mock_redis.pipeline.return_value = mock_pipeline
349 |
350 | invalidate_function = self._makeOne()
351 | with patch('retools.global_connection._redis', mock_redis):
352 | CR = self._makeCR()
353 | CR.add_region('short_term', expires=600)
354 |
355 | invalidate_function(my_func)
356 | calls = mock_redis.method_calls
357 | eq_(calls[0][1], ('retools:short_term:retools:a_key:keys',))
358 | eq_(len(calls), 2)
359 |
360 | def test_invalidate_function_with_args(self):
361 |
362 | def my_func(name): # pragma: nocover
363 | return "Hello %s" % name
364 | my_func._region = 'short_term'
365 | my_func._namespace = 'retools:a_key decarg'
366 |
367 | mock_redis = Mock(spec=redis.client.Redis)
368 | mock_redis.smembers.return_value = set(['1'])
369 |
370 | invalidate_function = self._makeOne()
371 | with patch('retools.global_connection._redis', mock_redis):
372 | CR = self._makeCR()
373 | CR.add_region('short_term', expires=600)
374 |
375 | invalidate_function(my_func, 'fred')
376 | calls = mock_redis.method_calls
377 | eq_(calls[0][1][0], 'retools:short_term:retools:a_key decarg:fred')
378 | eq_(calls[0][0], 'hset')
379 | eq_(len(calls), 1)
380 |
381 | # And a unicode key
382 | mock_redis.reset_mock()
383 | invalidate_function(my_func,
384 | u"\u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac")
385 | calls = mock_redis.method_calls
386 | eq_(calls[0][1][0],
387 | u'retools:short_term:retools:a_key' \
388 | u' decarg:\u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac')
389 | eq_(calls[0][0], 'hset')
390 | eq_(len(calls), 1)
391 |
392 |
393 | class TestCacheDecorator(unittest.TestCase):
394 | def _makeOne(self):
395 | from retools.cache import CacheRegion
396 | CacheRegion.enabled = True
397 | CacheRegion.regions = {}
398 | return CacheRegion
399 |
400 | def _decorateFunc(self, func, *args):
401 | from retools.cache import cache_region
402 | return cache_region(*args)(func)
403 |
404 | def test_no_region(self):
405 | from retools.exc import CacheConfigurationError
406 |
407 | @raises(CacheConfigurationError)
408 | def test_it():
409 | CR = self._makeOne()
410 | CR.add_region('short_term', 60)
411 |
412 | def dummy_func(): # pragma: nocover
413 | return "This is a value: %s" % time.time()
414 | decorated = self._decorateFunc(dummy_func, 'long_term')
415 | decorated()
416 | test_it()
417 |
418 | def test_generate(self):
419 | mock_redis = Mock(spec=redis.client.Redis)
420 | mock_pipeline = Mock(spec=redis.client.Pipeline)
421 | results = ['0', (None, '0')]
422 |
423 | def side_effect(*args, **kwargs):
424 | return results.pop()
425 | mock_redis.pipeline.return_value = mock_pipeline
426 | mock_pipeline.execute.side_effect = side_effect
427 | mock_redis.hgetall.return_value = {}
428 |
429 | def dummy_func():
430 | return "This is a value: %s" % time.time()
431 |
432 | with patch('retools.global_connection._redis', mock_redis):
433 | CR = self._makeOne()
434 | CR.add_region('short_term', 60)
435 | decorated = self._decorateFunc(dummy_func, 'short_term')
436 | value = decorated()
437 | assert 'This is a value' in value
438 | exec_calls = [x for x in mock_pipeline.method_calls \
439 | if x[0] == 'execute']
440 | eq_(len(exec_calls), 2)
441 |
442 | def test_cache_disabled(self):
443 | mock_redis = Mock(spec=redis.client.Redis)
444 | mock_pipeline = Mock(spec=redis.client.Pipeline)
445 | mock_redis.pipeline.return_value = mock_pipeline
446 |
447 | def dummy_func():
448 | return "This is a value: %s" % time.time()
449 |
450 | with patch('retools.global_connection._redis', mock_redis):
451 | CR = self._makeOne()
452 | CR.add_region('short_term', 60)
453 | CR.enabled = False
454 | decorated = self._decorateFunc(dummy_func, 'short_term')
455 | value = decorated()
456 | assert 'This is a value' in value
457 | exec_calls = [x for x in mock_pipeline.method_calls \
458 | if x[0] == 'execute']
459 | eq_(len(exec_calls), 0)
460 |
461 | def test_unicode_keys(self):
462 | keys = [
463 | # arabic (egyptian)
464 | u"\u0644\u064a\u0647\u0645\u0627\u0628\u062a\u0643\u0644\u0645" \
465 | u"\u0648\u0634\u0639\u0631\u0628\u064a\u061f",
466 | # Chinese (simplified)
467 | u"\u4ed6\u4eec\u4e3a\u4ec0\u4e48\u4e0d\u8bf4\u4e2d\u6587",
468 | # Chinese (traditional)
469 | u"\u4ed6\u5011\u7232\u4ec0\u9ebd\u4e0d\u8aaa\u4e2d\u6587",
470 | # czech
471 | u"\u0050\u0072\u006f\u010d\u0070\u0072\u006f\u0073\u0074\u011b" \
472 | u"\u006e\u0065\u006d\u006c\u0075\u0076\u00ed\u010d\u0065\u0073" \
473 | u"\u006b\u0079",
474 | # hebrew
475 | u"\u05dc\u05de\u05d4\u05d4\u05dd\u05e4\u05e9\u05d5\u05d8\u05dc" \
476 | u"\u05d0\u05de\u05d3\u05d1\u05e8\u05d9\u05dd\u05e2\u05d1\u05e8" \
477 | u"\u05d9\u05ea",
478 | # Hindi (Devanagari)
479 | u"\u092f\u0939\u0932\u094b\u0917\u0939\u093f\u0928\u094d\u0926" \
480 | u"\u0940\u0915\u094d\u092f\u094b\u0902\u0928\u0939\u0940\u0902" \
481 | u"\u092c\u094b\u0932\u0938\u0915\u0924\u0947\u0939\u0948\u0902",
482 | # Japanese (kanji and hiragana)
483 | u"\u306a\u305c\u307f\u3093\u306a\u65e5\u672c\u8a9e\u3092\u8a71" \
484 | u"\u3057\u3066\u304f\u308c\u306a\u3044\u306e\u304b",
485 | # Russian (Cyrillic)
486 | u"\u043f\u043e\u0447\u0435\u043c\u0443\u0436\u0435\u043e\u043d" \
487 | u"\u0438\u043d\u0435\u0433\u043e\u0432\u043e\u0440\u044f\u0442" \
488 | u"\u043f\u043e\u0440\u0443\u0441\u0441\u043a\u0438",
489 | # Spanish
490 | u"\u0050\u006f\u0072\u0071\u0075\u00e9\u006e\u006f\u0070\u0075" \
491 | u"\u0065\u0064\u0065\u006e\u0073\u0069\u006d\u0070\u006c\u0065" \
492 | u"\u006d\u0065\u006e\u0074\u0065\u0068\u0061\u0062\u006c\u0061" \
493 | u"\u0072\u0065\u006e\u0045\u0073\u0070\u0061\u00f1\u006f\u006c",
494 | # Vietnamese
495 | u"\u0054\u1ea1\u0069\u0073\u0061\u006f\u0068\u1ecd\u006b\u0068" \
496 | u"\u00f4\u006e\u0067\u0074\u0068\u1ec3\u0063\u0068\u1ec9\u006e" \
497 | u"\u00f3\u0069\u0074\u0069\u1ebf\u006e\u0067\u0056\u0069\u1ec7" \
498 | u"\u0074",
499 | # Japanese
500 | u"\u0033\u5e74\u0042\u7d44\u91d1\u516b\u5148\u751f",
501 | # Japanese
502 | u"\u5b89\u5ba4\u5948\u7f8e\u6075\u002d\u0077\u0069\u0074\u0068" \
503 | u"\u002d\u0053\u0055\u0050\u0045\u0052\u002d\u004d\u004f\u004e" \
504 | u"\u004b\u0045\u0059\u0053",
505 | # Japanese
506 | u"\u0048\u0065\u006c\u006c\u006f\u002d\u0041\u006e\u006f\u0074" \
507 | u"\u0068\u0065\u0072\u002d\u0057\u0061\u0079\u002d\u305d\u308c" \
508 | u"\u305e\u308c\u306e\u5834\u6240",
509 | # Japanese
510 | u"\u3072\u3068\u3064\u5c4b\u6839\u306e\u4e0b\u0032",
511 | # Japanese
512 | u"\u004d\u0061\u006a\u0069\u3067\u004b\u006f\u0069\u3059\u308b" \
513 | u"\u0035\u79d2\u524d",
514 | # Japanese
515 | u"\u30d1\u30d5\u30a3\u30fc\u0064\u0065\u30eb\u30f3\u30d0",
516 | # Japanese
517 | u"\u305d\u306e\u30b9\u30d4\u30fc\u30c9\u3067",
518 | # greek
519 | u"\u03b5\u03bb\u03bb\u03b7\u03bd\u03b9\u03ba\u03ac",
520 | # Maltese (Malti)
521 | u"\u0062\u006f\u006e\u0121\u0075\u0073\u0061\u0127\u0127\u0061",
522 | # Russian (Cyrillic)
523 | u"\u043f\u043e\u0447\u0435\u043c\u0443\u0436\u0435\u043e\u043d" \
524 | u"\u0438\u043d\u0435\u0433\u043e\u0432\u043e\u0440\u044f\u0442" \
525 | u"\u043f\u043e\u0440\u0443\u0441\u0441\u043a\u0438"
526 | ]
527 | mock_redis = Mock(spec=redis.client.Redis)
528 | mock_pipeline = Mock(spec=redis.client.Pipeline)
529 | results = ['0', (None, '0')]
530 |
531 | def side_effect(*args, **kwargs):
532 | return results.pop()
533 | mock_redis.pipeline.return_value = mock_pipeline
534 | mock_pipeline.execute.side_effect = side_effect
535 | mock_redis.hgetall.return_value = {}
536 |
537 | def dummy_func(arg):
538 | return "This is a value: %s" % time.time()
539 |
540 | for key in keys:
541 | with patch('retools.global_connection._redis', mock_redis):
542 | CR = self._makeOne()
543 | CR.add_region('short_term', 60)
544 | decorated = self._decorateFunc(dummy_func, 'short_term')
545 | value = decorated(key)
546 | assert 'This is a value' in value
547 | exec_calls = [x for x in mock_pipeline.method_calls \
548 | if x[0] == 'execute']
549 | eq_(len(exec_calls), 2)
550 | mock_pipeline.reset_mock()
551 | results.extend(['0', (None, '0')])
552 |
553 | for key in keys:
554 | with patch('retools.global_connection._redis', mock_redis):
555 | CR = self._makeOne()
556 | CR.add_region('short_term', 60)
557 |
558 | class DummyClass(object):
559 | def dummy_func(self, arg):
560 | return "This is a value: %s" % time.time()
561 | dummy_func = self._decorateFunc(dummy_func, 'short_term')
562 | cl_inst = DummyClass()
563 | value = cl_inst.dummy_func(key)
564 | assert 'This is a value' in value
565 | exec_calls = [x for x in mock_pipeline.method_calls \
566 | if x[0] == 'execute']
567 | eq_(len(exec_calls), 2)
568 | mock_pipeline.reset_mock()
569 | results.extend(['0', (None, '0')])
570 |
--------------------------------------------------------------------------------
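The decorator API exercised by these tests reduces to a pattern like the
following sketch, assuming a reachable Redis server behind the global
connection:

.. code-block:: python

    import time

    from retools.cache import CacheRegion, cache_region, invalidate_function

    CacheRegion.add_region('short_term', expires=60)

    @cache_region('short_term')
    def lookup(name):
        # an expensive computation we only want to run once per region expiry
        return 'value for %s at %s' % (name, time.time())

    first = lookup('fred')
    assert lookup('fred') == first  # second call is served from Redis

    invalidate_function(lookup, 'fred')  # force regeneration for this argument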
/retools/tests/test_limiter.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 |
3 | import unittest
4 | import time
5 |
6 | import redis
7 | from nose.tools import eq_
8 | from mock import Mock
9 | from mock import patch
10 |
11 | from retools.limiter import Limiter
12 | from retools import global_connection
13 |
14 |
15 | class TestLimiterWithMockRedis(unittest.TestCase):
16 | def test_can_create_limiter_without_prefix_and_without_connection(self):
17 | limiter = Limiter(limit=10)
18 |
19 | eq_(limiter.redis, global_connection.redis)
20 | eq_(limiter.limit, 10)
21 | eq_(limiter.prefix, 'retools_limiter')
22 |
23 | def test_can_create_limiter_without_prefix(self):
24 | mock_redis = Mock(spec=redis.Redis)
25 |
26 | limiter = Limiter(limit=10, redis=mock_redis)
27 |
28 | eq_(limiter.redis, mock_redis)
29 | eq_(limiter.prefix, 'retools_limiter')
30 |
31 | def test_can_create_limiter_with_prefix(self):
32 | mock_redis = Mock(spec=redis.Redis)
33 |
34 | limiter = Limiter(limit=10, redis=mock_redis, prefix='something')
35 |
36 | eq_(limiter.redis, mock_redis)
37 | eq_(limiter.prefix, 'something')
38 |
39 | def test_can_create_limiter_with_expiration(self):
40 | mock_redis = Mock(spec=redis.Redis)
41 |
42 | limiter = Limiter(limit=10, redis=mock_redis, expiration_in_seconds=20)
43 |
44 | eq_(limiter.expiration_in_seconds, 20)
45 |
46 | def test_has_limit(self):
47 | mock_time = Mock()
48 | mock_time.return_value = 40.5
49 |
50 | mock_redis = Mock(spec=redis.Redis)
51 | mock_redis.zcard.return_value = 0
52 |
53 | limiter = Limiter(limit=10, redis=mock_redis, expiration_in_seconds=20)
54 |
55 | with patch('time.time', mock_time):
56 | has_limit = limiter.acquire_limit(key='test1')
57 |
58 | eq_(has_limit, True)
59 |
60 | mock_redis.zadd.assert_called_once_with('retools_limiter', 'test1', 60.5)
61 |
62 | def test_acquire_limit_after_removing_items(self):
63 | mock_time = Mock()
64 | mock_time.return_value = 40.5
65 |
66 | mock_redis = Mock(spec=redis.Redis)
67 | mock_redis.zcard.side_effect = [10, 8]
68 |
69 | limiter = Limiter(limit=10, redis=mock_redis, expiration_in_seconds=20)
70 |
71 | with patch('time.time', mock_time):
72 | has_limit = limiter.acquire_limit(key='test1')
73 |
74 | eq_(has_limit, True)
75 |
76 | mock_redis.zadd.assert_called_once_with('retools_limiter', 'test1', 60.5)
77 | mock_redis.zremrangebyscore.assert_called_once_with('retools_limiter', '-inf', 40.5)
78 |
79 | def test_acquire_limit_fails_even_after_removing_items(self):
80 | mock_time = Mock()
81 | mock_time.return_value = 40.5
82 |
83 | mock_redis = Mock(spec=redis.Redis)
84 | mock_redis.zcard.side_effect = [10, 10]
85 |
86 | limiter = Limiter(limit=10, redis=mock_redis, expiration_in_seconds=20)
87 |
88 | with patch('time.time', mock_time):
89 | has_limit = limiter.acquire_limit(key='test1')
90 |
91 | eq_(has_limit, False)
92 |
93 | eq_(mock_redis.zadd.called, False)
94 | mock_redis.zremrangebyscore.assert_called_once_with('retools_limiter', '-inf', 40.5)
95 |
96 | def test_release_limit(self):
97 | mock_redis = Mock(spec=redis.Redis)
98 |
99 | limiter = Limiter(limit=10, redis=mock_redis, expiration_in_seconds=20)
100 |
101 | limiter.release_limit(key='test1')
102 |
103 | mock_redis.zrem.assert_called_once_with('retools_limiter', 'test1')
104 |
105 |
106 | class TestLimiterWithActualRedis(unittest.TestCase):
107 | def test_has_limit(self):
108 | limiter = Limiter(prefix='test-%.6f' % time.time(), limit=2, expiration_in_seconds=400)
109 |
110 | has_limit = limiter.acquire_limit(key='test1')
111 | eq_(has_limit, True)
112 |
113 | has_limit = limiter.acquire_limit(key='test2')
114 | eq_(has_limit, True)
115 |
116 | has_limit = limiter.acquire_limit(key='test3')
117 | eq_(has_limit, False)
118 |
119 | def test_has_limit_after_removing_items(self):
120 | limiter = Limiter(prefix='test-%.6f' % time.time(), limit=2, expiration_in_seconds=400)
121 |
122 | has_limit = limiter.acquire_limit(key='test1')
123 | eq_(has_limit, True)
124 |
125 | has_limit = limiter.acquire_limit(key='test2', expiration_in_seconds=-1)
126 | eq_(has_limit, True)
127 |
128 | has_limit = limiter.acquire_limit(key='test3')
129 | eq_(has_limit, True)
130 |
131 | def test_has_limit_after_releasing_items(self):
132 | limiter = Limiter(prefix='test-%.6f' % time.time(), limit=2, expiration_in_seconds=400)
133 |
134 | has_limit = limiter.acquire_limit(key='test1')
135 | eq_(has_limit, True)
136 |
137 | has_limit = limiter.acquire_limit(key='test2')
138 | eq_(has_limit, True)
139 |
140 | limiter.release_limit(key='test2')
141 |
142 | has_limit = limiter.acquire_limit(key='test3')
143 | eq_(has_limit, True)
144 |
145 | class TestLimiterWithStrictRedis(unittest.TestCase):
146 | def setUp(self):
147 | self.redis = redis.StrictRedis()
148 |
149 | def test_has_limit(self):
150 | limiter = Limiter(prefix='test-%.6f' % time.time(), limit=2, expiration_in_seconds=400, redis=self.redis)
151 |
152 | has_limit = limiter.acquire_limit(key='test1')
153 | eq_(has_limit, True)
154 |
155 | has_limit = limiter.acquire_limit(key='test2')
156 | eq_(has_limit, True)
157 |
158 | has_limit = limiter.acquire_limit(key='test3')
159 | eq_(has_limit, False)
160 |
161 | def test_has_limit_after_removing_items(self):
162 | limiter = Limiter(prefix='test-%.6f' % time.time(), limit=2, expiration_in_seconds=400, redis=self.redis)
163 |
164 | has_limit = limiter.acquire_limit(key='test1')
165 | eq_(has_limit, True)
166 |
167 | has_limit = limiter.acquire_limit(key='test2', expiration_in_seconds=-1)
168 | eq_(has_limit, True)
169 |
170 | has_limit = limiter.acquire_limit(key='test3')
171 | eq_(has_limit, True)
172 |
173 | def test_has_limit_after_releasing_items(self):
174 | limiter = Limiter(prefix='test-%.6f' % time.time(), limit=2, expiration_in_seconds=400, redis=self.redis)
175 |
176 | has_limit = limiter.acquire_limit(key='test1')
177 | eq_(has_limit, True)
178 |
179 | has_limit = limiter.acquire_limit(key='test2')
180 | eq_(has_limit, True)
181 |
182 | limiter.release_limit(key='test2')
183 |
184 | has_limit = limiter.acquire_limit(key='test3')
185 | eq_(has_limit, True)
186 |
--------------------------------------------------------------------------------
/retools/tests/test_lock.py:
--------------------------------------------------------------------------------
1 | import unittest
2 | import time
3 | import threading
4 | import uuid
5 |
6 | import redis
7 | from nose.tools import raises
8 | from nose.tools import eq_
9 |
10 | from retools import global_connection
11 |
12 |
13 | class TestLock(unittest.TestCase):
14 | def _makeOne(self):
15 | from retools.lock import Lock
16 | return Lock
17 |
18 | def _lockException(self):
19 | from retools.lock import LockTimeout
20 | return LockTimeout
21 |
22 | def setUp(self):
23 | self.key = uuid.uuid4()
24 |
25 | def tearDown(self):
26 | global_connection.redis.delete(self.key)
27 |
28 | def test_lock_runs(self):
29 | Lock = self._makeOne()
30 | x = 0
31 | with Lock(self.key):
32 | x += 1
33 |
34 | def test_lock_fail(self):
35 | Lock = self._makeOne()
36 |
37 | bv = threading.Event()
38 | ev = threading.Event()
39 |
40 | def get_lock():
41 | with Lock(self.key):
42 | bv.set()
43 | ev.wait()
44 | t = threading.Thread(target=get_lock)
45 | t.start()
46 | ac = []
47 |
48 | @raises(self._lockException())
49 | def test_it():
50 | with Lock(self.key, timeout=0):
51 | ac.append(10) # pragma: nocover
52 | bv.wait()
53 | test_it()
54 | eq_(ac, [])
55 | ev.set()
56 | t.join()
57 | with Lock(self.key, timeout=0):
58 | ac.append(10)
59 | eq_(ac, [10])
60 |
61 | def test_lock_retry(self):
62 | Lock = self._makeOne()
63 | bv = threading.Event()
64 | ev = threading.Event()
65 |
66 | def get_lock():
67 | with Lock(self.key):
68 | bv.set()
69 | ev.wait()
70 | t = threading.Thread(target=get_lock)
71 | t.start()
72 | ac = []
73 |
74 | bv.wait()
75 |
76 | @raises(self._lockException())
77 | def test_it():
78 | with Lock(self.key, timeout=1):
79 | ac.append(10) # pragma: nocover
80 | test_it()
81 | ev.set()
82 | t.join()
83 |
--------------------------------------------------------------------------------
/retools/tests/test_queue.py:
--------------------------------------------------------------------------------
1 | # coding: utf-8
2 | import unittest
3 | import time
4 |
5 | import redis
6 | import redis.client
7 | import json
8 | from decimal import Decimal
9 |
10 | from nose.tools import raises
11 | from nose.tools import eq_
12 | from mock import Mock
13 | from mock import patch
14 |
15 |
16 | class TestQueue(unittest.TestCase):
17 | def _makeQM(self, **kwargs):
18 | from retools.queue import QueueManager
19 | return QueueManager(**kwargs)
20 |
21 |
22 | class TestJob(TestQueue):
23 | def test_enqueue_job(self):
24 | mock_redis = Mock(spec=redis.Redis)
25 | mock_pipeline = Mock(spec=redis.client.Pipeline)
26 | mock_redis.pipeline.return_value = mock_pipeline
27 | qm = self._makeQM(redis=mock_redis)
28 | job_id = qm.enqueue('retools.tests.jobs:echo_default',
29 | default='hi there')
30 | meth, args, kw = mock_pipeline.method_calls[0]
31 | eq_('rpush', meth)
32 | eq_(kw, {})
33 | queue_name, job_body = args
34 | job_data = json.loads(job_body)
35 | eq_(job_data['job_id'], job_id)
36 | eq_(job_data['kwargs'], {"default": "hi there"})
37 |
38 | def test_enqueue_job_by_name(self):
39 | mock_redis = Mock(spec=redis.Redis)
40 | mock_pipeline = Mock(spec=redis.client.Pipeline)
41 | mock_redis.pipeline.return_value = mock_pipeline
42 | qm = self._makeQM(redis=mock_redis)
43 |
44 | job_id = qm.enqueue('retools.tests.jobs:echo_default',
45 | default='hi there')
46 | meth, args, kw = mock_pipeline.method_calls[0]
47 | eq_('rpush', meth)
48 | eq_(kw, {})
49 | queue_name, job_body = args
50 | job_data = json.loads(job_body)
51 | eq_(job_data['job_id'], job_id)
52 | eq_(job_data['kwargs'], {"default": "hi there"})
53 | mock_redis.llen = Mock(return_value=1)
54 |
55 | created = time.time()
56 |
57 | # trying get_jobs/get_job
58 | job = json.dumps({'job_id': job_id,
59 | 'job': 'retools.tests.jobs:echo_default',
60 | 'kwargs': {},
61 | 'state': '',
62 | 'events': {},
63 | 'metadata': {'created': created}
64 | })
65 |
66 | mock_redis.lindex = Mock(return_value=job)
67 |
68 | jobs = list(qm.get_jobs())
69 | self.assertEqual(len(jobs), 1)
70 | my_job = qm.get_job(job_id)
71 | self.assertEqual(my_job.job_name, 'retools.tests.jobs:echo_default')
72 | self.assertEqual(my_job.metadata['created'], created)
73 |
74 | # testing the Worker class methods
75 | from retools.queue import Worker
76 | mock_redis = Mock(spec=redis.Redis)
77 | mock_pipeline = Mock(spec=redis.client.Pipeline)
78 | mock_redis.pipeline.return_value = mock_pipeline
79 | mock_redis.smembers = Mock(return_value=[])
80 |
81 | workers = list(Worker.get_workers(redis=mock_redis))
82 | self.assertEqual(len(workers), 0)
83 |
84 | worker = Worker(queues=['main'])
85 | mock_redis.smembers = Mock(return_value=[worker.worker_id])
86 | worker.register_worker()
87 | try:
88 | workers = list(Worker.get_workers(redis=mock_redis))
89 | self.assertEqual(len(workers), 1, workers)
90 | ids = Worker.get_worker_ids(redis=mock_redis)
91 | self.assertEqual(ids, [worker.worker_id])
92 | finally:
93 | worker.unregister_worker()
94 |
95 | def test_custom_serializer(self):
96 | mock_redis = Mock(spec=redis.Redis)
97 | mock_pipeline = Mock(spec=redis.client.Pipeline)
98 | mock_redis.pipeline.return_value = mock_pipeline
99 |
100 | def serialize(data):
101 | import simplejson
102 | return simplejson.dumps(data, use_decimal=True)
103 |
104 |
105 | def deserialize(data):
106 | import simplejson
107 | return simplejson.loads(data, use_decimal=True)
108 |
109 | qm = self._makeQM(redis=mock_redis, serializer=serialize,
110 | deserializer=deserialize)
111 |
112 | job_id = qm.enqueue('retools.tests.jobs:echo_default',
113 | decimal_value=Decimal('1.2'))
114 | meth, args, kw = mock_pipeline.method_calls[0]
115 | eq_('rpush', meth)
116 | eq_(kw, {})
117 | queue_name, job_body = args
118 | job_data = deserialize(job_body)
119 | eq_(job_data['job_id'], job_id)
120 | eq_(job_data['kwargs'], {'decimal_value': Decimal('1.2')})
121 |
122 |
123 |
--------------------------------------------------------------------------------
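Outside of the mocks used above, wiring the same custom serializer into a real
QueueManager is one callable per direction; a sketch assuming simplejson is
installed:

.. code-block:: python

    from decimal import Decimal

    import simplejson

    from retools.queue import QueueManager

    def serialize(data):
        return simplejson.dumps(data, use_decimal=True)

    def deserialize(data):
        return simplejson.loads(data, use_decimal=True)

    qm = QueueManager(serializer=serialize, deserializer=deserialize)
    job_id = qm.enqueue('retools.tests.jobs:echo_default',
                        decimal_value=Decimal('1.2'))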
/retools/tests/test_util.py:
--------------------------------------------------------------------------------
1 | import unittest
2 |
3 | from contextlib import contextmanager
4 |
5 | from nose.tools import eq_
6 |
7 |
8 | class TestNamespaceFunc(unittest.TestCase):
9 | def _makeKey(self, func, deco_args):
10 | from retools.util import func_namespace
11 | return func_namespace(func, deco_args)
12 |
13 | def test_func_name(self):
14 | def a_func(): pass
15 | eq_('retools.tests.test_util.a_func.', self._makeKey(a_func, []))
16 |
17 | def test_class_method_name(self):
18 |         # unbound methods are namespaced by their class (via im_class)
19 | eq_('retools.tests.test_util.DummyClass.',
20 | self._makeKey(DummyClass.class_method, []))
21 |
22 |
23 | class TestContextManager(unittest.TestCase):
24 | def _call_with_contexts(self, ctx_managers, func, kwargs):
25 | from retools.util import with_nested_contexts
26 | return with_nested_contexts(ctx_managers, func, [], kwargs)
27 |
28 | def test_nest_call(self):
29 | def a_func(**kwargs):
30 | kwargs['list'].append('here')
31 | return kwargs['list']
32 |
33 | @contextmanager
34 | def ctx_a(func, *args, **kwargs):
35 | eq_(func, a_func)
36 | kwargs['list'].append(0)
37 | yield
38 | kwargs['list'].append(1)
39 |
40 | @contextmanager
41 | def ctx_b(func, *args, **kwargs):
42 | eq_(func, a_func)
43 | kwargs['list'].append(2)
44 | yield
45 | kwargs['list'].append(3)
46 |
47 | lst = []
48 | kwargs = dict(list=lst)
49 | result = self._call_with_contexts([ctx_a, ctx_b], a_func, kwargs)
50 |         eq_([0, 2, 'here', 3, 1], result)  # ctx_a enters first, exits last
51 |
52 |
53 | class DummyClass(object):  # pragma: nocover
54 |     def class_method(cls):  # unbound stand-in for a classmethod; never called
55 |         pass
56 |
57 |
58 | class TestChunks(unittest.TestCase):
59 | def test_can_get_chunks(self):
60 | from retools.util import chunks
61 | items = [1, 2, 3, 4]
62 |
63 | eq_(list(chunks(items, 2)), [(1, 2), (3, 4)])
64 |
65 |
66 | class TestFlipPairs(unittest.TestCase):
67 | def test_can_flip_pairs(self):
68 | from retools.util import flip_pairs
69 | items = [1, 2, 3, 4]
70 |
71 | eq_(list(flip_pairs(items)), [2, 1, 4, 3])
72 |
73 |
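74 | # Note: chunks() stops at the shortest iterator (izip), so a trailing
75 | # unpaired element is dropped: list(chunks([1, 2, 3], 2)) == [(1, 2)] and
76 | # flip_pairs([1, 2, 3]) yields only [2, 1].
77 |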
--------------------------------------------------------------------------------
/retools/util.py:
--------------------------------------------------------------------------------
1 | """Utility functions"""
2 | import inspect
3 | from itertools import izip
4 |
5 |
6 | def func_namespace(func, deco_args):
7 | """Generates a unique namespace for a function"""
8 | kls = None
9 | if hasattr(func, 'im_func'):
10 | kls = func.im_class
11 | func = func.im_func
12 |
13 | deco_key = " ".join(map(str, deco_args))
14 | if kls:
15 | return '%s.%s.%s' % (kls.__module__, kls.__name__, deco_key)
16 | else:
17 | return '%s.%s.%s' % (func.__module__, func.__name__, deco_key)
18 |
19 |
20 | def has_self_arg(func):
21 | """Return True if the given function has a 'self' argument."""
22 |     args = inspect.getargspec(func)[0]
23 |     return args and args[0] in ('self', 'cls')
24 |
25 |
26 | def with_nested_contexts(context_managers, func, args, kwargs):
27 | """Nested context manager calling
28 |
29 |     Given a function and the arguments to call it with, the call is
30 |     wrapped in a ``with`` statement for every context manager in the
31 |     context_managers list, nested in order.
32 |
33 |     Every context manager receives the function reference plus the
34 |     positional and keyword arguments.
35 |
36 | Example::
37 |
38 | with ContextA(func, *args, **kwargs):
39 | with ContextB(func, *args, **kwargs):
40 | return func(**kwargs)
41 |
42 |         # is equivalent to
43 |         ctx_managers = [ContextA, ContextB]
44 |         return with_nested_contexts(ctx_managers, func, args, kwargs)
45 |
46 | """
47 | if not context_managers:
48 | return func(**kwargs)
49 | else:
50 | ctx_manager = context_managers[0]
51 | with ctx_manager(func, *args, **kwargs):
52 | return with_nested_contexts(context_managers[1:],
53 | func, args, kwargs)
54 |
55 |
56 | def chunks(iterable, n):
57 |     """Yield successive n-tuples from iterable, dropping any remainder"""
58 |     args = [iter(iterable)] * n
59 |     return izip(*args)
60 |
61 |
62 | def flip_pairs(l):
63 |     """Yield the elements of l with each consecutive pair swapped"""
64 |     for x, y in chunks(l, 2):
65 |         yield y
66 |         yield x
67 |
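68 | # Illustrative sketch (hypothetical names, not part of the library): a caller
69 | # could wrap a job callable in a timing context via with_nested_contexts:
70 | #
71 | #     import time
72 | #     from contextlib import contextmanager
73 | #
74 | #     @contextmanager
75 | #     def log_timing(func, *args, **kwargs):
76 | #         start = time.time()
77 | #         yield
78 | #         print '%s ran in %.3fs' % (func.__name__, time.time() - start)
79 | #
80 | #     with_nested_contexts([log_timing], run_job, [], {'payload': payload})
81 |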
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | [wheel]
2 | universal = 1
3 |
4 | [egg_info]
5 | tag_build = dev
6 |
7 | [nosetests]
8 | where=retools
9 | nocapture=1
10 | match=^test
11 | cover-package=retools
12 | cover-erase=1
13 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | __version__ = '0.4.1'
2 |
3 | import os
4 |
5 | from setuptools import setup, find_packages
6 |
7 | here = os.path.abspath(os.path.dirname(__file__))
8 | README = open(os.path.join(here, 'README.rst')).read()
9 | CHANGES = open(os.path.join(here, 'CHANGES.rst')).read()
10 |
11 | setup(name='retools',
12 | version=__version__,
13 | description='Redis Tools',
14 | long_description=README + '\n\n' + CHANGES,
15 | classifiers=[
16 | "Intended Audience :: Developers",
17 | "Programming Language :: Python",
18 | ],
19 | keywords='cache redis queue lock',
20 | author="Ben Bangert",
21 | author_email="ben@groovie.org",
22 | url="http://readthedocs.org/docs/retools/",
23 | license="MIT",
24 | packages=find_packages(),
25 | test_suite="retools.tests",
26 | include_package_data=True,
27 | zip_safe=False,
28 |       tests_require=['pkginfo', 'Mock>=0.8rc2', 'nose',
29 |                      'simplejson'],
30 | install_requires=[
31 | "setproctitle>=1.1.2",
32 | "redis>=2.7.3",
33 | ],
34 | entry_points="""
35 | [console_scripts]
36 | retools-worker = retools.queue:run_worker
37 |
38 | """
39 | )
40 |
--------------------------------------------------------------------------------