├── .gitignore
├── LICENSE
├── README.rst
├── debian
│   ├── changelog
│   ├── compat
│   ├── control
│   ├── copyright
│   └── rules
├── pqueue
│   ├── __init__.py
│   ├── pqueue.py
│   └── tests
│       ├── __init__.py
│       └── test_queue.py
├── runtests.py
└── setup.py
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
.coverage
*.pyc
*.swp
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright (c) G. B. Versiani.
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice,
   this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

3. Neither the name of python-pqueue nor the names of its contributors may be
   used to endorse or promote products derived from this software without
   specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
/README.rst:
--------------------------------------------------------------------------------
======
pqueue
======

**pqueue** is a simple persistent (disk-based) FIFO queue for Python.

**pqueue**'s goals are speed and simplicity. The development was initially
based on the `Queuelib`_ code. Entries are saved on disk using ``pickle``.

Requirements
============

* Python 2.7 or Python 3.x
* no external library requirements

Installation
============

You can install **pqueue** either via the Python Package Index (PyPI) or from
source.

To install using pip::

    $ pip install pqueue

To install using easy_install::

    $ easy_install pqueue

If you have downloaded a source tarball you can install it by running the
following (as root)::

    # python setup.py install

How to use
==========

**pqueue** provides a single FIFO queue implementation.

Here is an example usage of the FIFO queue::

    >>> from pqueue import Queue
    >>> q = Queue("tmpqueue")
    >>> q.put(b'a')
    >>> q.put(b'b')
    >>> q.put(b'c')
    >>> q.get()
    b'a'
    >>> del q
    >>> q = Queue("tmpqueue")
    >>> q.get()
    b'b'
    >>> q.get()
    b'c'
    >>> q.get_nowait()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/usr/lib/python2.7/Queue.py", line 190, in get_nowait
        return self.get(False)
      File "/usr/lib/python2.7/Queue.py", line 165, in get
        raise Empty
    Queue.Empty

The ``Queue`` object is identical to Python's ``Queue`` module (or ``queue`` in
Python 3.x), with the difference that it requires a ``path`` parameter
indicating where to persist the queue data and a ``chunksize`` parameter
indicating how many enqueued items should be stored per file. The same
``maxsize`` parameter available on the standard ``Queue`` has been maintained.

In other words, it works exactly like Python's ``Queue``, with the difference
that any abrupt interruption is `ACID-guaranteed`_::

    q = Queue()

    def worker():
        while True:
            item = q.get()
            do_work(item)
            q.task_done()

    for i in range(num_worker_threads):
        t = Thread(target=worker)
        t.daemon = True
        t.start()

    for item in source():
        q.put(item)

    q.join()  # block until all tasks are done

Note that pqueue *is not intended to be used by multiple processes*.

How it works
============

Pushed data is serialized with pickle, in sequence, into chunk files named
``qNNNNN``, each holding at most ``chunksize`` elements, all stored in the
given ``path``.

The queue is formed by a ``head`` and a ``tail``. Pushed data goes to the
``head``, pulled data comes from the ``tail``.
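The on-disk mechanics described here can be sketched with the standard library
alone. This is an illustrative sketch, not pqueue's actual code: the
``save_info`` helper and the literal offsets are assumptions that mirror the
description (sequential pickles in a ``qNNNNN`` chunk, an ``info`` file
replaced via an atomic rename, and truncation of a partial head write on
reopen)::

```python
import os
import pickle
import tempfile

def save_info(path, info):
    # Write the new info to a temporary file on the same filesystem, then
    # rename it over the old one: POSIX rename() is atomic, so a crash
    # leaves either the old or the new info file, never a mix of both.
    fd, tmp = tempfile.mkstemp(dir=path)
    os.write(fd, pickle.dumps(info))
    os.close(fd)
    os.rename(tmp, os.path.join(path, "info"))

path = tempfile.mkdtemp()
chunk = os.path.join(path, "q%05d" % 0)

# Append two pickled items to the head chunk, recording the file offset
# reached after the last complete write.
with open(chunk, "ab") as f:
    pickle.dump(b"a", f)
    pickle.dump(b"b", f)
    good_offset = f.tell()
save_info(path, {"head": [0, 2, good_offset]})

# Simulate a crash in the middle of a third write: garbage bytes land on
# disk past the offset recorded in the info file.
with open(chunk, "ab") as f:
    f.write(b"\x80partial")

# On reopen, anything beyond the recorded offset is truncated away,
# leaving the chunk consistent with the info file again.
with open(os.path.join(path, "info"), "rb") as f:
    head_offset = pickle.load(f)["head"][2]
if head_offset < os.path.getsize(chunk):
    fd = os.open(chunk, os.O_RDWR)
    os.ftruncate(fd, head_offset)
    os.close(fd)

# Sequential loads now recover exactly the complete writes.
with open(chunk, "rb") as f:
    items = [pickle.load(f), pickle.load(f)]
print(items)  # [b'a', b'b']
```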

An ``info`` file is pickled in the ``path``, containing the following ``dict``:

* ``head``: a list of three integers: the index of the ``head`` file, the
  number of elements written, and the file position of the last write.
* ``tail``: a list of three integers: the index of the ``tail`` file, the
  number of elements read, and the file position of the last read.
* ``size``: number of elements in the queue.
* ``chunksize``: number of elements that should be stored in each disk queue
  file.

Both read and write operations depend on sequential transactions on disk. In
order to meet ACID requirements, these modifications are protected by the
Queue locks.

If, for any reason, the application stops working in the middle of a head
write, a second execution will remove any inconsistency by truncating the
partial head write.

On ``get``, the ``info`` file is not updated; it is only updated when you
first call ``task_done``, and only once in case you call it several times in
a row.

The ``info`` file is updated in the following way: a temporary file (created
with ``mkstemp``) is written with the new data and then moved over the
previous ``info`` file. It was designed this way because POSIX ``rename`` is
guaranteed to be atomic.

In case of abrupt interruptions, one of the following conditions may happen:

* A partial write of the last pushed element may occur; in this case, only
  this last pushed element will be discarded.
* An element pulled from the queue may still be being processed; in this
  case, a second run will consume the same element again.

Tests
=====

Tests are located in the **pqueue/tests** directory. They can be run using
Python's default **unittest** module with the following command::

    ./runtests.py

The output should be something like the following::

    ./runtests.py
    test_GarbageOnHead (pqueue.tests.test_queue.PersistenceTest)
    Adds garbage to the queue head and let the internal integrity ... ok
    test_MultiThreaded (pqueue.tests.test_queue.PersistenceTest)
    Create consumer and producer threads, check parallelism ... ok
    test_OpenCloseOneHundred (pqueue.tests.test_queue.PersistenceTest)
    Write 1000 items, close, reopen checking if all items are there ... ok
    test_OpenCloseSingle (pqueue.tests.test_queue.PersistenceTest)
    Write 1 item, close, reopen checking if same item is there ... ok
    test_PartialWrite (pqueue.tests.test_queue.PersistenceTest)
    Test recovery from previous crash w/ partial write ... ok
    test_RandomReadWrite (pqueue.tests.test_queue.PersistenceTest)
    Test random read/write ... ok

    ----------------------------------------------------------------------
    Ran 6 tests in 1.301s

    OK

License
=======

This software is licensed under the BSD License. See the LICENSE file in the
top distribution directory for the full license text.

Versioning
==========

This software follows `Semantic Versioning`_.

.. _Queuelib: http://github.com/scrapy/queuelib
.. _ACID-guaranteed: http://en.wikipedia.org/wiki/ACID
.. _Semantic Versioning: http://semver.org/
--------------------------------------------------------------------------------
/debian/changelog:
--------------------------------------------------------------------------------
python-pqueue (0.1.6) unstable; urgency=high

  * BUG Fix: function _truncate was wrong

 -- G. B. Versiani  Mon, 14 Nov 2016 09:57:43 -0200

python-pqueue (0.1.5) unstable; urgency=low

  * Cosmetic changes on the documentation

 -- G. B. Versiani  Mon, 14 Nov 2016 09:35:17 -0200

python-pqueue (0.1.4) unstable; urgency=medium

  * Added support for Python 3.4 and Python 3.5

 -- G. B. Versiani  Mon, 14 Nov 2016 09:00:35 -0200

python-pqueue (0.1.3) unstable; urgency=medium

  * Added constructor parameter to use an alternative directory instead of
    /tmp for temporary files.

 -- G. B. Versiani  Mon, 04 Jul 2016 08:39:51 -0300

python-pqueue (0.1.2) unstable; urgency=medium

  * Initial release.

 -- G. B. Versiani  Mon, 28 Dec 2015 19:57:18 -0200
--------------------------------------------------------------------------------
/debian/compat:
--------------------------------------------------------------------------------
7
--------------------------------------------------------------------------------
/debian/control:
--------------------------------------------------------------------------------
Source: python-pqueue
Section: python
Priority: optional
Maintainer: G. B. Versiani
Build-Depends: python-all (>= 2.6.6-3), debhelper (>= 7)
Standards-Version: 3.9.2
Homepage: http://github.com/balena/python-pqueue
Vcs-Git: http://github.com/balena/python-pqueue.git
X-Python-Version: >= 2.7

Package: python-pqueue
Architecture: all
Depends: ${misc:Depends}, ${python:Depends}
Description: A synchronized persistent Queue class for Python
 pqueue is a simple persistent (disk-based) FIFO queue for Python.
 .
 pqueue's goals are speed and simplicity. The development was initially based
 on the Queuelib code.
--------------------------------------------------------------------------------
/debian/copyright:
--------------------------------------------------------------------------------
Format: http://svn.debian.org/wsvn/dep/web/deps/dep5.mdwn?op=file&rev=166
Upstream-Name: python-pqueue
Upstream-Contact: G. B. Versiani
Source: http://github.com/balena/python-pqueue

Files: *
Copyright: 2015, G. B. Versiani
License: BSD-3-clause

License: BSD-3-clause
 Redistribution and use in source and binary forms, with or without
 modification, are permitted provided that the following conditions are met:
 .
 1. Redistributions of source code must retain the above copyright notice,
    this list of conditions and the following disclaimer.
 .
 2. Redistributions in binary form must reproduce the above copyright
    notice, this list of conditions and the following disclaimer in the
    documentation and/or other materials provided with the distribution.
 .
 3. Neither the name of python-pqueue nor the names of its contributors may
    be used to endorse or promote products derived from this software without
    specific prior written permission.
 .
 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
 DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
 FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
 DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
 SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
 CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
 OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
--------------------------------------------------------------------------------
/debian/rules:
--------------------------------------------------------------------------------
#!/usr/bin/make -f
# -*- makefile -*-
# Sample debian/rules that uses debhelper.
# This file was originally written by Joey Hess and Craig Small.
# As a special exception, when this file is copied by dh-make into a
# dh-make output file, you may use that output file without restriction.
# This special exception was added by Craig Small in version 0.37 of dh-make.

# Uncomment this to turn on verbose mode.
#export DH_VERBOSE=1

%:
	dh $@ --buildsystem=python_distutils --with python2
--------------------------------------------------------------------------------
/pqueue/__init__.py:
--------------------------------------------------------------------------------
__author__ = 'G. B. Versiani'
__license__ = 'BSD'
__version__ = '0.1.7'

import sys
if sys.version_info < (3, 0):
    from Queue import Empty, Full
else:
    from queue import Empty, Full

from .pqueue import Queue

__all__ = ['Queue', 'Empty', 'Full', '__author__', '__license__', '__version__']
--------------------------------------------------------------------------------
/pqueue/pqueue.py:
--------------------------------------------------------------------------------
"""A single process, persistent multi-producer, multi-consumer queue."""

import os
import pickle
import sys
import tempfile

if sys.version_info < (3, 0):
    from Queue import Queue as SyncQ
else:
    from queue import Queue as SyncQ


def _truncate(fn, length):
    fd = os.open(fn, os.O_RDWR)
    os.ftruncate(fd, length)
    os.close(fd)


class Queue(SyncQ):
    def __init__(self, path, maxsize=0, chunksize=100, tempdir=None):
        """Create a persistent queue object on a given path.

        The argument path indicates a directory where enqueued data should be
        persisted. If the directory doesn't exist, one will be created. If
        maxsize is <= 0, the queue size is infinite. The optional argument
        chunksize indicates how many entries should exist in each chunk file
        on disk.

        The tempdir parameter indicates where temporary files should be
        stored. The tempdir has to be located on the same disk as the
        enqueued data in order to obtain atomic operations.
        """

        self.path = path
        self.chunksize = chunksize
        self.tempdir = tempdir
        if self.tempdir:
            if os.stat(self.path).st_dev != os.stat(self.tempdir).st_dev:
                raise ValueError("tempdir has to be located "
                                 "on same path filesystem")

        SyncQ.__init__(self, maxsize)
        self.info = self._loadinfo()
        # truncate the head file in case it contains garbage
        hnum, hcnt, hoffset = self.info['head']
        headfn = self._qfile(hnum)
        if os.path.exists(headfn):
            if hoffset < os.path.getsize(headfn):
                _truncate(headfn, hoffset)
        # keep the head file open
        self.headf = self._openchunk(hnum, 'ab+')
        # keep the tail file open
        tnum, _, toffset = self.info['tail']
        self.tailf = self._openchunk(tnum)
        self.tailf.seek(toffset)
        # update unfinished tasks with the current number of enqueued tasks
        self.unfinished_tasks = self.info['size']
        # optimize info file updates
        self.update_info = True

    def _init(self, maxsize):
        if not os.path.exists(self.path):
            os.makedirs(self.path)

    def _qsize(self, len=len):
        return self.info['size']

    def _put(self, item):
        pickle.dump(item, self.headf)
        self.headf.flush()
        hnum, hpos, _ = self.info['head']
        hpos += 1
        if hpos == self.info['chunksize']:
            hpos = 0
            hnum += 1
            self.headf.close()
            self.headf = self._openchunk(hnum, 'ab+')
        self.info['size'] += 1
        self.info['head'] = [hnum, hpos, self.headf.tell()]
        self._saveinfo()

    def _get(self):
        tnum, tcnt, toffset = self.info['tail']
        hnum, hcnt, _ = self.info['head']
        if [tnum, tcnt] >= [hnum, hcnt]:
            return None
        data = pickle.load(self.tailf)
        toffset = self.tailf.tell()
        tcnt += 1
        if tcnt == self.info['chunksize'] and tnum <= hnum:
            tcnt = toffset = 0
            tnum += 1
            self.tailf.close()
            self.tailf = self._openchunk(tnum)
        self.info['size'] -= 1
        self.info['tail'] = [tnum, tcnt, toffset]
        self.update_info = True
        return data

    def task_done(self):
        SyncQ.task_done(self)
        if self.update_info:
            self._saveinfo()
            self.update_info = False

    def _openchunk(self, number, mode='rb'):
        return open(self._qfile(number), mode)

    def _loadinfo(self):
        infopath = self._infopath()
        if os.path.exists(infopath):
            with open(infopath, 'rb') as f:
                info = pickle.load(f)
        else:
            info = {
                'chunksize': self.chunksize,
                'size': 0,
                'tail': [0, 0, 0],
                'head': [0, 0, 0],
            }
        return info

    def _gettempfile(self):
        if self.tempdir:
            return tempfile.mkstemp(dir=self.tempdir)
        else:
            return tempfile.mkstemp()

    def _saveinfo(self):
        tmpfd, tmpfn = self._gettempfile()
        os.write(tmpfd, pickle.dumps(self.info))
        os.close(tmpfd)
        # POSIX requires that 'rename' is an atomic operation
        os.rename(tmpfn, self._infopath())
        self._clear_old_file()

    def _clear_old_file(self):
        tnum, _, _ = self.info['tail']
        while tnum >= 1:
            tnum -= 1
            path = self._qfile(tnum)
            if os.path.exists(path):
                os.remove(path)
            else:
                break

    def _qfile(self, number):
        return os.path.join(self.path, 'q%05d' % number)

    def _infopath(self):
        return os.path.join(self.path, 'info')
--------------------------------------------------------------------------------
/pqueue/tests/__init__.py:
--------------------------------------------------------------------------------
from pqueue.tests.test_queue import *
--------------------------------------------------------------------------------
/pqueue/tests/test_queue.py:
--------------------------------------------------------------------------------
# coding=utf-8

import os
import pickle
import random
import tempfile
import unittest

from threading import Thread

from pqueue import Queue, Empty


class PersistenceTest(unittest.TestCase):
    def setUp(self):
        self.path = tempfile.mkdtemp()

    def tearDown(self):
        #shutil.rmtree(cls.path, ignore_errors=True)
        pass

    def test_OpenCloseSingle(self):
        """Write 1 item, close, reopen checking if same item is there"""

        q = Queue(self.path)
        q.put('var1')
        del q
        q = Queue(self.path)
        self.assertEqual(1, q.qsize())
        self.assertEqual('var1', q.get())
        q.task_done()

    def test_OpenCloseOneHundred(self):
        """Write 1000 items, close, reopen checking if all items are there"""

        q = Queue(self.path)
        for i in range(1000):
            q.put('var%d' % i)
        del q
        q = Queue(self.path)
        self.assertEqual(1000, q.qsize())
        for i in range(1000):
            data = q.get()
            self.assertEqual('var%d' % i, data)
            q.task_done()
        with self.assertRaises(Empty):
            q.get_nowait()
        # assert adding another one still works
        q.put('foobar')
        data = q.get()

    def test_PartialWrite(self):
        """Test recovery from previous crash w/ partial write"""

        q = Queue(self.path)
        for i in range(100):
            q.put('var%d' % i)
        del q
        with open(os.path.join(self.path, 'q00000'), 'ab') as f:
            pickle.dump('文字化け', f)
        q = Queue(self.path)
        self.assertEqual(100, q.qsize())
        for i in range(100):
            self.assertEqual('var%d' % i, q.get())
            q.task_done()
        with self.assertRaises(Empty):
            q.get_nowait()

    def test_RandomReadWrite(self):
        """Test random read/write"""

        q = Queue(self.path)
        n = 0
        for i in range(1000):
            if random.random() < 0.5:
                if n > 0:
                    q.get_nowait()
                    q.task_done()
                    n -= 1
                else:
                    with self.assertRaises(Empty):
                        q.get_nowait()
            else:
                q.put('var%d' % random.getrandbits(16))
                n += 1

    def test_MultiThreaded(self):
        """Create consumer and producer threads, check parallelism"""

        q = Queue(self.path)

        def producer():
            for i in range(1000):
                q.put('var%d' % i)

        def consumer():
            for i in range(1000):
                q.get()
                q.task_done()

        c = Thread(target=consumer)
        c.start()
        p = Thread(target=producer)
        p.start()
        c.join()
        p.join()
        with self.assertRaises(Empty):
            q.get_nowait()

    def test_GarbageOnHead(self):
        """Adds garbage to the queue head and let the internal integrity
        checks fix it"""

        q = Queue(self.path)
        q.put('var1')
        del q

        with open(os.path.join(self.path, 'q00001'), 'a') as fd:
            fd.write('garbage')

        q = Queue(self.path)
        q.put('var2')

        self.assertEqual(2, q.qsize())
        self.assertEqual('var1', q.get())
        q.task_done()

    def test_ClearOldFile(self):
        """put until reaching chunksize, then get without calling task_done"""

        q = Queue(self.path, chunksize=10)
        for i in range(15):
            q.put('var1')

        for i in range(11):
            q.get()

        q = Queue(self.path, chunksize=10)
        self.assertEqual(q.qsize(), 15)

        for i in range(11):
            q.get()
            q.task_done()
        self.assertEqual(q.qsize(), 4)
--------------------------------------------------------------------------------
/runtests.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
import sys
import unittest

from pqueue import tests

def runtests(*test_args):
    suite = unittest.TestLoader().loadTestsFromModule(tests)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    if result.failures:
        sys.exit(1)
    elif result.errors:
        sys.exit(2)
    sys.exit(0)

if __name__ == '__main__':
    runtests(*sys.argv[1:])
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
#!/usr/bin/env python
# -*- coding: utf-8 -*-

from setuptools import setup, find_packages

setup(
    name='pqueue',
    version=__import__('pqueue').__version__,
    description=(
        'A single process, persistent multi-producer, multi-consumer queue.'
    ),
    long_description=open('README.rst').read(),
    author='G. B. Versiani',
    author_email='guibv@yahoo.com',
    maintainer='G. B. Versiani',
    maintainer_email='guibv@yahoo.com',
    license='BSD',
    packages=find_packages(),
    platforms=["all"],
    url='http://github.com/balena/python-pqueue',
    classifiers=[
        'Development Status :: 4 - Beta',
        'Operating System :: POSIX',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: BSD License',
        'Programming Language :: Python :: 2',
        'Programming Language :: Python :: 2.7',
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.3',
        'Programming Language :: Python :: 3.4',
        'Programming Language :: Python :: 3.5',
        'Topic :: Software Development :: Libraries'
    ],
    test_suite='runtests.runtests',
)
--------------------------------------------------------------------------------