├── .gitignore ├── .travis.yml ├── CHANGES.rst ├── LICENSE ├── MANIFEST.in ├── README.rst ├── pgqueue.py ├── setup.cfg ├── setup.py └── tests ├── __init__.py ├── test_consumer.py ├── test_producer.py ├── test_quoting.py └── test_ticker.py /.gitignore: -------------------------------------------------------------------------------- 1 | *.py[co] 2 | 3 | /*.egg 4 | /*.egg-info 5 | /build/ 6 | /dist/ 7 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | python: 3 | - 2.6 4 | - 2.7 5 | - 3.2 6 | - 3.3 7 | - 3.4 8 | - 3.5 9 | install: 10 | - pip install flake8==2.1.0 pep8==1.5.6 pyflakes==0.8.1 11 | - python setup.py install 12 | - pip list 13 | script: 14 | - python setup.py test 15 | - flake8 pgqueue.py tests setup.py 16 | sudo: false 17 | -------------------------------------------------------------------------------- /CHANGES.rst: -------------------------------------------------------------------------------- 1 | Changelog 2 | ========= 3 | 4 | 5 | 0.6 (2016-01-14) 6 | ~~~~~~~~~~~~~~~~ 7 | 8 | * Speed up ``Event.tag_retry()``. 9 | 10 | * Instances of ``Event`` are hashable. 11 | 12 | * Preserve order of events on retry. 13 | 14 | 15 | 0.5 (2015-12-16) 16 | ~~~~~~~~~~~~~~~~ 17 | 18 | * Log the reconnections. 19 | 20 | * Single function call to retry any number of events. 21 | 22 | * Lower memory usage. 23 | 24 | 25 | 0.4.1 (2015-10-17) 26 | ~~~~~~~~~~~~~~~~~~ 27 | 28 | * Fix a bug with PgQ Ticker after January 18th, 2038. 29 | Now it's safe until December 31st, 9999. 30 | 31 | 32 | 0.4 (2014-09-22) 33 | ~~~~~~~~~~~~~~~~ 34 | 35 | * Ensure ``Event.retry`` is numeric, never ``None``. 36 | 37 | * Reset the PgQ Ticker connection after a database outage. 38 | 39 | 40 | 0.3 (2014-05-25) 41 | ~~~~~~~~~~~~~~~~ 42 | 43 | * First public release 44 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright © 2014 Florent Xicluna 2 | Copyright © 2007 Marko Kreen, Skype Technologies OÜ 3 | 4 | Permission to use, copy, modify, and/or distribute this software for any 5 | purpose with or without fee is hereby granted, provided that the above 6 | copyright notice and this permission notice appear in all copies. 7 | 8 | THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 9 | WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 10 | MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 11 | ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 12 | WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 13 | ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 14 | OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
15 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include LICENSE README.rst CHANGES.rst 2 | recursive-include tests * 3 | recursive-exclude tests *.pyc 4 | recursive-exclude tests *.pyo 5 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | =================== 2 | Light PgQ Framework 3 | =================== 4 | 5 | This module provides a convenient Python API to integrate 6 | PostgreSQL PgQ features with any Python application. 7 | 8 | 9 | Presentation of PgQ 10 | ------------------- 11 | 12 | *(from SkyTools README)* 13 | 14 | PgQ is a queuing system written in PL/pgSQL, Python and C code. It is 15 | based on snapshot-based event handling ideas from Slony-I, and is 16 | written for general usage. 17 | 18 | PgQ provides an efficient, transactional queueing system with 19 | multi-node support (including work sharing and splitting, failover and 20 | switchover, for queues and for consumers). 21 | 22 | Rules: 23 | 24 | - There can be several queues in a database. 25 | - There can be several producers that can insert into any queue. 26 | - There can be several consumers on one queue. 27 | - There can be several subconsumers on a consumer. 28 | 29 | PgQ is split into 3 layers: Producers, Ticker and Consumers. 30 | 31 | **Producers** push events into a queue, and **Consumers** read them. 32 | Producers just need to call PostgreSQL stored procedures 33 | (like a trigger on a table or a PostgreSQL call from the application). 34 | Consumers are frequently written in Python, but any language able to 35 | run PostgreSQL stored procedures can be used. 36 | 37 | **Ticker** is a daemon which splits the queues into batches of events and 38 | handles the maintenance of the system. 39 | 40 | 41 | The PgQueue module 42 | ------------------ 43 | 44 | This module provides Python functions and classes to write **Producers** 45 | and **Consumers**. 46 | It also contains a Python implementation of the **Ticker** engine, which 47 | mimics the original C Ticker from SkyTools: it splits the queues into 48 | batches of events, and executes maintenance tasks. 49 | 50 | 51 | Installation 52 | ------------ 53 | 54 | Prerequisites: 55 | 56 | - Python >= 2.6 or Python 3 57 | - psycopg2 is automatically installed as a dependency 58 | - (on the server) the ``PgQ`` extension version >= 3.1 59 | 60 | On Debian / Ubuntu, add the PostgreSQL APT repository, 61 | then install the package 62 | ``postgresql-x.x-pgq3`` matching your PostgreSQL version. 63 | 64 | Finally create the extension in the database: 65 | 66 | :: 67 | 68 | CREATE EXTENSION IF NOT EXISTS pgq; 69 | 70 | You can install the ``pgqueue`` module into your environment. 71 | 72 | :: 73 | 74 | pip install --upgrade pgqueue 75 | 76 | 77 | Example usage 78 | ------------- 79 | 80 | You need to run the **Ticker** permanently. 81 | If the Ticker is off, events are still stored into the queues, 82 | but no batch is prepared for the consumers, and the event tables 83 | grow quickly.
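Events inserted while the Ticker is off are not lost, though: they are
dispatched into batches as soon as ticking resumes.

Beside the command line shown below, the Python Ticker can also be
embedded in your own process; a minimal sketch (the DSN is an example):

::

    import pgqueue

    ticker = pgqueue.Ticker('host=127.0.0.1 dbname=test_db user=jules')
    ticker.run()    # blocks: ticks, maintenance and retry processing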
84 | 85 | For the Ticker, you have the choice between ``pgqd``, the optimized 86 | multi-database ticker written in C which is part of SkyTools, and the 87 | simpler Python implementation provided with this module: 88 | 89 | :: 90 | 91 | python -m pgqueue 'host=127.0.0.1 port=5432 user=jules password=xxxx dbname=test_db' 92 | 93 | Let's create a new queue, and register a consumer: 94 | 95 | :: 96 | 97 | import psycopg2 98 | import pgqueue 99 | 100 | conn = psycopg2.connect("dbname=test user=postgres") 101 | conn.autocommit = True 102 | cursor = conn.cursor() 103 | 104 | first_q = pgqueue.Queue('first_queue') 105 | first_q.create(cursor, ticker_max_lag='4 seconds') 106 | 107 | consum_q = pgqueue.Consumer('first_queue', 'consumer_one') 108 | consum_q.register(cursor) 109 | 110 | 111 | We're ready to produce events into the queue, and consume them 112 | later in the application: 113 | 114 | :: 115 | 116 | first_q.insert_event(cursor, 'announce', 'Hello ...') 117 | first_q.insert_event(cursor, 'announce', 'Hello world!') 118 | 119 | # ... wait a little bit 120 | 121 | conn.autocommit = False 122 | for event in consum_q.next_events(cursor, commit=True): 123 | print(event) 124 | 125 | You can browse the source code for advanced usage, until we write 126 | more documentation (contributions are welcome). 127 | 128 | Also refer to the upstream SkyTools documentation for more details. 129 | 130 | 131 | Credits 132 | ------- 133 | 134 | PgQ is a PostgreSQL extension which is developed by Marko Kreen. 135 | It is part of SkyTools, a package of tools 136 | used at Skype for replication and failover. 137 | 138 | SkyTools also embeds a ``pgq`` Python framework which provides a 139 | slightly different API. 140 | 141 | 142 | Links 143 | ----- 144 | 145 | .. image:: https://travis-ci.org/florentx/pgqueue.svg?branch=master 146 | :target: https://travis-ci.org/florentx/pgqueue 147 | :alt: Build status 148 | 149 | * `Fork me on GitHub <https://github.com/florentx/pgqueue>`_ 150 | -------------------------------------------------------------------------------- /pgqueue.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | """Light PgQ framework. 3 | 4 | Stripped from Skytools 3. 5 | """ 6 | import logging 7 | import time 8 | from operator import itemgetter 9 | 10 | import psycopg2 11 | import psycopg2.extras 12 | 13 | __version__ = '0.6' 14 | __all__ = ['Event', 'Batch', 'Consumer', 'Queue', 'Ticker', 15 | 'bulk_insert_events', 'insert_event'] 16 | 17 | 18 | class _DisposableFile(dict): 19 | """Awfully limited file API for psycopg2 copy_from.""" 20 | __slots__ = () 21 | 22 | def read(self, size=-1): 23 | return self.pop('content', '') 24 | readline = read 25 | 26 | 27 | class _RetryList(dict): 28 | """Kind of ordered dict.""" 29 | __slots__ = ('_items',) 30 | 31 | def __init__(self): 32 | self._items = [] 33 | 34 | def __setitem__(self, key, value, _setitem=dict.__setitem__): 35 | if key not in self: 36 | self._items.append(key) 37 | _setitem(self, key, value) 38 | 39 | def __delitem__(self, key, _delitem=dict.__delitem__): 40 | _delitem(self, key) 41 | self._items.remove(key) 42 | 43 | def __iter__(self): 44 | for key in self._items: 45 | yield (key, self[key]) 46 | 47 | 48 | def quote_ident(s): 49 | """Quote SQL identifier.""" 50 | return ('"%s"' % s.replace('"', '""')) if s else '""' 51 | 52 | 53 | def quote_copy(s, _special='\n\\\t\r'): 54 | r"""Quote for COPY FROM command.
None is converted to \N.""" 55 | if s is None: 56 | return r'\N' 57 | s = str(s) 58 | for char in _special: 59 | if char in s: 60 | return (s.replace('\\', '\\\\') 61 | .replace('\t', '\\t') 62 | .replace('\n', '\\n') 63 | .replace('\r', '\\r')) 64 | return s 65 | 66 | 67 | def quote_dsn_param(s, _special=' \\\''): 68 | """Apply the escaping rule required by PQconnectdb.""" 69 | if not s: 70 | return "''" 71 | for char in _special: 72 | if char in s: 73 | return "'%s'" % s.replace("\\", "\\\\").replace("'", "\\'") 74 | return s 75 | 76 | 77 | def insert_event(curs, queue_name, ev_type, ev_data, 78 | extra1=None, extra2=None, extra3=None, extra4=None): 79 | curs.execute("SELECT pgq.insert_event(%s, %s, %s, %s, %s, %s, %s);", 80 | (queue_name, ev_type, ev_data, 81 | extra1, extra2, extra3, extra4)) 82 | return curs.fetchone()[0] 83 | 84 | 85 | def bulk_insert_events(curs, queue_name, rows, columns): 86 | curs.execute("SELECT pgq.current_event_table(%s);", (queue_name,)) 87 | event_tbl = curs.fetchone()[0] 88 | db_fields = ['ev_' + fld for fld in columns] 89 | content = ('\t'.join([quote_copy(v) for v in row]) for row in rows) 90 | df = _DisposableFile(content='\n'.join(content) + '\n') 91 | curs.copy_from(df, event_tbl, columns=db_fields) 92 | 93 | 94 | class Event(tuple): 95 | """Event data for consumers. 96 | 97 | Consumer is supposed to tag them after processing. 98 | They will be removed from the queue by default. 99 | """ 100 | __slots__ = () 101 | _fields = ('id', 'txid', 'time', 'type', 'data', 102 | 'extra1', 'extra2', 'extra3', 'extra4', 103 | 'retry', '_failed') 104 | 105 | def __hash__(self): 106 | return tuple.__hash__(self[:4]) 107 | 108 | # Provide the event attributes as instance properties 109 | for _n, _attr in enumerate(_fields): 110 | locals()[_attr] = property(itemgetter(_n)) 111 | del _n, _attr 112 | 113 | @property 114 | def __dict__(self): 115 | return dict(zip(self._fields, self)) 116 | 117 | @property 118 | def failed(self): 119 | """Planned for retry?""" 120 | return self in self._failed 121 | 122 | @property 123 | def retry_time(self): 124 | """Interval before this event is retried. 125 | 126 | It returns None if this event is not planned for retry. 127 | """ 128 | return self._failed.get(self) 129 | 130 | def tag_done(self): 131 | """Flag this event done (not necessary).""" 132 | if self in self._failed: 133 | del self._failed[self] 134 | 135 | def tag_retry(self, retry_time=60): 136 | """Flag this event for retry. 137 | 138 | It will be put back in queue and included in a future batch. 139 | """ 140 | self._failed[self] = retry_time 141 | 142 | def __str__(self): 143 | return ("<Event %(id)s: type=%(type)s data=%(data)r" 144 | " extra1=%(extra1)r extra4=%(extra4)r>" % self.__dict__) 145 | __repr__ = __str__ 146 | 147 | 148 | class Batch(object): 149 | """Lazy iterator over batch events. 150 | 151 | Events are loaded using the cursor.
152 | It allows: 153 | 154 | - one for-loop over events 155 | - len() after that 156 | """ 157 | 158 | _cursor_name = "batch_walker" 159 | # can retry events 160 | _retriable = True 161 | 162 | def __init__(self, curs, batch_id, queue_name, consumer_name, 163 | fetch_size=300, predicate=None): 164 | self.queue_name = queue_name 165 | self.consumer_name = consumer_name 166 | self.fetch_size = fetch_size 167 | self.batch_id = batch_id 168 | self.predicate = predicate 169 | self._curs = curs 170 | self.length = 0 171 | self.failed = _RetryList() # {event: retry_time} 172 | self.fetch_status = 0 # 0-not started, 1-in-progress, 2-done 173 | 174 | def _make_event(self, row): 175 | row['ev_retry'] = row['ev_retry'] or 0 176 | return Event([row['ev_' + fld] for fld in Event._fields[:-1]] + 177 | [self.failed]) 178 | 179 | def _fetch(self): 180 | q = "SELECT * FROM pgq.get_batch_events(%s)" 181 | if self.predicate: 182 | q += " WHERE %s" % self.predicate 183 | self._curs.execute(q, (self.batch_id,)) 184 | self.length = self._curs.rowcount 185 | # Cursor is an iterable 186 | return self._curs 187 | 188 | def _fetchcursor(self): 189 | q = "SELECT * FROM pgq.get_batch_cursor(%s, %s, %s, %s);" 190 | self._curs.execute(q, (self.batch_id, self._cursor_name, 191 | self.fetch_size, self.predicate)) 192 | # this will return first batch of rows 193 | 194 | q = "FETCH %d FROM %s;" % (self.fetch_size, self._cursor_name) 195 | while True: 196 | rowcount = self._curs.rowcount 197 | if not rowcount: 198 | break 199 | 200 | self.length += rowcount 201 | for row in self._curs: 202 | yield row 203 | 204 | # if less rows than requested, it was final block 205 | if rowcount < self.fetch_size: 206 | break 207 | 208 | # request next block of rows 209 | self._curs.execute(q) 210 | 211 | self._curs.execute("CLOSE %s;" % self._cursor_name) 212 | 213 | def __iter__(self): 214 | if self.fetch_status: 215 | raise RuntimeError("Batch: double fetch? 
(%d)" % self.fetch_status) 216 | self.fetch_status = 1 217 | fetchall = self._fetchcursor if self.fetch_size else self._fetch 218 | for row in fetchall(): 219 | yield self._make_event(row) 220 | self.fetch_status = 2 221 | 222 | def finish(self): 223 | """Tag events and notify that the batch is done.""" 224 | if self.fetch_status >= 3: 225 | return # already finished 226 | if self._retriable and self.failed: 227 | self._flush_retry() 228 | self._curs.execute("SELECT pgq.finish_batch(%s);", (self.batch_id,)) 229 | self.fetch_status = 3 230 | 231 | def _flush_retry(self): 232 | """Tag retry events.""" 233 | retried_events = (( 234 | self.queue_name, 235 | self.consumer_name, 236 | '%s seconds' % retry_time, 237 | ev.id, 238 | ev.time, 239 | ev.retry + 1, 240 | ev.type, 241 | ev.data, 242 | ev.extra1, 243 | ev.extra2, 244 | ev.extra3, 245 | ev.extra4, 246 | ) for (ev, retry_time) in self.failed) 247 | self._curs.executemany( 248 | "SELECT pgq.event_retry_raw(%s, %s, CURRENT_TIMESTAMP + INTERVAL " 249 | "%s, %s, %s, %s, %s, %s, %s, %s, %s, %s);", retried_events) 250 | 251 | def __enter__(self): 252 | return self 253 | 254 | def __exit__(self, exc_type, exc_value, tb): 255 | # if partially processed, do not 'finish_batch' 256 | if exc_value is None and self.fetch_status == 2: 257 | self.finish() 258 | 259 | def __len__(self): 260 | return self.length 261 | 262 | def __bool__(self): 263 | return self.batch_id is not None 264 | __nonzero__ = __bool__ # Python 2 265 | 266 | def __repr__(self): 267 | return '' % (self.queue_name, self.batch_id) 268 | 269 | 270 | class Queue(object): 271 | """Queue class.""" 272 | 273 | def __init__(self, queue_name): 274 | if queue_name: 275 | self.queue_name = queue_name 276 | assert self.queue_name 277 | 278 | def create(self, curs, **params): 279 | """Create queue, if it does not exists.""" 280 | curs.execute("SELECT pgq.create_queue(%s);", 281 | (self.queue_name,)) 282 | res = curs.fetchone()[0] 283 | for key, value in params.items(): 284 | self.set_config(curs, key, value) 285 | return res 286 | 287 | def drop(self, curs, force=False): 288 | """Drop queue and all associated tables.""" 289 | curs.execute("SELECT pgq.drop_queue(%s, %s);", 290 | (self.queue_name, force)) 291 | return curs.fetchone()[0] 292 | 293 | def set_config(self, curs, name, value): 294 | """Configure queue. 
295 | 296 | Configurable parameters: 297 | - ticker_max_count default 500 298 | - ticker_max_lag default '3 seconds' 299 | - ticker_idle_period default '1 minute' 300 | - ticker_paused default False 301 | - rotation_period default '2 hours' 302 | - external_ticker default False 303 | """ 304 | curs.execute("SELECT pgq.set_queue_config(%s, %s, %s);", 305 | (self.queue_name, name, str(value))) 306 | return curs.fetchone()[0] 307 | 308 | def get_info(self, curs): 309 | """Get info about queue.""" 310 | curs.execute("SELECT * FROM pgq.get_queue_info(%s);", 311 | (self.queue_name,)) 312 | return curs.fetchone() 313 | 314 | def register_consumer(self, curs, consumer, tick_id=None): 315 | """Register the consumer on the queue.""" 316 | curs.execute("SELECT pgq.register_consumer_at(%s, %s, %s);", 317 | (self.queue_name, str(consumer), tick_id)) 318 | return curs.fetchone()[0] 319 | 320 | def unregister_consumer(self, curs, consumer): 321 | """Unregister the consumer from the queue.""" 322 | curs.execute("SELECT pgq.unregister_consumer(%s, %s);", 323 | (self.queue_name, str(consumer))) 324 | return curs.fetchone()[0] 325 | 326 | def get_consumer_info(self, curs, consumer=None): 327 | """Get info about consumer(s) on the queue.""" 328 | if consumer and not isinstance(consumer, str): 329 | consumer = str(consumer) 330 | curs.execute("SELECT * FROM pgq.get_consumer_info(%s, %s);", 331 | (self.queue_name, consumer)) 332 | return curs.fetchone() if consumer else curs.fetchall() 333 | 334 | def insert_event(self, curs, ev_type, ev_data, 335 | extra1=None, extra2=None, extra3=None, extra4=None): 336 | return insert_event(curs, self.queue_name, ev_type, ev_data, 337 | extra1, extra2, extra3, extra4) 338 | 339 | def insert_events(self, curs, rows, columns): 340 | return bulk_insert_events(curs, self.queue_name, rows, columns) 341 | 342 | def external_tick(self, curs, tick_id, timestamp, event_seq): 343 | """External ticker. 344 | 345 | Insert a tick with a particular tick_id and timestamp. 346 | """ 347 | curs.execute("SELECT pgq.ticker(%s, %s, %s, %s);", 348 | (self.queue_name, tick_id, timestamp, event_seq)) 349 | return curs.fetchone()[0] 350 | 351 | @classmethod 352 | def version(cls, curs): 353 | """Version string for PgQ.""" 354 | curs.execute("SELECT pgq.version();") 355 | return curs.fetchone()[0] 356 | 357 | @classmethod 358 | def get_all_queues_info(cls, curs): 359 | """Get info about all queues.""" 360 | curs.execute("SELECT * FROM pgq.get_queue_info();") 361 | return curs.fetchall() 362 | 363 | @classmethod 364 | def get_all_consumers_info(cls, curs): 365 | """Get info about all consumers on all queues.""" 366 | curs.execute("SELECT * FROM pgq.get_consumer_info();") 367 | return curs.fetchall() 368 | 369 | 370 | class Consumer(object): 371 | """Consumer base class.""" 372 | # queue name to read from 373 | queue_name = None 374 | 375 | # consumer_name 376 | consumer_name = None 377 | 378 | # filter out only events for specific tables 379 | # predicate = "ev_extra1 IN ('table1', 'table2')" 380 | predicate = None 381 | 382 | # by default use cursor-based fetch (0 disables) 383 | pgq_lazy_fetch = 300 384 | 385 | # batch covers at least this many events 386 | pgq_min_count = None 387 | 388 | # batch covers at least this much time (PostgreSQL interval) 389 | pgq_min_interval = None 390 | 391 | # batch covers events older than that (PostgreSQL interval) 392 | pgq_min_lag = None 393 | # Note: pgq_min_lag together with pgq_min_interval/count is inefficient. 
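    # Instead of passing the configuration to the constructor, a subclass
    # can hard-code it as class attributes; a hypothetical example:
    #
    #     class TableSync(Consumer):
    #         queue_name = 'main_q'
    #         consumer_name = 'table_sync'
    #         predicate = "ev_extra1 IN ('table1', 'table2')"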
394 | 395 | _batch_class = Batch 396 | _queue_class = Queue 397 | _queue = None 398 | 399 | def __init__(self, queue_name=None, consumer_name=None, predicate=None): 400 | if queue_name: 401 | self.queue_name = queue_name 402 | if consumer_name: 403 | self.consumer_name = consumer_name 404 | if predicate: 405 | self.predicate = predicate 406 | self.batch_info = None 407 | assert self.queue_name 408 | assert self.consumer_name 409 | 410 | def __str__(self): 411 | return self.consumer_name 412 | 413 | @property 414 | def queue(self): 415 | if self._queue is None: 416 | self._queue = self._queue_class(self.queue_name) 417 | return self._queue 418 | 419 | def next_batches(self, curs, limit=None, commit=False): 420 | """Return an iterator on the pending batches. 421 | 422 | Iterate on the batches, and yield an iterator on the events 423 | of each batch. When a batch is fully processed, it is closed and the next one is yielded. 424 | """ 425 | n_batch = 0 426 | 427 | # Use a separate dict-like cursor 428 | with self._cursor(curs) as dict_cursor: 429 | while not limit or n_batch < limit: 430 | # acquire batch 431 | ev_list = self._load_next_batch(dict_cursor) 432 | if commit: 433 | dict_cursor.connection.commit() 434 | 435 | if ev_list is None: 436 | break 437 | n_batch += 1 438 | 439 | try: 440 | # load and process events 441 | yield ev_list 442 | except GeneratorExit: 443 | if ev_list.fetch_status != 2: 444 | # partially processed: do not 'finish_batch' 445 | return 446 | # all processed: break loop, but 'finish_batch' before 447 | limit = n_batch 448 | 449 | # done 450 | ev_list.finish() 451 | if commit: 452 | dict_cursor.connection.commit() 453 | 454 | def next_events(self, curs, limit=None, commit=False): 455 | """Return an iterator on the pending events. 456 | 457 | Iterate on the batches, and yield each event of each batch. 458 | When a batch is fully processed, it is closed and the next one is loaded. 459 | """ 460 | for ev_list in self.next_batches(curs, limit=limit, commit=commit): 461 | for ev in ev_list: 462 | yield ev 463 | 464 | def _cursor(self, session): 465 | """Return a separate cursor, sharing the same connection.""" 466 | if hasattr(session, 'connection'): 467 | session = session.connection 468 | if session.autocommit and self.pgq_lazy_fetch: 469 | raise RuntimeError("autocommit mode is not compatible " 470 | "with pgq_lazy_fetch") 471 | return session.cursor(cursor_factory=psycopg2.extras.DictCursor) 472 | 473 | def _load_next_batch(self, curs): 474 | """Allocate next batch.
(internal)""" 475 | q = "SELECT * FROM pgq.next_batch_custom(%s, %s, %s, %s, %s);" 476 | curs.execute(q, (self.queue_name, self.consumer_name, self.pgq_min_lag, 477 | self.pgq_min_count, self.pgq_min_interval)) 478 | inf = dict(curs.fetchone()) 479 | inf['tick_id'] = inf['cur_tick_id'] 480 | inf['batch_end'] = inf['cur_tick_time'] 481 | inf['batch_start'] = inf['prev_tick_time'] 482 | inf['seq_start'] = inf['prev_tick_event_seq'] 483 | inf['seq_end'] = inf['cur_tick_event_seq'] 484 | self.batch_info = inf 485 | batch_id = inf['batch_id'] 486 | if batch_id is None: 487 | return batch_id 488 | return self._batch_class(curs, batch_id, 489 | self.queue_name, self.consumer_name, 490 | self.pgq_lazy_fetch, self.predicate) 491 | 492 | def register(self, curs, tick_id=None): 493 | """Register the consumer on the queue.""" 494 | return self.queue.register_consumer(curs, self.consumer_name, tick_id) 495 | 496 | def unregister(self, curs): 497 | """Unregister the consumer from the queue.""" 498 | return self.queue.unregister_consumer(curs, self.consumer_name) 499 | 500 | def get_info(self, curs): 501 | """Get info about the queue.""" 502 | return self.queue.get_consumer_info(curs, self.consumer_name) 503 | 504 | 505 | class _Connection(object): 506 | """Create a new database connection. 507 | 508 | Default connect_timeout is 15, unless it's set in the dsn. 509 | """ 510 | 511 | def __init__(self, dsn, autocommit=False, appname=__file__): 512 | # allow override 513 | if 'connect_timeout' not in dsn: 514 | dsn += " connect_timeout=15" 515 | if 'application_name' not in dsn: 516 | dsn += " application_name=%s" % quote_dsn_param(appname) 517 | self._autocommit = autocommit 518 | self._connection = None 519 | self._dsn = dsn 520 | 521 | def cursor(self, cursor_factory=psycopg2.extras.DictCursor): 522 | connection = self._connection 523 | if not connection or connection.closed: 524 | connection = psycopg2.connect(self._dsn) 525 | connection.autocommit = self._autocommit 526 | self._connection = connection 527 | return connection.cursor(cursor_factory=cursor_factory) 528 | 529 | 530 | class Ticker(_Connection): 531 | """PgQ ticker daemon.""" 532 | _logger = logging.getLogger('PgQ Ticker') 533 | 534 | def __init__(self, dsn, config=None): 535 | if config is None: 536 | config = {} 537 | super(Ticker, self).__init__(dsn, autocommit=1, appname='PgQ Ticker') 538 | self.check_period = config.get('check_period', 60) 539 | self.maint_period = config.get('maint_period', 120) 540 | self.retry_period = config.get('retry_period', 30) 541 | self.stats_period = config.get('stats_period', 30) 542 | self.ticker_period = config.get('ticker_period', 1) 543 | self._next_ticker = self._next_maint = self._next_retry = 0 544 | self._next_stats = 0 if (self.stats_period and 545 | self.stats_period > 0) else 0x3afff43370 546 | self.n_ticks = self.n_maint = self.n_retry = 0 547 | 548 | def run(self): 549 | while True: 550 | self._logger.info("Starting Ticker %s", __version__) 551 | try: 552 | with self.cursor() as curs: 553 | if not self.try_lock(curs): 554 | self._logger.warning('Aborting.') 555 | return 556 | while True: 557 | self.run_once(curs) 558 | next_time = min(self._next_ticker, 559 | self._next_maint, 560 | self._next_retry) 561 | time.sleep(max(1, next_time - time.time())) 562 | except Exception as exc: 563 | # Something bad happened, re-check the connection 564 | self._logger.warning("%s: %s", type(exc).__name__, exc) 565 | curs = self._connection = None 566 | time.sleep(self.check_period) 567 | 568 | def 
check_pgq(self, curs): 569 | curs.execute("SELECT 1 FROM pg_catalog.pg_namespace" 570 | " WHERE nspname = 'pgq';") 571 | (res,) = curs.fetchone() 572 | if not res: 573 | self._logger.warning('no pgq installed') 574 | return False 575 | version = Queue.version(curs) 576 | if version < "3": 577 | self._logger.warning('bad pgq version: %s', version) 578 | return False 579 | return True 580 | 581 | def try_lock(self, curs): 582 | """Avoid running twice on the same database.""" 583 | if not self.check_pgq(curs): 584 | return False 585 | curs.execute( 586 | "SELECT pg_try_advisory_lock(catalog.oid::int, pgq.oid::int)" 587 | " FROM pg_database catalog, pg_namespace pgq" 588 | " WHERE datname=current_catalog AND nspname='pgq';") 589 | (res,) = curs.fetchone() 590 | if not res: 591 | self._logger.warning('already running') 592 | return False 593 | return True 594 | 595 | def run_once(self, curs): 596 | self.do_ticker(curs) 597 | 598 | if time.time() > self._next_maint: 599 | self.run_maint(curs) 600 | self._next_maint = time.time() + self.maint_period 601 | 602 | if time.time() > self._next_retry: 603 | self.run_retry(curs) 604 | self._next_retry = time.time() + self.retry_period 605 | 606 | if time.time() > self._next_stats: 607 | self.log_stats() 608 | self._next_stats = time.time() + self.stats_period 609 | 610 | def do_ticker(self, curs): 611 | if time.time() > self._next_ticker: 612 | curs.execute("SELECT pgq.ticker();") 613 | (res,) = curs.fetchone() 614 | self.n_ticks += 1 615 | self._next_ticker = time.time() + self.ticker_period 616 | return res 617 | 618 | def run_maint(self, curs): 619 | self._logger.debug("starting maintenance") 620 | curs.execute("SELECT func_name, func_arg FROM pgq.maint_operations();") 621 | for func_name, func_arg in curs.fetchall(): 622 | if func_name.lower().startswith('vacuum'): 623 | assert func_arg 624 | statement = "%s %s;" % (func_name, quote_ident(func_arg)) 625 | params = None 626 | elif func_arg: 627 | statement = "SELECT %s(%%s);" % func_name 628 | params = (func_arg,) 629 | else: 630 | statement = "SELECT %s();" % func_name 631 | params = None 632 | self._logger.debug("[%s]", statement) 633 | curs.execute(statement, params) 634 | self.n_maint += 1 635 | self.do_ticker(curs) 636 | 637 | def run_retry(self, curs): 638 | self._logger.debug("starting retry event processing") 639 | retry = True 640 | while retry: 641 | curs.execute("SELECT * FROM pgq.maint_retry_events();") 642 | (retry,) = curs.fetchone() 643 | self.n_retry += retry 644 | self.do_ticker(curs) 645 | 646 | def log_stats(self): 647 | self._logger.info(str(self)) 648 | 649 | def __str__(self): 650 | return ("{ticks: %(n_ticks)d, maint: %(n_maint)d," 651 | " retry: %(n_retry)d}" % self.__dict__) 652 | 653 | 654 | def _main(log_level=logging.INFO): 655 | import getpass 656 | import sys 657 | import threading 658 | 659 | if len(sys.argv) < 2 or '=' not in sys.argv[1]: 660 | print("""PgQ Ticker daemon. 661 | 662 | Usage: 663 | %s DSN 664 | 665 | If the password is missing, it will be requested interactively. 
666 | """ % sys.argv[0]) 667 | sys.exit(1) 668 | 669 | dsn = ' '.join(sys.argv[1:]) 670 | if 'password' not in dsn: 671 | passwd = getpass.getpass("Password: ") 672 | if passwd is not None: 673 | dsn += " password=%s" % quote_dsn_param(passwd) 674 | 675 | # Configure the logger 676 | logging.basicConfig(level=log_level) 677 | 678 | ticker = Ticker(dsn) 679 | # ticker._logger.setLevel(logging.DEBUG) 680 | 681 | if sys.flags.interactive: 682 | t = threading.Thread(target=ticker.run) 683 | t.daemon = True 684 | t.start() 685 | else: 686 | try: 687 | ticker.run() 688 | except KeyboardInterrupt: 689 | print('\nstopped ...') 690 | 691 | return ticker 692 | 693 | 694 | if __name__ == '__main__': 695 | ticker = _main() 696 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [wheel] 2 | universal = 1 3 | 4 | [flake8] 5 | select = E,F,W 6 | max_line_length = 79 7 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | import sys 3 | 4 | from setuptools import setup 5 | 6 | 7 | def get_version(fname='pgqueue.py'): 8 | with open(fname) as f: 9 | for line in f: 10 | if line.startswith('__version__'): 11 | return eval(line.split('=')[-1]) 12 | 13 | 14 | def get_long_description(): 15 | descr = [] 16 | for fname in ('README.rst',): 17 | with open(fname) as f: 18 | descr.append(f.read()) 19 | return '\n\n'.join(descr) 20 | 21 | 22 | if sys.version_info < (3,): 23 | tests_require = ['mock', 'unittest2'], 24 | test_suite = 'unittest2.collector' 25 | else: 26 | tests_require = ['mock', 'unittest2py3k'] 27 | test_suite = 'unittest2.collector.collector' 28 | 29 | 30 | setup( 31 | name="pgqueue", 32 | license="ISC", 33 | version=get_version(), 34 | description="Light PgQ Framework - queuing system for PostgreSQL", 35 | long_description=get_long_description(), 36 | maintainer="Florent Xicluna", 37 | maintainer_email="florent.xicluna@gmail.com", 38 | url="https://github.com/florentx/pgqueue", 39 | py_modules=['pgqueue'], 40 | install_requires=[ 41 | 'psycopg2', 42 | ], 43 | zip_safe=False, 44 | keywords="postgresql pgq queue", 45 | classifiers=[ 46 | 'Development Status :: 4 - Beta', 47 | 'Environment :: Console', 48 | 'Intended Audience :: Developers', 49 | 'License :: OSI Approved :: ISC License (ISCL)', 50 | 'Operating System :: OS Independent', 51 | 'Programming Language :: Python', 52 | 'Programming Language :: Python :: 2.6', 53 | 'Programming Language :: Python :: 2.7', 54 | 'Programming Language :: Python :: 3', 55 | 'Topic :: Database', 56 | 'Topic :: Software Development :: Libraries :: Python Modules', 57 | ], 58 | tests_require=tests_require, 59 | test_suite=test_suite, 60 | ) 61 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/florentx/pgqueue/45cd6f348b7c5dd52cad4bf905abcd9159f73473/tests/__init__.py -------------------------------------------------------------------------------- /tests/test_consumer.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from datetime import datetime, timedelta 4 | 5 | import mock 6 | import unittest2 7 | 8 | import pgqueue 9 | 10 | BATCH_INFO = { 11 | 
'batch_id': 42, 12 | 'cur_tick_id': 0xabc, 13 | 'cur_tick_time': 0xba9, 14 | 'cur_tick_event_seq': 0x7a, 15 | 'prev_tick_time': 0xba1, 16 | 'prev_tick_event_seq': 0x79, 17 | } 18 | BATCH_NULL = dict.fromkeys(BATCH_INFO) 19 | 20 | EVENT0 = dict.fromkeys(('ev_id', 'ev_time', 'ev_txid', 'ev_retry', 21 | 'ev_type', 'ev_data', 22 | 'ev_extra1', 'ev_extra2', 'ev_extra3', 'ev_extra4')) 23 | EVENT1 = dict(EVENT0, ev_id=94768, ev_txid=2133514, 24 | ev_time=datetime.now() - timedelta(minutes=7), 25 | ev_type='NOTE', ev_data='the payload', ev_extra4='42') 26 | EVENT2 = dict(EVENT0, ev_id=94769) 27 | EVENT3 = dict(EVENT0, ev_id=94770) 28 | EVENT4 = dict(EVENT0, ev_id=94771) 29 | 30 | NEXT_BATCH = 'SELECT * FROM pgq.next_batch_custom(%s, %s, %s, %s, %s);' 31 | BATCH_CURS = 'SELECT * FROM pgq.get_batch_cursor(%s, %s, %s, %s);' 32 | ANY = mock.ANY 33 | C = mock.call 34 | # Should be C.__iter__(), but it is not supported by mock 35 | C__iter__ = ('__iter__', ()) 36 | 37 | 38 | def mock_cursor(autocommit=False): 39 | """Return a mock cursor.""" 40 | cursor = mock.MagicMock() 41 | cursor.connection.autocommit = autocommit 42 | # return the same cursor when used as a context manager 43 | cursor.connection.cursor.return_value.__enter__.return_value = cursor 44 | 45 | # http://bugs.python.org/issue18622 46 | def safe_reset_mock(): 47 | cursor.connection.cursor.return_value.__enter__.return_value = None 48 | cursor.reset_mock() 49 | cursor.connection.cursor.return_value.__enter__.return_value = cursor 50 | cursor.safe_reset_mock = safe_reset_mock 51 | return cursor 52 | 53 | 54 | class TestConsumer(unittest2.TestCase): 55 | 56 | def test_register(self): 57 | cur = mock_cursor(autocommit=True) 58 | 59 | consu = pgqueue.Consumer('main_q', 'first') 60 | self.assertEqual(consu.queue_name, 'main_q') 61 | self.assertEqual(consu.consumer_name, 'first') 62 | self.assertIsNone(consu.predicate) 63 | self.assertIsInstance(consu.queue, pgqueue.Queue) 64 | self.assertEqual(consu.queue.queue_name, 'main_q') 65 | 66 | # new consumer 67 | consu.register(cur) 68 | 69 | # retrieve information 70 | consu.get_info(cur) 71 | 72 | # remove consumer 73 | consu.unregister(cur) 74 | 75 | self.assertSequenceEqual(cur.execute.call_args_list, [ 76 | C('SELECT pgq.register_consumer_at(%s, %s, %s);', 77 | ('main_q', 'first', None)), 78 | C('SELECT * FROM pgq.get_consumer_info(%s, %s);', 79 | ('main_q', 'first')), 80 | C('SELECT pgq.unregister_consumer(%s, %s);', ('main_q', 'first')), 81 | ]) 82 | 83 | def test_next_batches(self): 84 | cur = mock_cursor() 85 | cur.fetchone.side_effect = [BATCH_INFO, BATCH_NULL] 86 | consu = pgqueue.Consumer('main_q', 'first') 87 | 88 | for batch in consu.next_batches(cur, limit=1, commit=True): 89 | self.assertIsInstance(batch, pgqueue.Batch) 90 | 91 | self.assertSequenceEqual(cur.execute.call_args_list, [ 92 | C(NEXT_BATCH, ('main_q', 'first', None, None, None)), 93 | C('SELECT pgq.finish_batch(%s);', (42,)), 94 | ]) 95 | 96 | def test_next_events(self): 97 | cur = mock_cursor() 98 | cur.fetchone.side_effect = [BATCH_INFO, BATCH_NULL] 99 | cur.rowcount = 42 100 | consu = pgqueue.Consumer('main_q', 'first') 101 | 102 | for event in consu.next_events(cur, limit=1, commit=True): 103 | self.assertIsInstance(event, pgqueue.Event) 104 | 105 | self.assertSequenceEqual(cur.execute.call_args_list, [ 106 | C(NEXT_BATCH, ('main_q', 'first', None, None, None)), 107 | C(BATCH_CURS, (42, 'batch_walker', 300, None)), 108 | C('CLOSE batch_walker;'), 109 | C('SELECT pgq.finish_batch(%s);', (42,)), 110 | ]) 111 | 112 | 113 | 
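class TestEventHash(unittest2.TestCase):
    # Sketch around the hashing behavior (CHANGES 0.6: "Instances of
    # ``Event`` are hashable"): the hash only covers the first four
    # fields (id, txid, time, type), so events can be used as dict keys.

    def test_hashable(self):
        failed = {}
        event = pgqueue.Event(list(range(10)) + [failed])
        self.assertEqual(hash(event), hash((0, 1, 2, 3)))
        self.assertIn(event, {event: 'seen'})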
class TestEvent(unittest2.TestCase): 114 | maxDiff = 0x800 115 | 116 | def test_simple(self): 117 | cur = mock_cursor() 118 | cur.fetchone.side_effect = [BATCH_INFO, BATCH_NULL] 119 | cur.__iter__.side_effect = [iter([EVENT1, EVENT2])] 120 | cur.rowcount = 2 121 | consu = pgqueue.Consumer('main_q', 'first') 122 | 123 | events = [] 124 | for event in consu.next_events(cur, limit=1, commit=True): 125 | self.assertIsInstance(event, pgqueue.Event) 126 | events.append(str(event)) 127 | 128 | self.assertFalse(event.failed) 129 | self.assertIsNone(event.retry_time) 130 | self.assertEqual(event.retry, 0) 131 | # 11 attributes on the Event object (including _failed mapping) 132 | self.assertEqual(len(event._fields), 11) 133 | # the main interface is the namedtuple API 134 | for idx, name in enumerate(event._fields): 135 | self.assertIs(getattr(event, name), event[idx]) 136 | # use either vars(event) or event.__dict__ when you need a dict 137 | self.assertEqual(sorted(event.__dict__), sorted(event._fields)) 138 | self.assertEqual(vars(event), event.__dict__) 139 | 140 | self.assertEqual(events, [ 141 | "<Event 94768: type=NOTE data='the payload'" 142 | " extra1=None extra4='42'>", 143 | "<Event 94769: type=None data=None extra1=None extra4=None>", 144 | ]) 145 | 146 | self.assertSequenceEqual(cur.mock_calls, [ 147 | C.connection.cursor(cursor_factory=ANY), 148 | C.connection.cursor().__enter__(), 149 | C.execute(NEXT_BATCH, ('main_q', 'first', None, None, None)), 150 | C.fetchone(), 151 | C.connection.commit(), 152 | C.execute(BATCH_CURS, (42, 'batch_walker', 300, None)), 153 | C__iter__, 154 | C.execute('CLOSE batch_walker;'), 155 | C.execute('SELECT pgq.finish_batch(%s);', (42,)), 156 | C.connection.commit(), 157 | C.connection.cursor().__exit__(None, None, None), 158 | ]) 159 | 160 | def test_retry(self): 161 | cur = mock_cursor() 162 | cur.fetchone.side_effect = [BATCH_INFO, BATCH_NULL] 163 | cur.__iter__.side_effect = [iter([EVENT1, EVENT2])] 164 | cur.rowcount = 2 165 | consu = pgqueue.Consumer('main_q', 'first') 166 | 167 | for event in consu.next_events(cur, limit=1, commit=True): 168 | self.assertIsInstance(event, pgqueue.Event) 169 | 170 | # Tag for retry 171 | event.tag_retry(199) 172 | self.assertTrue(event.failed) 173 | self.assertEqual(event.retry_time, 199) 174 | # Incremental counter of retries: NULL in the database, exposed as 0 175 | self.assertEqual(event.retry, 0) 176 | 177 | # Tag is reversible, until the batch is "finished" 178 | event.tag_done() 179 | self.assertFalse(event.failed) 180 | self.assertIsNone(event.retry_time) 181 | self.assertEqual(event.retry, 0) 182 | 183 | event.tag_retry(99) 184 | self.assertTrue(event.failed) 185 | self.assertEqual(event.retry_time, 99) 186 | self.assertEqual(event.retry, 0) 187 | 188 | self.assertSequenceEqual(cur.mock_calls, [ 189 | C.connection.cursor(cursor_factory=ANY), 190 | C.connection.cursor().__enter__(), 191 | C.execute(NEXT_BATCH, ('main_q', 'first', None, None, None)), 192 | C.fetchone(), 193 | C.connection.commit(), 194 | C.execute(BATCH_CURS, (42, 'batch_walker', 300, None)), 195 | C__iter__, 196 | C.execute('CLOSE batch_walker;'), 197 | # Argument is a generator 198 | C.executemany('SELECT pgq.event_retry_raw(%s, %s, ' 199 | 'CURRENT_TIMESTAMP + INTERVAL %s, %s, %s, %s, %s, ' 200 | '%s, %s, %s, %s, %s);', ANY), 201 | C.execute('SELECT pgq.finish_batch(%s);', (42,)), 202 | C.connection.commit(), 203 | C.connection.cursor().__exit__(None, None, None), 204 | ]) 205 | 206 | def test_large_batch(self): 207 | cur = mock_cursor() 208 | cur.fetchone.side_effect = [BATCH_INFO, BATCH_NULL] 209 | 210 | def execute(command, *params): 211 | if command == BATCH_CURS or
command.startswith('FETCH'): 212 | cur.rowcount = rowcounts.pop(0) 213 | else: 214 | cur.rowcount = mock.Mock() 215 | 216 | cur.execute.side_effect = execute 217 | cur.__iter__.side_effect = [iter([EVENT1, EVENT2, EVENT3]), 218 | iter([EVENT4])] 219 | rowcounts = [3, 1, 0] 220 | consu = pgqueue.Consumer('main_q', 'first') 221 | consu.pgq_lazy_fetch = 3 # instead of 300 222 | 223 | events = [] 224 | for event in consu.next_events(cur, commit=True): 225 | self.assertIsInstance(event, pgqueue.Event) 226 | events.append(str(event)) 227 | 228 | self.assertEqual(events, [ 229 | "<Event 94768: type=NOTE data='the payload'" 230 | " extra1=None extra4='42'>", 231 | "<Event 94769: type=None data=None extra1=None extra4=None>", 232 | "<Event 94770: type=None data=None extra1=None extra4=None>", 233 | "<Event 94771: type=None data=None extra1=None extra4=None>", 234 | ]) 235 | 236 | self.assertSequenceEqual(cur.mock_calls, [ 237 | C.connection.cursor(cursor_factory=ANY), 238 | C.connection.cursor().__enter__(), 239 | C.execute(NEXT_BATCH, ('main_q', 'first', None, None, None)), 240 | C.fetchone(), 241 | C.connection.commit(), 242 | C.execute(BATCH_CURS, (42, 'batch_walker', 3, None)), 243 | C__iter__, 244 | C.execute('FETCH 3 FROM batch_walker;'), 245 | C__iter__, 246 | C.execute('CLOSE batch_walker;'), 247 | C.execute('SELECT pgq.finish_batch(%s);', (42,)), 248 | C.connection.commit(), 249 | C.execute(NEXT_BATCH, ('main_q', 'first', None, None, None)), 250 | C.fetchone(), 251 | C.connection.commit(), 252 | C.connection.cursor().__exit__(None, None, None), 253 | ]) 254 | 255 | def test_abort(self): 256 | cur = mock_cursor() 257 | cur.fetchone.side_effect = [BATCH_INFO, BATCH_NULL] 258 | 259 | def execute(command, *params): 260 | if command == BATCH_CURS or command.startswith('FETCH'): 261 | cur.rowcount = rowcounts.pop(0) 262 | else: 263 | cur.rowcount = mock.Mock() 264 | 265 | cur.execute.side_effect = execute 266 | cur.__iter__.side_effect = [iter([EVENT1, EVENT2, EVENT3]), 267 | iter([EVENT4])] 268 | rowcounts = [3, 1, 0] 269 | consu = pgqueue.Consumer('main_q', 'first') 270 | consu.pgq_lazy_fetch = 3 # instead of 300 271 | 272 | events = [] 273 | for event in consu.next_events(cur, commit=True): 274 | events.append(str(event)) 275 | if len(events) == 4: 276 | break 277 | 278 | self.assertSequenceEqual(cur.mock_calls, [ 279 | C.connection.cursor(cursor_factory=ANY), 280 | C.connection.cursor().__enter__(), 281 | C.execute(NEXT_BATCH, ('main_q', 'first', None, None, None)), 282 | C.fetchone(), 283 | C.connection.commit(), 284 | C.execute(BATCH_CURS, (42, 'batch_walker', 3, None)), 285 | C__iter__, 286 | C.execute('FETCH 3 FROM batch_walker;'), 287 | C__iter__, 288 | # the transaction is not committed: implicit rollback 289 | C.connection.cursor().__exit__(None, None, None), 290 | ]) 291 | 292 | 293 | class TestBatch(unittest2.TestCase): 294 | 295 | def test_simple(self): 296 | cur = mock_cursor() 297 | cur.fetchone.side_effect = [BATCH_INFO, BATCH_NULL] 298 | consu = pgqueue.Consumer('main_q', 'first') 299 | 300 | for batch in consu.next_batches(cur, commit=True): 301 | self.assertIsInstance(batch, pgqueue.Batch) 302 | self.assertEqual(str(batch), '<Batch main_q:42>') 303 | 304 | self.assertSequenceEqual(cur.mock_calls, [ 305 | C.connection.cursor(cursor_factory=ANY), 306 | C.connection.cursor().__enter__(), 307 | C.execute(NEXT_BATCH, ('main_q', 'first', None, None, None)), 308 | C.fetchone(), 309 | C.connection.commit(), 310 | C.execute('SELECT pgq.finish_batch(%s);', (42,)), 311 | C.connection.commit(), 312 | C.execute(NEXT_BATCH, ('main_q', 'first', None, None, None)), 313 | C.fetchone(), 314 | C.connection.commit(), 315 | C.connection.cursor().__exit__(None, None, None), 316 | ]) 317 |
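class TestRetryList(unittest2.TestCase):
    # Sketch around the internal _RetryList helper (CHANGES 0.6:
    # "Preserve order of events on retry"): iterating yields the
    # (key, value) pairs in insertion order.

    def test_insertion_order(self):
        retries = pgqueue._RetryList()
        for key in ('b', 'a', 'c'):
            retries[key] = 60
        self.assertEqual(list(retries), [('b', 60), ('a', 60), ('c', 60)])
        # removing and re-adding a key moves it to the end
        del retries['a']
        retries['a'] = 30
        self.assertEqual(list(retries), [('b', 60), ('c', 60), ('a', 30)])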
-------------------------------------------------------------------------------- /tests/test_producer.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import mock 4 | import unittest2 5 | 6 | import pgqueue 7 | 8 | 9 | class TestProducer(unittest2.TestCase): 10 | 11 | def test_insert_event(self): 12 | cur = mock.MagicMock() 13 | 14 | pgqueue.insert_event(cur, 'main_q', 'NOTE', 'Hello world!') 15 | 16 | self.assertSequenceEqual(cur.execute.call_args_list, [ 17 | mock.call( 18 | 'SELECT pgq.insert_event(%s, %s, %s, %s, %s, %s, %s);', 19 | ('main_q', 'NOTE', 'Hello world!', None, None, None, None)), 20 | ]) 21 | cur.reset_mock() 22 | 23 | pgqueue.insert_event(cur, 'main_q', 'NOTE', 'Hello!', 24 | extra1='42', extra4='123"ac') 25 | 26 | self.assertSequenceEqual(cur.execute.call_args_list, [ 27 | mock.call( 28 | 'SELECT pgq.insert_event(%s, %s, %s, %s, %s, %s, %s);', 29 | ('main_q', 'NOTE', 'Hello!', '42', None, None, '123"ac')), 30 | ]) 31 | 32 | def test_bulk_insert_events(self): 33 | cur = mock.MagicMock() 34 | cur.fetchone.return_value = ('pgq.event_1_2',) 35 | rows = [('ab', '12'), (None, None), ('', '-3'), ('tab\t.', '42\r\n')] 36 | columns = ('data', 'extra3') 37 | 38 | pgqueue.bulk_insert_events(cur, 'main_q', rows, columns) 39 | 40 | # Fake file object is a subclass of dict 41 | fobj = {'content': 'ab\t12\n' 42 | '\\N\t\\N\n' 43 | '\t-3\n' 44 | 'tab\\t.\t42\\r\\n\n'} 45 | self.assertSequenceEqual(cur.execute.call_args_list, [ 46 | mock.call('SELECT pgq.current_event_table(%s);', ('main_q',)), 47 | ]) 48 | self.assertSequenceEqual(cur.copy_from.call_args_list, [ 49 | mock.call(fobj, 'pgq.event_1_2', columns=['ev_data', 'ev_extra3']), 50 | ]) 51 | 52 | def test_queue_insert_event(self): 53 | cur = mock.MagicMock() 54 | 55 | queue = pgqueue.Queue('main_q') 56 | queue.insert_event(cur, 'NOTE', 'Hello world!') 57 | 58 | self.assertSequenceEqual(cur.execute.call_args_list, [ 59 | mock.call( 60 | 'SELECT pgq.insert_event(%s, %s, %s, %s, %s, %s, %s);', 61 | ('main_q', 'NOTE', 'Hello world!', None, None, None, None)), 62 | ]) 63 | cur.reset_mock() 64 | 65 | queue.insert_event(cur, 'NOTE', 'Hello!', extra1='42', extra4='123"ac') 66 | 67 | self.assertSequenceEqual(cur.execute.call_args_list, [ 68 | mock.call( 69 | 'SELECT pgq.insert_event(%s, %s, %s, %s, %s, %s, %s);', 70 | ('main_q', 'NOTE', 'Hello!', '42', None, None, '123"ac')), 71 | ]) 72 | 73 | def test_queue_insert_events(self): 74 | cur = mock.MagicMock() 75 | cur.fetchone.return_value = ('pgq.event_1_2',) 76 | rows = [('ab', '12'), (None, None), ('', '-3'), ('tab\t.', '42\r\n')] 77 | columns = ('data', 'extra3') 78 | 79 | queue = pgqueue.Queue('main_q') 80 | queue.insert_events(cur, rows, columns) 81 | 82 | # Fake file object is a subclass of dict 83 | fobj = {'content': 'ab\t12\n' 84 | '\\N\t\\N\n' 85 | '\t-3\n' 86 | 'tab\\t.\t42\\r\\n\n'} 87 | self.assertSequenceEqual(cur.execute.call_args_list, [ 88 | mock.call('SELECT pgq.current_event_table(%s);', ('main_q',)), 89 | ]) 90 | self.assertSequenceEqual(cur.copy_from.call_args_list, [ 91 | mock.call(fobj, 'pgq.event_1_2', columns=['ev_data', 'ev_extra3']), 92 | ]) 93 | 94 | 95 | class TestQueue(unittest2.TestCase): 96 | maxDiff = 0x800 97 | 98 | def test_create_queue(self): 99 | cur = mock.MagicMock() 100 | 101 | queue = pgqueue.Queue('main_queue') 102 | queue.create(cur) 103 | queue.drop(cur) 104 | 105 | self.assertSequenceEqual(cur.execute.call_args_list, [ 106 | mock.call('SELECT pgq.create_queue(%s);', 
('main_queue',)), 107 | mock.call('SELECT pgq.drop_queue(%s, %s);', ('main_queue', False)), 108 | ]) 109 | cur.reset_mock() 110 | 111 | queue = pgqueue.Queue('main_queue') 112 | queue.create(cur, ticker_max_count=500, ticker_max_lag='3 seconds', 113 | ticker_idle_period='1 minute', rotation_period='2 hours') 114 | queue.register_consumer(cur, 'first_consumer') 115 | queue.drop(cur) 116 | queue.drop(cur, force=True) 117 | 118 | # re-order the call_args_list to verify assertions 119 | execute_args = list(cur.execute.call_args_list) 120 | execute_args[1:5] = sorted(execute_args[1:5]) 121 | qset = 'SELECT pgq.set_queue_config(%s, %s, %s);' 122 | self.assertSequenceEqual(execute_args, [ 123 | mock.call('SELECT pgq.create_queue(%s);', ('main_queue',)), 124 | mock.call(qset, ('main_queue', 'rotation_period', '2 hours')), 125 | mock.call(qset, ('main_queue', 'ticker_idle_period', '1 minute')), 126 | mock.call(qset, ('main_queue', 'ticker_max_count', '500')), 127 | mock.call(qset, ('main_queue', 'ticker_max_lag', '3 seconds')), 128 | mock.call('SELECT pgq.register_consumer_at(%s, %s, %s);', 129 | ('main_queue', 'first_consumer', None)), 130 | mock.call('SELECT pgq.drop_queue(%s, %s);', ('main_queue', False)), 131 | mock.call('SELECT pgq.drop_queue(%s, %s);', ('main_queue', True)), 132 | ]) 133 | 134 | def test_api(self): 135 | cur = mock.MagicMock() 136 | 137 | queue = pgqueue.Queue('main_queue') 138 | self.assertEqual(queue.queue_name, 'main_queue') 139 | 140 | # Configure queue 141 | queue.create(cur) 142 | queue.set_config(cur, 'ticker_paused', True) 143 | queue.register_consumer(cur, 'first_consumer') 144 | queue.unregister_consumer(cur, 'first_consumer') 145 | 146 | # Retrieve queue information 147 | queue.get_info(cur) 148 | queue.get_consumer_info(cur) 149 | queue.get_consumer_info(cur, 'first_consumer') 150 | 151 | # Global information 152 | queue.version(cur) 153 | queue.get_all_queues_info(cur) 154 | queue.get_all_consumers_info(cur) 155 | 156 | self.assertSequenceEqual(cur.execute.call_args_list, [ 157 | # configuration 158 | mock.call('SELECT pgq.create_queue(%s);', ('main_queue',)), 159 | mock.call('SELECT pgq.set_queue_config(%s, %s, %s);', 160 | ('main_queue', 'ticker_paused', 'True')), 161 | mock.call('SELECT pgq.register_consumer_at(%s, %s, %s);', 162 | ('main_queue', 'first_consumer', None)), 163 | mock.call('SELECT pgq.unregister_consumer(%s, %s);', 164 | ('main_queue', 'first_consumer')), 165 | # queue information 166 | mock.call('SELECT * FROM pgq.get_queue_info(%s);', 167 | ('main_queue',)), 168 | mock.call('SELECT * FROM pgq.get_consumer_info(%s, %s);', 169 | ('main_queue', None)), 170 | mock.call('SELECT * FROM pgq.get_consumer_info(%s, %s);', 171 | ('main_queue', 'first_consumer')), 172 | # global information 173 | mock.call('SELECT pgq.version();'), 174 | mock.call('SELECT * FROM pgq.get_queue_info();'), 175 | mock.call('SELECT * FROM pgq.get_consumer_info();'), 176 | ]) 177 | -------------------------------------------------------------------------------- /tests/test_quoting.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from decimal import Decimal 4 | 5 | import unittest2 6 | import pgqueue 7 | 8 | 9 | class TestQuoting(unittest2.TestCase): 10 | 11 | def test_quote_ident(self): 12 | # Used for SQL identifiers 13 | quote_ident = pgqueue.quote_ident 14 | self.assertEqual(quote_ident(''), '""') 15 | self.assertEqual(quote_ident('any_table'), '"any_table"') 16 | 
self.assertEqual(quote_ident('"other"_table'), '"""other""_table"') 17 | 18 | def test_quote_copy(self): 19 | # Used for passing values to COPY FROM command 20 | quote_copy = pgqueue.quote_copy 21 | self.assertEqual(quote_copy(None), r'\N') 22 | self.assertEqual(quote_copy(''), '') 23 | self.assertEqual(quote_copy(1.0), '1.0') 24 | self.assertEqual(quote_copy(True), 'True') 25 | self.assertEqual(quote_copy(Decimal("1")), '1') 26 | self.assertEqual(quote_copy('any value'), 'any value') 27 | self.assertEqual(quote_copy('any\tvalue'), r'any\tvalue') 28 | self.assertEqual(quote_copy('any\\tvalue'), r'any\\tvalue') 29 | self.assertEqual(quote_copy('a\r\nlong\ntext'), r'a\r\nlong\ntext') 30 | 31 | def test_quote_dsn_param(self): 32 | # Used for quoting dsn password, or other dsn parameters 33 | param = pgqueue.quote_dsn_param 34 | self.assertEqual(param(None), "''") 35 | self.assertEqual(param(""), "''") 36 | self.assertEqual(param("1password"), "1password") 37 | self.assertEqual(param(" p ssw rd"), "' p ssw rd'") 38 | self.assertEqual(param(r"p'ssw\rd"), r"'p\'ssw\\rd'") 39 | -------------------------------------------------------------------------------- /tests/test_ticker.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | import unittest2 4 | import mock 5 | 6 | import pgqueue 7 | 8 | DSN = "dbname=test_db user=postgres" 9 | C = mock.call 10 | 11 | 12 | class TestTicker(unittest2.TestCase): 13 | maxDiff = 0x800 14 | 15 | def setUp(self): 16 | self.pg_connect = mock.patch('pgqueue.psycopg2.connect').start() 17 | self.pg_cursor = mock.MagicMock() 18 | (self.pg_connect.return_value 19 | .cursor.return_value.__enter__.return_value) = self.pg_cursor 20 | 21 | def tearDown(self): 22 | mock.patch.stopall() 23 | del self.pg_connect, self.pg_cursor 24 | 25 | def test_create(self): 26 | ticker = pgqueue.Ticker(DSN) 27 | self.assertEqual(ticker.check_period, 60) 28 | self.assertEqual(ticker.maint_period, 120) 29 | self.assertEqual(ticker.retry_period, 30) 30 | self.assertEqual(ticker.stats_period, 30) 31 | self.assertEqual(ticker.ticker_period, 1) 32 | self.assertEqual(ticker.check_period, 60) 33 | 34 | def test_run(self): 35 | ticker = pgqueue.Ticker(DSN) 36 | maint_operations = [ 37 | ('pgq.maint_rotate_tables_step1', 'main_queue'), 38 | ('vacuum', 'pgq.dummy'), 39 | ] 40 | self.pg_cursor.fetchone.side_effect = [ 41 | (1,), ('3.1.5',), (42,), # pgq_check() and try_lock() 42 | (42,), (42,), (0,), # 1st run 43 | (42,), (0,), # 2nd run 44 | (42,)] # 3rd run 45 | self.pg_cursor.fetchall.return_value = maint_operations 46 | 47 | def force_run(): 48 | ticker._next_ticker = 0x411 49 | ticker._next_retry = 0x421 50 | ticker._next_maint = 0x431 51 | yield 52 | ticker._next_ticker = 0x511 53 | yield 54 | ticker._next_ticker = 0x611 55 | raise SystemExit 56 | with mock.patch('pgqueue.time.sleep', side_effect=force_run()): 57 | self.assertRaises(SystemExit, ticker.run) 58 | self.assertEqual(str(ticker), '{ticks: 3, maint: 4, retry: 42}') 59 | 60 | self.assertSequenceEqual(self.pg_connect.call_args_list, [ 61 | C("dbname=test_db user=postgres connect_timeout=15 " 62 | "application_name='PgQ Ticker'"), 63 | ]) 64 | self.assertSequenceEqual(self.pg_cursor.execute.call_args_list, [ 65 | C("SELECT 1 FROM pg_catalog.pg_namespace" 66 | " WHERE nspname = 'pgq';"), 67 | C("SELECT pgq.version();"), 68 | C("SELECT pg_try_advisory_lock(catalog.oid::int, pgq.oid::int)" 69 | " FROM pg_database catalog, pg_namespace pgq" 70 | " WHERE datname=current_catalog 
AND nspname='pgq';"), 71 | # 1st run 72 | C("SELECT pgq.ticker();"), 73 | C('SELECT func_name, func_arg FROM pgq.maint_operations();'), 74 | C('SELECT pgq.maint_rotate_tables_step1(%s);', ('main_queue',)), 75 | C('vacuum "pgq.dummy";', None), 76 | C('SELECT * FROM pgq.maint_retry_events();'), 77 | C('SELECT * FROM pgq.maint_retry_events();'), 78 | # 2nd run 79 | C("SELECT pgq.ticker();"), 80 | C('SELECT func_name, func_arg FROM pgq.maint_operations();'), 81 | C('SELECT pgq.maint_rotate_tables_step1(%s);', ('main_queue',)), 82 | C('vacuum "pgq.dummy";', None), 83 | C('SELECT * FROM pgq.maint_retry_events();'), 84 | # 3rd run 85 | C("SELECT pgq.ticker();"), 86 | ]) 87 | --------------------------------------------------------------------------------