├── benchmarks
│   ├── __init__.py
│   ├── comparison.py
│   └── basic_operations.py
├── aredis
│   ├── commands
│   │   ├── __init__.py
│   │   ├── connection.py
│   │   ├── scripting.py
│   │   ├── transaction.py
│   │   ├── iter.py
│   │   ├── pubsub.py
│   │   ├── hyperlog.py
│   │   ├── extra.py
│   │   ├── hash.py
│   │   ├── sentinel.py
│   │   └── geo.py
│   ├── speedups.pyi
│   ├── compat.py
│   ├── scripting.py
│   ├── __init__.py
│   ├── exceptions.py
│   ├── speedups.c
│   └── utils.py
├── tests
│   ├── client
│   │   ├── __init__.py
│   │   ├── test_encoding.py
│   │   ├── conftest.py
│   │   ├── test_connection.py
│   │   ├── test_scripting.py
│   │   ├── test_lock.py
│   │   ├── test_sentinel.py
│   │   └── test_cache.py
│   ├── cluster
│   │   ├── __init__.py
│   │   ├── conftest.py
│   │   ├── test_lock.py
│   │   ├── test_utils.py
│   │   └── test_scripting.py
│   └── __init__.py
├── test_requirements.txt
├── dev_requirements.txt
├── examples
│   ├── __init__.py
│   ├── use_with_curio.py
│   ├── iter_functions.py
│   ├── bitfield.py
│   ├── pubsub2.py
│   ├── connection.py
│   ├── pipeline.py
│   ├── sanic_server.py
│   ├── cache.py
│   ├── idle_connection_pool.py
│   ├── tornado_server.py
│   ├── cache_decorator.py
│   ├── cluster_commands.py
│   ├── keys.py
│   ├── pubsub.py
│   ├── client_reply.py
│   └── cluster_transaction.py
├── setup.cfg
├── docs
│   ├── source
│   │   ├── todo.rst
│   │   ├── authors.rst
│   │   ├── license.rst
│   │   ├── testing.rst
│   │   ├── sentinel.rst
│   │   ├── streams.rst
│   │   ├── scripting.rst
│   │   ├── benchmark.rst
│   │   ├── aredis.commands.rst
│   │   ├── conf.py
│   │   ├── pipelines.rst
│   │   ├── release_notes.rst
│   │   ├── index.rst
│   │   ├── notice.rst
│   │   └── pubsub.rst
│   ├── Makefile
│   └── make.bat
├── MANIFEST.in
├── .gitignore
├── .github
│   ├── PULL_REQUEST_TEMPLATE
│   └── ISSUE_TEMPLATE
├── LICENSE
├── README.rst
└── setup.py

/benchmarks/__init__.py:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/aredis/commands/__init__.py:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/tests/client/__init__.py:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/tests/cluster/__init__.py:
--------------------------------------------------------------------------------
1 | 
--------------------------------------------------------------------------------
/test_requirements.txt:
--------------------------------------------------------------------------------
1 | mock
2 | pytest
3 | pytest-asyncio
4 | contextvars
--------------------------------------------------------------------------------
/tests/__init__.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | 
4 | 
--------------------------------------------------------------------------------
/dev_requirements.txt:
--------------------------------------------------------------------------------
1 | -r test_requirements.txt
2 | hiredis
3 | uvloop
4 | contextvars
--------------------------------------------------------------------------------
/aredis/speedups.pyi:
--------------------------------------------------------------------------------
1 | def crc16(data: bytes) -> int: ...
2 | def hash_slot(key: bytes) -> int: ...
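
A minimal pure-Python sketch of what these two stubs compute, assuming speedups.c follows the standard Redis cluster key-distribution rules (CRC16/XModem of the key, modulo 16384, honouring `{...}` hash tags). This is an illustrative reimplementation for reference, not the actual C source; the constant name below is made up for the example:

    # Illustrative equivalent of aredis/speedups.c (assumed behaviour,
    # based on the Redis cluster specification).
    REDIS_CLUSTER_HASH_SLOTS = 16384  # hypothetical name; the spec fixes the value

    def crc16(data: bytes) -> int:
        # CRC16/XModem (poly 0x1021, init 0), the variant the cluster spec mandates.
        crc = 0
        for byte in data:
            crc ^= byte << 8
            for _ in range(8):
                if crc & 0x8000:
                    crc = ((crc << 1) ^ 0x1021) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc

    def hash_slot(key: bytes) -> int:
        # Only the substring between the first '{' and the next '}' is hashed
        # (when non-empty), so keys sharing a hash tag such as b'{user1}.name'
        # and b'{user1}.age' land in the same slot.
        start = key.find(b'{')
        if start != -1:
            end = key.find(b'}', start + 1)
            if end > start + 1:
                key = key[start + 1:end]
        return crc16(key) % REDIS_CLUSTER_HASH_SLOTS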
-------------------------------------------------------------------------------- /examples/__init__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | 5 | __author__ = 'chenming@bilibili.com' 6 | 7 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [pep8] 2 | show-source = 1 3 | exclude = .venv,.tox,dist,docs,build,*.egg 4 | 5 | [bdist_wheel] 6 | universal = 1 7 | -------------------------------------------------------------------------------- /docs/source/todo.rst: -------------------------------------------------------------------------------- 1 | Todo list 2 | ========= 3 | 4 | 1. more detailed doc for cluster 5 | 2. more tests on cluster part 6 | 3. more commands supported 7 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include LICENSE 2 | include README.rst 3 | exclude __pycache__ 4 | exclude benchmarks 5 | recursive-include tests * 6 | recursive-exclude tests *.pyc 7 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | *.so 3 | aredis.egg-info 4 | build/ 5 | dist/ 6 | .idea/ 7 | env/ 8 | env3.6/ 9 | dump.rdb 10 | /.tox 11 | _build 12 | vagrant/.vagrant 13 | .python-version 14 | .cache/ 15 | .vscode/ 16 | *.iml 17 | .pytest_cache/ 18 | -------------------------------------------------------------------------------- /aredis/compat.py: -------------------------------------------------------------------------------- 1 | """ 2 | compat package is used for import compat between different python version 3 | """ 4 | 5 | try: 6 | from asyncio import CancelledError, TimeoutError 7 | except ImportError: 8 | from asyncio.futures import CancelledError, TimeoutError 9 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE: -------------------------------------------------------------------------------- 1 | ## Description 2 | 3 | Please describe your pull request. 4 | 5 | NOTE: All patches should be made against master! 6 | 7 | If it fixes a bug or resolves a feature request be sure to link to that issue. 8 | It is appreciated if you can make an issue before making a pull request. -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE: -------------------------------------------------------------------------------- 1 | ## Checklist 2 | 3 | - Python version 4 | - Using hiredis or just Python parser 5 | - Using uvloop or just asyncio event loop 6 | - Does issue exists against the `master` branch of aredis? 
7 | 8 | ## Steps to reproduce 9 | 10 | ## Expected behavior 11 | 12 | ## Actual behavior 13 | - It is appreciated if error log can be provided -------------------------------------------------------------------------------- /examples/use_with_curio.py: -------------------------------------------------------------------------------- 1 | import curio 2 | 3 | import aredis 4 | 5 | 6 | async def aio_child(): 7 | redis = aredis.StrictRedis(host='127.0.0.1', port=6379, db=0) 8 | await redis.flushdb() 9 | await redis.set('bar', 'foo') 10 | bar = await redis.get('bar') 11 | return bar 12 | 13 | 14 | async def wrapper(): 15 | async with curio.bridge.AsyncioLoop() as loop: 16 | return await loop.run_asyncio(aio_child) 17 | 18 | 19 | if __name__ == '__main__': 20 | print(curio.run(wrapper)) -------------------------------------------------------------------------------- /examples/iter_functions.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import aredis 5 | import asyncio 6 | 7 | 8 | async def example(): 9 | client = aredis.StrictRedis() 10 | # pay attention that async_generator don't need to be awaited 11 | keys = client.scan_iter() 12 | # use `async for` instead of `for` only 13 | async for key in keys: 14 | print(key) 15 | 16 | if __name__ == '__main__': 17 | loop = asyncio.get_event_loop() 18 | loop.run_until_complete(example()) 19 | -------------------------------------------------------------------------------- /examples/bitfield.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import asyncio 5 | from aredis import StrictRedis 6 | 7 | 8 | async def example_bitfield(): 9 | redis = StrictRedis(host='127.0.0.1') 10 | await redis.flushdb() 11 | bitfield = redis.bitfield('example') 12 | res = await (bitfield.set('i8', '#1', 100).get('i8', '#1')).exc() 13 | assert res == [0, 100] 14 | print(res) 15 | 16 | 17 | if __name__ == '__main__': 18 | loop = asyncio.get_event_loop() 19 | loop.run_until_complete(example_bitfield()) 20 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | SPHINXPROJ = aredis 8 | SOURCEDIR = source 9 | BUILDDIR = build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
19 | %: Makefile
20 | 	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
--------------------------------------------------------------------------------
/examples/pubsub2.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | 
4 | import aredis
5 | import asyncio
6 | import logging
7 | 
8 | 
9 | def my_handler(x):
10 |     print(x)
11 | 
12 | 
13 | async def use_pubsub_in_thread():
14 |     client = aredis.StrictRedis()
15 |     pubsub = client.pubsub()
16 |     await pubsub.subscribe(**{'my-channel': my_handler})
17 |     thread = pubsub.run_in_thread(daemon=True)
18 |     for _ in range(10):
19 |         await client.publish('my-channel', 'lalala')
20 |     thread.stop()
21 | 
22 | 
23 | if __name__ == '__main__':
24 |     logging.basicConfig(level=logging.DEBUG)
25 |     loop = asyncio.get_event_loop()
26 |     loop.set_debug(enabled=True)
27 |     loop.run_until_complete(use_pubsub_in_thread())
28 | 
--------------------------------------------------------------------------------
/docs/source/authors.rst:
--------------------------------------------------------------------------------
1 | Author
2 | ======
3 | 
4 | aredis is developed and maintained by Jason Chen (jason0916phoenix@gmail.com; please use 847671011@qq.com in case your email gets no response)
5 | 
6 | Most of its code comes from `redis-py <https://github.com/andymccurdy/redis-py>`_ written by Andy McCurdy (sedrik@gmail.com).
7 | 
8 | The cluster part is ported from `redis-py-cluster <https://github.com/Grokzen/redis-py-cluster>`_ written by Grokzen.
9 | 
10 | Project Contributors
11 | ====================
12 | 
13 | Added in the order they contributed. Thank you for your help to make aredis better!
14 | 
15 | 
16 | Authors who contributed code or testing:
17 | 
18 | - inytar - https://github.com/inytar
19 | - hqy - https://github.com/hqy
20 | - melhin - https://github.com/melhin
21 | - stj - https://github.com/stj
22 | 
--------------------------------------------------------------------------------
/examples/connection.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | 
4 | import asyncio
5 | from aredis import Connection
6 | 
7 | 
8 | async def example(conn):
9 |     # the connection is established automatically when a command is executed,
10 |     # so you normally don't need to call connect() yourself;
11 |     # call it directly if you want to reconnect a connection instance
12 |     await conn.connect()
13 |     assert not await conn.can_read()
14 |     await conn.send_command('keys', '*')
15 |     assert await conn.can_read()
16 |     print(await conn.read_response())
17 |     conn.disconnect()
18 |     await conn.send_command('set', 'foo', 1)
19 |     print(await conn.read_response())
20 | 
21 | if __name__ == '__main__':
22 |     conn = Connection(host='127.0.0.1', port=6379, db=0)
23 |     loop = asyncio.get_event_loop()
24 |     loop.run_until_complete(example(conn))
25 | 
--------------------------------------------------------------------------------
/examples/pipeline.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | 
4 | import asyncio
5 | from aredis import StrictRedis
6 | 
7 | 
8 | async def pipeline(client):
9 |     async with await client.pipeline(transaction=True) as pipe:
10 |         # each queued command returns the pipeline itself, so calls can be chained
11 |         pipe = await (await pipe.flushdb()).set('foo', 'bar')
12 |         # commands can also be sent one by one
13 |         await pipe.set('bar', 'foo')
14 |         await pipe.keys('*')
15 |         res = await pipe.execute()
16 |         # results come back in the same order the commands were queued
17 | assert res == [True, True, True, [b'bar', b'foo']] 18 | 19 | 20 | if __name__ == '__main__': 21 | # default to connect to local redis server at port 6379 22 | client = StrictRedis() 23 | loop = asyncio.get_event_loop() 24 | loop.run_until_complete(pipeline(client)) 25 | -------------------------------------------------------------------------------- /examples/sanic_server.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import aredis 3 | from sanic.app import Sanic 4 | from sanic.response import json, stream 5 | 6 | app = Sanic() 7 | 8 | @app.route("/") 9 | async def test(request): 10 | return json({"hello": "world"}) 11 | 12 | @app.route("/notifications") 13 | async def notification(request): 14 | async def _stream(res): 15 | redis = aredis.StrictRedis() 16 | pub = redis.pubsub() 17 | await pub.subscribe('test') 18 | end_time = app.loop.time() + 30 19 | while app.loop.time() < end_time: 20 | await redis.publish('test', 111) 21 | message = None 22 | while not message: 23 | message = await pub.get_message() 24 | res.write(message) 25 | await asyncio.sleep(0.1) 26 | return stream(_stream) 27 | 28 | if __name__ == "__main__": 29 | app.run(host="0.0.0.0", port=8000, debug=True) 30 | -------------------------------------------------------------------------------- /aredis/commands/connection.py: -------------------------------------------------------------------------------- 1 | from aredis.utils import (NodeFlag, 2 | bool_ok, 3 | nativestr) 4 | 5 | 6 | class ConnectionCommandMixin: 7 | 8 | RESPONSE_CALLBACKS = { 9 | 'AUTH': bool, 10 | 'PING': lambda r: nativestr(r) == 'PONG', 11 | 'SELECT': bool_ok, 12 | } 13 | 14 | async def echo(self, value): 15 | "Echo the string back from the server" 16 | return await self.execute_command('ECHO', value) 17 | 18 | async def ping(self): 19 | "Ping the Redis server" 20 | return await self.execute_command('PING') 21 | 22 | 23 | class ClusterConnectionCommandMixin(ConnectionCommandMixin): 24 | 25 | NODES_FLAGS = { 26 | 'PING': NodeFlag.ALL_NODES, 27 | 'ECHO': NodeFlag.ALL_NODES 28 | } 29 | 30 | RESULT_CALLBACKS = { 31 | 'ECHO': lambda res: res, 32 | 'PING': lambda res: res 33 | } 34 | -------------------------------------------------------------------------------- /examples/cache.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import aredis 5 | import asyncio 6 | from aredis.cache import IdentityGenerator 7 | 8 | 9 | class CustomIdentityGenerator(IdentityGenerator): 10 | 11 | def generate(self, key, content): 12 | return key 13 | 14 | 15 | def expensive_work(data): 16 | """some work that waits for io or occupy cpu""" 17 | return data 18 | 19 | 20 | async def example(): 21 | client = aredis.StrictRedis() 22 | await client.flushall() 23 | cache = client.cache('example_cache', 24 | identity_generator_class=CustomIdentityGenerator) 25 | data = {1: 1} 26 | await cache.set('example_key', expensive_work(data), data) 27 | res = await cache.get('example_key', data) 28 | assert res == expensive_work(data) 29 | 30 | 31 | if __name__ == '__main__': 32 | loop = asyncio.get_event_loop() 33 | loop.run_until_complete(example()) 34 | -------------------------------------------------------------------------------- /examples/idle_connection_pool.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import asyncio 5 | import time 6 | from 
aredis import StrictRedis 7 | 8 | 9 | async def example(): 10 | rs = StrictRedis(host='127.0.0.1', port=6379, db=0, max_idle_time=2, idle_check_interval=0.1) 11 | print(await rs.info()) 12 | print(rs.connection_pool._available_connections) 13 | print(rs.connection_pool._in_use_connections) 14 | conn = rs.connection_pool._available_connections[0] 15 | print(conn.last_active_at) 16 | await asyncio.sleep(5) 17 | print(conn.last_active_at) 18 | print(time.time() - conn.last_active_at) 19 | # we can see that the idle connection is removed from available conn list 20 | print(rs.connection_pool._available_connections) 21 | print(rs.connection_pool._in_use_connections) 22 | 23 | 24 | if __name__ == '__main__': 25 | loop = asyncio.get_event_loop() 26 | loop.run_until_complete(example()) 27 | -------------------------------------------------------------------------------- /examples/tornado_server.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | from aredis import StrictRedis 3 | from tornado.web import RequestHandler, Application 4 | from tornado.httpserver import HTTPServer 5 | from tornado.platform.asyncio import AsyncIOMainLoop 6 | 7 | 8 | class GetRedisKeyHandler(RequestHandler): 9 | 10 | def __init__(self, application, request, **kwargs): 11 | super(GetRedisKeyHandler, self).__init__(application, request, **kwargs) 12 | self.redis_client = StrictRedis() 13 | 14 | async def get(self): 15 | key = self.get_argument('key') 16 | res = await self.redis_client.get(key) 17 | print('key: {} val: {} in redis'.format(key, res)) 18 | self.write(res) 19 | 20 | 21 | 22 | if __name__ == '__main__': 23 | AsyncIOMainLoop().install() 24 | app = Application([('/', GetRedisKeyHandler)]) 25 | server = HTTPServer(app) 26 | server.bind(8000) 27 | server.start() 28 | asyncio.get_event_loop().run_forever() -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | pushd %~dp0 4 | 5 | REM Command file for Sphinx documentation 6 | 7 | if "%SPHINXBUILD%" == "" ( 8 | set SPHINXBUILD=sphinx-build 9 | ) 10 | set SOURCEDIR=source 11 | set BUILDDIR=build 12 | set SPHINXPROJ=aredis 13 | 14 | if "%1" == "" goto help 15 | 16 | %SPHINXBUILD% >NUL 2>NUL 17 | if errorlevel 9009 ( 18 | echo. 19 | echo.The 'sphinx-build' command was not found. Make sure you have Sphinx 20 | echo.installed, then set the SPHINXBUILD environment variable to point 21 | echo.to the full path of the 'sphinx-build' executable. Alternatively you 22 | echo.may add the Sphinx directory to PATH. 23 | echo. 
24 | echo.If you don't have Sphinx installed, grab it from 25 | echo.http://sphinx-doc.org/ 26 | exit /b 1 27 | ) 28 | 29 | %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 30 | goto end 31 | 32 | :help 33 | %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% 34 | 35 | :end 36 | popd 37 | -------------------------------------------------------------------------------- /examples/cache_decorator.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import aredis 5 | import asyncio 6 | import functools 7 | 8 | 9 | def cached(app, cache): 10 | def decorator(func): 11 | @functools.wraps(func) 12 | async def _inner(*args, **kwargs): 13 | key = func.__name__ 14 | res = await cache.get(key, (args, kwargs)) 15 | if res: 16 | print('using cache: {}'.format(res)) 17 | else: 18 | print('cache miss') 19 | res = func(*args, **kwargs) 20 | await cache.set(key, res, (args, kwargs)) 21 | return res 22 | return _inner 23 | return decorator 24 | 25 | 26 | cache = aredis.StrictRedis().cache('example_cache') 27 | 28 | 29 | @cached(app='example', cache=cache) 30 | def job(*args, **kwargs): 31 | return 'example_results' 32 | 33 | 34 | if __name__ == '__main__': 35 | loop = asyncio.get_event_loop() 36 | loop.run_until_complete(job(111)) 37 | 38 | 39 | -------------------------------------------------------------------------------- /examples/cluster_commands.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import asyncio 5 | import logging 6 | from aredis import StrictRedisCluster 7 | 8 | 9 | async def example(): 10 | cluster = StrictRedisCluster(startup_nodes=[{'host': '127.0.0.1', 'port': 7001}]) 11 | slots = await cluster.cluster_slots() 12 | master_node = slots[(5461, 10922)][0]['node_id'] 13 | slave_node = slots[(5461, 10922)][1]['node_id'] 14 | print('master: {}'.format(master_node)) 15 | print('slave: {}'.format(slave_node)) 16 | print('nodes: {}'.format(await cluster.cluster_info())) 17 | for time in range(2): 18 | # forget a node twice to see if error will be raised 19 | try: 20 | await cluster.cluster_forget(master_node) 21 | except Exception as exc: 22 | logging.error(exc) 23 | slots = await cluster.cluster_slots() 24 | print(slots[(5461, 10922)]) 25 | 26 | 27 | if __name__ == '__main__': 28 | loop = asyncio.get_event_loop() 29 | loop.run_until_complete(example()) 30 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2016 Jason Chen 2 | 3 | Permission is hereby granted, free of charge, to any person 4 | obtaining a copy of this software and associated documentation 5 | files (the "Software"), to deal in the Software without 6 | restriction, including without limitation the rights to use, 7 | copy, modify, merge, publish, distribute, sublicense, and/or sell 8 | copies of the Software, and to permit persons to whom the 9 | Software is furnished to do so, subject to the following 10 | conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 17 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
19 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
21 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
22 | OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/docs/source/license.rst:
--------------------------------------------------------------------------------
1 | Licensing
2 | ---------
3 | 
4 | Copyright (c) 2016 Jason Chen
5 | 
6 | Permission is hereby granted, free of charge, to any person
7 | obtaining a copy of this software and associated documentation
8 | files (the "Software"), to deal in the Software without
9 | restriction, including without limitation the rights to use,
10 | copy, modify, merge, publish, distribute, sublicense, and/or sell
11 | copies of the Software, and to permit persons to whom the
12 | Software is furnished to do so, subject to the following
13 | conditions:
14 | 
15 | The above copyright notice and this permission notice shall be
16 | included in all copies or substantial portions of the Software.
17 | 
18 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
19 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
20 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
21 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
22 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
23 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
24 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
25 | OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/aredis/scripting.py:
--------------------------------------------------------------------------------
1 | import hashlib
2 | from aredis.pipeline import BasePipeline
3 | from aredis.exceptions import NoScriptError
4 | from aredis.utils import b
5 | 
6 | 
7 | class Script:
8 |     """An executable Lua script object returned by ``register_script``"""
9 | 
10 |     def __init__(self, registered_client, script):
11 |         self.registered_client = registered_client
12 |         self.script = script
13 |         self.sha = hashlib.sha1(b(script)).hexdigest()
14 | 
15 |     async def execute(self, keys=[], args=[], client=None):
16 |         """Executes the script, passing any required ``args``"""
17 |         if client is None:
18 |             client = self.registered_client
19 |         args = tuple(keys) + tuple(args)
20 |         # make sure the Redis server knows about the script
21 |         if isinstance(client, BasePipeline):
22 |             # make sure this script is good to go on pipeline
23 |             client.scripts.add(self)
24 |         try:
25 |             return await client.evalsha(self.sha, len(keys), *args)
26 |         except NoScriptError:
27 |             # Maybe the client is pointed to a different server than the client
28 |             # that created this instance?
29 |             # Overwrite the sha just in case there was a discrepancy.
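            # (script_load issues SCRIPT LOAD, which returns the script's sha1
            #  digest, so the evalsha retry below hits the refreshed cache entry.)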
30 | self.sha = await client.script_load(self.script) 31 | return await client.evalsha(self.sha, len(keys), *args) 32 | -------------------------------------------------------------------------------- /examples/keys.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import asyncio 5 | from aredis import StrictRedis, StrictRedisCluster 6 | 7 | 8 | async def example_client(): 9 | client = StrictRedis(host='127.0.0.1', port=6379, db=0) 10 | # clear the db 11 | await client.flushdb() 12 | await client.set('foo', 1) 13 | print(await client.get('foo')) 14 | assert await client.exists('foo') is True 15 | await client.incr('foo', 100) 16 | # will return b'101' (byte type) 17 | assert int(await client.get('foo')) == 101 18 | await client.expire('foo', 1) 19 | await asyncio.sleep(0.1) 20 | await client.ttl('foo') 21 | await asyncio.sleep(1) 22 | assert not await client.exists('foo') 23 | 24 | 25 | async def example_cluster(): 26 | client = StrictRedisCluster(host='127.0.0.1', port=7001) 27 | await client.flushdb() 28 | await client.set('foo', 1) 29 | await client.lpush('a', 1) 30 | print(await client.cluster_slots()) 31 | # 'a' and 'b' are in different slots 32 | await client.rpoplpush('a', 'b') 33 | assert await client.rpop('b') == b'1' 34 | 35 | 36 | if __name__ == '__main__': 37 | # initial redis client synchronously, which enable client to be intitialized out of function 38 | loop = asyncio.get_event_loop() 39 | loop.run_until_complete(example_client()) 40 | # loop.run_until_complete(example_cluster()) 41 | -------------------------------------------------------------------------------- /aredis/__init__.py: -------------------------------------------------------------------------------- 1 | from aredis.client import (StrictRedis, StrictRedisCluster) 2 | from aredis.connection import ( 3 | Connection, 4 | UnixDomainSocketConnection, 5 | ClusterConnection 6 | ) 7 | from aredis.pool import ConnectionPool, ClusterConnectionPool 8 | from aredis.exceptions import ( 9 | AuthenticationError, BusyLoadingError, ConnectionError, 10 | DataError, InvalidResponse, PubSubError, ReadOnlyError, 11 | RedisError, ResponseError, TimeoutError, WatchError, 12 | CompressError, ClusterDownException, ClusterCrossSlotError, 13 | CacheError, ClusterDownError, ClusterError, RedisClusterException, 14 | RedisClusterError, ExecAbortError, LockError, NoScriptError 15 | ) 16 | 17 | 18 | __version__ = '1.1.8' 19 | 20 | VERSION = tuple(map(int, __version__.split('.'))) 21 | 22 | 23 | __all__ = [ 24 | 'StrictRedis', 'StrictRedisCluster', 25 | 'Connection', 'UnixDomainSocketConnection', 'ClusterConnection', 26 | 'ConnectionPool', 'ClusterConnectionPool', 27 | 'AuthenticationError', 'BusyLoadingError', 'ConnectionError', 'DataError', 28 | 'InvalidResponse', 'PubSubError', 'ReadOnlyError', 'RedisError', 29 | 'ResponseError', 'TimeoutError', 'WatchError', 30 | 'CompressError', 'ClusterDownException', 'ClusterCrossSlotError', 31 | 'CacheError', 'ClusterDownError', 'ClusterError', 'RedisClusterException', 32 | 'RedisClusterError', 'ExecAbortError', 'LockError', 'NoScriptError' 33 | ] 34 | -------------------------------------------------------------------------------- /examples/pubsub.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import aredis 5 | import asyncio 6 | import concurrent.futures 7 | import time 8 | import logging 9 | 10 | 11 | async def 
wait_for_message(pubsub, timeout=2, ignore_subscribe_messages=False):
12 |     now = time.time()
13 |     timeout = now + timeout
14 |     while now < timeout:
15 |         message = await pubsub.get_message(
16 |             ignore_subscribe_messages=ignore_subscribe_messages,
17 |             timeout=1
18 |         )
19 |         if message is not None:
20 |             print(message)
21 |         await asyncio.sleep(0.01)
22 |         now = time.time()
23 |     return None
24 | 
25 | 
26 | async def subscribe(client):
27 |     await client.flushdb()
28 |     pubsub = client.pubsub()
29 |     assert pubsub.subscribed is False
30 |     await pubsub.subscribe('foo')
31 |     # assert await pubsub.subscribe() is True
32 |     await wait_for_message(pubsub)
33 | 
34 | 
35 | async def publish(client):
36 |     # sleep to wait for subscriber to listen
37 |     await asyncio.sleep(1)
38 |     await client.publish('foo', 'test message')
39 |     await client.publish('foo', 'quit')
40 | 
41 | 
42 | if __name__ == '__main__':
43 |     logging.basicConfig(level=logging.DEBUG)
44 |     client = aredis.StrictRedis()
45 |     loop = asyncio.get_event_loop()
46 |     loop.set_debug(enabled=True)
47 |     with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
48 |         executor.submit(asyncio.run_coroutine_threadsafe, publish(client), loop)
49 |         loop.run_until_complete(subscribe(client))
50 | 
--------------------------------------------------------------------------------
/examples/client_reply.py:
--------------------------------------------------------------------------------
1 | """
2 | `CLIENT REPLY OFF | ON | SKIP` is hard for aredis to support gracefully because of its connection pool.
3 | Normally the client reads the response from the server and releases the connection once a command has been sent.
4 | But to turn replies on or off, or skip them, the same connection must be reused throughout:
5 | the connection that sent the `CLIENT REPLY` command must also be the one that sends the following commands.
6 | 
7 | However, you can manage the connection yourself, as in the example below.
8 | """
9 | 
10 | from aredis import Connection
11 | import asyncio
12 | 
13 | 
14 | async def skip():
15 |     print('skip response example::')
16 |     conn = Connection(host='127.0.0.1', port=6379)
17 |     await conn.send_command('flushdb')
18 |     print(await conn.read_response())
19 |     await conn.send_command('CLIENT REPLY', 'SKIP')
20 |     await conn.send_command('SET', 'lalala', 1)
21 |     await conn.send_command('SET', 'lalala', 2)
22 |     print(await conn.read_response())
23 | 
24 | async def off_and_on():
25 |     print('turn off response and then turn it back on')
26 |     conn = Connection()
27 |     await conn.send_command('flushdb')
28 |     print(await conn.read_response())
29 |     await conn.send_command('CLIENT REPLY', 'OFF')
30 |     await conn.send_command('SET', 'lalala', 10)
31 |     await conn.send_command('CLIENT REPLY', 'ON')
32 |     print(await conn.read_response())
33 |     await conn.send_command('GET', 'lalala')
34 |     print(await conn.read_response())
35 | 
36 | 
37 | if __name__ == '__main__':
38 |     loop = asyncio.get_event_loop()
39 |     loop.run_until_complete(skip())
40 |     loop.run_until_complete(off_and_on())
41 | 
--------------------------------------------------------------------------------
/tests/client/test_encoding.py:
--------------------------------------------------------------------------------
1 | from __future__ import with_statement
2 | import pytest
3 | import pickle
4 | import aredis
5 | 
6 | 
7 | class TestEncoding:
8 |     @pytest.fixture()
9 |     def r(self, request):
10 |         return aredis.StrictRedis(decode_responses=True)
11 | 
12 |     @pytest.mark.asyncio()
13 |     async def test_simple_encoding(self, r):
14 |         await r.flushdb()
15 |         unicode_string = chr(124) + 'abcd' + chr(125)
16 |         await r.set('unicode-string', unicode_string)
17 |         cached_val = await r.get('unicode-string')
18 |         assert isinstance(cached_val, str)
19 |         assert unicode_string == cached_val
20 | 
21 |     @pytest.mark.asyncio()
22 |     async def test_list_encoding(self, r):
23 |         unicode_string = chr(124) + 'abcd' + chr(125)
24 |         result = [unicode_string, unicode_string, unicode_string]
25 |         await r.rpush('a', *result)
26 |         assert await r.lrange('a', 0, -1) == result
27 | 
28 |     @pytest.mark.asyncio()
29 |     async def test_object_value(self, r):
30 |         unicode_string = chr(124) + 'abcd' + chr(125)
31 |         await r.set('unicode-string', Exception(unicode_string))
32 |         cached_val = await r.get('unicode-string')
33 |         assert isinstance(cached_val, str)
34 |         assert unicode_string == cached_val
35 | 
36 |     @pytest.mark.asyncio()
37 |     async def test_pickled_object(self):
38 |         r = aredis.StrictRedis()
39 |         obj = Exception('args')
40 |         pickled_obj = pickle.dumps(obj)
41 |         await r.set('pickled-obj', pickled_obj)
42 |         cached_obj = await r.get('pickled-obj')
43 |         assert isinstance(cached_obj, bytes)
44 |         assert obj.args == pickle.loads(cached_obj).args
45 | 
--------------------------------------------------------------------------------
/benchmarks/comparison.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | import time
4 | import asyncio
5 | import asyncio_redis
6 | import aioredis
7 | import redis
8 | import aredis
9 | 
10 | 
11 | HOST = '127.0.0.1'
12 | NUM = 10000
13 | 
14 | 
15 | async def test_aredis(i):
16 |     start = time.time()
17 |     client = aredis.StrictRedis(host=HOST)
18 |     res = None
19 |     for i in range(i):
20 |         res = await client.keys('*')
21 |     print(time.time() - start)
22 |     return res
23 | 
24 | 
25 | async def test_asyncio_redis(i):
26 |     connection = await asyncio_redis.Connection.create(host=HOST, port=6379)
27 |     start = time.time()
28 |     res = None
29 |     for i in range(i):
30 |         res = await connection.keys('*')
31 |     print(time.time() - start)
32 |     connection.close()
33 |     return res
34 | 
35 | 
36 | def test_conn(i):
37 |     start = time.time()
38 |     client = redis.StrictRedis(host=HOST)
39 |     res = None
40 |     for i in range(i):
41 |         res = client.keys('*')
42 |     print(time.time() - start)
43 |     return res
44 | 
45 | 
46 | async def test_aioredis(i, loop):
47 |     start = time.time()
48 |     redis = await aioredis.create_redis((HOST, 6379), loop=loop)
49 |     val = None
50 |     for i in range(i):
51 |         val = await redis.keys('*')
52 |     print(time.time() - start)
53 |     redis.close()
54 |     await redis.wait_closed()
55 |     return val
56 | 
57 | 
58 | if __name__ == '__main__':
59 |     loop = asyncio.get_event_loop()
60 |     print('aredis')
61 |     print(loop.run_until_complete(test_aredis(NUM)))
62 |     print('asyncio_redis')
63 |     print(loop.run_until_complete(test_asyncio_redis(NUM)))
64 |     print('redis-py')
65 |     print(test_conn(NUM))
66 |     print('aioredis')
67 |     print(loop.run_until_complete(test_aioredis(NUM, loop)))
68 | 
69 | 
--------------------------------------------------------------------------------
/docs/source/testing.rst:
--------------------------------------------------------------------------------
1 | Testing
2 | =======
3 | 
4 | StrictRedis
5 | -----------
6 | 
7 | All tests run against a plain Redis server with the default config.
8 | 
9 | Redis server setup
10 | ^^^^^^^^^^^^^^^^^^
11 | 
12 | To test against the latest stable redis server from source, use:
13 | 
14 | .. code-block:: bash
15 | 
16 |     $ sudo apt-get update
17 |     $ sudo apt-get install build-essential
18 |     $ sudo apt-get install tcl8.5
19 |     $ wget http://download.redis.io/releases/redis-stable.tar.gz
20 |     $ tar xzf redis-stable.tar.gz
21 |     $ cd redis-stable
22 |     $ make test
23 |     $ make install
24 |     $ sudo utils/install_server.sh
25 |     $ sudo service redis_6379 start
26 | 
27 | You can also use any version of Redis installed from your OS package manager (example for OSX: ``brew install redis``), in which case starting the server is as simple as running:
28 | 
29 | .. code-block:: bash
30 | 
31 |     $ redis-server
32 | 
33 | 
34 | StrictRedisCluster
35 | ------------------
36 | 
37 | All tests are currently built around a six-server Redis cluster setup (3 masters + 3 slaves).
38 | One server must listen on port 7000 for redis cluster discovery.
39 | The easiest way to set up a cluster is to use Docker.
40 | 
41 | 
42 | Redis cluster setup
43 | ^^^^^^^^^^^^^^^^^^^
44 | 
45 | A fully functional docker image can be found at https://github.com/Grokzen/docker-redis-cluster
46 | 
47 | To start a cluster that will pass all tests, run:
48 | 
49 | .. code-block:: bash
50 | 
51 |     $ docker run --rm -it -p7000:7000 -p7001:7001 -p7002:7002 -p7003:7003 -p7004:7004 -p7005:7005 -e IP='0.0.0.0' grokzen/redis-cluster:latest
52 | 
53 | 
54 | Running tests
55 | -------------
56 | 
57 | To run the tests, install the dependencies first:
58 | 
59 | .. code-block:: bash
60 | 
61 |     $ pip install -r dev_requirements.txt
62 |     $ pytest tests/
63 | 
--------------------------------------------------------------------------------
/docs/source/sentinel.rst:
--------------------------------------------------------------------------------
1 | Sentinel support
2 | ================
3 | 
4 | aredis can be used together with `Redis Sentinel <https://redis.io/topics/sentinel>`_
5 | to discover Redis nodes. You need to have at least one Sentinel daemon running
6 | in order to use aredis's Sentinel support.
7 | 
8 | Connecting aredis to the Sentinel instance(s) is easy. You can use a
9 | Sentinel connection to discover the master's and slaves' network addresses:
10 | 
11 | .. code-block:: python
12 | 
13 |     from aredis.sentinel import Sentinel
14 |     sentinel = Sentinel([('localhost', 26379)], stream_timeout=0.1)
15 |     await sentinel.discover_master('mymaster')
16 |     # ('127.0.0.1', 6379)
17 |     await sentinel.discover_slaves('mymaster')
18 |     # [('127.0.0.1', 6380)]
19 | 
20 | You can also create Redis client connections from a Sentinel instance. You can
21 | connect to either the master (for write operations) or a slave (for read-only
22 | operations).
23 | 
24 | .. code-block:: pycon
25 | 
26 |     master = sentinel.master_for('mymaster', stream_timeout=0.1)
27 |     slave = sentinel.slave_for('mymaster', stream_timeout=0.1)
28 |     await master.set('foo', 'bar')
29 |     await slave.get('foo')
30 |     # 'bar'
31 | 
32 | The master and slave objects are normal StrictRedis instances with their
33 | connection pool bound to the Sentinel instance. When a Sentinel backed client
34 | attempts to establish a connection, it first queries the Sentinel servers to
35 | determine an appropriate host to connect to. If no server is found,
36 | a MasterNotFoundError or SlaveNotFoundError is raised. Both exceptions are
37 | subclasses of ConnectionError.
38 | 
39 | When trying to connect to a slave client, the Sentinel connection pool will
40 | iterate over the list of slaves until it finds one that can be connected to.
41 | If no slaves can be connected to, a connection will be established with the
42 | master.
43 | 
44 | See `Guidelines for Redis clients with support for Redis Sentinel
45 | <https://redis.io/topics/sentinel-clients>`_ to learn more about Redis Sentinel.
--------------------------------------------------------------------------------
/examples/cluster_transaction.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | 
4 | import asyncio
5 | from aredis import StrictRedisCluster
6 | 
7 | 
8 | async def func1(pipe):
9 |     for _ in range(10):
10 |         await pipe.incr('foobar', 1)
11 | 
12 | 
13 | 
14 | async def func2():
15 |     cluster = StrictRedisCluster(startup_nodes=[{'host': '127.0.0.1', 'port': 7001}], decode_responses=True)
16 |     while True:
17 |         foobar = int(await cluster.get('foobar'))
18 |         print('thread: get `foobar` = {}'.format(foobar))
19 |         if foobar >= 0:
20 |             print('thread: cluster get foobar == {}, decrease it'.format(foobar))
21 |             await cluster.decr('foobar', 1)
22 |         if foobar < 0:
23 |             print('thread: break loop now')
24 |             break
25 | 
26 | 
27 | async def run_func1():
28 |     cluster = StrictRedisCluster(startup_nodes=[{'host': '127.0.0.1', 'port': 7001}], decode_responses=True)
29 |     print('before transaction: set key `foobar` = 0')
30 |     await cluster.set('foobar', 0)
31 |     try:
32 |         await cluster.transaction(func1, 'foobar', watch_delay=2)
33 |     except Exception as exc:
34 |         print(exc)
35 |     print('after transaction: `foobar` = {}'.format(await cluster.get('foobar')))
36 |     print('wait for thread to end...')
37 |     await asyncio.sleep(1)
38 | 
39 | 
40 | if __name__ == '__main__':
41 |     """
42 |     The output should look like this:
43 | 
44 |     before transaction: set key `foobar` = 0
45 |     thread: get `foobar` = 0
46 |     thread: get `foobar` = 11
47 |     thread: cluster get foobar == 11, decrease it
48 |     after transaction: `foobar` = 11
49 |     wait for thread to end...
50 |     thread: get `foobar` = -1
51 |     thread: break loop now
52 | 
53 |     Notice the intermediate state of key `foobar` is not printed,
54 |     which means all `incr` commands ran inside the transaction
55 |     """
56 |     loop = asyncio.get_event_loop()
57 |     asyncio.run_coroutine_threadsafe(func2(), loop)
58 |     loop.run_until_complete(run_func1())
59 | 
--------------------------------------------------------------------------------
/tests/client/conftest.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | import aredis
4 | import asyncio
5 | import pytest
6 | import sys
7 | from unittest.mock import Mock
8 | from distutils.version import StrictVersion
9 | 
10 | 
11 | _REDIS_VERSIONS = {}
12 | 
13 | 
14 | async def get_version(**kwargs):
15 |     params = {'host': 'localhost', 'port': 6379, 'db': 0}
16 |     params.update(kwargs)
17 |     key = '%s:%s' % (params['host'], params['port'])
18 |     if key not in _REDIS_VERSIONS:
19 |         client = aredis.StrictRedis(**params)
20 |         _REDIS_VERSIONS[key] = (await client.info())['redis_version']
21 |         client.connection_pool.disconnect()
22 |     return _REDIS_VERSIONS[key]
23 | 
24 | 
25 | def skip_if_server_version_lt(min_version):
26 |     loop = asyncio.get_event_loop()
27 |     version = StrictVersion(loop.run_until_complete(get_version()))
28 |     check = version < StrictVersion(min_version)
29 |     return pytest.mark.skipif(check, reason="")
30 | 
31 | 
32 | def skip_python_vsersion_lt(min_version):
33 |     min_version = tuple(map(int, min_version.split('.')))
34 |     check = sys.version_info[:2] < min_version
35 |     return pytest.mark.skipif(check, reason="")
36 | 
37 | 
38 | @pytest.fixture()
39 | def r(event_loop):
40 |     return aredis.StrictRedis(loop=event_loop)
41 | 
42 | 
43 | class AsyncMock(Mock):
44 | 
45 |     def __init__(self, *args, **kwargs):
46 |         super(AsyncMock, self).__init__(*args, **kwargs)
47 | 
48 |     def __await__(self):
49 |         future = asyncio.Future(loop=self.loop)
50 |         future.set_result(self)
51 |         result = yield from future
52 |         return result
53 | 
54 |     @staticmethod
55 |     def pack_response(response, *, loop):
56 |         future = asyncio.Future(loop=loop)
57 |         future.set_result(response)
58 |         return future
59 | 
60 | 
61 | def _gen_mock_resp(r, response, *, loop):
62 |     mock_connection_pool = AsyncMock(loop=loop)
63 |     connection = AsyncMock(loop=loop)
64 |     connection.read_response.return_value = AsyncMock.pack_response(response, loop=loop)
65 |     mock_connection_pool.get_connection.return_value = connection
66 |     r.connection_pool = mock_connection_pool
67 |     return r
68 | 
69 | 
70 | @pytest.fixture()
71 | def mock_resp_role(event_loop):
72 |     r = aredis.StrictRedis(loop=event_loop)
73 |     response = [b'master', 169, [[b'172.17.0.2', b'7004', b'169']]]
74 |     return _gen_mock_resp(r, response, loop=event_loop)
75 | 
--------------------------------------------------------------------------------
/docs/source/streams.rst:
--------------------------------------------------------------------------------
1 | Streams
2 | =======
3 | 
4 | The stream is a new data type provided by Redis.
5 | 
6 | Since not all related commands have been released officially (some commands are only mentioned in the
7 | `stream introduction <https://redis.io/topics/streams-intro>`_),
8 | **you should make sure you know about the feature before using the API, and the API may change in the future.**
9 | 
10 | For now, according to the `command manual <https://redis.io/commands>`_,
11 | only `XADD`, `XRANGE`, `XREVRANGE`, `XLEN`,
12 | `XREAD`, `XREADGROUP` and `XPENDING` commands are released.
13 | All the commands you can find in the
14 | `stream introduction <https://redis.io/topics/streams-intro>`_
15 | are nevertheless supported in aredis,
16 | so you can already try the new feature with it.
17 | 
18 | 
19 | You can append entries to a stream like the code below:
20 | 
21 | .. code-block:: python
22 | 
23 |     entry = dict(event=1, user='usr1')
24 |     async def append_msg_to_stream(client, entry):
25 |         stream_id = await client.xadd('example_stream', entry, max_len=10)
26 |         return stream_id
27 | 
28 | **notice**
29 | - the length of the stream will not be limited if max_len is set to None
30 | - max_len should be an int greater than 0; if set to 0 or a negative number, the stream length will not be limited either
31 | - The `XADD` command will auto-generate a unique id for you if the id argument specified is the '*' character.
32 | 
33 | You can read entries from a stream using `XRANGE` & `XREVRANGE`:
34 | 
35 | .. code-block:: python
36 | 
37 |     async def fetch_entries(client, stream, count=10, reverse=False):
38 |         # if you know the range of stream ids, you can specify it when using xrange
39 |         if reverse:
40 |             entries = await client.xrevrange(stream, start='10-0', end='1-0', count=count)
41 |         else:
42 |             entries = await client.xrange(stream, start='1-0', end='10-0', count=count)
43 |         return entries
44 | 
45 | Actually, the stream feature is inspired by `kafka <https://kafka.apache.org/>`_: a stream can be consumed by a `consumer`
46 | from a `group`, like the code below:
47 | 
48 | .. code-block:: python
49 | 
50 |     async def consuming_process(client):
51 |         # create a stream first
52 |         for idx in range(20):
53 |             # give a progressive stream id when creating each entry
54 |             await client.xadd('test_stream', {'k1': 'v1', 'k2': 1}, stream_id=idx)
55 |         # now create a consumer group;
56 |         # a stream_id can be specified when creating a group:
57 |         # if given '0', the group will consume the stream from the beginning,
58 |         # if given '$', the group will only consume newly appended entries
59 |         await client.xgroup_create('test_stream', 'test_group', '0')
60 |         # now consume the entries as 'consumer1' from group 'test_group'
61 |         entries = await client.xreadgroup('test_group', 'consumer1', count=5, test_stream='1')
--------------------------------------------------------------------------------
/tests/client/test_connection.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | import socket
4 | 
5 | import pytest
6 | from aredis import (Connection,
7 |                     UnixDomainSocketConnection)
8 | 
9 | 
10 | @pytest.mark.asyncio(forbid_global_loop=True)
11 | async def test_connect_tcp(event_loop):
12 |     conn = Connection(loop=event_loop)
13 |     assert conn.host == '127.0.0.1'
14 |     assert conn.port == 6379
15 |     assert str(conn) == 'Connection<host=127.0.0.1,port=6379,db=0>'
16 |     await conn.send_command('PING')
17 |     res = await conn.read_response()
18 |     assert res == b'PONG'
19 |     assert (conn._reader is not None) and (conn._writer is not None)
20 |     conn.disconnect()
21 |     assert (conn._reader is None) and (conn._writer is None)
22 | 
23 | 
24 | @pytest.mark.asyncio(forbid_global_loop=True)
25 | async def test_connect_tcp_keepalive_options(event_loop):
26 |     conn = Connection(
27 |         loop=event_loop,
28 |         socket_keepalive=True,
29 |         socket_keepalive_options={
30 |             socket.TCP_KEEPIDLE: 1,
31 |             socket.TCP_KEEPINTVL: 1,
32 |             socket.TCP_KEEPCNT: 3,
33 |         },
34 |     )
35 |     await conn._connect()
36 |     sock = conn._writer.transport.get_extra_info('socket')
37 |     assert sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) == 1
38 |     for k, v in (
39 |         (socket.TCP_KEEPIDLE, 1),
40 |         (socket.TCP_KEEPINTVL, 1),
41 |         (socket.TCP_KEEPCNT,
3), 42 | ): 43 | assert sock.getsockopt(socket.SOL_TCP, k) == v 44 | conn.disconnect() 45 | 46 | 47 | @pytest.mark.parametrize('option', ['UNKNOWN', 999]) 48 | @pytest.mark.asyncio(forbid_global_loop=True) 49 | async def test_connect_tcp_wrong_socket_opt_raises(event_loop, option): 50 | conn = Connection( 51 | loop=event_loop, 52 | socket_keepalive=True, 53 | socket_keepalive_options={ 54 | option: 1, 55 | }, 56 | ) 57 | with pytest.raises((socket.error, TypeError)): 58 | await conn._connect() 59 | # verify that the connection isn't left open 60 | assert conn._writer.transport.is_closing() 61 | 62 | 63 | # only test during dev 64 | # @pytest.mark.asyncio(forbid_global_loop=True) 65 | # async def test_connect_unix_socket(event_loop): 66 | # # to run this test case you should change your redis configuration 67 | # # unixsocket /var/run/redis/redis.sock 68 | # # unixsocketperm 777 69 | # path = '/var/run/redis/redis.sock' 70 | # conn = UnixDomainSocketConnection(path, event_loop) 71 | # await conn.connect() 72 | # assert conn.path == path 73 | # assert str(conn) == 'UnixDomainSocketConnection'.format(path) 74 | # await conn.send_command('PING') 75 | # res = await conn.read_response() 76 | # assert res == b'PONG' 77 | # assert (conn._reader is not None) and (conn._writer is not None) 78 | # conn.disconnect() 79 | # assert (conn._reader is None) and (conn._writer is None) 80 | -------------------------------------------------------------------------------- /aredis/exceptions.py: -------------------------------------------------------------------------------- 1 | class RedisError(Exception): 2 | pass 3 | 4 | 5 | class AuthenticationError(RedisError): 6 | pass 7 | 8 | 9 | class ConnectionError(RedisError): 10 | pass 11 | 12 | 13 | class TimeoutError(RedisError): 14 | pass 15 | 16 | 17 | class BusyLoadingError(ConnectionError): 18 | pass 19 | 20 | 21 | class InvalidResponse(RedisError): 22 | pass 23 | 24 | 25 | class ResponseError(RedisError): 26 | pass 27 | 28 | 29 | class DataError(RedisError): 30 | pass 31 | 32 | 33 | class PubSubError(RedisError): 34 | pass 35 | 36 | 37 | class WatchError(RedisError): 38 | pass 39 | 40 | 41 | class NoScriptError(ResponseError): 42 | pass 43 | 44 | 45 | class ExecAbortError(ResponseError): 46 | pass 47 | 48 | 49 | class ReadOnlyError(ResponseError): 50 | pass 51 | 52 | 53 | class LockError(RedisError, ValueError): 54 | """Errors acquiring or releasing a lock""" 55 | # NOTE: For backwards compatability, this class derives from ValueError. 56 | # This was originally chosen to behave like threading.Lock. 
57 |     pass
58 | 
59 | 
60 | class CacheError(RedisError):
61 |     """Basic error of aredis.cache"""
62 |     pass
63 | 
64 | 
65 | class SerializeError(CacheError):
66 |     pass
67 | 
68 | 
69 | class CompressError(CacheError):
70 |     pass
71 | 
72 | 
73 | class RedisClusterException(Exception):
74 |     pass
75 | 
76 | 
77 | class RedisClusterError(Exception):
78 |     pass
79 | 
80 | 
81 | class ClusterDownException(Exception):
82 |     pass
83 | 
84 | 
85 | class ClusterError(RedisError):
86 |     pass
87 | 
88 | 
89 | class ClusterCrossSlotError(ResponseError):
90 |     message = "Keys in request don't hash to the same slot"
91 | 
92 | 
93 | class ClusterDownError(ClusterError, ResponseError):
94 | 
95 |     def __init__(self, resp):
96 |         self.args = (resp,)
97 |         self.message = resp
98 | 
99 | 
100 | class ClusterTransactionError(ClusterError):
101 | 
102 |     def __init__(self, msg):
103 |         self.msg = msg
104 | 
105 | 
106 | class AskError(ResponseError):
107 |     """
108 |     src node: MIGRATING to dst node
109 |         get > ASK error
110 |         ask dst node > ASKING command
111 |     dst node: IMPORTING from src node
112 |         asking command only affects next command
113 |         any op will be allowed after asking command
114 |     """
115 | 
116 |     def __init__(self, resp):
117 |         """should only redirect to master node"""
118 |         self.args = (resp,)
119 |         self.message = resp
120 |         slot_id, new_node = resp.split(' ')
121 |         host, port = new_node.rsplit(':', 1)
122 |         self.slot_id = int(slot_id)
123 |         self.node_addr = self.host, self.port = host, int(port)
124 | 
125 | 
126 | class TryAgainError(ResponseError):
127 | 
128 |     def __init__(self, *args, **kwargs):
129 |         pass
130 | 
131 | 
132 | class MovedError(AskError):
133 |     pass
134 | 
--------------------------------------------------------------------------------
/docs/source/scripting.rst:
--------------------------------------------------------------------------------
1 | LUA Scripting
2 | =============
3 | 
4 | aredis supports the EVAL, EVALSHA, and SCRIPT commands. However, there are
5 | a number of edge cases that make these commands tedious to use in real world
6 | scenarios. Therefore, aredis exposes a Script object that makes scripting
7 | much easier to use.
8 | 
9 | To create a Script instance, use the `register_script` function on a client
10 | instance passing the LUA code as the first argument. `register_script` returns
11 | a Script instance that you can use throughout your code.
12 | 
13 | The following trivial LUA script accepts two parameters: the name of a key and
14 | a multiplier value. The script fetches the value stored in the key, multiplies
15 | it with the multiplier value and returns the result.
16 | 
17 | .. code-block:: pycon
18 | 
19 |     r = aredis.StrictRedis()
20 |     lua = """
21 |     local value = redis.call('GET', KEYS[1])
22 |     value = tonumber(value)
23 |     return value * ARGV[1]"""
24 |     multiply = r.register_script(lua)
25 | 
26 | `multiply` is now a Script instance that you invoke via its `execute` method
27 | (see the note below). Script instances accept the following optional arguments:
28 | 
29 | * **keys**: A list of key names that the script will access. This becomes the
30 |   KEYS list in LUA.
31 | * **args**: A list of argument values. This becomes the ARGV list in LUA.
32 | * **client**: An aredis Client or Pipeline instance that will invoke the
33 |   script. If client isn't specified, the client that initially
34 |   created the Script instance (the one that `register_script` was
35 |   invoked from) will be used.
36 | 
37 | Notice that `Script.__call__` is no longer usable (`async/await` can't be used in a magic method);
38 | please use `Script.execute` instead.
39 | 
40 | Continuing the example from above:
41 | 
42 | .. code-block:: python
43 | 
44 |     await r.set('foo', 2)
45 |     await multiply.execute(keys=['foo'], args=[5])
46 |     # 10
47 | 
48 | The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is
49 | passed to the script along with the multiplier value of 5. LUA executes the
50 | script and returns the result, 10.
51 | 
52 | Script instances can be executed using a different client instance, even one
53 | that points to a completely different Redis server.
54 | 
55 | .. code-block:: python
56 | 
57 |     r2 = aredis.StrictRedis('redis2.example.com')
58 |     await r2.set('foo', 3)
59 |     await multiply.execute(keys=['foo'], args=[5], client=r2)
60 |     # 15
61 | 
62 | The Script object ensures that the LUA script is loaded into Redis's script
63 | cache. In the event of a NOSCRIPT error, it will load the script and retry
64 | executing it.
65 | 
66 | Script objects can also be used in pipelines. The pipeline instance should be
67 | passed as the client argument when calling the script. Care is taken to ensure
68 | that the script is registered in Redis's script cache just prior to pipeline
69 | execution.
70 | 
71 | .. code-block:: python
72 | 
73 |     pipe = await r.pipeline()
74 |     await pipe.set('foo', 5)
75 |     await multiply.execute(keys=['foo'], args=[5], client=pipe)
76 |     await pipe.execute()
77 |     # [True, 25]
--------------------------------------------------------------------------------
/aredis/commands/scripting.py:
--------------------------------------------------------------------------------
1 | from aredis.utils import (dict_merge, nativestr,
2 |                           list_keys_to_dict,
3 |                           NodeFlag, bool_ok)
4 | 
5 | 
6 | class ScriptingCommandMixin:
7 | 
8 |     RESPONSE_CALLBACKS = {
9 |         'SCRIPT EXISTS': lambda r: list(map(bool, r)),
10 |         'SCRIPT FLUSH': bool_ok,
11 |         'SCRIPT KILL': bool_ok,
12 |         'SCRIPT LOAD': nativestr,
13 |     }
14 | 
15 |     async def eval(self, script, numkeys, *keys_and_args):
16 |         """
17 |         Execute the Lua ``script``, specifying the ``numkeys`` the script
18 |         will touch and the key names and argument values in ``keys_and_args``.
19 |         Returns the result of the script.
20 | 
21 |         In practice, use the object returned by ``register_script``. This
22 |         function exists purely for Redis API completion.
23 |         """
24 |         return await self.execute_command('EVAL', script, numkeys, *keys_and_args)
25 | 
26 |     async def evalsha(self, sha, numkeys, *keys_and_args):
27 |         """
28 |         Use the ``sha`` to execute a Lua script already registered via EVAL
29 |         or SCRIPT LOAD. Specify the ``numkeys`` the script will touch and the
30 |         key names and argument values in ``keys_and_args``. Returns the result
31 |         of the script.
32 | 
33 |         In practice, use the object returned by ``register_script``. This
34 |         function exists purely for Redis API completion.
35 |         """
36 |         return await self.execute_command('EVALSHA', sha, numkeys, *keys_and_args)
37 | 
38 |     async def script_exists(self, *args):
39 |         """
40 |         Check if a script exists in the script cache by specifying the SHAs of
41 |         each script as ``args``. Returns a list of boolean values indicating
42 |         whether each script already exists in the cache.
43 | """ 44 | return await self.execute_command('SCRIPT EXISTS', *args) 45 | 46 | async def script_flush(self): 47 | """Flushes all scripts from the script cache""" 48 | return await self.execute_command('SCRIPT FLUSH') 49 | 50 | async def script_kill(self): 51 | """Kills the currently executing Lua script""" 52 | return await self.execute_command('SCRIPT KILL') 53 | 54 | async def script_load(self, script): 55 | """Loads a Lua ``script`` into the script cache. Returns the SHA.""" 56 | return await self.execute_command('SCRIPT LOAD', script) 57 | 58 | def register_script(self, script): 59 | """ 60 | Registers a Lua ``script`` specifying the ``keys`` it will touch. 61 | Returns a Script object that is callable and hides the complexity of 62 | dealing with scripts, keys, and shas. This is the preferred way of 63 | working with Lua scripts. 64 | """ 65 | from aredis.scripting import Script 66 | return Script(self, script) 67 | 68 | 69 | class ClusterScriptingCommandMixin(ScriptingCommandMixin): 70 | 71 | NODES_FLAGS = dict_merge( 72 | { 73 | 'SCRIPT KILL': NodeFlag.BLOCKED 74 | }, 75 | list_keys_to_dict( 76 | ["SCRIPT LOAD", "SCRIPT FLUSH", "SCRIPT EXISTS",], NodeFlag.ALL_MASTERS 77 | ) 78 | ) 79 | 80 | RESULT_CALLBACKS = { 81 | "SCRIPT LOAD": lambda res: list(res.values()).pop(), 82 | "SCRIPT EXISTS": lambda res: [all(k) for k in zip(*res.values())], 83 | "SCRIPT FLUSH": lambda res: all(res.values()) 84 | } 85 | -------------------------------------------------------------------------------- /aredis/commands/transaction.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import warnings 3 | from aredis.exceptions import (RedisClusterException, 4 | WatchError) 5 | from aredis.utils import (string_keys_to_dict, 6 | bool_ok) 7 | 8 | 9 | class TransactionCommandMixin: 10 | 11 | RESPONSE_CALLBACKS = string_keys_to_dict( 12 | 'WATCH UNWATCH', 13 | bool_ok 14 | ) 15 | 16 | async def transaction(self, func, *watches, **kwargs): 17 | """ 18 | Convenience method for executing the callable `func` as a transaction 19 | while watching all keys specified in `watches`. The 'func' callable 20 | should expect a single argument which is a Pipeline object. 
21 | """ 22 | shard_hint = kwargs.pop('shard_hint', None) 23 | value_from_callable = kwargs.pop('value_from_callable', False) 24 | watch_delay = kwargs.pop('watch_delay', None) 25 | async with await self.pipeline(True, shard_hint) as pipe: 26 | while True: 27 | try: 28 | if watches: 29 | await pipe.watch(*watches) 30 | func_value = await func(pipe) 31 | exec_value = await pipe.execute() 32 | return func_value if value_from_callable else exec_value 33 | except WatchError: 34 | if watch_delay is not None and watch_delay > 0: 35 | await asyncio.sleep( 36 | watch_delay, 37 | loop=self.connection_pool.loop 38 | ) 39 | continue 40 | 41 | async def watch(self, *names): 42 | """ 43 | Watches the values at keys ``names``, or None if the key doesn't exist 44 | """ 45 | warnings.warn(DeprecationWarning('Call WATCH from a Pipeline object')) 46 | 47 | async def unwatch(self): 48 | """ 49 | Unwatches the value at key ``name``, or None of the key doesn't exist 50 | """ 51 | warnings.warn( 52 | DeprecationWarning('Call UNWATCH from a Pipeline object')) 53 | 54 | 55 | class ClusterTransactionCommandMixin(TransactionCommandMixin): 56 | 57 | async def transaction(self, func, *watches, **kwargs): 58 | """ 59 | Convenience method for executing the callable `func` as a transaction 60 | while watching all keys specified in `watches`. The 'func' callable 61 | should expect a single argument which is a Pipeline object. 62 | 63 | cluster transaction can only be run with commands in the same node, 64 | otherwise error will be raised. 65 | """ 66 | shard_hint = kwargs.pop('shard_hint', None) 67 | value_from_callable = kwargs.pop('value_from_callable', False) 68 | watch_delay = kwargs.pop('watch_delay', None) 69 | async with await self.pipeline(True, shard_hint, watches=watches) as pipe: 70 | while True: 71 | try: 72 | func_value = await func(pipe) 73 | exec_value = await pipe.execute() 74 | return func_value if value_from_callable else exec_value 75 | except WatchError: 76 | if watch_delay is not None and watch_delay > 0: 77 | await asyncio.sleep( 78 | watch_delay, 79 | loop=self.connection_pool.loop 80 | ) 81 | continue 82 | -------------------------------------------------------------------------------- /docs/source/benchmark.rst: -------------------------------------------------------------------------------- 1 | Benchmark 2 | ========= 3 | During test benchmarks/comparison.py ran on a virtual machine (Ubuntu, 4G RAM and 2 CPUs) with hiredis installed. 
4 |
5 | local redis server
6 | ^^^^^^^^^^^^^^^^^^
7 | +-----------------+---------------+--------------+-----------------+----------------+----------------------+---------------------+--------+
8 | |queries / time(s)|aredis(asyncio)|aredis(uvloop)|aioredis(asyncio)|aioredis(uvloop)|asyncio_redis(asyncio)|asyncio_redis(uvloop)|redis-py|
9 | +=================+===============+==============+=================+================+======================+=====================+========+
10 | |100              | 0.0190        | 0.01802      | 0.0400          | 0.01989        | 0.0391               | 0.0326              | 0.0111 |
11 | +-----------------+---------------+--------------+-----------------+----------------+----------------------+---------------------+--------+
12 | |1000             | 0.0917        | 0.05998      | 0.1237          | 0.05866        | 0.1838               | 0.1397              | 0.0396 |
13 | +-----------------+---------------+--------------+-----------------+----------------+----------------------+---------------------+--------+
14 | |10000            | 1.0614        | 0.66423      | 1.2277          | 0.62957        | 1.9061               | 1.5464              | 0.3944 |
15 | +-----------------+---------------+--------------+-----------------+----------------+----------------------+---------------------+--------+
16 | |100000           | 10.228        | 6.13821      | 10.400          | 6.06872        | 19.982               | 15.252              | 3.6307 |
17 | +-----------------+---------------+--------------+-----------------+----------------+----------------------+---------------------+--------+
18 |
19 | redis server in local area network
20 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
21 | These runs use uvloop only; with the plain asyncio event loop they would be too slow.
22 | Although synchronous code may outperform asynchronous code in raw throughput here, asynchronous code won't block the rest of your application.
23 |
24 | +-----------------+--------------+----------------+---------------------+--------+
25 | |queries / time(s)|aredis(uvloop)|aioredis(uvloop)|asyncio_redis(uvloop)|redis-py|
26 | +=================+==============+================+=====================+========+
27 | |100              | 0.06998      | 0.06019        | 0.1971              | 0.0556 |
28 | +-----------------+--------------+----------------+---------------------+--------+
29 | |1000             | 0.66197      | 0.61183        | 1.9330              | 0.7909 |
30 | +-----------------+--------------+----------------+---------------------+--------+
31 | |10000            | 5.81604      | 6.87364        | 19.186              | 7.1334 |
32 | +-----------------+--------------+----------------+---------------------+--------+
33 | |100000           | 58.4715      | 60.9220        | 189.06              | 58.979 |
34 | +-----------------+--------------+----------------+---------------------+--------+
35 |
36 | **test results may differ depending on your workstation and networking hardware (you may run the benchmark yourself to determine which client is the most suitable for you)**
37 |
38 | Advantages
39 | ^^^^^^^^^^
40 |
41 | 1. hiredis is an optional dependency for aredis.
42 | 2. The API of aredis was mostly ported from redis-py, which is easy to use and lets you easily port existing code to work with asyncio.
43 | 3. aredis performs well (please run benchmarks/comparison.py to see which async redis client is suitable for you).
44 | 4. aredis supports the uvloop event loop, which can roughly double the speed of your async code.
45 |
--------------------------------------------------------------------------------
/docs/source/aredis.commands.rst:
--------------------------------------------------------------------------------
1 | aredis\.commands package
2 | ========================
3 |
4 | Submodules
5 | ----------
6 |
7 | aredis\.commands\.cluster module
8 | --------------------------------
9 |
10 | .. automodule:: aredis.commands.cluster
11 |     :members:
12 |     :undoc-members:
13 |     :show-inheritance:
14 |
15 | aredis\.commands\.connection module
16 | -----------------------------------
17 |
18 | .. automodule:: aredis.commands.connection
19 |     :members:
20 |     :undoc-members:
21 |     :show-inheritance:
22 |
23 | aredis\.commands\.extra module
24 | ------------------------------
25 |
26 | .. automodule:: aredis.commands.extra
27 |     :members:
28 |     :undoc-members:
29 |     :show-inheritance:
30 |
31 | aredis\.commands\.geo module
32 | ----------------------------
33 |
34 | .. automodule:: aredis.commands.geo
35 |     :members:
36 |     :undoc-members:
37 |     :show-inheritance:
38 |
39 | aredis\.commands\.hash module
40 | -----------------------------
41 |
42 | .. automodule:: aredis.commands.hash
43 |     :members:
44 |     :undoc-members:
45 |     :show-inheritance:
46 |
47 | aredis\.commands\.hyperlog module
48 | ---------------------------------
49 |
50 | .. automodule:: aredis.commands.hyperlog
51 |     :members:
52 |     :undoc-members:
53 |     :show-inheritance:
54 |
55 | aredis\.commands\.iter module
56 | -----------------------------
57 |
58 | .. automodule:: aredis.commands.iter
59 |     :members:
60 |     :undoc-members:
61 |     :show-inheritance:
62 |
63 | aredis\.commands\.keys module
64 | -----------------------------
65 |
66 | .. automodule:: aredis.commands.keys
67 |     :members:
68 |     :undoc-members:
69 |     :show-inheritance:
70 |
71 | aredis\.commands\.lists module
72 | ------------------------------
73 |
74 | .. automodule:: aredis.commands.lists
75 |     :members:
76 |     :undoc-members:
77 |     :show-inheritance:
78 |
79 | aredis\.commands\.pubsub module
80 | -------------------------------
81 |
82 | .. automodule:: aredis.commands.pubsub
83 |     :members:
84 |     :undoc-members:
85 |     :show-inheritance:
86 |
87 | aredis\.commands\.scripting module
88 | ----------------------------------
89 |
90 | .. automodule:: aredis.commands.scripting
91 |     :members:
92 |     :undoc-members:
93 |     :show-inheritance:
94 |
95 | aredis\.commands\.sentinel module
96 | ---------------------------------
97 |
98 | .. automodule:: aredis.commands.sentinel
99 |     :members:
100 |     :undoc-members:
101 |     :show-inheritance:
102 |
103 | aredis\.commands\.server module
104 | -------------------------------
105 |
106 | .. automodule:: aredis.commands.server
107 |     :members:
108 |     :undoc-members:
109 |     :show-inheritance:
110 |
111 | aredis\.commands\.sets module
112 | -----------------------------
113 |
114 | .. automodule:: aredis.commands.sets
115 |     :members:
116 |     :undoc-members:
117 |     :show-inheritance:
118 |
119 | aredis\.commands\.sorted\_set module
120 | ------------------------------------
121 |
122 | .. automodule:: aredis.commands.sorted_set
123 |     :members:
124 |     :undoc-members:
125 |     :show-inheritance:
126 |
127 | aredis\.commands\.strings module
128 | --------------------------------
129 |
130 | .. automodule:: aredis.commands.strings
131 |     :members:
132 |     :undoc-members:
133 |     :show-inheritance:
134 |
135 | aredis\.commands\.transaction module
136 | ------------------------------------
137 |
138 | .. automodule:: aredis.commands.transaction
139 |     :members:
140 |     :undoc-members:
141 |     :show-inheritance:
142 |
143 |
144 | Module contents
145 | ---------------
146 |
147 | .. automodule:: aredis.commands
148 |     :members:
149 |     :undoc-members:
150 |     :show-inheritance:
151 |
--------------------------------------------------------------------------------
/aredis/commands/iter.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/python
2 | # -*- coding: utf-8 -*-
3 | from collections import defaultdict
4 |
5 |
6 | class IterCommandMixin:
7 |     """
8 |     Convenience methods for SCAN iterators, kept in a separate class
9 |     because ``yield`` could not be used in async functions before Python 3.6.
10 |     """
11 |     RESPONSE_CALLBACKS = {}
12 |
13 |     async def scan_iter(self, match=None, count=None):
14 |         """
15 |         Make an iterator using the SCAN command so that the client doesn't
16 |         need to remember the cursor position.
17 |
18 |         ``match`` allows for filtering the keys by pattern
19 |
20 |         ``count`` hints at the number of keys to return per scan batch
21 |         """
22 |         cursor = '0'
23 |         while cursor != 0:
24 |             cursor, data = await self.scan(cursor=cursor, match=match, count=count)
25 |             for item in data:
26 |                 yield item
27 |
28 |     async def sscan_iter(self, name, match=None, count=None):
29 |         """
30 |         Make an iterator using the SSCAN command so that the client doesn't
31 |         need to remember the cursor position.
32 |
33 |         ``match`` allows for filtering the keys by pattern
34 |
35 |         ``count`` hints at the number of keys to return per scan batch
36 |         """
37 |         cursor = '0'
38 |         while cursor != 0:
39 |             cursor, data = await self.sscan(name, cursor=cursor,
40 |                                             match=match, count=count)
41 |             for item in data:
42 |                 yield item
43 |
44 |     async def hscan_iter(self, name, match=None, count=None):
45 |         """
46 |         Make an iterator using the HSCAN command so that the client doesn't
47 |         need to remember the cursor position.
48 |
49 |         ``match`` allows for filtering the keys by pattern
50 |
51 |         ``count`` hints at the number of keys to return per scan batch
52 |         """
53 |         cursor = '0'
54 |         while cursor != 0:
55 |             cursor, data = await self.hscan(name, cursor=cursor,
56 |                                             match=match, count=count)
57 |             for item in data.items():
58 |                 yield item
59 |
60 |     async def zscan_iter(self, name, match=None, count=None,
61 |                          score_cast_func=float):
62 |         """
63 |         Make an iterator using the ZSCAN command so that the client doesn't
64 |         need to remember the cursor position.
65 |
66 |         ``match`` allows for filtering the keys by pattern
67 |
68 |         ``count`` hints at the number of keys to return per scan batch
69 |
70 |         ``score_cast_func`` is a callable used to cast the score return value
71 |         """
72 |         cursor = '0'
73 |         while cursor != 0:
74 |             cursor, data = await self.zscan(name, cursor=cursor, match=match,
75 |                                             count=count,
76 |                                             score_cast_func=score_cast_func)
77 |             for item in data:
78 |                 yield item
79 |
80 |
81 | class ClusterIterCommandMixin(IterCommandMixin):
82 |
83 |     async def scan_iter(self, match=None, count=None):
84 |         nodes = await self.cluster_nodes()
85 |         for node in nodes:
86 |             if 'master' in node['flags']:
87 |                 cursor = '0'
88 |                 while cursor != 0:
89 |                     pieces = [cursor]
90 |                     if match is not None:
91 |                         pieces.extend(['MATCH', match])
92 |                     if count is not None:
93 |                         pieces.extend(['COUNT', count])
94 |                     response = await self.execute_command_on_nodes([node], 'SCAN', *pieces)
95 |                     cursor, data = list(response.values())[0]
96 |                     for item in data:
97 |                         yield item
98 |
--------------------------------------------------------------------------------
/tests/cluster/conftest.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 |
3 | # python std lib
4 | import asyncio
5 | import os
6 | import sys
7 | import json
8 |
9 | # rediscluster imports
10 | from aredis import StrictRedisCluster, StrictRedis
11 |
12 | # 3rd party imports
13 | import pytest
14 | from distutils.version import StrictVersion
15 |
16 | # put our path in front so we can be sure we are testing locally not against the global package
17 | basepath = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
18 | sys.path.insert(1, basepath)
19 |
20 | _REDIS_VERSIONS = {}
21 |
22 |
23 | def get_versions(**kwargs):
24 |     key = json.dumps(kwargs)
25 |     if key not in _REDIS_VERSIONS:
26 |         client = _get_client(**kwargs)
27 |         loop = asyncio.get_event_loop()
28 |         info = loop.run_until_complete(client.info())
29 |         _REDIS_VERSIONS[key] = {key: value['redis_version'] for key, value in info.items()}
30 |     return _REDIS_VERSIONS[key]
31 |
32 |
33 | def _get_client(cls=None, **kwargs):
34 |     if not cls:
35 |         cls = StrictRedisCluster
36 |
37 |     params = {
38 |         'startup_nodes': [{
39 |             'host': '127.0.0.1', 'port': 7000
40 |         }],
41 |         'stream_timeout': 10,
42 |     }
43 |     params.update(kwargs)
44 |     return cls(**params)
45 |
46 |
47 | def _init_mgt_client(request, cls=None, **kwargs):
48 |     """
49 |     """
50 |     client = _get_client(cls=cls, **kwargs)
51 |     if request:
52 |         def teardown():
53 |             client.connection_pool.disconnect()
54 |         request.addfinalizer(teardown)
55 |     return client
56 |
57 |
58 | def skip_if_not_password_protected_nodes():
59 |     """
60 |     """
61 |     return pytest.mark.skipif('TEST_PASSWORD_PROTECTED' not in os.environ, reason="")
62 |
63 |
64 | def skip_if_server_version_lt(min_version):
65 |     """
66 |     """
67 |     versions = get_versions()
68 |     for version in versions.values():
69 |         if StrictVersion(version) < StrictVersion(min_version):
70 |             return pytest.mark.skipif(True, reason="")
71 |     return pytest.mark.skipif(False, reason="")
72 |
73 |
74 | def skip_if_redis_py_version_lt(min_version):
75 |     """
76 |     """
77 |     import aredis
78 |     version = aredis.__version__
79 |     if StrictVersion(version) < StrictVersion(min_version):
80 |         return pytest.mark.skipif(True, reason="")
81 |     return pytest.mark.skipif(False, reason="")
82 |
83 |
84 | @pytest.fixture()
85 | def o(request, *args, **kwargs):
86 |     """
87 |     Create a StrictRedisCluster instance with decode_responses set to True.
88 |     """
89 |     params = {'decode_responses': True}
90 |     params.update(kwargs)
91 |     return _get_client(cls=StrictRedisCluster, **params)
92 |
93 |
94 | @pytest.fixture()
95 | def r(request, *args, **kwargs):
96 |     """
97 |     Create a StrictRedisCluster instance with default settings.
98 |     """
99 |     return _get_client(cls=StrictRedisCluster, **kwargs)
100 |
101 |
102 | @pytest.fixture()
103 | def ro(request, *args, **kwargs):
104 |     """
105 |     Create a StrictRedisCluster instance with readonly mode
106 |     """
107 |     params = {'readonly': True}
108 |     params.update(kwargs)
109 |     return _get_client(cls=StrictRedisCluster, **params)
110 |
111 |
112 | @pytest.fixture()
113 | def s(*args, **kwargs):
114 |     """
115 |     Create a StrictRedisCluster instance with 'init_slot_cache' set to false
116 |     """
117 |     s = _get_client(**kwargs)
118 |     assert s.connection_pool.nodes.slots == {}
119 |     assert s.connection_pool.nodes.nodes == {}
120 |     return s
121 |
122 |
123 | @pytest.fixture()
124 | def t(*args, **kwargs):
125 |     """
126 |     Create a regular StrictRedis object instance
127 |     """
128 |     return StrictRedis(*args, **kwargs)
129 |
130 |
131 | @pytest.fixture()
132 | def sr(request, *args, **kwargs):
133 |     """
134 |     Returns an instance of StrictRedisCluster
135 |     """
136 |     return _get_client(reinitialize_steps=1, cls=StrictRedisCluster, **kwargs)
137 |
--------------------------------------------------------------------------------
/aredis/commands/pubsub.py:
--------------------------------------------------------------------------------
1 | from aredis.pubsub import (PubSub,
2 |                            ClusterPubSub)
3 | from aredis.utils import (dict_merge,
4 |                           merge_result,
5 |                           list_keys_to_dict,
6 |                           NodeFlag)
7 |
8 |
9 | def parse_pubsub_numsub(response, **options):
10 |     return list(zip(response[0::2], response[1::2]))
11 |
12 |
13 | class PubSubCommandMixin:
14 |
15 |     RESPONSE_CALLBACKS = {
16 |         'PUBSUB NUMSUB': parse_pubsub_numsub,
17 |     }
18 |
19 |     def pubsub(self, **kwargs):
20 |         """
21 |         Return a Publish/Subscribe object. With this object, you can
22 |         subscribe to channels and listen for messages that get published to
23 |         them.
24 |         """
25 |         return PubSub(self.connection_pool, **kwargs)
26 |
27 |     async def publish(self, channel, message):
28 |         """
29 |         Publish ``message`` on ``channel``.
30 |         Returns the number of subscribers the message was delivered to.
31 |         """
32 |         return await self.execute_command('PUBLISH', channel, message)
33 |
34 |     async def pubsub_channels(self, pattern='*'):
35 |         """
36 |         Return a list of channels that have at least one subscriber
37 |         """
38 |         return await self.execute_command('PUBSUB CHANNELS', pattern)
39 |
40 |     async def pubsub_numpat(self):
41 |         """
42 |         Returns the number of subscriptions to patterns
43 |         """
44 |         return await self.execute_command('PUBSUB NUMPAT')
45 |
46 |     async def pubsub_numsub(self, *args):
47 |         """
48 |         Return a list of (channel, number of subscribers) tuples
49 |         for each channel given in ``*args``
50 |         """
51 |         return await self.execute_command('PUBSUB NUMSUB', *args)
52 |
53 |
54 | def parse_cluster_pubsub_channels(res, **options):
55 |     """
56 |     Result callback, handles different return types
57 |     switchable by the `aggregate` flag.
58 |     """
59 |     aggregate = options.get('aggregate', True)
60 |     if not aggregate:
61 |         return res
62 |     return merge_result(res)
63 |
64 |
65 | def parse_cluster_pubsub_numpat(res, **options):
66 |     """
67 |     Result callback, handles different return types
68 |     switchable by the `aggregate` flag.
69 | """ 70 | aggregate = options.get('aggregate', True) 71 | if not aggregate: 72 | return res 73 | 74 | numpat = 0 75 | for node, node_numpat in res.items(): 76 | numpat += node_numpat 77 | return numpat 78 | 79 | 80 | def parse_cluster_pubsub_numsub(res, **options): 81 | """ 82 | Result callback, handles different return types 83 | switchable by the `aggregate` flag. 84 | """ 85 | aggregate = options.get('aggregate', True) 86 | if not aggregate: 87 | return res 88 | 89 | numsub_d = dict() 90 | for _, numsub_tups in res.items(): 91 | for channel, numsubbed in numsub_tups: 92 | try: 93 | numsub_d[channel] += numsubbed 94 | except KeyError: 95 | numsub_d[channel] = numsubbed 96 | 97 | ret_numsub = [] 98 | for channel, numsub in numsub_d.items(): 99 | ret_numsub.append((channel, numsub)) 100 | return ret_numsub 101 | 102 | 103 | 104 | class CLusterPubSubCommandMixin(PubSubCommandMixin): 105 | 106 | NODES_FLAGS = dict_merge( 107 | list_keys_to_dict( 108 | ['PUBSUB CHANNELS', 'PUBSUB NUMSUB', 'PUBSUB NUMPAT'], 109 | NodeFlag.ALL_NODES 110 | ) 111 | ) 112 | 113 | RESULT_CALLBACKS = dict_merge( 114 | list_keys_to_dict([ 115 | "PUBSUB CHANNELS", 116 | ], parse_cluster_pubsub_channels), 117 | list_keys_to_dict([ 118 | "PUBSUB NUMSUB", 119 | ], parse_cluster_pubsub_numsub), 120 | list_keys_to_dict([ 121 | "PUBSUB NUMPAT", 122 | ], parse_cluster_pubsub_numpat), 123 | ) 124 | 125 | def pubsub(self, **kwargs): 126 | return ClusterPubSub(self.connection_pool, **kwargs) 127 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | aredis 2 | ====== 3 | |pypi-ver| |circleci-status| |python-ver| 4 | 5 | An efficient and user-friendly async redis client ported from `redis-py `_ 6 | (which is a Python interface to the Redis key-value) 7 | 8 | To get more information please read `full document`_ 9 | 10 | .. _full document: http://aredis.readthedocs.io/en/latest/ 11 | 12 | Installation 13 | ------------ 14 | 15 | aredis requires a running Redis server. 16 | 17 | To install aredis, simply: 18 | 19 | .. code-block:: bash 20 | 21 | $ pip3 install aredis[hiredis] 22 | 23 | or from source: 24 | 25 | .. code-block:: bash 26 | 27 | $ python setup.py install 28 | 29 | 30 | Getting started 31 | --------------- 32 | 33 | `More examples`_ 34 | 35 | .. _More examples: https://github.com/NoneGG/aredis/tree/master/examples 36 | 37 | Tip: since python 3.8 you can use asyncio REPL: 38 | 39 | .. code-block:: bash 40 | 41 | $ python -m asyncio 42 | 43 | single node client 44 | ^^^^^^^^^^^^^^^^^^ 45 | 46 | .. code-block:: python 47 | 48 | import asyncio 49 | from aredis import StrictRedis 50 | 51 | async def example(): 52 | client = StrictRedis(host='127.0.0.1', port=6379, db=0) 53 | await client.flushdb() 54 | await client.set('foo', 1) 55 | assert await client.exists('foo') is True 56 | await client.incr('foo', 100) 57 | 58 | assert int(await client.get('foo')) == 101 59 | await client.expire('foo', 1) 60 | await asyncio.sleep(0.1) 61 | await client.ttl('foo') 62 | await asyncio.sleep(1) 63 | assert not await client.exists('foo') 64 | 65 | loop = asyncio.get_event_loop() 66 | loop.run_until_complete(example()) 67 | 68 | cluster client 69 | ^^^^^^^^^^^^^^ 70 | 71 | .. 
72 |
73 |     import asyncio
74 |     from aredis import StrictRedisCluster
75 |
76 |     async def example():
77 |         client = StrictRedisCluster(host='172.17.0.2', port=7001)
78 |         await client.flushdb()
79 |         await client.set('foo', 1)
80 |         await client.lpush('a', 1)
81 |         print(await client.cluster_slots())
82 |
83 |         await client.rpoplpush('a', 'b')
84 |         assert await client.rpop('b') == b'1'
85 |
86 |     loop = asyncio.get_event_loop()
87 |     loop.run_until_complete(example())
88 |     # {(10923, 16383): [{'host': b'172.17.0.2', 'node_id': b'332f41962b33fa44bbc5e88f205e71276a9d64f4', 'server_type': 'master', 'port': 7002},
89 |     #                   {'host': b'172.17.0.2', 'node_id': b'c02deb8726cdd412d956f0b9464a88812ef34f03', 'server_type': 'slave', 'port': 7005}],
90 |     #  (5461, 10922): [{'host': b'172.17.0.2', 'node_id': b'3d1b020fc46bf7cb2ffc36e10e7d7befca7c5533', 'server_type': 'master', 'port': 7001},
91 |     #                  {'host': b'172.17.0.2', 'node_id': b'aac4799b65ff35d8dd2ad152a5515d15c0dc8ab7', 'server_type': 'slave', 'port': 7004}],
92 |     #  (0, 5460): [{'host': b'172.17.0.2', 'node_id': b'0932215036dc0d908cf662fdfca4d3614f221b01', 'server_type': 'master', 'port': 7000},
93 |     #              {'host': b'172.17.0.2', 'node_id': b'f6603ab4cb77e672de23a6361ec165f3a1a2bb42', 'server_type': 'slave', 'port': 7003}]}
94 |
95 | Benchmark
96 | ---------
97 |
98 | Please run the test script in the benchmarks dir to reproduce the benchmark yourself.
99 |
100 | For benchmark results in my environment, please see: `benchmark`_
101 |
102 | .. _benchmark: http://aredis.readthedocs.io/en/latest/benchmark.html
103 |
104 | .. |circleci-status| image:: https://img.shields.io/circleci/project/github/NoneGG/aredis/master.svg
105 |     :alt: CircleCI build status
106 |     :target: https://circleci.com/gh/NoneGG/aredis/tree/master
107 |
108 | .. |pypi-ver| image:: https://img.shields.io/pypi/v/aredis.svg
109 |     :target: https://pypi.python.org/pypi/aredis/
110 |     :alt: Latest Version in PyPI
111 |
112 | .. |python-ver| image:: https://img.shields.io/pypi/pyversions/aredis.svg
113 |     :target: https://pypi.python.org/pypi/aredis/
114 |     :alt: Supported Python versions
115 |
116 | Contributing
117 | ------------
118 |
119 | Enhancements, bug reports and pull requests are welcome; please make an issue to let me know.
120 | Fork me please~
121 |
--------------------------------------------------------------------------------
/aredis/commands/hyperlog.py:
--------------------------------------------------------------------------------
1 | import string
2 | import random
3 | from aredis.utils import (string_keys_to_dict,
4 |                           dict_merge,
5 |                           bool_ok)
6 |
7 |
8 | class HyperLogCommandMixin:
9 |
10 |     RESPONSE_CALLBACKS = dict_merge(
11 |         string_keys_to_dict('PFADD PFCOUNT', int),
12 |         {
13 |             'PFMERGE': bool_ok,
14 |         }
15 |     )
16 |
17 |     async def pfadd(self, name, *values):
18 |         "Adds the specified elements to the specified HyperLogLog."
19 |         return await self.execute_command('PFADD', name, *values)
20 |
21 |     async def pfcount(self, *sources):
22 |         """
23 |         Return the approximated cardinality of
24 |         the set observed by the HyperLogLog at key(s).
25 |         """
26 |         return await self.execute_command('PFCOUNT', *sources)
27 |
28 |     async def pfmerge(self, dest, *sources):
29 |         "Merge N different HyperLogLogs into a single one."
30 |         return await self.execute_command('PFMERGE', dest, *sources)
31 |
32 |
33 | class ClusterHyperLogCommandMixin(HyperLogCommandMixin):
34 |
35 |     async def pfmerge(self, dest, *sources):
36 |         """
37 |         Merge N different HyperLogLogs into a single one.
38 |
39 |         Cluster impl:
40 |         A very special implementation is required to make pfmerge() work.
41 |         But it works :]
42 |         It works by first fetching all HLL objects that should be merged and
43 |         moving them to one hash slot so that the pfmerge operation can be
44 |         performed without any 'CROSSSLOT' error.
45 |         After the PFMERGE operation is done, the result is moved to the correct
46 |         location within the cluster and the temporary keys are cleaned up.
47 |
48 |         This operation is no longer atomic because of all the operations that have to be done.
49 |         """
50 |         all_k = []
51 |
52 |         # Fetch all HLL objects via GET and store them client side as strings
53 |         all_hll_objects = list()
54 |         for hll_key in sources:
55 |             all_hll_objects.append(await self.get(hll_key))
56 |
57 |         # Randomize a keyslot hash that should be used inside {} when doing SET
58 |         random_hash_slot = self._random_id()
59 |
60 |         # Special handling of the dest variable: if it already exists, it should be included in the HLL merge
61 |         # dest can exist anywhere in the cluster.
62 |         dest_data = await self.get(dest)
63 |
64 |         if dest_data:
65 |             all_hll_objects.append(dest_data)
66 |
67 |         # SET all stored HLL objects with SET {RandomHash}RandomKey hll_obj
68 |         for hll_object in all_hll_objects:
69 |             k = self._random_good_hashslot_key(random_hash_slot)
70 |             all_k.append(k)
71 |             await self.set(k, hll_object)
72 |
73 |         # Do regular PFMERGE operation and store value in random key in {RandomHash}
74 |         tmp_dest = self._random_good_hashslot_key(random_hash_slot)
75 |         await self.execute_command("PFMERGE", tmp_dest, *all_k)
76 |
77 |         # Do GET and SET so that the result will be stored in the destination object anywhere in the cluster
78 |         parsed_dest = await self.get(tmp_dest)
79 |         await self.set(dest, parsed_dest)
80 |
81 |         # Cleanup tmp variables
82 |         await self.delete(tmp_dest)
83 |
84 |         for k in all_k:
85 |             await self.delete(k)
86 |
87 |         return True
88 |
89 |     def _random_good_hashslot_key(self, hashslot):
90 |         """
91 |         Generate a good random key with a low probability of collision with any other key.
92 |         """
93 |         random_id = "{{{0}}}{1}".format(hashslot, self._random_id())  # triple braces emit a literal {hashslot} hash tag
94 |         return random_id
95 |
96 |     def _random_id(self, size=16, chars=string.ascii_uppercase + string.digits):
97 |         """
98 |         Generates a random id based on `size` and `chars` variable.
99 |
100 |         By default it will generate a 16 character long string based on
101 |         ascii uppercase letters and digits.
102 |         """
103 |         return ''.join(random.choice(chars) for _ in range(size))
104 |
--------------------------------------------------------------------------------
/aredis/commands/extra.py:
--------------------------------------------------------------------------------
1 | from aredis.lock import Lock, LuaLock
2 | from aredis.cache import (Cache, IdentityGenerator,
3 |                           Serializer, Compressor)
4 | from aredis.exceptions import ResponseError
5 |
6 |
7 | class ExtraCommandMixin:
8 |
9 |     RESPONSE_CALLBACKS = {}
10 |
11 |     def cache(self, name, cache_class=Cache,
12 |               identity_generator_class=IdentityGenerator,
13 |               compressor_class=Compressor,
14 |               serializer_class=Serializer, *args, **kwargs):
15 |         """
16 |         Return a cache object using the default identity generator,
17 |         serializer and compressor.
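
        A minimal usage sketch (the app name, key and value are illustrative,
        and the exact Cache method signatures may differ; see aredis.cache)::

            cache = r.cache('example_app')
            await cache.set('answer', 42)
            await cache.get('answer')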
18 |
19 |         ``name`` is used to identify the series of your cache
20 |         ``cache_class`` is the cache implementation: Cache is for normal use,
21 |         while HerdCache guards against the thundering herd problem
22 |         ``identity_generator_class`` is the class used to generate the real
23 |         unique key in the cache; it can be overridden to meet your special
24 |         needs, as long as it provides a `generate` API
25 |         ``compressor_class`` is the class used to compress the cache in redis;
26 |         it can be overridden, provided the `compress` and `decompress` APIs are retained
27 |         ``serializer_class`` is the class used to serialize content before
28 |         compression; it can be overridden, provided the `serialize` and
29 |         `deserialize` APIs are retained
30 |         """
31 |         return cache_class(self, app=name,
32 |                            identity_generator_class=identity_generator_class,
33 |                            compressor_class=compressor_class,
34 |                            serializer_class=serializer_class,
35 |                            *args, **kwargs)
36 |
37 |     def lock(self, name, timeout=None, sleep=0.1, blocking_timeout=None,
38 |              lock_class=None, thread_local=True):
39 |         """
40 |         Return a new Lock object using key ``name`` that mimics
41 |         the behavior of threading.Lock.
42 |
43 |         If specified, ``timeout`` indicates a maximum life for the lock.
44 |         By default, it will remain locked until release() is called.
45 |
46 |         ``sleep`` indicates the amount of time to sleep per loop iteration
47 |         when the lock is in blocking mode and another client is currently
48 |         holding the lock.
49 |
50 |         ``blocking_timeout`` indicates the maximum amount of time in seconds to
51 |         spend trying to acquire the lock. A value of ``None`` indicates
52 |         continue trying forever. ``blocking_timeout`` can be specified as a
53 |         float or integer, both representing the number of seconds to wait.
54 |
55 |         ``lock_class`` forces the specified lock implementation.
56 |
57 |         ``thread_local`` indicates whether the lock token is placed in
58 |         thread-local storage. By default, the token is placed in thread local
59 |         storage so that a thread only sees its token, not a token set by
60 |         another thread. Consider the following timeline:
61 |
62 |         time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds.
63 |                  thread-1 sets the token to "abc"
64 |         time: 1, thread-2 blocks trying to acquire `my-lock` using the
65 |                  Lock instance.
66 |         time: 5, thread-1 has not yet completed. redis expires the lock
67 |                  key.
68 |         time: 5, thread-2 acquired `my-lock` now that it's available.
69 |                  thread-2 sets the token to "xyz"
70 |         time: 6, thread-1 finishes its work and calls release(). if the
71 |                  token is *not* stored in thread local storage, then
72 |                  thread-1 would see the token value as "xyz" and would be
73 |                  able to successfully release thread-2's lock.
74 |
75 |         In some use cases it's necessary to disable thread local storage. For
76 |         example, if you have code where one thread acquires a lock and passes
77 |         that lock instance to a worker thread to release later. If thread
78 |         local storage isn't disabled in this case, the worker thread won't see
79 |         the token set by the thread that acquired the lock. Our assumption
80 |         is that these cases aren't common and as such default to using
81 |         thread local storage.
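
        A minimal usage sketch (key name and timeouts are illustrative); the
        returned lock can also be used as an async context manager::

            async with r.lock('my-lock', timeout=5, blocking_timeout=1):
                ...  # do work while holding the lock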
""" 82 | if lock_class is None: 83 | if self._use_lua_lock is None: 84 | # the first time .lock() is called, determine if we can use 85 | # Lua by attempting to register the necessary scripts 86 | try: 87 | LuaLock.register_scripts(self) 88 | self._use_lua_lock = True 89 | except ResponseError: 90 | self._use_lua_lock = False 91 | lock_class = self._use_lua_lock and LuaLock or Lock 92 | return lock_class(self, name, timeout=timeout, sleep=sleep, 93 | blocking_timeout=blocking_timeout, 94 | thread_local=thread_local) -------------------------------------------------------------------------------- /tests/cluster/test_lock.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import time 3 | from aredis.exceptions import LockError 4 | from aredis.lock import ClusterLock 5 | 6 | 7 | class TestLock: 8 | lock_class = ClusterLock 9 | 10 | def get_lock(self, redis, *args, **kwargs): 11 | kwargs['lock_class'] = self.lock_class 12 | return redis.lock(*args, **kwargs) 13 | 14 | @pytest.mark.asyncio() 15 | async def test_lock(self, r): 16 | await r.flushdb() 17 | lock = self.get_lock(r, 'foo', timeout=3) 18 | assert await lock.acquire(blocking=False) 19 | assert await r.get('foo') == lock.local.get() 20 | assert await r.ttl('foo') == 3 21 | await lock.release() 22 | assert await r.get('foo') is None 23 | 24 | @pytest.mark.asyncio() 25 | async def test_competing_locks(self, r): 26 | lock1 = self.get_lock(r, 'foo', timeout=3) 27 | lock2 = self.get_lock(r, 'foo', timeout=3) 28 | assert await lock1.acquire(blocking=False) 29 | assert not await lock2.acquire(blocking=False) 30 | await lock1.release() 31 | assert await lock2.acquire(blocking=False) 32 | assert not await lock1.acquire(blocking=False) 33 | await lock2.release() 34 | 35 | @pytest.mark.asyncio() 36 | async def test_timeout(self, r): 37 | lock = self.get_lock(r, 'foo', timeout=10) 38 | assert await lock.acquire(blocking=False) 39 | assert 8 < await r.ttl('foo') <= 10 40 | await lock.release() 41 | 42 | @pytest.mark.asyncio() 43 | async def test_float_timeout(self, r): 44 | lock = self.get_lock(r, 'foo', timeout=9.5) 45 | assert await lock.acquire(blocking=False) 46 | assert 8 < await r.pttl('foo') <= 9500 47 | await lock.release() 48 | 49 | @pytest.mark.asyncio() 50 | async def test_blocking_timeout(self, r): 51 | lock1 = self.get_lock(r, 'foo', timeout=3) 52 | assert await lock1.acquire(blocking=False) 53 | lock2 = self.get_lock(r, 'foo', timeout=3, blocking_timeout=0.2) 54 | start = time.time() 55 | assert not await lock2.acquire() 56 | assert (time.time() - start) > 0.2 57 | await lock1.release() 58 | 59 | @pytest.mark.asyncio() 60 | async def test_context_manager(self, r): 61 | # blocking_timeout prevents a deadlock if the lock can't be acquired 62 | # for some reason 63 | async with self.get_lock(r, 'foo', timeout=3, blocking_timeout=0.2) as lock: 64 | assert await r.get('foo') == lock.local.get() 65 | assert await r.get('foo') is None 66 | 67 | @pytest.mark.asyncio() 68 | async def test_high_sleep_raises_error(self, r): 69 | "If sleep is higher than timeout, it should raise an error" 70 | with pytest.raises(LockError): 71 | self.get_lock(r, 'foo', timeout=1, sleep=2) 72 | 73 | @pytest.mark.asyncio() 74 | async def test_releasing_unlocked_lock_raises_error(self, r): 75 | lock = self.get_lock(r, 'foo', timeout=3) 76 | with pytest.raises(LockError): 77 | await lock.release() 78 | 79 | @pytest.mark.asyncio() 80 | async def test_releasing_lock_no_longer_owned_raises_error(self, r): 81 | lock 
= self.get_lock(r, 'foo', timeout=3) 82 | await lock.acquire(blocking=False) 83 | # manually change the token 84 | await r.set('foo', 'a') 85 | with pytest.raises(LockError): 86 | await lock.release() 87 | # even though we errored, the token is still cleared 88 | assert lock.local.get() is None 89 | 90 | @pytest.mark.asyncio() 91 | async def test_extend_lock(self, r): 92 | await r.flushdb() 93 | lock = self.get_lock(r, 'foo', timeout=10) 94 | assert await lock.acquire(blocking=False) 95 | assert 8000 < await r.pttl('foo') <= 10000 96 | assert await lock.extend(10) 97 | assert 16000 < await r.pttl('foo') <= 20000 98 | await lock.release() 99 | 100 | @pytest.mark.asyncio() 101 | async def test_extend_lock_float(self, r): 102 | await r.flushdb() 103 | lock = self.get_lock(r, 'foo', timeout=10.0) 104 | assert await lock.acquire(blocking=False) 105 | assert 8000 < await r.pttl('foo') <= 10000 106 | assert await lock.extend(10.0) 107 | assert 16000 < await r.pttl('foo') <= 20000 108 | await lock.release() 109 | 110 | @pytest.mark.asyncio() 111 | async def test_extending_unlocked_lock_raises_error(self, r): 112 | lock = self.get_lock(r, 'foo', timeout=10) 113 | with pytest.raises(LockError): 114 | await lock.extend(10) 115 | 116 | @pytest.mark.asyncio() 117 | async def test_extending_lock_no_longer_owned_raises_error(self, r): 118 | lock = self.get_lock(r, 'foo', timeout=3) 119 | await r.flushdb() 120 | assert await lock.acquire(blocking=False) 121 | await r.set('foo', 'a') 122 | with pytest.raises(LockError): 123 | await lock.extend(10) -------------------------------------------------------------------------------- /aredis/commands/hash.py: -------------------------------------------------------------------------------- 1 | from aredis.exceptions import DataError 2 | from aredis.utils import (b, dict_merge, 3 | iteritems, 4 | first_key, 5 | string_keys_to_dict, 6 | list_or_args, 7 | pairs_to_dict) 8 | 9 | 10 | def parse_hscan(response, **options): 11 | cursor, r = response 12 | return int(cursor), r and pairs_to_dict(r) or {} 13 | 14 | 15 | class HashCommandMixin: 16 | 17 | RESPONSE_CALLBACKS = dict_merge( 18 | string_keys_to_dict('HDEL HLEN', int), 19 | string_keys_to_dict('HEXISTS HMSET', bool), 20 | { 21 | 'HGETALL': lambda r: r and pairs_to_dict(r) or {}, 22 | 'HINCRBYFLOAT': float, 23 | 'HSCAN': parse_hscan, 24 | } 25 | ) 26 | 27 | async def hdel(self, name, *keys): 28 | """Deletes ``keys`` from hash ``name``""" 29 | return await self.execute_command('HDEL', name, *keys) 30 | 31 | async def hexists(self, name, key): 32 | """ 33 | Returns a boolean indicating if ``key`` exists within hash ``name`` 34 | """ 35 | return await self.execute_command('HEXISTS', name, key) 36 | 37 | async def hget(self, name, key): 38 | """Returns the value of ``key`` within the hash ``name``""" 39 | return await self.execute_command('HGET', name, key) 40 | 41 | async def hgetall(self, name): 42 | """Returns a Python dict of the hash's name/value pairs""" 43 | return await self.execute_command('HGETALL', name) 44 | 45 | async def hincrby(self, name, key, amount=1): 46 | """Increments the value of ``key`` in hash ``name`` by ``amount``""" 47 | return await self.execute_command('HINCRBY', name, key, amount) 48 | 49 | async def hincrbyfloat(self, name, key, amount=1.0): 50 | """ 51 | Increments the value of ``key`` in hash ``name`` by floating 52 | ``amount`` 53 | """ 54 | return await self.execute_command('HINCRBYFLOAT', name, key, amount) 55 | 56 | async def hkeys(self, name): 57 | """Returns the list of keys 
within hash ``name``"""
58 |         return await self.execute_command('HKEYS', name)
59 |
60 |     async def hlen(self, name):
61 |         """Returns the number of elements in hash ``name``"""
62 |         return await self.execute_command('HLEN', name)
63 |
64 |     async def hset(self, name, key, value):
65 |         """
66 |         Sets ``key`` to ``value`` within hash ``name``
67 |         Returns 1 if HSET created a new field, otherwise 0
68 |         """
69 |         return await self.execute_command('HSET', name, key, value)
70 |
71 |     async def hsetnx(self, name, key, value):
72 |         """
73 |         Sets ``key`` to ``value`` within hash ``name`` if ``key`` does not
74 |         exist. Returns 1 if HSETNX created a field, otherwise 0.
75 |         """
76 |         return await self.execute_command('HSETNX', name, key, value)
77 |
78 |     async def hmset(self, name, mapping):
79 |         """
80 |         Sets key to value within hash ``name`` for each corresponding
81 |         key and value from the ``mapping`` dict.
82 |         """
83 |         if not mapping:
84 |             raise DataError("'hmset' with 'mapping' of length 0")
85 |         items = []
86 |         for pair in iteritems(mapping):
87 |             items.extend(pair)
88 |         return await self.execute_command('HMSET', name, *items)
89 |
90 |     async def hmget(self, name, keys, *args):
91 |         """Returns a list of values ordered identically to ``keys``"""
92 |         args = list_or_args(keys, args)
93 |         return await self.execute_command('HMGET', name, *args)
94 |
95 |     async def hvals(self, name):
96 |         """Returns the list of values within hash ``name``"""
97 |         return await self.execute_command('HVALS', name)
98 |
99 |     async def hscan(self, name, cursor=0, match=None, count=None):
100 |         """
101 |         Incrementally returns key/value slices in a hash. Also returns a
102 |         cursor pointing to the scan position.
103 |
104 |         ``match`` allows for filtering the keys by pattern
105 |
106 |         ``count`` hints at the number of keys to return per scan batch
107 |         """
108 |         pieces = [name, cursor]
109 |         if match is not None:
110 |             pieces.extend([b('MATCH'), match])
111 |         if count is not None:
112 |             pieces.extend([b('COUNT'), count])
113 |         return await self.execute_command('HSCAN', *pieces)
114 |
115 |     async def hstrlen(self, name, key):
116 |         """
117 |         Returns the string length of the value associated
118 |         with ``key`` in the hash ``name``.
119 |         If the key or the field does not exist, 0 is returned.
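
        For example (hash, field and value names are illustrative)::

            await r.hset('myhash', 'field', 'hello')
            await r.hstrlen('myhash', 'field')
            # 5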
120 | """ 121 | return await self.execute_command('HSTRLEN', name, key) 122 | 123 | 124 | class ClusterHashCommandMixin(HashCommandMixin): 125 | 126 | RESULT_CALLBACKS = { 127 | 'HSCAN': first_key 128 | } 129 | -------------------------------------------------------------------------------- /aredis/speedups.c: -------------------------------------------------------------------------------- 1 | #define PY_SSIZE_T_CLEAN 2 | #include 3 | #include 4 | 5 | static const uint16_t crc16tab[256]= { 6 | 0x0000,0x1021,0x2042,0x3063,0x4084,0x50a5,0x60c6,0x70e7, 7 | 0x8108,0x9129,0xa14a,0xb16b,0xc18c,0xd1ad,0xe1ce,0xf1ef, 8 | 0x1231,0x0210,0x3273,0x2252,0x52b5,0x4294,0x72f7,0x62d6, 9 | 0x9339,0x8318,0xb37b,0xa35a,0xd3bd,0xc39c,0xf3ff,0xe3de, 10 | 0x2462,0x3443,0x0420,0x1401,0x64e6,0x74c7,0x44a4,0x5485, 11 | 0xa56a,0xb54b,0x8528,0x9509,0xe5ee,0xf5cf,0xc5ac,0xd58d, 12 | 0x3653,0x2672,0x1611,0x0630,0x76d7,0x66f6,0x5695,0x46b4, 13 | 0xb75b,0xa77a,0x9719,0x8738,0xf7df,0xe7fe,0xd79d,0xc7bc, 14 | 0x48c4,0x58e5,0x6886,0x78a7,0x0840,0x1861,0x2802,0x3823, 15 | 0xc9cc,0xd9ed,0xe98e,0xf9af,0x8948,0x9969,0xa90a,0xb92b, 16 | 0x5af5,0x4ad4,0x7ab7,0x6a96,0x1a71,0x0a50,0x3a33,0x2a12, 17 | 0xdbfd,0xcbdc,0xfbbf,0xeb9e,0x9b79,0x8b58,0xbb3b,0xab1a, 18 | 0x6ca6,0x7c87,0x4ce4,0x5cc5,0x2c22,0x3c03,0x0c60,0x1c41, 19 | 0xedae,0xfd8f,0xcdec,0xddcd,0xad2a,0xbd0b,0x8d68,0x9d49, 20 | 0x7e97,0x6eb6,0x5ed5,0x4ef4,0x3e13,0x2e32,0x1e51,0x0e70, 21 | 0xff9f,0xefbe,0xdfdd,0xcffc,0xbf1b,0xaf3a,0x9f59,0x8f78, 22 | 0x9188,0x81a9,0xb1ca,0xa1eb,0xd10c,0xc12d,0xf14e,0xe16f, 23 | 0x1080,0x00a1,0x30c2,0x20e3,0x5004,0x4025,0x7046,0x6067, 24 | 0x83b9,0x9398,0xa3fb,0xb3da,0xc33d,0xd31c,0xe37f,0xf35e, 25 | 0x02b1,0x1290,0x22f3,0x32d2,0x4235,0x5214,0x6277,0x7256, 26 | 0xb5ea,0xa5cb,0x95a8,0x8589,0xf56e,0xe54f,0xd52c,0xc50d, 27 | 0x34e2,0x24c3,0x14a0,0x0481,0x7466,0x6447,0x5424,0x4405, 28 | 0xa7db,0xb7fa,0x8799,0x97b8,0xe75f,0xf77e,0xc71d,0xd73c, 29 | 0x26d3,0x36f2,0x0691,0x16b0,0x6657,0x7676,0x4615,0x5634, 30 | 0xd94c,0xc96d,0xf90e,0xe92f,0x99c8,0x89e9,0xb98a,0xa9ab, 31 | 0x5844,0x4865,0x7806,0x6827,0x18c0,0x08e1,0x3882,0x28a3, 32 | 0xcb7d,0xdb5c,0xeb3f,0xfb1e,0x8bf9,0x9bd8,0xabbb,0xbb9a, 33 | 0x4a75,0x5a54,0x6a37,0x7a16,0x0af1,0x1ad0,0x2ab3,0x3a92, 34 | 0xfd2e,0xed0f,0xdd6c,0xcd4d,0xbdaa,0xad8b,0x9de8,0x8dc9, 35 | 0x7c26,0x6c07,0x5c64,0x4c45,0x3ca2,0x2c83,0x1ce0,0x0cc1, 36 | 0xef1f,0xff3e,0xcf5d,0xdf7c,0xaf9b,0xbfba,0x8fd9,0x9ff8, 37 | 0x6e17,0x7e36,0x4e55,0x5e74,0x2e93,0x3eb2,0x0ed1,0x1ef0 38 | }; 39 | 40 | 41 | /* CRC16 implementation according to CCITT standards. 42 | * come from https://redis.io/topics/cluster-spec 43 | * 44 | * Note by @antirez: this is actually the XMODEM CRC 16 algorithm, using the 45 | * following parameters: 46 | * 47 | * Name : "XMODEM", also known as "ZMODEM", "CRC-16/ACORN" 48 | * Width : 16 bit 49 | * Poly : 1021 (That is actually x^16 + x^12 + x^5 + 1) 50 | * Initialization : 0000 51 | * Reflect Input byte : False 52 | * Reflect Output CRC : False 53 | * Xor constant to output CRC : 0000 54 | * Output for "123456789" : 31C3 55 | */ 56 | uint16_t _crc16(const char *buf, int len) { 57 | int counter; 58 | uint16_t crc = 0; 59 | for (counter = 0; counter < len; counter++) 60 | crc = (crc<<8) ^ crc16tab[((crc>>8) ^ *buf++)&0x00FF]; 61 | return crc; 62 | } 63 | 64 | 65 | unsigned int _hash_slot(char *key, int keylen) { 66 | int s, e; /* start-end indexes of { and } */ 67 | 68 | /* Search the first occurrence of '{'. */ 69 | for (s = 0; s < keylen; s++) 70 | if (key[s] == '{') break; 71 | 72 | /* No '{' ? Hash the whole key. 
This is the base case. */ 73 | if (s == keylen) return _crc16(key,keylen) & 16383; 74 | 75 | /* '{' found? Check if we have the corresponding '}'. */ 76 | for (e = s+1; e < keylen; e++) 77 | if (key[e] == '}') break; 78 | 79 | /* No '}' or nothing between {} ? Hash the whole key. */ 80 | if (e == keylen || e == s+1) return _crc16(key,keylen) & 16383; 81 | 82 | /* If we are here there is both a { and a } on its right. Hash 83 | * what is in the middle between { and }. */ 84 | return _crc16(key+s+1,e-s-1) & 16383; 85 | } 86 | 87 | 88 | static PyObject* crc16(PyObject* self, PyObject* args) { 89 | const char *buf; 90 | Py_ssize_t len; 91 | uint16_t crc = 0; 92 | PyObject* result; 93 | 94 | if (!PyArg_ParseTuple(args, "s#", &buf, &len)) { 95 | return NULL; 96 | } 97 | 98 | crc = _crc16(buf, (int)len); 99 | result = PyLong_FromLong(crc); 100 | if (!result) { 101 | return NULL; 102 | } 103 | return result; 104 | } 105 | 106 | 107 | static PyObject* hash_slot(PyObject* self, PyObject* args) { 108 | const char *key; 109 | int slot; 110 | Py_ssize_t keylen; 111 | PyObject* result; 112 | 113 | if (!PyArg_ParseTuple(args, "s#", &key, &keylen)) { 114 | return NULL; 115 | } 116 | 117 | slot = _hash_slot(key, (int)keylen); 118 | result = PyLong_FromLong(slot); 119 | if (!result) { 120 | return NULL; 121 | } 122 | return result; 123 | } 124 | 125 | 126 | 127 | static PyMethodDef methods[] = { 128 | {"crc16", crc16, METH_VARARGS, "crc16 used to hash key to slot"}, 129 | {"hash_slot", hash_slot, METH_VARARGS, "hash key to a redis cluster slot"}, 130 | {NULL, NULL, 0, NULL} 131 | }; 132 | 133 | 134 | static struct PyModuleDef speedupsmodule = { 135 | PyModuleDef_HEAD_INIT, 136 | "speedups", 137 | NULL, 138 | -1, 139 | methods 140 | }; 141 | 142 | 143 | PyMODINIT_FUNC 144 | PyInit_speedups(void) { 145 | return PyModule_Create(&speedupsmodule); 146 | } 147 | -------------------------------------------------------------------------------- /tests/client/test_scripting.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | 4 | from aredis.exceptions import (NoScriptError, 5 | ResponseError) 6 | from aredis.utils import b 7 | 8 | 9 | multiply_script = """ 10 | local value = redis.call('GET', KEYS[1]) 11 | value = tonumber(value) 12 | return value * ARGV[1]""" 13 | 14 | msgpack_hello_script = """ 15 | local message = cmsgpack.unpack(ARGV[1]) 16 | local name = message['name'] 17 | return "hello " .. name 18 | """ 19 | msgpack_hello_script_broken = """ 20 | local message = cmsgpack.unpack(ARGV[1]) 21 | local names = message['name'] 22 | return "hello " .. 
name
23 | """
24 |
25 |
26 | class TestScripting:
27 |
28 |     @pytest.mark.asyncio(forbid_global_loop=True)
29 |     async def test_eval(self, r):
30 |         await r.flushdb()
31 |         await r.set('a', 2)
32 |         # 2 * 3 == 6
33 |         assert await r.eval(multiply_script, 1, 'a', 3) == 6
34 |
35 |     @pytest.mark.asyncio(forbid_global_loop=True)
36 |     async def test_evalsha(self, r):
37 |         await r.set('a', 2)
38 |         sha = await r.script_load(multiply_script)
39 |         # 2 * 3 == 6
40 |         assert await r.evalsha(sha, 1, 'a', 3) == 6
41 |
42 |     @pytest.mark.asyncio(forbid_global_loop=True)
43 |     async def test_evalsha_script_not_loaded(self, r):
44 |         await r.set('a', 2)
45 |         sha = await r.script_load(multiply_script)
46 |         # remove the script from Redis's cache
47 |         await r.script_flush()
48 |         with pytest.raises(NoScriptError):
49 |             await r.evalsha(sha, 1, 'a', 3)
50 |
51 |     @pytest.mark.asyncio(forbid_global_loop=True)
52 |     async def test_script_loading(self, r):
53 |         # get the sha, then clear the cache
54 |         sha = await r.script_load(multiply_script)
55 |         await r.script_flush()
56 |         assert await r.script_exists(sha) == [False]
57 |         await r.script_load(multiply_script)
58 |         assert await r.script_exists(sha) == [True]
59 |
60 |     @pytest.mark.asyncio(forbid_global_loop=True)
61 |     async def test_script_object(self, r):
62 |         await r.script_flush()
63 |         await r.set('a', 2)
64 |         multiply = r.register_script(multiply_script)
65 |         precalculated_sha = multiply.sha
66 |         assert precalculated_sha
67 |         assert await r.script_exists(multiply.sha) == [False]
68 |         # Test second evalsha block (after NoScriptError)
69 |         assert await multiply.execute(keys=['a'], args=[3]) == 6
70 |         # At this point, the script should be loaded
71 |         assert await r.script_exists(multiply.sha) == [True]
72 |         # Test that the precalculated sha matches the one from redis
73 |         assert multiply.sha == precalculated_sha
74 |         # Test first evalsha block
75 |         assert await multiply.execute(keys=['a'], args=[3]) == 6
76 |
77 |     @pytest.mark.asyncio(forbid_global_loop=True)
78 |     async def test_script_object_in_pipeline(self, r):
79 |         await r.script_flush()
80 |         multiply = r.register_script(multiply_script)
81 |         precalculated_sha = multiply.sha
82 |         assert precalculated_sha
83 |         pipe = await r.pipeline()
84 |         await pipe.set('a', 2)
85 |         await pipe.get('a')
86 |         await multiply.execute(keys=['a'], args=[3], client=pipe)
87 |         assert await r.script_exists(multiply.sha) == [False]
88 |         # [SET worked, GET 'a', result of multiply script]
89 |         assert await pipe.execute() == [True, b('2'), 6]
90 |         # The script should have been loaded by pipe.execute()
91 |         assert await r.script_exists(multiply.sha) == [True]
92 |         # The precalculated sha should have been the correct one
93 |         assert multiply.sha == precalculated_sha
94 |
95 |         # purge the script from redis's cache and re-run the pipeline
96 |         # the multiply script should be reloaded by pipe.execute()
97 |         await r.script_flush()
98 |         pipe = await r.pipeline()
99 |         await pipe.set('a', 2)
100 |         await pipe.get('a')
101 |         await multiply.execute(keys=['a'], args=[3], client=pipe)
102 |         assert await r.script_exists(multiply.sha) == [False]
103 |         # [SET worked, GET 'a', result of multiply script]
104 |         assert await pipe.execute() == [True, b('2'), 6]
105 |         assert await r.script_exists(multiply.sha) == [True]
106 |
107 |     @pytest.mark.asyncio(forbid_global_loop=True)
108 |     async def test_eval_msgpack_pipeline_error_in_lua(self, r):
109 |         msgpack_hello = r.register_script(msgpack_hello_script)
110 |         assert msgpack_hello.sha
111 |
112 |         pipe = await r.pipeline()
113 |
114 |         # avoiding a dependency on msgpack, this is the output of
115 |         # msgpack.dumps({"name": "Joe"})
116 |         msgpack_message_1 = b'\x81\xa4name\xa3Joe'
117 |
118 |         await msgpack_hello.execute(args=[msgpack_message_1], client=pipe)
119 |
120 |         assert await r.script_exists(msgpack_hello.sha) == [False]
121 |         assert (await pipe.execute())[0] == b'hello Joe'
122 |         assert await r.script_exists(msgpack_hello.sha) == [True]
123 |
124 |         msgpack_hello_broken = r.register_script(msgpack_hello_script_broken)
125 |
126 |         await msgpack_hello_broken.execute(args=[msgpack_message_1], client=pipe)
127 |         with pytest.raises(ResponseError) as excinfo:
128 |             await pipe.execute()
129 |         assert excinfo.type == ResponseError
130 |
--------------------------------------------------------------------------------
/aredis/commands/sentinel.py:
--------------------------------------------------------------------------------
1 | import warnings
2 |
3 | from aredis.utils import (dict_merge, nativestr,
4 |                           list_keys_to_dict,
5 |                           NodeFlag, bool_ok)
6 |
7 | SENTINEL_STATE_TYPES = {
8 |     'can-failover-its-master': int,
9 |     'config-epoch': int,
10 |     'down-after-milliseconds': int,
11 |     'failover-timeout': int,
12 |     'info-refresh': int,
13 |     'last-hello-message': int,
14 |     'last-ok-ping-reply': int,
15 |     'last-ping-reply': int,
16 |     'last-ping-sent': int,
17 |     'master-link-down-time': int,
18 |     'master-port': int,
19 |     'num-other-sentinels': int,
20 |     'num-slaves': int,
21 |     'o-down-time': int,
22 |     'pending-commands': int,
23 |     'parallel-syncs': int,
24 |     'port': int,
25 |     'quorum': int,
26 |     'role-reported-time': int,
27 |     's-down-time': int,
28 |     'slave-priority': int,
29 |     'slave-repl-offset': int,
30 |     'voted-leader-epoch': int
31 | }
32 |
33 |
34 | def pairs_to_dict_typed(response, type_info):
35 |     it = iter(response)
36 |     result = {}
37 |     for key, value in zip(it, it):
38 |         if key in type_info:
39 |             try:
40 |                 value = type_info[key](value)
41 |             except Exception:
42 |                 # if for some reason the value can't be coerced, just use
43 |                 # the string value
44 |                 pass
45 |         result[key] = value
46 |     return result
47 |
48 |
49 | def parse_sentinel_state(item):
50 |     result = pairs_to_dict_typed(item, SENTINEL_STATE_TYPES)
51 |     flags = set(result['flags'].split(','))
52 |     for name, flag in (('is_master', 'master'), ('is_slave', 'slave'),
53 |                        ('is_sdown', 's_down'), ('is_odown', 'o_down'),
54 |                        ('is_sentinel', 'sentinel'),
55 |                        ('is_disconnected', 'disconnected'),
56 |                        ('is_master_down', 'master_down')):
57 |         result[name] = flag in flags
58 |     return result
59 |
60 |
61 | def parse_sentinel_master(response):
62 |     return parse_sentinel_state(map(nativestr, response))
63 |
64 |
65 | def parse_sentinel_masters(response):
66 |     result = {}
67 |     for item in response:
68 |         state = parse_sentinel_state(map(nativestr, item))
69 |         result[state['name']] = state
70 |     return result
71 |
72 |
73 | def parse_sentinel_slaves_and_sentinels(response):
74 |     return [parse_sentinel_state(map(nativestr, item)) for item in response]
75 |
76 |
77 | def parse_sentinel_get_master(response):
78 |     return response and (response[0], int(response[1])) or None
79 |
80 |
81 | class SentinelCommandMixin:
82 |     RESPONSE_CALLBACKS = {
83 |         'SENTINEL GET-MASTER-ADDR-BY-NAME': parse_sentinel_get_master,
84 |         'SENTINEL MASTER': parse_sentinel_master,
85 |         'SENTINEL MASTERS': parse_sentinel_masters,
86 |         'SENTINEL MONITOR': bool_ok,
87 |         'SENTINEL REMOVE': bool_ok,
88 |         'SENTINEL SENTINELS': parse_sentinel_slaves_and_sentinels,
89 |         'SENTINEL SET': bool_ok,
90 |         'SENTINEL SLAVES': parse_sentinel_slaves_and_sentinels,
91 |     }
92 |
93 |     async def sentinel(self, *args):
94 |         """Redis Sentinel's SENTINEL command."""
95 |         warnings.warn(DeprecationWarning('Use the individual sentinel_* methods'))
96 |
97 |     async def sentinel_get_master_addr_by_name(self, service_name):
98 |         """Returns a (host, port) pair for the given ``service_name``"""
99 |         return await self.execute_command('SENTINEL GET-MASTER-ADDR-BY-NAME',
100 |                                           service_name)
101 |
102 |     async def sentinel_master(self, service_name):
103 |         """Returns a dictionary containing the specified master's state."""
104 |         return await self.execute_command('SENTINEL MASTER', service_name)
105 |
106 |     async def sentinel_masters(self):
107 |         """Returns a list of dictionaries containing each master's state."""
108 |         return await self.execute_command('SENTINEL MASTERS')
109 |
110 |     async def sentinel_monitor(self, name, ip, port, quorum):
111 |         """Adds a new master to Sentinel to be monitored"""
112 |         return await self.execute_command('SENTINEL MONITOR', name, ip, port, quorum)
113 |
114 |     async def sentinel_remove(self, name):
115 |         """Removes a master from Sentinel's monitoring"""
116 |         return await self.execute_command('SENTINEL REMOVE', name)
117 |
118 |     async def sentinel_sentinels(self, service_name):
119 |         """Returns a list of sentinels for ``service_name``"""
120 |         return await self.execute_command('SENTINEL SENTINELS', service_name)
121 |
122 |     async def sentinel_set(self, name, option, value):
123 |         """Sets Sentinel monitoring parameters for a given master"""
124 |         return await self.execute_command('SENTINEL SET', name, option, value)
125 |
126 |     async def sentinel_slaves(self, service_name):
127 |         """Returns a list of slaves for ``service_name``"""
128 |         return await self.execute_command('SENTINEL SLAVES', service_name)
129 |
130 |
131 | class ClusterSentinelCommands(SentinelCommandMixin):
132 |     NODES_FLAGS = dict_merge(
133 |         list_keys_to_dict(
134 |             ['SENTINEL GET-MASTER-ADDR-BY-NAME', 'SENTINEL MASTER', 'SENTINEL MASTERS',
135 |              'SENTINEL MONITOR', 'SENTINEL REMOVE', 'SENTINEL SENTINELS', 'SENTINEL SET',
136 |              'SENTINEL SLAVES'], NodeFlag.BLOCKED
137 |         )
138 |     )
139 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | import re
3 | import sys
4 | import pathlib
5 |
6 |
7 | try:
8 |     from setuptools import setup
9 |     from setuptools.command.test import test as TestCommand
10 |     from setuptools.command.build_ext import build_ext
11 |     from setuptools.extension import Extension
12 |
13 |
14 |     class PyTest(TestCommand):
15 |         def finalize_options(self):
16 |             TestCommand.finalize_options(self)
17 |             self.test_args = []
18 |             self.test_suite = True
19 |
20 |         def run_tests(self):
21 |             # import here, because outside the eggs aren't loaded
22 |             import pytest
23 |             errno = pytest.main(self.test_args)
24 |             sys.exit(errno)
25 |
26 | except ImportError:
27 |
28 |     from distutils.core import setup, Extension
29 |     from distutils.command.build_ext import build_ext
30 |
31 |
32 |     def PyTest(x):
33 |         x
34 |
35 |
36 | class custom_build_ext(build_ext):
37 |     """
38 |     NOTE: This code was originally taken from tornado.
39 |
40 |     Allows C extension building to fail.
41 |
42 |     The C extension speeds up crc16, but is not essential.
43 |     """
44 |
45 |     warning_message = """
46 | ********************************************************************
47 |     {target} could not
48 |     be compiled. No C extensions are essential for aredis to run,
No C extensions are essential for aredis to run, 49 | although they do result in significant speed improvements for 50 | cluster slot hashing. 51 | {comment} 52 | 53 | Here are some hints for popular operating systems: 54 | 55 | If you are seeing this message on Linux you probably need to 56 | install GCC and/or the Python development package for your 57 | version of Python. 58 | 59 | Debian and Ubuntu users should issue the following command: 60 | 61 | $ sudo apt-get install build-essential python-dev 62 | 63 | RedHat and CentOS users should issue the following command: 64 | 65 | $ sudo yum install gcc python-devel 66 | 67 | Fedora users should issue the following command: 68 | 69 | $ sudo dnf install gcc python-devel 70 | 71 | If you are seeing this message on OSX please read the documentation 72 | here: 73 | 74 | https://api.mongodb.org/python/current/installation.html#osx 75 | ******************************************************************** 76 | """ 77 | 78 | def run(self): 79 | try: 80 | super().run() 81 | except Exception as e: 82 | self.warn(e) 83 | self.warn( 84 | self.warning_message.format( 85 | target="Extension modules", 86 | comment=( 87 | "There is an issue with your platform configuration " 88 | "- see above." 89 | ) 90 | ) 91 | ) 92 | 93 | def build_extension(self, ext): 94 | try: 95 | super().build_extension(ext) 96 | except Exception as e: 97 | self.warn(e) 98 | self.warn( 99 | self.warning_message.format( 100 | target="The {} extension ".format(ext.name), 101 | comment=( 102 | "The output above this warning shows how the " 103 | "compilation failed." 104 | ) 105 | ) 106 | ) 107 | 108 | 109 | _ROOT_DIR = pathlib.Path(__file__).parent 110 | 111 | with open(str(_ROOT_DIR / 'README.rst')) as f: 112 | long_description = f.read() 113 | 114 | with open(str(_ROOT_DIR / 'aredis' / '__init__.py')) as f: 115 | str_regex = r"['\"]([^'\"]*)['\"]" 116 | try: 117 | version = re.findall( 118 | r"^__version__ = {}$".format(str_regex), f.read(), re.MULTILINE 119 | )[0] 120 | except IndexError: 121 | raise RuntimeError("Unable to find version in __init__.py") 122 | 123 | setup( 124 | name='aredis', 125 | version=version, 126 | description='Python async client for Redis key-value store', 127 | long_description=long_description, 128 | url='https://github.com/NoneGG/aredis', 129 | author='Jason Chen', 130 | author_email='847671011@qq.com', 131 | maintainer='Jason Chen', 132 | maintainer_email='847671011@qq.com', 133 | keywords=['Redis', 'key-value store', 'asyncio'], 134 | license='MIT', 135 | packages=['aredis', 'aredis.commands'], 136 | python_requires=">=3.5", 137 | extras_require={'hiredis': ['hiredis>=0.2.0']}, 138 | tests_require=['pytest', 139 | 'pytest_asyncio>=0.5.0'], 140 | cmdclass={ 141 | 'test': PyTest, 142 | 'build_ext': custom_build_ext 143 | }, 144 | classifiers=[ 145 | 'Development Status :: 5 - Production/Stable', 146 | 'Intended Audience :: Developers', 147 | 'License :: OSI Approved :: MIT License', 148 | 'Operating System :: OS Independent', 149 | 'Programming Language :: Python', 150 | 'Programming Language :: Python :: 3.5', 151 | 'Programming Language :: Python :: 3.6', 152 | 'Programming Language :: Python :: 3.7', 153 | 'Programming Language :: Python :: 3.8' 154 | ], 155 | ext_modules=[ 156 | Extension(name='aredis.speedups', 157 | sources=['aredis/speedups.c']), 158 | ], 159 | # The good news is that the standard library always 160 | # takes precedence over site packages, 161 | # so even if a local contextvars module is installed, 162 | # the one from the standard
library will be used. 163 | install_requires=[ 164 | 'contextvars;python_version<"3.7"' 165 | ] 166 | ) 167 | -------------------------------------------------------------------------------- /docs/source/conf.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding: utf-8 -*- 3 | # 4 | # aredis documentation build configuration file, created by 5 | # sphinx-quickstart on Mon May 8 17:29:14 2017. 6 | # 7 | # This file is execfile()d with the current directory set to its 8 | # containing dir. 9 | # 10 | # Note that not all possible configuration values are present in this 11 | # autogenerated file. 12 | # 13 | # All configuration values have a default; values that are commented out 14 | # serve to show the default. 15 | 16 | # If extensions (or modules to document with autodoc) are in another directory, 17 | # add these directories to sys.path here. If the directory is relative to the 18 | # documentation root, use os.path.abspath to make it absolute, like shown here. 19 | # 20 | # import os 21 | # import sys 22 | # sys.path.insert(0, os.path.abspath('.')) 23 | 24 | 25 | # -- General configuration ------------------------------------------------ 26 | 27 | # If your documentation needs a minimal Sphinx version, state it here. 28 | # 29 | # needs_sphinx = '1.0' 30 | 31 | # Add any Sphinx extension module names here, as strings. They can be 32 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 33 | # ones. 34 | extensions = ['sphinx.ext.autodoc', 35 | 'sphinx.ext.doctest', 36 | 'sphinx.ext.intersphinx', 37 | 'sphinx.ext.viewcode'] 38 | 39 | # Add any paths that contain templates here, relative to this directory. 40 | templates_path = ['_templates'] 41 | 42 | # The suffix(es) of source filenames. 43 | # You can specify multiple suffix as a list of string: 44 | # 45 | # source_suffix = ['.rst', '.md'] 46 | source_suffix = '.rst' 47 | 48 | # The master toctree document. 49 | master_doc = 'index' 50 | 51 | # General information about the project. 52 | project = 'aredis' 53 | copyright = '2017, NoneGG' 54 | author = 'NoneGG' 55 | 56 | # The version info for the project you're documenting, acts as replacement for 57 | # |version| and |release|, also used in various other places throughout the 58 | # built documents. 59 | # 60 | # The short X.Y version. 61 | version = '1.0.7' 62 | # The full version, including alpha/beta/rc tags. 63 | release = '1.0.7' 64 | 65 | # The language for content autogenerated by Sphinx. Refer to documentation 66 | # for a list of supported languages. 67 | # 68 | # This is also used if you do content translation via gettext catalogs. 69 | # Usually you set "language" from the command line for these cases. 70 | language = None 71 | 72 | # List of patterns, relative to source directory, that match files and 73 | # directories to ignore when looking for source files. 74 | # This patterns also effect to html_static_path and html_extra_path 75 | exclude_patterns = [] 76 | 77 | # The name of the Pygments (syntax highlighting) style to use. 78 | pygments_style = 'sphinx' 79 | 80 | # If true, `todo` and `todoList` produce output, else they produce nothing. 81 | todo_include_todos = False 82 | 83 | 84 | # -- Options for HTML output ---------------------------------------------- 85 | 86 | # The theme to use for HTML and HTML Help pages. See the documentation for 87 | # a list of builtin themes. 
88 | # 89 | html_theme = 'alabaster' 90 | 91 | # Theme options are theme-specific and customize the look and feel of a theme 92 | # further. For a list of options available for each theme, see the 93 | # documentation. 94 | # 95 | # html_theme_options = {} 96 | 97 | # Add any paths that contain custom static files (such as style sheets) here, 98 | # relative to this directory. They are copied after the builtin static files, 99 | # so a file named "default.css" will overwrite the builtin "default.css". 100 | html_static_path = ['_static'] 101 | 102 | 103 | # -- Options for HTMLHelp output ------------------------------------------ 104 | 105 | # Output file base name for HTML help builder. 106 | htmlhelp_basename = 'aredisdoc' 107 | 108 | 109 | # -- Options for LaTeX output --------------------------------------------- 110 | 111 | latex_elements = { 112 | # The paper size ('letterpaper' or 'a4paper'). 113 | # 114 | # 'papersize': 'letterpaper', 115 | 116 | # The font size ('10pt', '11pt' or '12pt'). 117 | # 118 | # 'pointsize': '10pt', 119 | 120 | # Additional stuff for the LaTeX preamble. 121 | # 122 | # 'preamble': '', 123 | 124 | # Latex figure (float) alignment 125 | # 126 | # 'figure_align': 'htbp', 127 | } 128 | 129 | # Grouping the document tree into LaTeX files. List of tuples 130 | # (source start file, target name, title, 131 | # author, documentclass [howto, manual, or own class]). 132 | latex_documents = [ 133 | (master_doc, 'aredis.tex', 'aredis Documentation', 134 | 'NoneGG', 'manual'), 135 | ] 136 | 137 | 138 | # -- Options for manual page output --------------------------------------- 139 | 140 | # One entry per manual page. List of tuples 141 | # (source start file, name, description, authors, manual section). 142 | man_pages = [ 143 | (master_doc, 'aredis', 'aredis Documentation', 144 | [author], 1) 145 | ] 146 | 147 | 148 | # -- Options for Texinfo output ------------------------------------------- 149 | 150 | # Grouping the document tree into Texinfo files. List of tuples 151 | # (source start file, target name, title, author, 152 | # dir menu entry, description, category) 153 | texinfo_documents = [ 154 | (master_doc, 'aredis', 'aredis Documentation', 155 | author, 'aredis', 'One line description of project.', 156 | 'Miscellaneous'), 157 | ] 158 | 159 | 160 | 161 | 162 | # Example configuration for intersphinx: refer to the Python standard library. 163 | intersphinx_mapping = {'https://docs.python.org/': None} 164 | -------------------------------------------------------------------------------- /docs/source/pipelines.rst: -------------------------------------------------------------------------------- 1 | Pipelines 2 | ========= 3 | 4 | Pipelines are a subclass of the base Redis class that provide support for 5 | buffering multiple commands to the server in a single request. They can be used 6 | to dramatically increase the performance of groups of commands by reducing the 7 | number of back-and-forth TCP packets between the client and server. 8 | 9 | Pipelines are quite simple to use: 10 | 11 | 12 | .. code-block:: python 13 | 14 | async with await client.pipeline() as pipe: 15 | await pipe.delete('bar') 16 | await pipe.set('bar', 'foo') 17 | await pipe.execute() # needs to be called explicitly 18 | 19 | 20 | Here are more examples: 21 | 22 | 23 | .. 
code-block:: python 24 | 25 | async def example(client): 26 | async with await client.pipeline(transaction=True) as pipe: 27 | # will return self to send another command 28 | pipe = await (await pipe.flushdb()).set('foo', 'bar') 29 | # can also directly send command 30 | await pipe.set('bar', 'foo') 31 | # commands will be buffered 32 | await pipe.keys('*') 33 | res = await pipe.execute() 34 | # results will be in the order corresponding to your commands 35 | assert res == [True, True, True, [b'bar', b'foo']] 36 | 37 | For ease of use, all commands being buffered into the pipeline return the 38 | pipeline object itself, which enables you to chain calls as in the example above. 39 | 40 | In addition, pipelines can also ensure the buffered commands are executed 41 | atomically as a group. This happens by default. If you want to disable the 42 | atomic nature of a pipeline but still want to buffer commands, you can turn 43 | off transactions. 44 | 45 | .. code-block:: python 46 | 47 | pipe = r.pipeline(transaction=False) 48 | 49 | A common issue occurs when requiring atomic transactions but needing to 50 | retrieve values from Redis beforehand for use within the transaction. For instance, 51 | let's assume that the INCR command didn't exist and we need to build an atomic 52 | version of INCR in Python. 53 | 54 | The completely naive implementation could GET the value, increment it in 55 | Python, and SET the new value back. However, this is not atomic because 56 | multiple clients could be doing this at the same time, each getting the same 57 | value from GET. 58 | 59 | Enter the WATCH command. WATCH provides the ability to monitor one or more keys 60 | prior to starting a transaction. If any of those keys change prior to the 61 | execution of that transaction, the entire transaction will be canceled and a 62 | WatchError will be raised. To implement our own client-side INCR command, we 63 | could do something like this: 64 | 65 | .. code-block:: python 66 | 67 | async def example(): 68 | async with await r.pipeline() as pipe: 69 | while True: 70 | try: 71 | # put a WATCH on the key that holds our sequence value 72 | await pipe.watch('OUR-SEQUENCE-KEY') 73 | # after WATCHing, the pipeline is put into immediate execution 74 | # mode until we tell it to start buffering commands again. 75 | # this allows us to get the current value of our sequence 76 | current_value = await pipe.get('OUR-SEQUENCE-KEY') 77 | next_value = int(current_value) + 1 78 | # now we can put the pipeline back into buffered mode with MULTI 79 | pipe.multi() 80 | pipe.set('OUR-SEQUENCE-KEY', next_value) 81 | # and finally, execute the pipeline (the set command) 82 | await pipe.execute() 83 | # if a WatchError wasn't raised during execution, everything 84 | # we just did happened atomically. 85 | break 86 | except WatchError: 87 | # another client must have changed 'OUR-SEQUENCE-KEY' between 88 | # the time we started WATCHing it and the pipeline's execution. 89 | # our best bet is to just retry. 90 | continue 91 | 92 | Note that, because the Pipeline must bind to a single connection for the 93 | duration of a WATCH, care must be taken to ensure that the connection is 94 | returned to the connection pool by calling the reset() method. If the 95 | Pipeline is used as a context manager (as in the example above) reset() 96 | will be called automatically. Of course you can do this the manual way by 97 | explicitly calling reset(): 98 | 99 | .. code-block:: python 100 | 101 | async def example(): 102 | async with await r.pipeline() as pipe: 103 | while True: 104 | try: 105 | await pipe.watch('OUR-SEQUENCE-KEY') 106 | ... 107 | await pipe.execute() 108 | break 109 | except WatchError: 110 | continue 111 | finally: 112 | await pipe.reset() 113 | 114 | A convenience method named "transaction" exists to handle all the 115 | boilerplate of watching keys and retrying on watch errors. It takes a callable that 116 | should expect a single parameter, a pipeline object, and any number of keys to 117 | be WATCHed. Our client-side INCR command above can be written like this, 118 | which is much easier to read: 119 | 120 | .. code-block:: python 121 | 122 | async def client_side_incr(pipe): 123 | current_value = await pipe.get('OUR-SEQUENCE-KEY') 124 | next_value = int(current_value) + 1 125 | pipe.multi() 126 | await pipe.set('OUR-SEQUENCE-KEY', next_value) 127 | 128 | await r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY') 129 | # [True]
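130 | 131 | The list returned by ``transaction()`` is the result of the pipeline's final 132 | ``execute()`` call, hence the ``[True]`` above (the result of the one 133 | buffered SET). 134 | 135 | As a minimal closing sketch (assuming a ``StrictRedis`` instance ``r`` as in 136 | the examples above), a non-transactional pipeline is used in exactly the same 137 | way; only the ``transaction=False`` flag changes: 138 | 139 | .. code-block:: python 140 | 141 | async def example_buffered_only(r): 142 | async with await r.pipeline(transaction=False) as pipe: 143 | await pipe.set('foo', 'bar') 144 | await pipe.get('foo') 145 | # results still come back in command order 146 | assert await pipe.execute() == [True, b'bar']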
147 | -------------------------------------------------------------------------------- /tests/cluster/test_utils.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | from __future__ import with_statement 5 | 6 | # rediscluster imports 7 | from aredis.commands.cluster import parse_cluster_slots 8 | from aredis.exceptions import ( 9 | RedisClusterException, ClusterDownError 10 | ) 11 | from aredis.utils import ( 12 | list_keys_to_dict, 13 | b, dict_merge, 14 | blocked_command, 15 | merge_result, 16 | first_key, 17 | clusterdown_wrapper, 18 | ) 19 | 20 | # 3rd party imports 21 | import pytest 22 | 23 | 24 | def test_parse_cluster_slots(): 25 | """ 26 | Example raw output from a redis cluster. The output is from a redis 3.2.x node 27 | that includes the id in the response. The tests below that do not include the id 28 | validate that the code is compatible with redis versions that do not contain 29 | that value in the response from the server.
30 | 31 | 127.0.0.1:10000> cluster slots 32 | 1) 1) (integer) 5461 33 | 2) (integer) 10922 34 | 3) 1) "10.0.0.1" 35 | 2) (integer) 10000 36 | 3) "3588b4cf9fc72d57bb262a024747797ead0cf7ea" 37 | 4) 1) "10.0.0.4" 38 | 2) (integer) 10000 39 | 3) "a72c02c7d85f4ec3145ab2c411eefc0812aa96b0" 40 | 2) 1) (integer) 10923 41 | 2) (integer) 16383 42 | 3) 1) "10.0.0.2" 43 | 2) (integer) 10000 44 | 3) "ffd36d8d7cb10d813f81f9662a835f6beea72677" 45 | 4) 1) "10.0.0.5" 46 | 2) (integer) 10000 47 | 3) "5c15b69186017ddc25ebfac81e74694fc0c1a160" 48 | 3) 1) (integer) 0 49 | 2) (integer) 5460 50 | 3) 1) "10.0.0.3" 51 | 2) (integer) 10000 52 | 3) "069cda388c7c41c62abe892d9e0a2d55fbf5ffd5" 53 | 4) 1) "10.0.0.6" 54 | 2) (integer) 10000 55 | 3) "dc152a08b4cf1f2a0baf775fb86ad0938cb907dc" 56 | """ 57 | 58 | extended_mock_response = [ 59 | [0, 5460, ['172.17.0.2', 7000, 'ffd36d8d7cb10d813f81f9662a835f6beea72677'], ['172.17.0.2', 7003, '5c15b69186017ddc25ebfac81e74694fc0c1a160']], 60 | [5461, 10922, ['172.17.0.2', 7001, '069cda388c7c41c62abe892d9e0a2d55fbf5ffd5'], ['172.17.0.2', 7004, 'dc152a08b4cf1f2a0baf775fb86ad0938cb907dc']], 61 | [10923, 16383, ['172.17.0.2', 7002, '3588b4cf9fc72d57bb262a024747797ead0cf7ea'], ['172.17.0.2', 7005, 'a72c02c7d85f4ec3145ab2c411eefc0812aa96b0']] 62 | ] 63 | 64 | parse_cluster_slots(extended_mock_response) 65 | 66 | extended_mock_binary_response = [ 67 | [0, 5460, [b('172.17.0.2'), 7000, b('ffd36d8d7cb10d813f81f9662a835f6beea72677')], [b('172.17.0.2'), 7003, b('5c15b69186017ddc25ebfac81e74694fc0c1a160')]], 68 | [5461, 10922, [b('172.17.0.2'), 7001, b('069cda388c7c41c62abe892d9e0a2d55fbf5ffd5')], [b('172.17.0.2'), 7004, b('dc152a08b4cf1f2a0baf775fb86ad0938cb907dc')]], 69 | [10923, 16383, [b('172.17.0.2'), 7002, b('3588b4cf9fc72d57bb262a024747797ead0cf7ea')], [b('172.17.0.2'), 7005, b('a72c02c7d85f4ec3145ab2c411eefc0812aa96b0')]] 70 | ] 71 | 72 | extended_mock_parsed = { 73 | (0, 5460): [ 74 | {'host': b'172.17.0.2', 75 | 'node_id': b'ffd36d8d7cb10d813f81f9662a835f6beea72677', 76 | 'port': 7000, 77 | 'server_type': 'master'}, 78 | {'host': b'172.17.0.2', 79 | 'node_id': b'5c15b69186017ddc25ebfac81e74694fc0c1a160', 80 | 'port': 7003, 81 | 'server_type': 'slave'}], 82 | (5461, 10922): [ 83 | {'host': b'172.17.0.2', 84 | 'node_id': b'069cda388c7c41c62abe892d9e0a2d55fbf5ffd5', 85 | 'port': 7001, 86 | 'server_type': 'master'}, 87 | {'host': b'172.17.0.2', 88 | 'node_id': b'dc152a08b4cf1f2a0baf775fb86ad0938cb907dc', 89 | 'port': 7004, 90 | 'server_type': 'slave'}], 91 | (10923, 16383): [ 92 | {'host': b'172.17.0.2', 93 | 'node_id': b'3588b4cf9fc72d57bb262a024747797ead0cf7ea', 94 | 'port': 7002, 95 | 'server_type': 'master'}, 96 | {'host': b'172.17.0.2', 97 | 'node_id': b'a72c02c7d85f4ec3145ab2c411eefc0812aa96b0', 98 | 'port': 7005, 99 | 'server_type': 'slave'}] 100 | } 101 | 102 | assert parse_cluster_slots(extended_mock_binary_response) == extended_mock_parsed 103 | 104 | 105 | def test_list_keys_to_dict(): 106 | def mock_true(): 107 | return True 108 | assert list_keys_to_dict(["FOO", "BAR"], mock_true) == {"FOO": mock_true, "BAR": mock_true} 109 | 110 | 111 | def test_dict_merge(): 112 | a = {"a": 1} 113 | b = {"b": 2} 114 | c = {"c": 3} 115 | assert dict_merge(a, b, c) == {"a": 1, "b": 2, "c": 3} 116 | 117 | 118 | def test_dict_merge_empty_list(): 119 | assert dict_merge([]) == {} 120 | 121 | 122 | def test_blocked_command(): 123 | with pytest.raises(RedisClusterException) as ex: 124 | blocked_command(None, "SET") 125 | assert str(ex.value) == "Command: SET is blocked in redis cluster mode" 
126 | 127 | 128 | def test_merge_result(): 129 | assert merge_result({"a": [1, 2, 3], "b": [4, 5, 6]}) == [1, 2, 3, 4, 5, 6] 130 | assert merge_result({"a": [1, 2, 3], "b": [1, 2, 3]}) == [1, 2, 3] 131 | 132 | 133 | def test_merge_result_value_error(): 134 | with pytest.raises(ValueError): 135 | merge_result([]) 136 | 137 | 138 | def test_first_key(): 139 | assert first_key({"foo": 1}) == 1 140 | 141 | with pytest.raises(RedisClusterException) as ex: 142 | first_key({"foo": 1, "bar": 2}) 143 | assert str(ex.value).startswith("More then 1 result from command") 144 | 145 | 146 | def test_first_key_value_error(): 147 | with pytest.raises(ValueError): 148 | first_key(None) 149 | 150 | @pytest.mark.asyncio(forbid_global_loop=True) 151 | async def test_clusterdown_wrapper(): 152 | @clusterdown_wrapper 153 | def bad_func(): 154 | raise ClusterDownError("CLUSTERDOWN") 155 | 156 | with pytest.raises(ClusterDownError) as cex: 157 | await bad_func() 158 | assert str(cex.value).startswith("CLUSTERDOWN error. Unable to rebuild the cluster") 159 | -------------------------------------------------------------------------------- /tests/cluster/test_scripting.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | from __future__ import with_statement 5 | 6 | # rediscluster imports 7 | import asyncio 8 | from aredis.exceptions import RedisClusterException, NoScriptError, ResponseError 9 | from aredis.utils import b 10 | 11 | # 3rd party imports 12 | import pytest 13 | 14 | 15 | multiply_script = """ 16 | local value = redis.call('GET', KEYS[1]) 17 | value = tonumber(value) 18 | return value * ARGV[1]""" 19 | 20 | msgpack_hello_script = """ 21 | local message = cmsgpack.unpack(ARGV[1]) 22 | local name = message['name'] 23 | return "hello " .. name 24 | """ 25 | msgpack_hello_script_broken = """ 26 | local message = cmsgpack.unpack(ARGV[1]) 27 | local names = message['name'] 28 | return "hello " .. name 29 | """ 30 | 31 | 32 | class TestScripting: 33 | 34 | async def reset_scripts(self, r): 35 | await r.script_flush() 36 | 37 | @pytest.mark.asyncio() 38 | async def test_eval(self, r): 39 | await r.set('a', 2) 40 | # 2 * 3 == 6 41 | assert await r.eval(multiply_script, 1, 'a', 3) == 6 42 | 43 | @pytest.mark.asyncio() 44 | async def test_eval_same_slot(self, r): 45 | await r.set('A{foo}', 2) 46 | await r.set('B{foo}', 4) 47 | # 2 * 4 == 8 48 | 49 | script = """ 50 | local value = redis.call('GET', KEYS[1]) 51 | local value2 = redis.call('GET', KEYS[2]) 52 | return value * value2 53 | """ 54 | result = await r.eval(script, 2, 'A{foo}', 'B{foo}') 55 | assert result == 8 56 | 57 | @pytest.mark.asyncio() 58 | async def test_eval_crossslot(self, r): 59 | """ 60 | This test assumes that {foo} and {bar} will not go to the same 61 | server when used. In 3 masters + 3 slaves config this should pass. 
62 | """ 63 | await r.set('A{foo}', 2) 64 | await r.set('B{bar}', 4) 65 | # 2 * 4 == 8 66 | 67 | script = """ 68 | local value = redis.call('GET', KEYS[1]) 69 | local value2 = redis.call('GET', KEYS[2]) 70 | return value * value2 71 | """ 72 | with pytest.raises(RedisClusterException): 73 | await r.eval(script, 2, 'A{foo}', 'B{bar}') 74 | 75 | @pytest.mark.asyncio() 76 | async def test_evalsha(self, r): 77 | await r.set('a', 2) 78 | sha = await r.script_load(multiply_script) 79 | # 2 * 3 == 6 80 | assert await r.evalsha(sha, 1, 'a', 3) == 6 81 | 82 | @pytest.mark.asyncio() 83 | async def test_evalsha_script_not_loaded(self, r): 84 | await r.set('a', 2) 85 | sha = await r.script_load(multiply_script) 86 | # remove the script from Redis's cache 87 | await r.script_flush() 88 | with pytest.raises(NoScriptError): 89 | await r.evalsha(sha, 1, 'a', 3) 90 | 91 | @pytest.mark.asyncio() 92 | async def test_script_loading(self, r): 93 | # get the sha, then clear the cache 94 | sha = await r.script_load(multiply_script) 95 | await r.script_flush() 96 | assert await r.script_exists(sha) == [False] 97 | await r.script_load(multiply_script) 98 | assert await r.script_exists(sha) == [True] 99 | 100 | @pytest.mark.asyncio() 101 | async def test_script_object(self, r): 102 | await r.set('a', 2) 103 | multiply = r.register_script(multiply_script) 104 | assert multiply.sha == '29cdf3e36c89fa05d7e6d6b9734b342ab15c9ea7' 105 | # test evalsha fail -> script load + retry 106 | assert await multiply.execute(keys=['a'], args=[3]) == 6 107 | assert multiply.sha 108 | assert await r.script_exists(multiply.sha) == [True] 109 | # test first evalsha 110 | assert await multiply.execute(keys=['a'], args=[3]) == 6 111 | 112 | @pytest.mark.asyncio(forbid_global_loop=True) 113 | @pytest.mark.xfail(reason="Not Yet Implemented") 114 | async def test_script_object_in_pipeline(self, r): 115 | multiply = await r.register_script(multiply_script) 116 | assert not multiply.sha 117 | pipe = r.pipeline() 118 | await pipe.set('a', 2) 119 | await pipe.get('a') 120 | multiply(keys=['a'], args=[3], client=pipe) 121 | # even though the pipeline wasn't executed yet, we made sure the 122 | # script was loaded and got a valid sha 123 | assert multiply.sha 124 | assert await r.script_exists(multiply.sha) == [True] 125 | # [SET worked, GET 'a', result of multiple script] 126 | assert await pipe.execute() == [True, b('2'), 6] 127 | 128 | # purge the script from redis's cache and re-run the pipeline 129 | # the multiply script object knows it's sha, so it shouldn't get 130 | # reloaded until pipe.execute() 131 | await r.script_flush() 132 | pipe = await r.pipeline() 133 | await pipe.set('a', 2) 134 | await pipe.get('a') 135 | assert multiply.sha 136 | multiply(keys=['a'], args=[3], client=pipe) 137 | assert await r.script_exists(multiply.sha) == [False] 138 | # [SET worked, GET 'a', result of multiple script] 139 | assert await pipe.execute() == [True, b('2'), 6] 140 | 141 | @pytest.mark.asyncio(forbid_global_loop=True) 142 | @pytest.mark.xfail(reason="Not Yet Implemented") 143 | async def test_eval_msgpack_pipeline_error_in_lua(self, r): 144 | msgpack_hello = await r.register_script(msgpack_hello_script) 145 | assert not msgpack_hello.sha 146 | 147 | pipe = r.pipeline() 148 | 149 | # avoiding a dependency to msgpack, this is the output of 150 | # msgpack.dumps({"name": "joe"}) 151 | msgpack_message_1 = b'\x81\xa4name\xa3Joe' 152 | 153 | msgpack_hello(args=[msgpack_message_1], client=pipe) 154 | 155 | assert await 
r.script_exists(msgpack_hello.sha) == [True] 156 | assert (await pipe.execute())[0] == b'hello Joe' 157 | 158 | msgpack_hello_broken = await r.register_script(msgpack_hello_script_broken) 159 | 160 | msgpack_hello_broken(args=[msgpack_message_1], client=pipe) 161 | with pytest.raises(ResponseError) as excinfo: 162 | await pipe.execute() 163 | assert excinfo.type == ResponseError 164 | -------------------------------------------------------------------------------- /docs/source/release_notes.rst: -------------------------------------------------------------------------------- 1 | Release Notes 2 | ============= 3 | 4 | master 5 | ------ 6 | 7 | * add TCP Keep-alive support by passing the `socket_keepalive=True` 8 | option. Finer-grained control can be achieved using the 9 | `socket_keepalive_options` option which expects a dictionary with any of 10 | the keys (`socket.TCP_KEEPIDLE`, `socket.TCP_KEEPCNT`, `socket.TCP_KEEPINTVL`) 11 | and integers for values. Thanks Stefan Tjarks. 12 | 13 | 1.0.1 14 | ----- 15 | 16 | * add scan_iter, sscan_iter, hscan_iter, zscan_iter and corresponding unit tests 17 | * fix bug of `PubSub.run_in_thread` 18 | * add more examples 19 | * change `Script.register` to `Script.execute` 20 | 21 | 1.0.2 22 | ----- 23 | * add support for cache (Cache and HerdCache class) 24 | * fix bug of `PubSub.run_in_thread` 25 | 26 | 1.0.4 27 | ----- 28 | * add support for commands `pubsub channels`, `pubsub numpat` and `pubsub numsub` 29 | * add support for command `client pause` 30 | * reconstitution of commands to make development easier (which is transparent to the user) 31 | 32 | 1.0.5 33 | ----- 34 | * fix bug in setup.py when using pip to install aredis 35 | 36 | 1.0.6 37 | ----- 38 | * bitfield set/get/incrby/overflow supported 39 | * new command `hstrlen` supported 40 | * new command `unlink` supported 41 | * new command `touch` supported 42 | 43 | 1.0.7 44 | ----- 45 | * introduce loop argument to aredis 46 | * add support for command `cluster slots` 47 | * add support for redis cluster 48 | 49 | 1.0.8 50 | ----- 51 | * fix initialization bug of redis cluster client 52 | * add example to explain how to use `client reply on | off | skip` 53 | 54 | 1.0.9 55 | ----- 56 | * fix bug of pubsub: in some environments an AssertionError was raised because a connection was used again after its reader stream had been fed EOF 57 | * add response decoding related options (`encoding` & `decode_responses`) to make the client easier to use 58 | * add support for command `cluster forget` 59 | * add support for command option `spop count` 60 | 61 | 1.1.0 62 | ----- 63 | * sync optimization of scripting from redis-py made by `bgreenberg `_ `related pull request `_ 64 | * sync bug fix of `geopos` from redis-py made by `categulario `_ `related pull request `_ 65 | * fix bug which prevented pipeline callback functions from being executed 66 | * fix error caused by byte decode issues in sentinel 67 | * add basic transaction support for single node in cluster 68 | * fix bug of get_random_connection reported by myrfy001 69 | 70 | 1.1.1 71 | ----- 72 | * fix bug: a connection with an unread response being released to the connection pool would lead to a parse error; now this kind of connection is destroyed directly. `related issue `_ 73 | * fix bug: remove the Connection.can_read check which may block while awaiting a pubsub message. The Connection.can_read api will be deprecated in the next release.
`related issue `_ 74 | * add c extension to speed up crc16, which speeds up cluster slot hashing 75 | * add error handling for asyncio.futures.CancelledError, which may cause errors in response parsing. 76 | * sync optimization of client list made by swilly22 from redis-py 77 | * add support for distributed lock using redis cluster 78 | 79 | 1.1.2 80 | ----- 81 | * fix bug: redis command encoding bug 82 | * optimization: sync change on acquiring lock from redis-py 83 | * fix bug: decrement connection count on connection disconnected 84 | * fix bug: optimize the code that processes single node slots 85 | * fix bug: initialization error of the aws cluster client caused by an inappropriate function list being used 86 | * fix bug: use `ssl_context` instead of ssl_keyfile,ssl_certfile,ssl_cert_reqs,ssl_ca_certs in the initialization of connection_pool 87 | 88 | 1.1.3 89 | ----- 90 | * allow use of zadd options for zadd in sorted sets 91 | * fix bug: use inspect.isawaitable instead of typing.Awaitable to judge if an object is awaitable 92 | * fix bug: implicit disconnection on cancelled error (#84) 93 | * new: add support for `streams` (including commands not officially released, see `streams `_) 94 | 95 | 1.1.4 96 | ----- 97 | * fix bug: fix cluster port parsing for redis 4+ (node info) 98 | * fix bug: wrong parse method of scan_iter in cluster mode 99 | * fix bug: when using "zrange" with the "desc=True" parameter, it returned a coroutine without "await" 100 | * fix bug: do not use stream_timeout in the PubSubWorkerThread 101 | * opt: add socket_keepalive options 102 | * new: add ssl param in get_redis_link to support ssl mode 103 | * new: add ssl_context to StrictRedis constructor and make it higher priority than the ssl parameter 104 | 105 | 1.1.5 106 | ----- 107 | * new: Dev conn pool max idle time (#111): release connections if max-idle-time is exceeded 108 | * update: discard travis-CI 109 | * Fix bug: new stream id used for test_streams 110 | 111 | 1.1.6 112 | ----- 113 | * Fixbug: parsing a stream message with an empty payload caused an error (#116) 114 | * Fixbug: Let ClusterConnectionPool handle skip_full_coverage_check (#118) 115 | * New: threading-local issue in coroutines; use contextvars instead of thread locals, since the safety of the thread-local mechanism can be broken by coroutines (#120) 116 | * New: support Python 3.8 117 | 118 | 1.1.7 119 | ----- 120 | * Fixbug: ModuleNotFoundError raised when installing aredis 1.1.6 with Python 3.6 121 | 122 | 1.1.8 123 | ----- 124 | * Fixbug: connection is disconnected before the idle check; a ValueError would be raised if a nonexistent connection was removed from the connection list 125 | * Fixbug: abstract compat.py to handle the import problem of asyncio.futures 126 | * Fixbug: when cancelling a task, the CancelledError exception was not propagated to the client 127 | * Fixbug: XREAD command should accept 0 as a block argument 128 | * Fixbug: in redis cluster mode, the XREAD command did not function properly 129 | * Fixbug: slave connection params when there are no slaves 130 | -------------------------------------------------------------------------------- /docs/source/index.rst: -------------------------------------------------------------------------------- 1 | .. aredis documentation master file, created by 2 | sphinx-quickstart on Sun May 7 21:23:14 2017. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | Welcome to aredis's documentation!
7 | ================================== 8 | 9 | |pypi-ver| |circleci-status| |python-ver| 10 | 11 | An efficient and user-friendly async redis client ported from `redis-py `_ 12 | (a Python interface to the Redis key-value store). The cluster part is ported from `redis-py-cluster `_. 13 | aredis is the async version of these two redis clients, with the aim of letting you use redis with asyncio more easily. 14 | 15 | The source code is `available on github`_. 16 | 17 | .. _available on github: https://github.com/NoneGG/aredis 18 | 19 | 20 | Installation 21 | ------------ 22 | 23 | aredis requires a running Redis server. 24 | 25 | To install aredis, simply: 26 | 27 | .. code-block:: bash 28 | 29 | $ pip3 install aredis[hiredis] 30 | 31 | 32 | or from source: 33 | 34 | .. code-block:: bash 35 | 36 | $ python setup.py install 37 | 38 | 39 | Getting started 40 | --------------- 41 | 42 | `For more examples`_ 43 | 44 | .. _For more examples: https://github.com/NoneGG/aredis/tree/master/examples 45 | 46 | Tip: since python 3.8 you can use the asyncio REPL: 47 | 48 | .. code-block:: bash 49 | 50 | $ python -m asyncio 51 | 52 | single node client 53 | ^^^^^^^^^^^^^^^^^^ 54 | 55 | .. code-block:: python 56 | 57 | import asyncio 58 | from aredis import StrictRedis 59 | 60 | async def example(): 61 | client = StrictRedis(host='127.0.0.1', port=6379, db=0) 62 | await client.flushdb() 63 | await client.set('foo', 1) 64 | assert await client.exists('foo') is True 65 | await client.incr('foo', 100) 66 | 67 | assert int(await client.get('foo')) == 101 68 | await client.expire('foo', 1) 69 | await asyncio.sleep(0.1) 70 | await client.ttl('foo') 71 | await asyncio.sleep(1) 72 | assert not await client.exists('foo') 73 | 74 | loop = asyncio.get_event_loop() 75 | loop.run_until_complete(example()) 76 | 77 | cluster client 78 | ^^^^^^^^^^^^^^ 79 | 80 | .. code-block:: python 81 | 82 | import asyncio 83 | from aredis import StrictRedisCluster 84 | 85 | async def example(): 86 | client = StrictRedisCluster(host='172.17.0.2', port=7001) 87 | await client.flushdb() 88 | await client.set('foo', 1) 89 | await client.lpush('a', 1) 90 | print(await client.cluster_slots()) 91 | 92 | await client.rpoplpush('a', 'b') 93 | assert await client.rpop('b') == b'1' 94 | 95 | loop = asyncio.get_event_loop() 96 | loop.run_until_complete(example()) 97 | 98 | # {(10923, 16383): [{'host': b'172.17.0.2', 'node_id': b'332f41962b33fa44bbc5e88f205e71276a9d64f4', 'server_type': 'master', 'port': 7002}, 99 | # {'host': b'172.17.0.2', 'node_id': b'c02deb8726cdd412d956f0b9464a88812ef34f03', 'server_type': 'slave', 'port': 7005}], 100 | # (5461, 10922): [{'host': b'172.17.0.2', 'node_id': b'3d1b020fc46bf7cb2ffc36e10e7d7befca7c5533', 'server_type': 'master', 'port': 7001}, 101 | # {'host': b'172.17.0.2', 'node_id': b'aac4799b65ff35d8dd2ad152a5515d15c0dc8ab7', 'server_type': 'slave', 'port': 7004}], 102 | # (0, 5460): [{'host': b'172.17.0.2', 'node_id': b'0932215036dc0d908cf662fdfca4d3614f221b01', 'server_type': 'master', 'port': 7000}, 103 | # {'host': b'172.17.0.2', 'node_id': b'f6603ab4cb77e672de23a6361ec165f3a1a2bb42', 'server_type': 'slave', 'port': 7003}]} 104 | 105 | 106 | Dependencies & supported python versions 107 | ---------------------------------------- 108 | 109 | hiredis and uvloop can make aredis faster, but it is up to you whether to install them or not. 110 | 111 | - Optional parser: hiredis >= `0.2.0`. Older versions might work but are not tested. 112 | - Optional event loop policy: uvloop >= `0.8.0`.
Older versions might work but are not tested. 113 | - A working Redis cluster based on version >= `3.0.0` is required. Only `3.0.x` releases are supported. 114 | 115 | 116 | 117 | Supported python versions 118 | ------------------------- 119 | 120 | - 3.5 121 | - 3.6 122 | 123 | Experimental: 124 | 125 | - 3.7-dev 126 | 127 | 128 | .. note:: Python < 3.5 129 | 130 | I tried to change my code to make aredis compatible with Python below 3.5, but it failed because of some asyncio APIs. 131 | Since asyncio has been stable since Python 3.5, I think it is better to use a newer release of asyncio. 132 | 133 | 134 | .. note:: pypy 135 | 136 | For now, uvloop is not supported by pypy, and you can only use it with cpython & hiredis to accelerate your code. 137 | pypy 3.5-v5.8.0 is tested, and with it code can run twice as fast as before. 138 | 139 | 140 | API reference 141 | ------------- 142 | Most APIs are described in the `redis command reference`_; the places where aredis differs, and which deserve attention, are specially noted in this doc. 143 | You can post a new issue / read the redis command reference / read the annotations of the API (mainly about how to use it) if you have any problem with the API. 144 | Related issues are welcome. 145 | 146 | 147 | The Usage Guide 148 | --------------- 149 | 150 | .. toctree:: 151 | :maxdepth: 2 152 | :glob: 153 | 154 | notice 155 | benchmark 156 | pubsub 157 | sentinel 158 | scripting 159 | pipelines 160 | streams 161 | extra 162 | 163 | The Community Guide 164 | ------------------- 165 | 166 | .. toctree:: 167 | :maxdepth: 2 168 | :glob: 169 | 170 | testing 171 | release_notes 172 | authors 173 | license 174 | todo 175 | 176 | .. |circleci-status| image:: https://img.shields.io/circleci/project/github/NoneGG/aredis/master.svg 177 | :alt: CircleCI build status 178 | :target: https://circleci.com/gh/NoneGG/aredis/tree/master 179 | 180 | .. |pypi-ver| image:: https://img.shields.io/pypi/v/aredis.svg 181 | :target: https://pypi.python.org/pypi/aredis/ 182 | :alt: Latest Version in PyPI 183 | 184 | .. |python-ver| image:: https://img.shields.io/pypi/pyversions/aredis.svg 185 | :target: https://pypi.python.org/pypi/aredis/ 186 | :alt: Supported Python versions -------------------------------------------------------------------------------- /benchmarks/basic_operations.py: -------------------------------------------------------------------------------- 1 | import aredis 2 | import asyncio 3 | import uvloop 4 | import time 5 | import sys 6 | from functools import wraps 7 | from argparse import ArgumentParser 8 | 9 | if sys.version_info[0] == 3: 10 | long = int 11 | 12 | 13 | def parse_args(): 14 | parser = ArgumentParser() 15 | parser.add_argument('-n', 16 | type=int, 17 | help='Total number of requests (default 100000)', 18 | default=100000) 19 | parser.add_argument('-P', 20 | type=int, 21 | help=('Pipeline requests.'
22 | ' Default 1 (no pipeline).'), 23 | default=1) 24 | parser.add_argument('-s', 25 | type=int, 26 | help='Data size of SET/GET value in bytes (default 2)', 27 | default=2) 28 | 29 | args = parser.parse_args() 30 | print(args) 31 | return args 32 | 33 | 34 | async def run(): 35 | args = parse_args() 36 | r = aredis.StrictRedis() 37 | await r.flushall() 38 | await set_str(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 39 | await set_int(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 40 | await get_str(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 41 | await get_int(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 42 | await incr(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 43 | await lpush(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 44 | await lrange_300(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 45 | await lpop(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 46 | await hmset(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 47 | 48 | 49 | def timer(func): 50 | @wraps(func) 51 | async def wrapper(*args, **kwargs): 52 | start = time.perf_counter() 53 | ret = await func(*args, **kwargs) 54 | duration = time.perf_counter() - start 55 | if 'num' in kwargs: 56 | count = kwargs['num'] 57 | else: 58 | count = args[1] 59 | print('{0} - {1} Requests'.format(func.__name__, count)) 60 | print('Duration = {}'.format(duration)) 61 | print('Rate = {}'.format(count/duration)) 62 | print('') 63 | return ret 64 | return wrapper 65 | 66 | 67 | @timer 68 | async def set_str(conn, num, pipeline_size, data_size): 69 | if pipeline_size > 1: 70 | conn = await conn.pipeline() 71 | 72 | format_str = '{:0<%d}' % data_size 73 | set_data = format_str.format('a') 74 | for i in range(num): 75 | await conn.set('set_str:%d' % i, set_data) 76 | if pipeline_size > 1 and i % pipeline_size == 0: 77 | await conn.execute() 78 | 79 | if pipeline_size > 1: 80 | await conn.execute() 81 | await conn.reset() 82 | 83 | 84 | @timer 85 | async def set_int(conn, num, pipeline_size, data_size): 86 | if pipeline_size > 1: 87 | conn = await conn.pipeline() 88 | 89 | format_str = '{:0<%d}' % data_size 90 | set_data = int(format_str.format('1')) 91 | for i in range(num): 92 | await conn.set('set_int:%d' % i, set_data) 93 | if pipeline_size > 1 and i % pipeline_size == 0: 94 | await conn.execute() 95 | 96 | if pipeline_size > 1: 97 | await conn.execute() 98 | await conn.reset() 99 | 100 | 101 | @timer 102 | async def get_str(conn, num, pipeline_size, data_size): 103 | if pipeline_size > 1: 104 | conn = await conn.pipeline() 105 | 106 | for i in range(num): 107 | await conn.get('set_str:%d' % i) 108 | if pipeline_size > 1 and i % pipeline_size == 0: 109 | await conn.execute() 110 | 111 | if pipeline_size > 1: 112 | await conn.execute() 113 | await conn.reset() 114 | 115 | 116 | @timer 117 | async def get_int(conn, num, pipeline_size, data_size): 118 | if pipeline_size > 1: 119 | conn = await conn.pipeline() 120 | 121 | for i in range(num): 122 | await conn.get('set_int:%d' % i) 123 | if pipeline_size > 1 and i % pipeline_size == 0: 124 | await conn.execute() 125 | 126 | if pipeline_size > 1: 127 | await conn.execute() 128 | await conn.reset() 129 | 130 | 131 | @timer 132 | async def incr(conn, num, pipeline_size, *args, **kwargs): 133 | if pipeline_size > 1: 134 | conn = await conn.pipeline() 135 | 136 | for i in range(num): 137 | await conn.incr('incr_key') 138 | if pipeline_size > 1 and i % pipeline_size == 0: 139 | await
conn.execute() 140 | 141 | if pipeline_size > 1: 142 | await conn.execute() 143 | await conn.reset() 144 | 145 | 146 | @timer 147 | async def lpush(conn, num, pipeline_size, data_size): 148 | if pipeline_size > 1: 149 | conn = await conn.pipeline() 150 | 151 | format_str = '{:0<%d}' % data_size 152 | set_data = int(format_str.format('1')) 153 | for i in range(num): 154 | await conn.lpush('lpush_key', set_data) 155 | if pipeline_size > 1 and i % pipeline_size == 0: 156 | await conn.execute() 157 | 158 | if pipeline_size > 1: 159 | await conn.execute() 160 | await conn.reset() 161 | 162 | 163 | @timer 164 | async def lrange_300(conn, num, pipeline_size, data_size): 165 | if pipeline_size > 1: 166 | conn = await conn.pipeline() 167 | 168 | for i in range(num): 169 | await conn.lrange('lpush_key', i, i+300) 170 | if pipeline_size > 1 and i % pipeline_size == 0: 171 | await conn.execute() 172 | 173 | if pipeline_size > 1: 174 | await conn.execute() 175 | await conn.reset() 176 | 177 | 178 | @timer 179 | async def lpop(conn, num, pipeline_size, data_size): 180 | if pipeline_size > 1: 181 | conn = await conn.pipeline() 182 | for i in range(num): 183 | await conn.lpop('lpush_key') 184 | if pipeline_size > 1 and i % pipeline_size == 0: 185 | await conn.execute() 186 | if pipeline_size > 1: 187 | await conn.execute() 188 | await conn.reset() 189 | 190 | 191 | @timer 192 | async def hmset(conn, num, pipeline_size, data_size): 193 | if pipeline_size > 1: 194 | conn = await conn.pipeline() 195 | 196 | set_data = {'str_value': 'string', 197 | 'int_value': 123456, 198 | 'long_value': long(123456), 199 | 'float_value': 123456.0} 200 | for i in range(num): 201 | await conn.hmset('hmset_key', set_data) 202 | if pipeline_size > 1 and i % pipeline_size == 0: 203 | await conn.execute() 204 | 205 | if pipeline_size > 1: 206 | await conn.execute() 207 | await conn.reset() 208 | 209 | if __name__ == '__main__': 210 | print('WITH ASYNCIO ONLY:') 211 | loop = asyncio.get_event_loop() 212 | loop.run_until_complete(run()) 213 | print('WITH UVLOOP:') 214 | asyncio.set_event_loop_policy(uvloop.EventLoopPolicy()) 215 | loop = asyncio.get_event_loop() 216 | loop.run_until_complete(run()) 217 | -------------------------------------------------------------------------------- /tests/client/test_lock.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | 3 | import asyncio 4 | import time 5 | 6 | import pytest 7 | 8 | from aredis.exceptions import LockError, ResponseError 9 | from aredis.lock import Lock, LuaLock 10 | 11 | 12 | class TestLock: 13 | lock_class = Lock 14 | 15 | def get_lock(self, redis, *args, **kwargs): 16 | kwargs['lock_class'] = self.lock_class 17 | return redis.lock(*args, **kwargs) 18 | 19 | @pytest.mark.asyncio() 20 | async def test_lock(self, r): 21 | await r.flushdb() 22 | lock = self.get_lock(r, 'foo') 23 | assert await lock.acquire(blocking=False) 24 | assert await r.get('foo') == lock.local.get() 25 | assert await r.ttl('foo') == -1 26 | await lock.release() 27 | assert await r.get('foo') is None 28 | 29 | @pytest.mark.asyncio() 30 | async def test_competing_locks(self, r): 31 | lock1 = self.get_lock(r, 'foo') 32 | lock2 = self.get_lock(r, 'foo') 33 | assert await lock1.acquire(blocking=False) 34 | assert not await lock2.acquire(blocking=False) 35 | await lock1.release() 36 | assert await lock2.acquire(blocking=False) 37 | assert not await lock1.acquire(blocking=False) 38 | await lock2.release() 39 | 40 | 
@pytest.mark.asyncio() 41 | async def test_timeout(self, r): 42 | lock = self.get_lock(r, 'foo', timeout=10) 43 | assert await lock.acquire(blocking=False) 44 | assert 8 < await r.ttl('foo') <= 10 45 | await lock.release() 46 | 47 | @pytest.mark.asyncio() 48 | async def test_float_timeout(self, r): 49 | lock = self.get_lock(r, 'foo', timeout=9.5) 50 | assert await lock.acquire(blocking=False) 51 | assert 8 < await r.pttl('foo') <= 9500 52 | await lock.release() 53 | 54 | @pytest.mark.asyncio() 55 | async def test_blocking_timeout(self, r): 56 | lock1 = self.get_lock(r, 'foo') 57 | assert await lock1.acquire(blocking=False) 58 | lock2 = self.get_lock(r, 'foo', blocking_timeout=0.2) 59 | start = time.time() 60 | assert not await lock2.acquire() 61 | assert (time.time() - start) > 0.2 62 | await lock1.release() 63 | 64 | @pytest.mark.asyncio() 65 | async def test_context_manager(self, r): 66 | # blocking_timeout prevents a deadlock if the lock can't be acquired 67 | # for some reason 68 | async with self.get_lock(r, 'foo', blocking_timeout=0.2) as lock: 69 | assert await r.get('foo') == lock.local.get() 70 | assert await r.get('foo') is None 71 | 72 | @pytest.mark.asyncio() 73 | async def test_high_sleep_raises_error(self, r): 74 | "If sleep is higher than timeout, it should raise an error" 75 | with pytest.raises(LockError): 76 | self.get_lock(r, 'foo', timeout=1, sleep=2) 77 | 78 | @pytest.mark.asyncio() 79 | async def test_releasing_unlocked_lock_raises_error(self, r): 80 | lock = self.get_lock(r, 'foo') 81 | with pytest.raises(LockError): 82 | await lock.release() 83 | 84 | @pytest.mark.asyncio() 85 | async def test_releasing_lock_no_longer_owned_raises_error(self, r): 86 | lock = self.get_lock(r, 'foo') 87 | await lock.acquire(blocking=False) 88 | # manually change the token 89 | await r.set('foo', 'a') 90 | with pytest.raises(LockError): 91 | await lock.release() 92 | # even though we errored, the token is still cleared 93 | assert lock.local.get() is None 94 | 95 | @pytest.mark.asyncio() 96 | async def test_extend_lock(self, r): 97 | await r.flushdb() 98 | lock = self.get_lock(r, 'foo', timeout=10) 99 | assert await lock.acquire(blocking=False) 100 | assert 8000 < await r.pttl('foo') <= 10000 101 | assert await lock.extend(10) 102 | assert 16000 < await r.pttl('foo') <= 20000 103 | await lock.release() 104 | 105 | @pytest.mark.asyncio() 106 | async def test_extend_lock_float(self, r): 107 | await r.flushdb() 108 | lock = self.get_lock(r, 'foo', timeout=10.0) 109 | assert await lock.acquire(blocking=False) 110 | assert 8000 < await r.pttl('foo') <= 10000 111 | assert await lock.extend(10.0) 112 | assert 16000 < await r.pttl('foo') <= 20000 113 | await lock.release() 114 | 115 | @pytest.mark.asyncio() 116 | async def test_extending_unlocked_lock_raises_error(self, r): 117 | lock = self.get_lock(r, 'foo', timeout=10) 118 | with pytest.raises(LockError): 119 | await lock.extend(10) 120 | 121 | @pytest.mark.asyncio() 122 | async def test_extending_lock_with_no_timeout_raises_error(self, r): 123 | lock = self.get_lock(r, 'foo') 124 | await r.flushdb() 125 | assert await lock.acquire(blocking=False) 126 | with pytest.raises(LockError): 127 | await lock.extend(10) 128 | await lock.release() 129 | 130 | @pytest.mark.asyncio() 131 | async def test_extending_lock_no_longer_owned_raises_error(self, r): 132 | lock = self.get_lock(r, 'foo') 133 | await r.flushdb() 134 | assert await lock.acquire(blocking=False) 135 | await r.set('foo', 'a') 136 | with pytest.raises(LockError): 137 | await 
lock.extend(10) 138 | 139 | 140 | class TestLuaLock(TestLock): 141 | lock_class = LuaLock 142 | 143 | 144 | class TestLockClassSelection: 145 | 146 | @pytest.mark.asyncio() 147 | async def test_lock_class_argument(self, r): 148 | lock = r.lock('foo', lock_class=Lock) 149 | assert type(lock) == Lock 150 | lock = r.lock('foo', lock_class=LuaLock) 151 | assert type(lock) == LuaLock 152 | 153 | @pytest.mark.asyncio() 154 | async def test_cached_lualock_flag(self, r): 155 | try: 156 | r._use_lua_lock = True 157 | lock = r.lock('foo') 158 | assert type(lock) == LuaLock 159 | finally: 160 | r._use_lua_lock = None 161 | 162 | @pytest.mark.asyncio() 163 | async def test_cached_lock_flag(self, r): 164 | try: 165 | r._use_lua_lock = False 166 | lock = r.lock('foo') 167 | assert type(lock) == Lock 168 | finally: 169 | r._use_lua_lock = None 170 | 171 | @pytest.mark.asyncio() 172 | async def test_lua_compatible_server(self, r, monkeypatch): 173 | @classmethod 174 | def mock_register(cls, redis): 175 | return 176 | 177 | monkeypatch.setattr(LuaLock, 'register_scripts', mock_register) 178 | try: 179 | lock = r.lock('foo') 180 | assert type(lock) == LuaLock 181 | assert r._use_lua_lock is True 182 | finally: 183 | r._use_lua_lock = None 184 | 185 | @pytest.mark.asyncio() 186 | async def test_lua_unavailable(self, r, monkeypatch): 187 | @classmethod 188 | def mock_register(cls, redis): 189 | raise ResponseError() 190 | 191 | monkeypatch.setattr(LuaLock, 'register_scripts', mock_register) 192 | try: 193 | lock = r.lock('foo') 194 | assert type(lock) == Lock 195 | assert r._use_lua_lock is False 196 | finally: 197 | r._use_lua_lock = None 198 | -------------------------------------------------------------------------------- /tests/client/test_sentinel.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | import aredis 4 | 5 | from aredis.exceptions import ConnectionError, TimeoutError 6 | from aredis.sentinel import (Sentinel, SentinelConnectionPool, 7 | MasterNotFoundError, SlaveNotFoundError) 8 | 9 | 10 | class SentinelTestClient: 11 | def __init__(self, cluster, id): 12 | self.cluster = cluster 13 | self.id = id 14 | 15 | async def sentinel_masters(self): 16 | self.cluster.connection_error_if_down(self) 17 | self.cluster.timeout_if_down(self) 18 | return {self.cluster.service_name: self.cluster.master} 19 | 20 | async def sentinel_slaves(self, master_name): 21 | self.cluster.connection_error_if_down(self) 22 | self.cluster.timeout_if_down(self) 23 | if master_name != self.cluster.service_name: 24 | return [] 25 | return self.cluster.slaves 26 | 27 | 28 | class SentinelTestCluster: 29 | def __init__(self, service_name='mymaster', ip='127.0.0.1', port=6379): 30 | self.clients = {} 31 | self.master = { 32 | 'ip': ip, 33 | 'port': port, 34 | 'is_master': True, 35 | 'is_sdown': False, 36 | 'is_odown': False, 37 | 'num-other-sentinels': 0, 38 | } 39 | self.service_name = service_name 40 | self.slaves = [] 41 | self.nodes_down = set() 42 | self.nodes_timeout = set() 43 | 44 | def connection_error_if_down(self, node): 45 | if node.id in self.nodes_down: 46 | raise ConnectionError 47 | 48 | def timeout_if_down(self, node): 49 | if node.id in self.nodes_timeout: 50 | raise TimeoutError 51 | 52 | def client(self, host, port, **kwargs): 53 | return SentinelTestClient(self, (host, port)) 54 | 55 | 56 | @pytest.fixture() 57 | def cluster(request): 58 | def teardown(): 59 | aredis.sentinel.StrictRedis = saved_StrictRedis 60 | 
cluster = SentinelTestCluster() 61 | saved_StrictRedis = aredis.sentinel.StrictRedis 62 | aredis.sentinel.StrictRedis = cluster.client 63 | request.addfinalizer(teardown) 64 | return cluster 65 | 66 | 67 | @pytest.fixture() 68 | def sentinel(request, cluster, event_loop): 69 | return Sentinel([('foo', 26379), ('bar', 26379)], loop=event_loop) 70 | 71 | 72 | @pytest.mark.asyncio(forbid_global_loop=True) 73 | async def test_discover_master(sentinel): 74 | address = await sentinel.discover_master('mymaster') 75 | assert address == ('127.0.0.1', 6379) 76 | 77 | 78 | @pytest.mark.asyncio(forbid_global_loop=True) 79 | async def test_discover_master_error(sentinel): 80 | with pytest.raises(MasterNotFoundError): 81 | await sentinel.discover_master('xxx') 82 | 83 | 84 | @pytest.mark.asyncio(forbid_global_loop=True) 85 | async def test_discover_master_sentinel_down(cluster, sentinel): 86 | # Put first sentinel 'foo' down 87 | cluster.nodes_down.add(('foo', 26379)) 88 | address = await sentinel.discover_master('mymaster') 89 | assert address == ('127.0.0.1', 6379) 90 | # 'bar' is now first sentinel 91 | assert sentinel.sentinels[0].id == ('bar', 26379) 92 | 93 | 94 | @pytest.mark.asyncio(forbid_global_loop=True) 95 | async def test_discover_master_sentinel_timeout(cluster, sentinel): 96 | # Put first sentinel 'foo' down 97 | cluster.nodes_timeout.add(('foo', 26379)) 98 | address = await sentinel.discover_master('mymaster') 99 | assert address == ('127.0.0.1', 6379) 100 | # 'bar' is now first sentinel 101 | assert sentinel.sentinels[0].id == ('bar', 26379) 102 | 103 | 104 | @pytest.mark.asyncio(forbid_global_loop=True) 105 | async def test_master_min_other_sentinels(cluster): 106 | sentinel = Sentinel([('foo', 26379)], min_other_sentinels=1) 107 | # min_other_sentinels 108 | with pytest.raises(MasterNotFoundError): 109 | await sentinel.discover_master('mymaster') 110 | cluster.master['num-other-sentinels'] = 2 111 | address = await sentinel.discover_master('mymaster') 112 | assert address == ('127.0.0.1', 6379) 113 | 114 | 115 | @pytest.mark.asyncio(forbid_global_loop=True) 116 | async def test_master_odown(cluster, sentinel): 117 | cluster.master['is_odown'] = True 118 | with pytest.raises(MasterNotFoundError): 119 | await sentinel.discover_master('mymaster') 120 | 121 | 122 | @pytest.mark.asyncio(forbid_global_loop=True) 123 | async def test_master_sdown(cluster, sentinel): 124 | cluster.master['is_sdown'] = True 125 | with pytest.raises(MasterNotFoundError): 126 | await sentinel.discover_master('mymaster') 127 | 128 | 129 | @pytest.mark.asyncio(forbid_global_loop=True) 130 | async def test_discover_slaves(cluster, sentinel): 131 | assert await sentinel.discover_slaves('mymaster') == [] 132 | 133 | cluster.slaves = [ 134 | {'ip': 'slave0', 'port': 1234, 'is_odown': False, 'is_sdown': False}, 135 | {'ip': 'slave1', 'port': 1234, 'is_odown': False, 'is_sdown': False}, 136 | ] 137 | assert await sentinel.discover_slaves('mymaster') == [ 138 | ('slave0', 1234), ('slave1', 1234)] 139 | 140 | # slave0 -> ODOWN 141 | cluster.slaves[0]['is_odown'] = True 142 | assert await sentinel.discover_slaves('mymaster') == [ 143 | ('slave1', 1234)] 144 | 145 | # slave1 -> SDOWN 146 | cluster.slaves[1]['is_sdown'] = True 147 | assert await sentinel.discover_slaves('mymaster') == [] 148 | 149 | cluster.slaves[0]['is_odown'] = False 150 | cluster.slaves[1]['is_sdown'] = False 151 | 152 | # node0 -> DOWN 153 | cluster.nodes_down.add(('foo', 26379)) 154 | assert await sentinel.discover_slaves('mymaster') == [ 155 | 
('slave0', 1234), ('slave1', 1234)] 156 | cluster.nodes_down.clear() 157 | 158 | # node0 -> TIMEOUT 159 | cluster.nodes_timeout.add(('foo', 26379)) 160 | assert await sentinel.discover_slaves('mymaster') == [ 161 | ('slave0', 1234), ('slave1', 1234)] 162 | 163 | 164 | @pytest.mark.asyncio(forbid_global_loop=True) 165 | async def test_master_for(cluster, sentinel): 166 | master = sentinel.master_for('mymaster') 167 | assert await master.ping() 168 | assert master.connection_pool.master_address == ('127.0.0.1', 6379) 169 | 170 | # Use internal connection check 171 | master = sentinel.master_for('mymaster', check_connection=True) 172 | assert await master.ping() 173 | 174 | 175 | @pytest.mark.asyncio(forbid_global_loop=True) 176 | async def test_slave_for(cluster, sentinel): 177 | cluster.slaves = [ 178 | {'ip': '127.0.0.1', 'port': 6379, 179 | 'is_odown': False, 'is_sdown': False}, 180 | ] 181 | slave = sentinel.slave_for('mymaster') 182 | assert await slave.ping() 183 | 184 | 185 | @pytest.mark.asyncio(forbid_global_loop=True) 186 | async def test_slave_for_slave_not_found_error(cluster, sentinel): 187 | cluster.master['is_odown'] = True 188 | slave = sentinel.slave_for('mymaster', db=9) 189 | with pytest.raises(SlaveNotFoundError): 190 | await slave.ping() 191 | 192 | 193 | @pytest.mark.asyncio(forbid_global_loop=True) 194 | async def test_slave_round_robin(cluster, sentinel): 195 | cluster.slaves = [ 196 | {'ip': 'slave0', 'port': 6379, 'is_odown': False, 'is_sdown': False}, 197 | {'ip': 'slave1', 'port': 6379, 'is_odown': False, 'is_sdown': False}, 198 | ] 199 | pool = SentinelConnectionPool('mymaster', sentinel) 200 | rotator = await pool.rotate_slaves() 201 | assert set(rotator) == {('slave0', 6379), ('slave1', 6379)} 202 | -------------------------------------------------------------------------------- /aredis/commands/geo.py: -------------------------------------------------------------------------------- 1 | from aredis.exceptions import RedisError 2 | from aredis.utils import b, nativestr 3 | 4 | 5 | def parse_georadius_generic(response, **options): 6 | if options['store'] or options['store_dist']: 7 | # `store` and `store_dist` can't be combined 8 | # with other command arguments. 9 | return response 10 | 11 | if type(response) != list: 12 | response_list = [response] 13 | else: 14 | response_list = response 15 | 16 | if not options['withdist'] and not options['withcoord']\ 17 | and not options['withhash']: 18 | # just a bunch of places 19 | return [nativestr(r) for r in response_list] 20 | 21 | cast = { 22 | 'withdist': float, 23 | 'withcoord': lambda ll: (float(ll[0]), float(ll[1])), 24 | 'withhash': int 25 | } 26 | 27 | # zip all output results with each casting function to get 28 | # the proper native Python value. 29 | f = [nativestr] 30 | f += [cast[o] for o in ['withdist', 'withhash', 'withcoord'] if options[o]] 31 | return [ 32 | list(map(lambda fv: fv[0](fv[1]), zip(f, r))) for r in response_list 33 | ] 34 | 35 | 36 | class GeoCommandMixin: 37 | 38 | RESPONSE_CALLBACKS = { 39 | 'GEOPOS': lambda r: list(map(lambda ll: (float(ll[0]), 40 | float(ll[1])) 41 | if ll is not None else None, r)), 42 | 'GEOHASH': lambda r: list(r), 43 | 'GEORADIUS': parse_georadius_generic, 44 | 'GEORADIUSBYMEMBER': parse_georadius_generic, 45 | 'GEODIST': float, 46 | 'GEOADD': int 47 | } 48 | 49 | # GEO COMMANDS 50 | async def geoadd(self, name, *values): 51 | """ 52 | Add the specified geospatial items to the specified key identified 53 | by the ``name`` argument.
61 | 
62 |     async def geodist(self, name, place1, place2, unit=None):
63 |         """
64 |         Return the distance between ``place1`` and ``place2`` members of the
65 |         ``name`` key.
66 |         The unit must be one of the following: m, km, mi, ft. By default,
67 |         meters are used.
68 |         """
69 |         pieces = [name, place1, place2]
70 |         if unit and unit not in ('m', 'km', 'mi', 'ft'):
71 |             raise RedisError("GEODIST invalid unit")
72 |         elif unit:
73 |             pieces.append(unit)
74 |         return await self.execute_command('GEODIST', *pieces)
75 | 
76 |     async def geohash(self, name, *values):
77 |         """
78 |         Return the geo hash string for each item of ``values`` members of
79 |         the specified key identified by the ``name`` argument.
80 |         """
81 |         return await self.execute_command('GEOHASH', name, *values)
82 | 
83 |     async def geopos(self, name, *values):
84 |         """
85 |         Return the positions of each item of ``values`` as members of
86 |         the specified key identified by the ``name`` argument. Each position
87 |         is represented by the pair longitude and latitude.
88 |         """
89 |         return await self.execute_command('GEOPOS', name, *values)
90 | 
91 |     async def georadius(self, name, longitude, latitude, radius, unit=None,
92 |                         withdist=False, withcoord=False, withhash=False, count=None,
93 |                         sort=None, store=None, store_dist=None):
94 |         """
95 |         Return the members of the specified key identified by the
96 |         ``name`` argument which are within the borders of the area specified
97 |         with the ``latitude`` and ``longitude`` location and the maximum
98 |         distance from the center specified by the ``radius`` value.
99 | 
100 |         The unit must be one of the following: m, km, mi, ft. By default,
101 |         meters are used.
102 | 
103 |         ``withdist`` indicates to return the distances of each place.
104 | 
105 |         ``withcoord`` indicates to return the latitude and longitude of
106 |         each place.
107 | 
108 |         ``withhash`` indicates to return the geohash string of each place.
109 | 
110 |         ``count`` indicates to return the number of elements up to N.
111 | 
112 |         ``sort`` indicates to return the places in a sorted way, ASC for
113 |         nearest to farthest and DESC for farthest to nearest.
114 | 
115 |         ``store`` indicates to save the places names in a sorted set named
116 |         with a specific key; each element of the destination sorted set is
117 |         populated with the score taken from the original geo sorted set.
118 | 
119 |         ``store_dist`` indicates to save the places names in a sorted set
120 |         named with a specific key; unlike ``store``, the destination sorted
121 |         set score is set with the distance.
122 |         """
123 |         return await self._georadiusgeneric('GEORADIUS',
124 |                                             name, longitude, latitude, radius,
125 |                                             unit=unit, withdist=withdist,
126 |                                             withcoord=withcoord, withhash=withhash,
127 |                                             count=count, sort=sort, store=store,
128 |                                             store_dist=store_dist)
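# Illustrative usage, not part of the original module: a minimal sketch
# assuming an already-connected client ``r`` whose 'Sicily' key holds the
# members added in the GEOADD sketch above:
#
#     await r.georadius('Sicily', 15.0, 37.0, 200, unit='km',
#                       withdist=True, sort='ASC')
#     # -> e.g. [['Catania', 56.4413], ['Palermo', 190.4424]]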
129 | 
130 |     async def georadiusbymember(self, name, member, radius, unit=None,
131 |                                 withdist=False, withcoord=False, withhash=False,
132 |                                 count=None, sort=None, store=None, store_dist=None):
133 |         """
134 |         This command is exactly like ``georadius`` with the sole difference
135 |         that instead of taking, as the center of the area to query, a longitude
136 |         and latitude value, it takes the name of a member already existing
137 |         inside the geospatial index represented by the sorted set.
138 |         """
139 |         return await self._georadiusgeneric('GEORADIUSBYMEMBER',
140 |                                             name, member, radius, unit=unit,
141 |                                             withdist=withdist, withcoord=withcoord,
142 |                                             withhash=withhash, count=count,
143 |                                             sort=sort, store=store,
144 |                                             store_dist=store_dist)
145 | 
146 |     async def _georadiusgeneric(self, command, *args, **kwargs):
147 |         pieces = list(args)
148 |         if kwargs['unit'] and kwargs['unit'] not in ('m', 'km', 'mi', 'ft'):
149 |             raise RedisError("GEORADIUS invalid unit")
150 |         elif kwargs['unit']:
151 |             pieces.append(kwargs['unit'])
152 |         else:
153 |             pieces.append('m')
154 | 
155 |         for token in ('withdist', 'withcoord', 'withhash'):
156 |             if kwargs[token]:
157 |                 pieces.append(b(token.upper()))
158 | 
159 |         if kwargs['count']:
160 |             pieces.extend([b('COUNT'), kwargs['count']])
161 | 
162 |         if kwargs['sort'] and kwargs['sort'] not in ('ASC', 'DESC'):
163 |             raise RedisError("GEORADIUS invalid sort")
164 |         elif kwargs['sort']:
165 |             pieces.append(b(kwargs['sort']))
166 | 
167 |         if kwargs['store'] and kwargs['store_dist']:
168 |             raise RedisError("GEORADIUS store and store_dist can't be set"
169 |                              " together")
170 | 
171 |         if kwargs['store']:
172 |             pieces.extend([b('STORE'), kwargs['store']])
173 | 
174 |         if kwargs['store_dist']:
175 |             pieces.extend([b('STOREDIST'), kwargs['store_dist']])
176 | 
177 |         return await self.execute_command(command, *pieces, **kwargs)
--------------------------------------------------------------------------------
/docs/source/notice.rst:
--------------------------------------------------------------------------------
1 | API Reference
2 | =============
3 | 
4 | The connection part is rewritten to make the client async, and most of the API is ported from redis-py,
5 | so most API and usage are the same as in redis-py.
6 | If you already use redis-py in your code, you can keep most of it and just add `async/await` syntax.
7 | `for more examples `_
8 | 
9 | The `official Redis command documentation `_ does a
10 | great job of explaining each command in detail. aredis only ports the StrictRedis
11 | class from redis-py to implement these commands. The StrictRedis class attempts to adhere
12 | to the official command syntax. There are a few exceptions:
13 | 
14 | * **SELECT**: Not implemented. See the explanation in the Thread Safety section
15 |   below.
16 | * **DEL**: 'del' is a reserved keyword in the Python syntax. Therefore aredis
17 |   uses 'delete' instead.
18 | * **CONFIG GET|SET**: These are implemented separately as config_get or config_set.
19 | * **MULTI/EXEC**: These are implemented as part of the Pipeline class. The
20 |   pipeline is wrapped with the MULTI and EXEC statements by default when it
21 |   is executed, which can be disabled by specifying transaction=False.
22 |   See more about Pipelines below.
23 | * **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as a separate
24 |   class as it places the underlying connection in a state where it can't
25 |   execute non-pubsub commands. Calling the pubsub method from the Redis client
26 |   will return a PubSub instance where you can subscribe to channels and listen
27 |   for messages. You can only call PUBLISH from the Redis client.
28 | * **SCAN/SSCAN/HSCAN/ZSCAN**: The \*SCAN commands are implemented as they
29 |   exist in the Redis documentation.
30 |   In addition, each command has an equivalent iterator method, as sketched below.
31 |   These are purely for convenience so the user doesn't have to keep
32 |   track of the cursor while iterating. (Use Python 3.6 and the scan_iter/sscan_iter/hscan_iter/zscan_iter
33 |   methods for this behavior. **iter functions are not supported in Python 3.5**)
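For example, a minimal sketch of the iterator variant (assuming Python 3.6+,
a reachable Redis server, and an existing ``StrictRedis`` instance ``r``):

.. code-block:: python

    async def print_user_keys(r):
        # scan_iter wraps SCAN in an async generator,
        # so no manual cursor bookkeeping is needed
        async for key in r.scan_iter(match='user:*'):
            print(key)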
34 | 
35 | Loop
36 | ^^^^
37 | 
38 | The event loop can be set with the loop keyword argument. If no loop is given,
39 | the default event loop will be used.
40 | 
41 | **warning**
42 | 
43 | **asyncio.AbstractEventLoop** is actually not thread safe, but asyncio uses **BaseDefaultEventLoopPolicy** as the default
44 | event loop policy (which creates a new event loop for each thread instead of sharing one between threads,
45 | and is therefore thread safe to some degree). So StrictRedis is still thread safe if your code works with the default event loop.
46 | But if you customize the event loop yourself, please make sure it is thread safe (you should probably customize
47 | on the basis of **BaseDefaultEventLoopPolicy** instead of **AbstractEventLoop**).
48 | 
49 | Detailed discussion about the problem is in `issue20 `_
50 | 
51 | .. code-block:: python
52 | 
53 |     import aredis
54 |     import asyncio
55 |     loop = asyncio.get_event_loop()
56 |     r = aredis.StrictRedis(host='localhost', port=6379, db=0, loop=loop)
57 | 
58 | Decoding
59 | ^^^^^^^^
60 | 
61 | The **encoding** and **decode_responses** params are used to support response decoding.
62 | 
63 | **encoding** specifies the encoding with which you want responses to be decoded.
64 | **decode_responses** tells the client whether responses should be decoded at all.
65 | 
66 | If decode_responses is set to True and no encoding is specified, the client will use 'utf-8' by default.
67 | 
68 | Connections
69 | ^^^^^^^^^^^
70 | 
71 | ConnectionPools manage a set of Connection instances. aredis ships with two
72 | types of Connections. The default, Connection, is a normal TCP socket based
73 | connection. The UnixDomainSocketConnection allows for clients running on the
74 | same device as the server to connect via a unix domain socket. To use a
75 | UnixDomainSocketConnection connection, simply pass the unix_socket_path
76 | argument, which is the path to the unix domain socket file. Additionally, make
77 | sure the unixsocket parameter is defined in your redis.conf file. It's
78 | commented out by default.
79 | 
80 | .. code-block:: python
81 | 
82 |     r = aredis.StrictRedis(unix_socket_path='/tmp/redis.sock')
83 | 
84 | You can create your own Connection subclasses as well. This may be useful if
85 | you want to control the socket behavior within an async framework. To
86 | instantiate a client class using your own connection, you need to create
87 | a connection pool, passing your class to the connection_class argument.
88 | Other keyword parameters you pass to the pool will be passed to the class
89 | specified during initialization.
90 | 
91 | .. code-block:: python
92 | 
93 |     pool = aredis.ConnectionPool(connection_class=YourConnectionClass,
94 |                                  your_arg='...', ...)
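As a fuller, minimal sketch of wiring a custom connection class into a client
(the subclass body here is hypothetical; any extra keyword arguments given to
the pool are forwarded to it):

.. code-block:: python

    import aredis

    class MyConnection(aredis.Connection):
        # hypothetical subclass: override methods here to
        # customize socket behavior for your framework
        pass

    pool = aredis.ConnectionPool(connection_class=MyConnection,
                                 host='localhost', port=6379)
    r = aredis.StrictRedis(connection_pool=pool)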
95 | 
96 | Parsers
97 | ^^^^^^^
98 | 
99 | Parser classes provide a way to control how responses from the Redis server
100 | are parsed. aredis ships with two parser classes, the PythonParser and the
101 | HiredisParser. By default, aredis will attempt to use the HiredisParser if
102 | you have the hiredis module installed and will fall back to the PythonParser
103 | otherwise.
104 | 
105 | Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was
106 | kind enough to create Python bindings. Using Hiredis can provide up to a
107 | 10x speed improvement in parsing responses from the Redis server. The
108 | performance increase is most noticeable when retrieving many pieces of data,
109 | such as from LRANGE or SMEMBERS operations.
110 | 
111 | 
112 | Hiredis is available on PyPI, and can be installed as an extra dependency of
113 | aredis.
114 | 
115 | 
116 | .. code-block:: bash
117 | 
118 |     $ pip install aredis[hiredis]
119 | 
120 | 
121 | or
122 | 
123 | .. code-block:: bash
124 | 
125 |     $ easy_install aredis[hiredis]
126 | 
127 | Response Callbacks
128 | ^^^^^^^^^^^^^^^^^^
129 | 
130 | The client class uses a set of callbacks to cast Redis responses to the
131 | appropriate Python type. There are a number of these callbacks defined on
132 | the Redis client class in a dictionary called RESPONSE_CALLBACKS.
133 | 
134 | Custom callbacks can be added on a per-instance basis using the
135 | set_response_callback method. This method accepts two arguments: a command
136 | name and the callback. Callbacks added in this manner are only valid on the
137 | instance the callback is added to. If you want to define or override a callback
138 | globally, you should make a subclass of the Redis client and add your callback
139 | to its RESPONSE_CALLBACKS class dictionary.
140 | 
141 | Response callbacks take at least one parameter: the response from the Redis
142 | server. Keyword arguments may also be accepted in order to further control
143 | how to interpret the response. These keyword arguments are specified during the
144 | command's call to execute_command. The ZRANGE implementation demonstrates the
145 | use of response callback keyword arguments with its "withscores" argument.
146 | 
147 | Thread Safety
148 | ^^^^^^^^^^^^^
149 | 
150 | Redis client instances can safely be shared between threads. Internally,
151 | connection instances are only retrieved from the connection pool during
152 | command execution, and returned to the pool directly after. Command execution
153 | never modifies state on the client instance.
154 | 
155 | However, there is one caveat: the Redis SELECT command. The SELECT command
156 | allows you to switch the database currently in use by the connection. That
157 | database remains selected until another is selected or until the connection is
158 | closed. This creates an issue in that connections could be returned to the pool
159 | that are connected to a different database.
160 | 
161 | As a result, aredis does not implement the SELECT command on client
162 | instances. If you use multiple Redis databases within the same application, you
163 | should create a separate client instance (and possibly a separate connection
164 | pool) for each database.
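For example, a minimal sketch of the one-client-per-database pattern (the
database numbers here are arbitrary):

.. code-block:: python

    import aredis

    # each client is bound to a single database for its whole lifetime,
    # so no connection ever needs to switch databases mid-flight
    cache = aredis.StrictRedis(host='localhost', port=6379, db=0)
    sessions = aredis.StrictRedis(host='localhost', port=6379, db=1)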
165 | 
166 | **It is not safe to pass PubSub or Pipeline objects between threads.**
167 | 
--------------------------------------------------------------------------------
/aredis/utils.py:
--------------------------------------------------------------------------------
1 | import sys
2 | from functools import wraps
3 | 
4 | from aredis.exceptions import (ClusterDownError, RedisClusterException)
5 | 
6 | _C_EXTENSION_SPEEDUP = False
7 | try:
8 |     from aredis.speedups import crc16, hash_slot
9 | 
10 |     _C_EXTENSION_SPEEDUP = True
11 | except Exception:
12 |     pass
13 | 
14 | LOOP_DEPRECATED = sys.version_info >= (3, 8)
15 | 
16 | 
17 | def b(x):
18 |     return x.encode('latin-1') if not isinstance(x, bytes) else x
19 | 
20 | 
21 | def nativestr(x):
22 |     return x if isinstance(x, str) else x.decode('utf-8', 'replace')
23 | 
24 | 
25 | def iteritems(x):
26 |     return iter(x.items())
27 | 
28 | 
29 | def iterkeys(x):
30 |     return iter(x.keys())
31 | 
32 | 
33 | def itervalues(x):
34 |     return iter(x.values())
35 | 
36 | 
37 | def ban_python_version_lt(min_version):
38 |     min_version = tuple(map(int, min_version.split('.')))
39 | 
40 |     def decorator(func):
41 |         @wraps(func)
42 |         def _inner(*args, **kwargs):
43 |             if sys.version_info[:2] < min_version:
44 |                 raise EnvironmentError(
45 |                     '{} not supported in Python version less than {}'
46 |                     .format(func.__name__, min_version)
47 |                 )
48 |             else:
49 |                 return func(*args, **kwargs)
50 | 
51 |         return _inner
52 | 
53 |     return decorator
54 | 
55 | 
56 | class dummy:
57 |     """
58 |     Instances of this class can be used as an attribute container.
59 |     """
60 | 
61 |     def __init__(self):
62 |         self.token = None
63 | 
64 |     def set(self, value):
65 |         self.token = value
66 | 
67 |     def get(self):
68 |         return self.token
69 | 
70 | 
71 | # ++++++++++ response callbacks ++++++++++++++
72 | def string_keys_to_dict(key_string, callback):
73 |     return dict.fromkeys(key_string.split(), callback)
74 | 
75 | 
76 | def list_keys_to_dict(key_list, callback):
77 |     return dict.fromkeys(key_list, callback)
78 | 
79 | 
80 | def dict_merge(*dicts):
81 |     merged = {}
82 |     for d in dicts:
83 |         merged.update(d)
84 |     return merged
85 | 
86 | 
87 | def bool_ok(response):
88 |     return nativestr(response) == 'OK'
89 | 
90 | 
91 | def list_or_args(keys, args):
92 |     # returns a single list combining keys and args
93 |     try:
94 |         iter(keys)
95 |         # a string or bytes instance can be iterated, but indicates
96 |         # keys wasn't passed as a list
97 |         if isinstance(keys, (str, bytes)):
98 |             keys = [keys]
99 |     except TypeError:
100 |         keys = [keys]
101 |     if args:
102 |         keys.extend(args)
103 |     return keys
104 | 
105 | 
106 | def int_or_none(response):
107 |     if response is None:
108 |         return None
109 |     return int(response)
110 | 
111 | 
112 | def pairs_to_dict(response):
113 |     """Creates a dict given a list of key/value pairs"""
114 |     it = iter(response)
115 |     return dict(zip(it, it))
116 | 
117 | 
118 | # ++++++++++ result callbacks (cluster)++++++++++++++
119 | def merge_result(res):
120 |     """
121 |     Merges all items in `res` into a list.
122 | 
123 |     This command is used when sending a command to multiple nodes
124 |     and the result from each node should be merged into a single list.
125 |     """
126 |     if not isinstance(res, dict):
127 |         raise ValueError('Value should be of dict type')
128 | 
129 |     result = set()
130 | 
131 |     for _, v in res.items():
132 |         for value in v:
133 |             result.add(value)
134 | 
135 |     return list(result)
136 | 
137 | 
138 | def first_key(res):
139 |     """
140 |     Returns the first result for the given command.
141 | 
142 |     If more than 1 result is returned then a `RedisClusterException` is raised.
143 |     """
144 |     if not isinstance(res, dict):
145 |         raise ValueError('Value should be of dict type')
146 | 
147 |     if len(res.keys()) != 1:
148 |         raise RedisClusterException("More than 1 result from command")
149 | 
150 |     return list(res.values())[0]
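# Illustrative usage, not part of the original module; the hypothetical
# ``res`` dicts below map a node name to that node's reply:
#
#     merge_result({'node1': [1, 2], 'node2': [2, 3]})  # -> [1, 2, 3] (order not guaranteed)
#     first_key({'node1': 'OK'})                        # -> 'OK'
#     first_key({'node1': 1, 'node2': 1})               # raises RedisClusterException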
151 | 
152 | 
153 | def blocked_command(self, command):
154 |     """
155 |     Raises a `RedisClusterException` mentioning the command is blocked.
156 |     """
157 |     raise RedisClusterException("Command: {0} is blocked in redis cluster mode".format(command))
158 | 
159 | 
160 | def clusterdown_wrapper(func):
161 |     """
162 |     Wrapper for CLUSTERDOWN error handling.
163 | 
164 |     If the cluster reports it is down it is assumed that:
165 |     - connection_pool was disconnected
166 |     - connection_pool was reset
167 |     - refresh_table_asap was set to True
168 | 
169 |     It will try 3 times to rerun the command and raises ClusterDownError if it continues to fail.
170 |     """
171 | 
172 |     @wraps(func)
173 |     async def inner(*args, **kwargs):
174 |         for _ in range(3):
175 |             try:
176 |                 return await func(*args, **kwargs)
177 |             except ClusterDownError:
178 |                 # Try again with the new cluster setup. All other errors
179 |                 # should be raised.
180 |                 pass
181 | 
182 |         # If it fails 3 times then raise exception back to caller
183 |         raise ClusterDownError("CLUSTERDOWN error. Unable to rebuild the cluster")
184 | 
185 |     return inner
186 | 
187 | 
188 | if not _C_EXTENSION_SPEEDUP:
189 |     x_mode_m_crc16_lookup = [
190 |         0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50a5, 0x60c6, 0x70e7,
191 |         0x8108, 0x9129, 0xa14a, 0xb16b, 0xc18c, 0xd1ad, 0xe1ce, 0xf1ef,
192 |         0x1231, 0x0210, 0x3273, 0x2252, 0x52b5, 0x4294, 0x72f7, 0x62d6,
193 |         0x9339, 0x8318, 0xb37b, 0xa35a, 0xd3bd, 0xc39c, 0xf3ff, 0xe3de,
194 |         0x2462, 0x3443, 0x0420, 0x1401, 0x64e6, 0x74c7, 0x44a4, 0x5485,
195 |         0xa56a, 0xb54b, 0x8528, 0x9509, 0xe5ee, 0xf5cf, 0xc5ac, 0xd58d,
196 |         0x3653, 0x2672, 0x1611, 0x0630, 0x76d7, 0x66f6, 0x5695, 0x46b4,
197 |         0xb75b, 0xa77a, 0x9719, 0x8738, 0xf7df, 0xe7fe, 0xd79d, 0xc7bc,
198 |         0x48c4, 0x58e5, 0x6886, 0x78a7, 0x0840, 0x1861, 0x2802, 0x3823,
199 |         0xc9cc, 0xd9ed, 0xe98e, 0xf9af, 0x8948, 0x9969, 0xa90a, 0xb92b,
200 |         0x5af5, 0x4ad4, 0x7ab7, 0x6a96, 0x1a71, 0x0a50, 0x3a33, 0x2a12,
201 |         0xdbfd, 0xcbdc, 0xfbbf, 0xeb9e, 0x9b79, 0x8b58, 0xbb3b, 0xab1a,
202 |         0x6ca6, 0x7c87, 0x4ce4, 0x5cc5, 0x2c22, 0x3c03, 0x0c60, 0x1c41,
203 |         0xedae, 0xfd8f, 0xcdec, 0xddcd, 0xad2a, 0xbd0b, 0x8d68, 0x9d49,
204 |         0x7e97, 0x6eb6, 0x5ed5, 0x4ef4, 0x3e13, 0x2e32, 0x1e51, 0x0e70,
205 |         0xff9f, 0xefbe, 0xdfdd, 0xcffc, 0xbf1b, 0xaf3a, 0x9f59, 0x8f78,
206 |         0x9188, 0x81a9, 0xb1ca, 0xa1eb, 0xd10c, 0xc12d, 0xf14e, 0xe16f,
207 |         0x1080, 0x00a1, 0x30c2, 0x20e3, 0x5004, 0x4025, 0x7046, 0x6067,
208 |         0x83b9, 0x9398, 0xa3fb, 0xb3da, 0xc33d, 0xd31c, 0xe37f, 0xf35e,
209 |         0x02b1, 0x1290, 0x22f3, 0x32d2, 0x4235, 0x5214, 0x6277, 0x7256,
210 |         0xb5ea, 0xa5cb, 0x95a8, 0x8589, 0xf56e, 0xe54f, 0xd52c, 0xc50d,
211 |         0x34e2, 0x24c3, 0x14a0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405,
212 |         0xa7db, 0xb7fa, 0x8799, 0x97b8, 0xe75f, 0xf77e, 0xc71d, 0xd73c,
213 |         0x26d3, 0x36f2, 0x0691, 0x16b0, 0x6657, 0x7676, 0x4615, 0x5634,
214 |         0xd94c, 0xc96d, 0xf90e, 0xe92f, 0x99c8, 0x89e9, 0xb98a, 0xa9ab,
215 |         0x5844, 0x4865, 0x7806, 0x6827, 0x18c0, 0x08e1, 0x3882, 0x28a3,
216 |         0xcb7d, 0xdb5c, 0xeb3f, 0xfb1e, 0x8bf9, 0x9bd8, 0xabbb, 0xbb9a,
217 |         0x4a75, 0x5a54, 0x6a37, 0x7a16, 0x0af1, 0x1ad0, 0x2ab3, 0x3a92,
218 |         0xfd2e, 0xed0f, 0xdd6c, 0xcd4d, 0xbdaa, 0xad8b, 0x9de8, 0x8dc9,
219
| 0x7c26, 0x6c07, 0x5c64, 0x4c45, 0x3ca2, 0x2c83, 0x1ce0, 0x0cc1, 220 | 0xef1f, 0xff3e, 0xcf5d, 0xdf7c, 0xaf9b, 0xbfba, 0x8fd9, 0x9ff8, 221 | 0x6e17, 0x7e36, 0x4e55, 0x5e74, 0x2e93, 0x3eb2, 0x0ed1, 0x1ef0 222 | ] 223 | 224 | 225 | def _crc16(data): 226 | crc = 0 227 | for byte in data: 228 | crc = ((crc << 8) & 0xff00) ^ x_mode_m_crc16_lookup[((crc >> 8) & 0xff) ^ byte] 229 | return crc & 0xffff 230 | 231 | 232 | crc16 = _crc16 233 | 234 | 235 | def _hash_slot(key): 236 | start = key.find(b"{") 237 | if start > -1: 238 | end = key.find(b"}", start + 1) 239 | if end > -1 and end != start + 1: 240 | key = key[start + 1:end] 241 | return crc16(key) % 16384 242 | 243 | 244 | hash_slot = _hash_slot 245 | 246 | 247 | class NodeFlag: 248 | BLOCKED = 'blocked' 249 | ALL_NODES = 'all-nodes' 250 | ALL_MASTERS = 'all-masters' 251 | RANDOM = 'random' 252 | SLOT_ID = 'slot-id' 253 | -------------------------------------------------------------------------------- /tests/client/test_cache.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | # -*- coding: utf-8 -*- 3 | 4 | import asyncio 5 | import pytest 6 | import time 7 | 8 | from aredis.cache import Cache, HerdCache 9 | 10 | 11 | class TestCache: 12 | 13 | app = 'test_cache' 14 | key = 'test_key' 15 | data = {str(i): i for i in range(3)} 16 | 17 | def expensive_work(self, data): 18 | return data 19 | 20 | @pytest.mark.asyncio(forbid_global_loop=True) 21 | async def test_set(self, r): 22 | await r.flushdb() 23 | cache = Cache(r, self.app) 24 | res = await cache.set(self.key, 25 | self.expensive_work(self.data), 26 | self.data) 27 | assert res 28 | identity = cache._gen_identity(self.key, self.data) 29 | content = await r.get(identity) 30 | content = cache._unpack(content) 31 | assert content == self.data 32 | 33 | @pytest.mark.asyncio(forbid_global_loop=True) 34 | async def test_set_timeout(self, r, event_loop): 35 | await r.flushdb() 36 | cache = Cache(r, self.app) 37 | res = await cache.set(self.key, 38 | self.expensive_work(self.data), 39 | self.data, expire_time=1) 40 | assert res 41 | identity = cache._gen_identity(self.key, self.data) 42 | content = await r.get(identity) 43 | content = cache._unpack(content) 44 | assert content == self.data 45 | await asyncio.sleep(1, loop=event_loop) 46 | content = await r.get(identity) 47 | assert content is None 48 | 49 | @pytest.mark.asyncio(forbid_global_loop=True) 50 | async def test_set_with_plain_key(self, r): 51 | await r.flushdb() 52 | cache = Cache(r, self.app, identity_generator_class=None) 53 | res = await cache.set(self.key, 54 | self.expensive_work(self.data), 55 | self.data, expire_time=1) 56 | assert res 57 | identity = cache._gen_identity(self.key, self.data) 58 | assert identity == self.key 59 | content = await r.get(identity) 60 | content = cache._unpack(content) 61 | assert content == self.data 62 | 63 | @pytest.mark.asyncio(forbid_global_loop=True) 64 | async def test_get(self, r): 65 | await r.flushdb() 66 | cache = Cache(r, self.app) 67 | res = await cache.set(self.key, 68 | self.expensive_work(self.data), 69 | self.data, expire_time=1) 70 | assert res 71 | content = await cache.get(self.key, self.data) 72 | assert content == self.data 73 | 74 | @pytest.mark.asyncio(forbid_global_loop=True) 75 | async def test_set_many(self, r): 76 | await r.flushdb() 77 | cache = Cache(r, self.app) 78 | res = await cache.set_many(self.expensive_work(self.data), 79 | self.data) 80 | assert res 81 | for key, value in self.data.items(): 82 | assert 
await cache.get(key, self.data) == value
83 | 
84 |     @pytest.mark.asyncio(forbid_global_loop=True)
85 |     async def test_delete(self, r):
86 |         await r.flushdb()
87 |         cache = Cache(r, self.app)
88 |         res = await cache.set(self.key,
89 |                               self.expensive_work(self.data),
90 |                               self.data, expire_time=1)
91 |         assert res
92 |         content = await cache.get(self.key, self.data)
93 |         assert content == self.data
94 |         res = await cache.delete(self.key, self.data)
95 |         assert res
96 |         content = await cache.get(self.key, self.data)
97 |         assert content is None
98 | 
99 |     @pytest.mark.asyncio(forbid_global_loop=True)
100 |     async def test_delete_pattern(self, r):
101 |         await r.flushdb()
102 |         cache = Cache(r, self.app)
103 |         await cache.set_many(self.expensive_work(self.data),
104 |                              self.data)
105 |         res = await cache.delete_pattern('test_*', 10)
106 |         assert res == 3
107 |         content = await cache.get(self.key, self.data)
108 |         assert content is None
109 | 
110 |     @pytest.mark.asyncio(forbid_global_loop=True)
111 |     async def test_ttl(self, r, event_loop):
112 |         await r.flushdb()
113 |         cache = Cache(r, self.app)
114 |         await cache.set(self.key, self.expensive_work(self.data),
115 |                         self.data, expire_time=1)
116 |         ttl = await cache.ttl(self.key, self.data)
117 |         assert ttl > 0
118 |         await asyncio.sleep(1.1, loop=event_loop)
119 |         ttl = await cache.ttl(self.key, self.data)
120 |         assert ttl < 0
121 | 
122 |     @pytest.mark.asyncio(forbid_global_loop=True)
123 |     async def test_exists(self, r, event_loop):
124 |         await r.flushdb()
125 |         cache = Cache(r, self.app)
126 |         await cache.set(self.key, self.expensive_work(self.data),
127 |                         self.data, expire_time=1)
128 |         exists = await cache.exist(self.key, self.data)
129 |         assert exists is True
130 |         await asyncio.sleep(1.1, loop=event_loop)
131 |         exists = await cache.exist(self.key, self.data)
132 |         assert exists is False
133 | 
134 | 
135 | class TestHerdCache:
136 | 
137 |     app = 'test_cache'
138 |     key = 'test_key'
139 |     data = {str(i): i for i in range(3)}
140 | 
141 |     def expensive_work(self, data):
142 |         return data
143 | 
144 |     @pytest.mark.asyncio(forbid_global_loop=True)
145 |     async def test_set(self, r):
146 |         await r.flushdb()
147 |         cache = HerdCache(r, self.app, default_herd_timeout=1,
148 |                           extend_herd_timeout=1)
149 |         now = int(time.time())
150 |         res = await cache.set(self.key,
151 |                               self.expensive_work(self.data),
152 |                               self.data)
153 |         assert res
154 |         identity = cache._gen_identity(self.key, self.data)
155 |         content = await r.get(identity)
156 |         content, expect_expire_time = cache._unpack(content)
157 |         # supposed to equal 1, but there may be some latency
158 |         assert expect_expire_time - now <= 1
159 |         assert content == self.data
160 | 
161 |     @pytest.mark.asyncio(forbid_global_loop=True)
162 |     async def test_get(self, r):
163 |         await r.flushdb()
164 |         cache = HerdCache(r, self.app, default_herd_timeout=1,
165 |                           extend_herd_timeout=1)
166 |         res = await cache.set(self.key,
167 |                               self.expensive_work(self.data),
168 |                               self.data)
169 |         assert res
170 |         content = await cache.get(self.key, self.data)
171 |         assert content == self.data
172 | 
173 |     @pytest.mark.asyncio(forbid_global_loop=True)
174 |     async def test_set_many(self, r):
175 |         await r.flushdb()
176 |         cache = HerdCache(r, self.app, default_herd_timeout=1,
177 |                           extend_herd_timeout=1)
178 |         res = await cache.set_many(self.expensive_work(self.data),
179 |                                    self.data)
180 |         assert res
181 |         for key, value in self.data.items():
182 |             assert await cache.get(key, self.data) == value
183 | 
184 |     @pytest.mark.asyncio(forbid_global_loop=True)
185 |     async def test_herd(self, r, event_loop):
186 |         await r.flushdb()
187 |         now = int(time.time())
188 |         cache = HerdCache(r, self.app, default_herd_timeout=1,
189 |                           extend_herd_timeout=1)
190 |         await cache.set(self.key,
191 |                         self.expensive_work(self.data),
192 |                         self.data)
193 |         await asyncio.sleep(1, loop=event_loop)
194 |         # first get
195 |         identity = cache._gen_identity(self.key, self.data)
196 |         content = await r.get(identity)
197 |         content, expect_expire_time = cache._unpack(content)
198 |         assert now + 1 == expect_expire_time
199 |         # HerdCache.get
200 |         await asyncio.sleep(0.1, loop=event_loop)
201 |         res = await cache.get(self.key, self.data)
202 |         # first herd get will reset the expire time and return None
203 |         assert res is None
204 |         # second get
205 |         identity = cache._gen_identity(self.key, self.data)
206 |         content = await r.get(identity)
207 |         content, new_expire_time = cache._unpack(content)
208 |         assert new_expire_time >= expect_expire_time + 1
209 | 
--------------------------------------------------------------------------------
/docs/source/pubsub.rst:
--------------------------------------------------------------------------------
1 | Publish / Subscribe
2 | ===================
3 | 
4 | aredis includes a `PubSub` object that subscribes to channels and listens
5 | for new messages. Creating a `PubSub` object is easy.
6 | 
7 | .. code-block:: python
8 | 
9 |     r = aredis.StrictRedis(...)
10 |     p = r.pubsub()
11 | 
12 | Once a `PubSub` instance is created, channels and patterns can be subscribed
13 | to.
14 | 
15 | .. code-block:: python
16 | 
17 |     await p.subscribe('my-first-channel', 'my-second-channel', ...)
18 |     await p.psubscribe('my-*', ...)
19 | 
20 | The `PubSub` instance is now subscribed to those channels/patterns. The
21 | subscription confirmations can be seen by reading messages from the `PubSub`
22 | instance.
23 | 
24 | .. code-block:: python
25 | 
26 |     await p.get_message()
27 |     # {'pattern': None, 'type': 'subscribe', 'channel': 'my-second-channel', 'data': 1}
28 |     await p.get_message()
29 |     # {'pattern': None, 'type': 'subscribe', 'channel': 'my-first-channel', 'data': 2}
30 |     await p.get_message()
31 |     # {'pattern': None, 'type': 'psubscribe', 'channel': 'my-*', 'data': 3}
32 | 
33 | Every message read from a `PubSub` instance will be a dictionary with the
34 | following keys.
35 | 
36 | * **type**: One of the following: 'subscribe', 'unsubscribe', 'psubscribe',
37 |   'punsubscribe', 'message', 'pmessage'
38 | * **channel**: The channel [un]subscribed to or the channel a message was
39 |   published to
40 | * **pattern**: The pattern that matched a published message's channel. Will be
41 |   `None` in all cases except for 'pmessage' types.
42 | * **data**: The message data. With [un]subscribe messages, this value will be
43 |   the number of channels and patterns the connection is currently subscribed
44 |   to. With [p]message messages, this value will be the actual published
45 |   message.
46 | 
47 | Let's send a message now.
48 | 
49 | .. code-block:: python
50 | 
51 |     # the publish method returns the number of matching channel and pattern
52 |     # subscriptions. 'my-first-channel' matches both the 'my-first-channel'
53 |     # subscription and the 'my-*' pattern subscription, so this message will
54 |     # be delivered to 2 channels/patterns
55 |     await r.publish('my-first-channel', 'some data')
56 |     # 2
57 |     await p.get_message()
58 |     # {'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'}
59 |     await p.get_message()
60 |     # {'channel': 'my-first-channel', 'data': 'some data', 'pattern': 'my-*', 'type': 'pmessage'}
61 | 
62 | Unsubscribing works just like subscribing. If no arguments are passed to
63 | [p]unsubscribe, all channels or patterns will be unsubscribed from.
64 | 
65 | .. code-block:: python
66 | 
67 |     await p.unsubscribe()
68 |     await p.punsubscribe('my-*')
69 |     await p.get_message()
70 |     # {'channel': 'my-second-channel', 'data': 2, 'pattern': None, 'type': 'unsubscribe'}
71 |     await p.get_message()
72 |     # {'channel': 'my-first-channel', 'data': 1, 'pattern': None, 'type': 'unsubscribe'}
73 |     await p.get_message()
74 |     # {'channel': 'my-*', 'data': 0, 'pattern': None, 'type': 'punsubscribe'}
75 | 
76 | aredis also allows you to register callback functions to handle published
77 | messages. Message handlers take a single argument, the message, which is a
78 | dictionary just like the examples above. To subscribe to a channel or pattern
79 | with a message handler, pass the channel or pattern name as a keyword argument
80 | with its value being the callback function.
81 | 
82 | When a message is read on a channel or pattern with a message handler, the
83 | message dictionary is created and passed to the message handler. In this case,
84 | a `None` value is returned from get_message() since the message was already
85 | handled.
86 | 
87 | .. code-block:: python
88 | 
89 |     def my_handler(message):
90 |         print('MY HANDLER: ', message['data'])
91 |     await p.subscribe(**{'my-channel': my_handler})
92 |     # read the subscribe confirmation message
93 |     await p.get_message()
94 |     # {'pattern': None, 'type': 'subscribe', 'channel': 'my-channel', 'data': 1}
95 |     await r.publish('my-channel', 'awesome data')
96 |     # 1
97 | 
98 |     # for the message handler to work, we need to tell the instance to read data.
99 |     # this can be done in several ways (read more below). we'll just use
100 |     # the familiar get_message() function for now
101 |     message = await p.get_message()
102 |     # 'MY HANDLER: awesome data'
103 | 
104 |     # note here that the my_handler callback printed the string above.
105 |     # `message` is None because the message was handled by our handler.
106 |     print(message)
107 |     # None
108 | 
109 | If your application is not interested in the (sometimes noisy)
110 | subscribe/unsubscribe confirmation messages, you can ignore them by passing
111 | `ignore_subscribe_messages=True` to `r.pubsub()`. This will cause all
112 | subscribe/unsubscribe messages to be read, but they won't bubble up to your
113 | application.
114 | 
115 | .. code-block:: python
116 | 
117 |     p = r.pubsub(ignore_subscribe_messages=True)
118 |     await p.subscribe('my-channel')
119 |     await p.get_message()  # hides the subscribe message and returns None
120 |     await r.publish('my-channel', 'my data')
121 |     # 1
122 |     await p.get_message()
123 |     # {'channel': 'my-channel', 'data': 'my data', 'pattern': None, 'type': 'message'}
124 | 
125 | There are three different strategies for reading messages.
126 | 
127 | The examples above have been using `pubsub.get_message()`.
128 | If there's data available to be read, `get_message()` will
129 | read it, format the message and return it or pass it to a message handler. If
130 | there's no data to be read, `get_message()` will return None after the configured `timeout`
131 | (`timeout` should be set to a value larger than 0 or it will be ignored).
132 | This makes it trivial to integrate into an existing event loop inside your application.
133 | 
134 | .. code-block:: python
135 | 
136 |     while True:
137 |         message = await p.get_message()
138 |         if message:
139 |             ...  # do something with the message
140 |         await asyncio.sleep(0.001)  # be nice to the system :)
141 | 
142 | Older versions of aredis only read messages with `pubsub.listen()`. listen()
143 | is a generator that blocks until a message is available. If your application
144 | doesn't need to do anything else but receive and act on messages received from
145 | redis, listen() is an easy way to get up and running.
146 | 
147 | .. code-block:: python
148 | 
149 |     for message in await p.listen():
150 |         ...  # do something with the message
151 | 
152 | The third option runs an event loop in a separate thread.
153 | `pubsub.run_in_thread()` creates a new thread and uses the event loop in the main thread.
154 | The thread object is returned to the caller of `run_in_thread()`. The caller can
155 | use the `thread.stop()` method to shut down the event loop and thread. Behind
156 | the scenes, this is simply a wrapper around `get_message()` that runs in a
157 | separate thread and uses `asyncio.run_coroutine_threadsafe()` to run coroutines.
158 | 
159 | Note: since we're running in a separate thread, there's no way to handle
160 | messages that aren't automatically handled with registered message handlers.
161 | Therefore, aredis prevents you from calling `run_in_thread()` if you're
162 | subscribed to patterns or channels that don't have message handlers attached.
163 | 
164 | .. code-block:: python
165 | 
166 |     await p.subscribe(**{'my-channel': my_handler})
167 |     thread = p.run_in_thread(sleep_time=0.001)
168 |     # the event loop is now running in the background processing messages
169 |     # when it's time to shut it down...
170 |     thread.stop()
171 | 
172 | PubSub objects remember what channels and patterns they are subscribed to. In
173 | the event of a disconnection such as a network error or timeout, the
174 | PubSub object will re-subscribe to all prior channels and patterns when
175 | reconnecting. Messages that were published while the client was disconnected
176 | cannot be delivered. When you're finished with a PubSub object, call its
177 | `.close()` method to shut down the connection.
178 | 
179 | .. code-block:: python
180 | 
181 |     p = r.pubsub()
182 |     ...
183 |     p.close()
184 | 
185 | The PUBSUB set of subcommands CHANNELS, NUMSUB and NUMPAT are also
186 | supported:
187 | 
188 | .. code-block:: pycon
189 | 
190 |     await r.pubsub_channels()
191 |     # ['foo', 'bar']
192 |     await r.pubsub_numsub('foo', 'bar')
193 |     # [('foo', 9001), ('bar', 42)]
194 |     await r.pubsub_numsub('baz')
195 |     # [('baz', 0)]
196 |     await r.pubsub_numpat()
197 |     # 1204
198 | 
--------------------------------------------------------------------------------