├── tests ├── __init__.py ├── test_lock_cluster.py ├── test_utils_cluster.py ├── test_monitor_cluster.py ├── test_pubsub_cluster.py ├── test_scripting_cluster.py ├── test_monitor.py ├── test_encoding_cluster.py ├── test_encoding.py ├── test_utils.py ├── test_multiprocessing_cluster.py ├── test_scripting.py ├── test_multiprocessing.py ├── test_lock.py ├── conftest.py ├── test_pipeline.py └── test_pipeline_cluster.py ├── requirements.txt ├── MANIFEST.in ├── dev-requirements.txt ├── setup.cfg ├── docs ├── license.rst ├── development.rst ├── logging.rst ├── disclaimer.rst ├── project-status.rst ├── cluster-setup.rst ├── testing.rst ├── License.txt ├── scripting.rst ├── authors.rst ├── limitations-and-differences.rst ├── release-process.rst ├── readonly-mode.rst ├── benchmarks.rst ├── pubsub.rst ├── client.rst ├── index.rst ├── Makefile ├── upgrading.rst ├── conf.py └── release-notes.rst ├── .gitignore ├── examples ├── from_url_password_protected.py ├── incr-test-writer.py ├── basic_password_protected.py ├── README.md ├── basic.py ├── basic_elasticache_password_protected.py ├── pipeline-incrby.py ├── generate_slot_keys.py └── pipeline-readonly-replicas.py ├── ptp-debug.py ├── LICENSE ├── rediscluster ├── pubsub.py ├── __init__.py ├── exceptions.py ├── crc.py └── utils.py ├── tox.ini ├── .travis.yml ├── setup.py ├── benchmarks └── simple.py ├── README.md ├── CONTRIBUTING.md └── Makefile /tests/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | redis>=3.0.0,<4.0.0 2 | -------------------------------------------------------------------------------- /tests/test_lock_cluster.py: -------------------------------------------------------------------------------- 1 | # All pubsub tests are blocked out as pubsub does not work properly in a clustered
environment 2 | -------------------------------------------------------------------------------- /tests/test_utils_cluster.py: -------------------------------------------------------------------------------- 1 | # All pubsub tests are blocked out as pubsub does not work properly in a clustered environment 2 | -------------------------------------------------------------------------------- /tests/test_monitor_cluster.py: -------------------------------------------------------------------------------- 1 | # All pubsub tests are blocked out as pubsub does not work properly in a clustered environment 2 | -------------------------------------------------------------------------------- /tests/test_pubsub_cluster.py: -------------------------------------------------------------------------------- 1 | # All pubsub tests are blocked out as pubsub does not work properly in a clustered environment 2 | -------------------------------------------------------------------------------- /tests/test_scripting_cluster.py: -------------------------------------------------------------------------------- 1 | # All pubsub tests are blocked out as pubsub does not work properly in a clustered environment 2 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | exclude *.py 2 | include docs/authors.rst 3 | include docs/License.txt 4 | include docs/release-notes.rst 5 | include CHANGES 6 | include setup.py 7 | include README.md 8 | -------------------------------------------------------------------------------- /dev-requirements.txt: -------------------------------------------------------------------------------- 1 | -r requirements.txt 2 | 3 | coverage 4 | pytest 5 | testfixtures 6 | mock 7 | docopt 8 | tox 9 | python-coveralls 10 | ptpdb 11 | ptpython 12 | pysnooper 13 | -------------------------------------------------------------------------------- /setup.cfg:
-------------------------------------------------------------------------------- 1 | [bdist_wheel] 2 | universal=1 3 | 4 | [metadata] 5 | license_file = LICENSE 6 | 7 | [pycodestyle] 8 | show-source = 1 9 | exclude = .venv,.tox,dist,docs,build,*.egg,redis_install 10 | -------------------------------------------------------------------------------- /docs/license.rst: -------------------------------------------------------------------------------- 1 | Licensing 2 | --------- 3 | 4 | Copyright (c) 2013-2021 Johan Andersson 5 | 6 | MIT (See docs/License.txt file) 7 | 8 | The license should be the same as redis-py (https://github.com/andymccurdy/redis-py) 9 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Ignore all python compiled files 2 | *.pyc 3 | *.swp 4 | env27* 5 | .tox 6 | .coverage* 7 | dump.rdb 8 | redis-git 9 | htmlcov 10 | dist 11 | build 12 | *.egg-info 13 | .cache 14 | docs/_build 15 | docs/_build_html 16 | .idea 17 | -------------------------------------------------------------------------------- /examples/from_url_password_protected.py: -------------------------------------------------------------------------------- 1 | from rediscluster import RedisCluster 2 | 3 | url="redis://:R1NFTBWTE1@10.127.91.90:6572/0" 4 | 5 | rc = RedisCluster.from_url(url, skip_full_coverage_check=True) 6 | 7 | rc.set("foo", "bar") 8 | 9 | print(rc.get("foo")) 10 | -------------------------------------------------------------------------------- /examples/incr-test-writer.py: -------------------------------------------------------------------------------- 1 | from redis._compat import xrange 2 | from rediscluster import RedisCluster 3 | 4 | startup_nodes = [{"host": "127.0.0.1", "port": 7000}] 5 | r = RedisCluster(startup_nodes=startup_nodes, max_connections=32, decode_responses=True) 6 | 7 | for i in xrange(1000000): 8 | d = str(i) 9 | r.set(d, 
d) 10 | r.incrby(d, 1) 11 | -------------------------------------------------------------------------------- /ptp-debug.py: -------------------------------------------------------------------------------- 1 | from rediscluster import RedisCluster 2 | 3 | startup_nodes = [{"host": "127.0.0.1", "port": "7000"}] 4 | 5 | # Note: decode_responses must be set to True when used with python3 6 | rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True) 7 | url_client = RedisCluster.from_url('redis://127.0.0.1:7000') 8 | 9 | __import__('ptpdb').set_trace() 10 | -------------------------------------------------------------------------------- /examples/basic_password_protected.py: -------------------------------------------------------------------------------- 1 | from rediscluster import RedisCluster 2 | 3 | startup_nodes = [{"host": "127.0.0.1", "port": "7100"}] 4 | 5 | # Note: decode_responses must be set to True when used with python3 6 | rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True, password='password_is_protected') 7 | 8 | rc.set("foo", "bar") 9 | 10 | print(rc.get("foo")) 11 | -------------------------------------------------------------------------------- /examples/README.md: -------------------------------------------------------------------------------- 1 | In this folder you will find example scripts that both demonstrate how to use certain functionality and also function as test scripts to ensure the client works as expected while that functionality is used. 2 | 3 | To really ensure the scripts work, they should be run during a resharding operation to verify that all redirection and fault handling code works without throwing any unexpected errors.
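The pattern these example scripts follow can be sketched as a small write-and-verify loop. Everything below is illustrative only: `check_roundtrip` and `FakeClient` are hypothetical names, and in the real scripts the client would be a `RedisCluster` instance rather than the offline stand-in used here:

```python
def check_roundtrip(client, n=1000, prefix="resharding-check"):
    """Write n keys and immediately read each one back.

    Run against a live cluster during a resharding, the client is
    expected to follow MOVED/ASK redirections transparently, so any
    exception or value mismatch here points at a fault-handling bug.
    """
    for i in range(n):
        key = "{0}-{1}".format(prefix, i)
        client.set(key, str(i))
        assert client.get(key) == str(i), key
    return n


class FakeClient(dict):
    """Offline stand-in exposing the same set/get surface as the real client."""

    def set(self, key, value):
        self[key] = value

    def get(self, key):
        return dict.get(self, key)


print(check_roundtrip(FakeClient(), n=10))  # → 10
```

Against a real cluster you would pass a `RedisCluster(startup_nodes=...)` instance as `client` and keep the loop running while a reshard is in progress.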
4 | -------------------------------------------------------------------------------- /examples/basic.py: -------------------------------------------------------------------------------- 1 | from rediscluster import RedisCluster 2 | 3 | startup_nodes = [{"host": "127.0.0.1", "port": "7000"}] 4 | 5 | # Note: decode_responses must be set to True when used with python3 6 | rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True) 7 | rc.set("foo", "bar") 8 | print(rc.get("foo")) 9 | 10 | # Alternate simple mode of pointing to one startup node 11 | rc = RedisCluster( 12 | host="127.0.0.1", 13 | port=7000, 14 | decode_responses=True, 15 | ) 16 | rc.set("foo", "bar") 17 | print(rc.get("foo")) 18 | -------------------------------------------------------------------------------- /docs/development.rst: -------------------------------------------------------------------------------- 1 | Development 2 | =========== 3 | 4 | 5 | Documentation 6 | ------------- 7 | 8 | To build and test/view documentation you need to install sphinx and addons to be able to run the local dev server to render the documentation. 9 | 10 | Install sphinx plus addons 11 | 12 | .. code-block:: 13 | 14 | pip install sphinx sphinx-autobuild sphinx-rtd-theme 15 | 16 | To start the local development server run from the root folder of this git repo 17 | 18 | .. code-block:: 19 | 20 | sphinx-autobuild docs docs/_build/html 21 | 22 | Open up `localhost:8000` in your web-browser to view the online documentation 23 | -------------------------------------------------------------------------------- /docs/logging.rst: -------------------------------------------------------------------------------- 1 | Setup client logging 2 | ==================== 3 | 4 | To setup logging for debugging inside the client during development you can add this as an example to your own code to enable `DEBUG` logging when using the library. 5 | 6 | .. 
code-block:: python 7 | 8 | import logging 9 | 10 | from rediscluster import RedisCluster 11 | 12 | logging.basicConfig() 13 | logger = logging.getLogger('rediscluster') 14 | logger.setLevel(logging.DEBUG) 15 | logger.propagate = True 16 | 17 | Note that this logging is not recommended to be used inside production as it can cause a performance drain and a slowdown of your client. 18 | -------------------------------------------------------------------------------- /examples/basic_elasticache_password_protected.py: -------------------------------------------------------------------------------- 1 | from rediscluster import RedisCluster 2 | 3 | rc = RedisCluster( 4 | host='clustercfg.cfg-endpoint-name.aq25ta.euw1.cache.amazonaws.com', 5 | port=6379, 6 | password='password_is_protected', 7 | skip_full_coverage_check=True, # Bypass Redis CONFIG call to elasticache 8 | decode_responses=True, # decode_responses must be set to True when used with python3 9 | ssl=True, # in-transit encryption, https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/in-transit-encryption.html 10 | ssl_cert_reqs=None # see https://github.com/andymccurdy/redis-py#ssl-connections 11 | ) 12 | 13 | rc.set("foo", "bar") 14 | 15 | print(rc.get("foo")) 16 | -------------------------------------------------------------------------------- /examples/pipeline-incrby.py: -------------------------------------------------------------------------------- 1 | from redis._compat import xrange 2 | from rediscluster import RedisCluster 3 | 4 | startup_nodes = [{"host": "127.0.0.1", "port": 7000}] 5 | r = RedisCluster(startup_nodes=startup_nodes, max_connections=32, decode_responses=True) 6 | 7 | for i in xrange(1000000): 8 | d = str(i) 9 | pipe = r.pipeline(transaction=False) 10 | pipe.set(d, d) 11 | pipe.incrby(d, 1) 12 | pipe.execute() 13 | 14 | pipe = r.pipeline(transaction=False) 15 | pipe.set("foo-{0}".format(d), d) 16 | pipe.incrby("foo-{0}".format(d), 1) 17 | pipe.set("bar-{0}".format(d), d) 18 | 
pipe.incrby("bar-{0}".format(d), 1) 19 | pipe.set("bazz-{0}".format(d), d) 20 | pipe.incrby("bazz-{0}".format(d), 1) 21 | pipe.execute() 22 | -------------------------------------------------------------------------------- /docs/disclaimer.rst: -------------------------------------------------------------------------------- 1 | Disclaimer 2 | ========== 3 | 4 | Both Redis cluster and redis-py-cluster are considered stable and production ready. 5 | 6 | But this depends on what you are going to use clustering for. In simple use cases with SET/GET and other single-key functions there are no issues. If you require multi-key functionality or pipelines then you must be very careful when developing, because they work slightly differently from the normal redis server. 7 | 8 | If you require advanced features like pubsub or scripting, this lib and redis do not handle those kinds of use cases very well. You either need to develop a custom solution yourself or use a non clustered redis server for that. 9 | 10 | Finally, this lib itself is very stable and I know of at least 2 companies that use it in production with high loads and big cluster sizes.
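The multi-key caveat above comes down to key slots: every key hashes to one of 16384 slots, and multi-key operations only work when all keys land in the same slot, which hash tags (`{...}`) make possible. The sketch below is not this client's implementation (that lives in `rediscluster/crc.py`); it is a self-contained illustration of the slot rule from the redis cluster spec:

```python
def crc16_xmodem(data):
    # Bit-by-bit CRC16 with the XMODEM polynomial 0x1021,
    # the checksum redis cluster uses for key slots.
    crc = 0
    for byte in bytearray(data):
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def keyslot(key):
    # If the key contains a non-empty {...} section, only that
    # substring (the "hash tag") is hashed -- this is how related
    # keys like a{foo} and b{foo} are forced onto the same slot.
    start = key.find('{')
    if start > -1:
        end = key.find('}', start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode('utf-8')) % 16384


# "123456789" is the classic CRC16/XMODEM check value (0x31C3 = 12739)
print(keyslot("123456789"))                    # → 12739
print(keyslot("a{foo}") == keyslot("b{foo}"))  # → True
```

Keeping related keys behind a common hash tag is the standard way to make multi-key commands and atomic operations usable in a cluster.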
11 | -------------------------------------------------------------------------------- /examples/generate_slot_keys.py: -------------------------------------------------------------------------------- 1 | import random 2 | import string 3 | import sys 4 | from rediscluster import RedisCluster 5 | 6 | startup_nodes = [{"host": "127.0.0.1", "port": "7000"}] 7 | 8 | # Note: decode_responses must be set to True when used with python3 9 | rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True) 10 | 11 | # One list of keys per possible key slot 12 | batch_set = {i: [] for i in range(0, 16384)} 13 | 14 | # Generate 100000 random keys and bucket them by slot 15 | for j in range(0, 100000): 16 | rando_string = ''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(5)) 17 | 18 | keyslot = rc.connection_pool.nodes.keyslot(rando_string) 19 | 20 | # batch_set.setdefault(keyslot) 21 | batch_set[keyslot].append(rando_string) 22 | 23 | for i in range(0, 16384): 24 | if len(batch_set[i]) > 0: 25 | print(i, ':', batch_set[i]) 26 | sys.exit(0) 27 | -------------------------------------------------------------------------------- /docs/project-status.rst: -------------------------------------------------------------------------------- 1 | Project status 2 | ============== 3 | 4 | If you have a problem with the code or general questions about this lib, you can ping me inside the gitter channel that you can find here https://gitter.im/Grokzen/redis-py-cluster and I will help you out with problems or usage of this lib. 5 | 6 | As of release `1.0.0` this project is considered stable and usable in production. If you are going to use redis cluster in your project, you should read up on all the documentation that you can find at the bottom of this Readme file. It will contain usage examples and descriptions of what is and what is not implemented. It will also describe how and why things work the way they do in this client.
7 | 8 | On the topic of porting/moving this code into `redis-py` there is currently work over here https://github.com/andymccurdy/redis-py/pull/604 that will bring cluster support based on this code. But my suggestion is that until that work is completed you should use this lib. 9 | -------------------------------------------------------------------------------- /docs/cluster-setup.rst: -------------------------------------------------------------------------------- 1 | Redis cluster setup 2 | =================== 3 | 4 | 5 | 6 | Manually 7 | -------- 8 | 9 | - Redis cluster tutorial: http://redis.io/topics/cluster-tutorial 10 | - Redis cluster specs: http://redis.io/topics/cluster-spec 11 | - This video will describe how to setup and use a redis cluster: http://vimeo.com/63672368 (This video is outdated but could serve as a good tutorial/example) 12 | 13 | 14 | 15 | Docker 16 | ------ 17 | 18 | A fully functional docker image can be found at https://github.com/Grokzen/docker-redis-cluster 19 | 20 | See repo `README` for detailed instructions on how to setup and run. 21 | 22 | 23 | 24 | Vagrant 25 | ------- 26 | 27 | A fully functional vagrant box can be found at https://github.com/72squared/vagrant-redis-cluster 28 | 29 | See repo `README` for detailed instructions on how to setup and run. 30 | 31 | 32 | 33 | Simple makefile 34 | --------------- 35 | 36 | A simple makefile solution can be found at https://github.com/Grokzen/travis-redis-cluster 37 | 38 | See repo `README` for detailed instructions on how to setup. 39 | -------------------------------------------------------------------------------- /docs/testing.rst: -------------------------------------------------------------------------------- 1 | Testing 2 | ======= 3 | 4 | All tests are currently built around a 6 redis server cluster setup (3 masters + 3 slaves). One server must be using port 7000 for redis cluster discovery. 5 | 6 | The easiest way to set up a cluster is to use either Docker or Vagrant.
They are both described in the cluster setup documentation (``docs/cluster-setup.rst``). 7 | 8 | 9 | 10 | Tox - Multi environment testing 11 | ------------------------------- 12 | 13 | All tests can be run in all supported environments with `tox`. 14 | 15 | Tox is the easiest way to run all tests because it will manage all dependencies and run the correct test command for you. 16 | 17 | TravisCI will use tox to run tests on all supported python & hiredis versions. 18 | 19 | Install tox with `pip install tox` 20 | 21 | To run all environments you need all supported python versions installed on your machine (see the supported python versions list), and you also need the python-dev package for all python versions to build hiredis. 22 | 23 | To run a specific python version use either `tox -e py27` or `tox -e py34` 24 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2014-2021 Johan Andersson 2 | 3 | Permission is hereby granted, free of charge, to any person 4 | obtaining a copy of this software and associated documentation 5 | files (the "Software"), to deal in the Software without 6 | restriction, including without limitation the rights to use, 7 | copy, modify, merge, publish, distribute, sublicense, and/or sell 8 | copies of the Software, and to permit persons to whom the 9 | Software is furnished to do so, subject to the following 10 | conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 17 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 19 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 21 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 22 | OTHER DEALINGS IN THE SOFTWARE. 23 | -------------------------------------------------------------------------------- /docs/License.txt: -------------------------------------------------------------------------------- 1 | Copyright (c) 2014-2021 Johan Andersson 2 | 3 | Permission is hereby granted, free of charge, to any person 4 | obtaining a copy of this software and associated documentation 5 | files (the "Software"), to deal in the Software without 6 | restriction, including without limitation the rights to use, 7 | copy, modify, merge, publish, distribute, sublicense, and/or sell 8 | copies of the Software, and to permit persons to whom the 9 | Software is furnished to do so, subject to the following 10 | conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 17 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 19 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 21 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 22 | OTHER DEALINGS IN THE SOFTWARE. -------------------------------------------------------------------------------- /docs/scripting.rst: -------------------------------------------------------------------------------- 1 | # Scripting support 2 | 3 | Scripting support is limited to scripts that operate on keys in the same key slot. 
4 | If a script is executed via `evalsha`, `eval` or by calling the callable returned by 5 | `register_script` and the keys passed as arguments do not map to the same key slot, 6 | a `RedisClusterException` will be thrown. 7 | 8 | It is, however, possible for the script to query a key that is not passed 9 | as an argument to `eval` or `evalsha`. In this scenario it is not possible to detect 10 | the error early, and redis itself will raise an error which will be percolated 11 | to the user. For example: 12 | 13 | ```python 14 | cluster = RedisCluster('localhost', 7000) 15 | script = """ 16 | return redis.call('GET', KEYS[1]) * redis.call('GET', ARGV[1]) 17 | """ 18 | # this will succeed 19 | cluster.eval(script, 1, "A{Foo}", "A{Foo}") 20 | # this will fail as "A{Foo}" and "A{Bar}" are on different key slots. 21 | cluster.eval(script, 1, "A{Foo}", "A{Bar}") 22 | ``` 23 | 24 | ## Unsupported operations 25 | 26 | - The `SCRIPT KILL` command is not yet implemented. 27 | - Scripting in the context of a pipeline is not yet implemented. 28 | -------------------------------------------------------------------------------- /docs/authors.rst: -------------------------------------------------------------------------------- 1 | Project Authors 2 | =============== 3 | 4 | Added in the order they contributed. 5 | 6 | If you are mentioned in this document and want your row changed for any reason, open a new PR with changes.
7 | 8 | Lead author and maintainer: Grokzen - https://github.com/Grokzen 9 | 10 | Authors who contributed code or testing: 11 | 12 | - Dobrite - https://github.com/dobrite 13 | - 72squared - https://github.com/72squared 14 | - Neuron Teckid - https://github.com/neuront 15 | - iandyh - https://github.com/iandyh 16 | - mumumu - https://github.com/mumumu 17 | - awestendorf - https://github.com/awestendorf 18 | - Ali-Akber Saifee - https://github.com/alisaifee 19 | - etng - https://github.com/etng 20 | - gmolight - https://github.com/gmolight 21 | - baranbartu - https://github.com/baranbartu 22 | - monklof - https://github.com/monklof 23 | - dutradda - https://github.com/dutradda 24 | - AngusP - https://github.com/AngusP 25 | - Doug Kent - https://github.com/dkent 26 | - VascoVisser - https://github.com/VascoVisser 27 | - astrohsy - https://github.com/astrohsy 28 | - Artur Stawiarski - https://github.com/astawiarski 29 | - Matthew Anderson - https://github.com/mc3ander 30 | - Appurv Jain - https://github.com/appurvj 31 | -------------------------------------------------------------------------------- /rediscluster/pubsub.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # 3rd party imports 4 | from redis.client import PubSub 5 | 6 | 7 | class ClusterPubSub(PubSub): 8 | """ 9 | Wrapper for PubSub class. 10 | """ 11 | 12 | def __init__(self, *args, **kwargs): 13 | super(ClusterPubSub, self).__init__(*args, **kwargs) 14 | 15 | def execute_command(self, *args, **kwargs): 16 | """ 17 | Execute a publish/subscribe command. 18 | 19 | Code taken from redis-py and tweaked to make it work within a cluster.
20 | """ 21 | # NOTE: don't parse the response in this function -- it could pull a 22 | # legitimate message off the stack if the connection is already 23 | # subscribed to one or more channels 24 | 25 | if self.connection is None: 26 | self.connection = self.connection_pool.get_connection( 27 | 'pubsub', 28 | self.shard_hint, 29 | channel=args[1], 30 | ) 31 | # register a callback that re-subscribes to any channels we 32 | # were listening to when we were disconnected 33 | self.connection.register_connect_callback(self.on_connect) 34 | connection = self.connection 35 | self._execute(connection, connection.send_command, *args) 36 | -------------------------------------------------------------------------------- /rediscluster/__init__.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | import logging 5 | 6 | # rediscluster imports 7 | from rediscluster.client import RedisCluster 8 | from rediscluster.connection import ( 9 | ClusterBlockingConnectionPool, 10 | ClusterConnection, 11 | ClusterConnectionPool, 12 | ) 13 | from rediscluster.exceptions import ( 14 | RedisClusterException, 15 | RedisClusterError, 16 | ClusterDownException, 17 | ClusterError, 18 | ClusterCrossSlotError, 19 | ClusterDownError, 20 | AskError, 21 | TryAgainError, 22 | MovedError, 23 | MasterDownError, 24 | ) 25 | from rediscluster.pipeline import ClusterPipeline 26 | 27 | 28 | def int_or_str(value): 29 | try: 30 | return int(value) 31 | except ValueError: 32 | return value 33 | 34 | 35 | # Major, Minor, Fix version 36 | __version__ = '2.1.3' 37 | VERSION = tuple(map(int_or_str, __version__.split('.'))) 38 | 39 | __all__ = [ 40 | 'AskError', 41 | 'ClusterBlockingConnectionPool', 42 | 'ClusterConnection', 43 | 'ClusterConnectionPool', 44 | 'ClusterCrossSlotError', 45 | 'ClusterDownError', 46 | 'ClusterDownException', 47 | 'ClusterError', 48 | 'ClusterPipeline', 49 | 'MasterDownError', 50 | 'MovedError', 51 | 
'RedisCluster', 52 | 'RedisClusterError', 53 | 'RedisClusterException', 54 | 'TryAgainError', 55 | ] 56 | 57 | # Set default logging handler to avoid "No handler found" warnings. 58 | logging.getLogger(__name__).addHandler(logging.NullHandler()) 59 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | # Tox (http://tox.testrun.org/) is a tool for running tests in 2 | # multiple virtualenvs. This configuration file will run the 3 | # test suite on all supported python versions. To use it, "pip 4 | # install tox" and then run "tox" from this directory. 5 | 6 | [tox] 7 | envlist = py27, py35, py36, py37, py38, hi27, hi35, hi36, hi37, hi38, flake8-py36, flake8-py27 8 | 9 | [testenv] 10 | deps = -r{toxinidir}/dev-requirements.txt 11 | commands = python {envbindir}/coverage run --source rediscluster -p -m py.test 12 | 13 | [testenv:hi27] 14 | basepython = python2.7 15 | deps = 16 | -r{toxinidir}/dev-requirements.txt 17 | hiredis == 0.2.0 18 | 19 | [testenv:hi35] 20 | basepython = python3.5 21 | deps = 22 | -r{toxinidir}/dev-requirements.txt 23 | hiredis == 0.2.0 24 | 25 | [testenv:hi36] 26 | basepython = python3.6 27 | deps = 28 | -r{toxinidir}/dev-requirements.txt 29 | hiredis == 0.2.0 30 | 31 | [testenv:hi37] 32 | basepython = python3.7 33 | deps = 34 | -r{toxinidir}/dev-requirements.txt 35 | hiredis == 0.2.0 36 | 37 | [testenv:hi38] 38 | basepython = python3.8 39 | deps = 40 | -r{toxinidir}/dev-requirements.txt 41 | hiredis == 0.2.0 42 | 43 | [testenv:flake8-py36] 44 | basepython= python3.6 45 | deps = 46 | flake8==3.7.9 47 | commands = flake8 --show-source --exclude=.venv,.tox,dist,docs,build,.git --ignore=E501,E731,E402 . 48 | 49 | [testenv:flake8-py27] 50 | basepython= python2.7 51 | deps = 52 | flake8==3.7.9 53 | commands = flake8 --show-source --exclude=.venv,.tox,dist,docs,build,.git --ignore=E501,E731,E402 .
54 | -------------------------------------------------------------------------------- /docs/limitations-and-differences.rst: -------------------------------------------------------------------------------- 1 | Limitations and differences 2 | =========================== 3 | 4 | This will compare against `redis-py` 5 | 6 | There are a lot of differences that have to be taken into consideration when using redis cluster. 7 | 8 | Any method that can operate on multiple keys has to be reimplemented in the client, and in some cases that is not possible to do. In general, any method that is overridden in RedisCluster has lost the ability to be atomic. 9 | 10 | Pipelines do not work the same way in a cluster. In `Redis` a pipeline batches all commands so that they are executed at the same time when requested. But with RedisCluster pipelines will send each command directly to the server when it is called, while still storing the result internally and returning the same data from .execute(). This is done so that the code still behaves like a pipeline and no code will break. A better solution will be implemented in the future. 11 | 12 | A lot of methods will behave very differently when using RedisCluster. Some methods send the same request to all servers and return the result in another format than `Redis` does. Some methods are blocked because they do not work / are not implemented / are dangerous to use in redis cluster. 13 | 14 | Some of the commands are only partially supported when using RedisCluster. The commands ``zinterstore`` and ``zunionstore`` are only supported if all the keys map to the same key slot in the cluster. This can be achieved by namespacing related keys with a prefix followed by a bracketed common key. Example: 15 | 16 | .. code-block:: python 17 | 18 | r.zunionstore('d{foo}', ['a{foo}', 'b{foo}', 'c{foo}']) 19 | 20 | This corresponds to how redis behaves in cluster mode.
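The pipeline behaviour described above (commands dispatched one by one, results buffered until ``execute()``) can be pictured with a toy model. This is purely illustrative, with made-up names, and is not the client's actual pipeline code:

```python
class ToyClusterPipeline:
    """Toy model of RedisCluster pipeline semantics: each queued command
    is dispatched right away, but results are only returned in bulk."""

    def __init__(self, backend):
        self.backend = backend  # stands in for the cluster nodes
        self.results = []

    def set(self, key, value):
        self.backend[key] = value  # sent immediately, unlike redis-py
        self.results.append(True)
        return self

    def get(self, key):
        self.results.append(self.backend.get(key))
        return self

    def execute(self):
        # Hand back the buffered results, like a real pipeline would.
        out, self.results = self.results, []
        return out


store = {}
pipe = ToyClusterPipeline(store)
pipe.set("foo", "bar").get("foo")
print(pipe.execute())  # → [True, 'bar']
```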
Eventually these commands will likely be more fully supported by implementing the logic in the client library, at the expense of atomicity and performance. 21 | -------------------------------------------------------------------------------- /docs/release-process.rst: -------------------------------------------------------------------------------- 1 | Release process 2 | =============== 3 | 4 | This section describes the process of how a release of this package is made. 5 | 6 | All steps for the twine tool can be found here https://twine.readthedocs.io/en/latest/ 7 | 8 | 9 | Install helper tools 10 | -------------------- 11 | 12 | We use the standard sdist package build solution to package the source dist and wheel package into the format that pip and pypi understand. 13 | 14 | We then use `twine` as the helper tool to upload and interact with pypi to submit the package to both pypi & testpypi. 15 | 16 | First create a new venv that uses at least python3.7, but it is recommended to always use the latest python version. Published releases will be built with python 3.9.0+ 17 | 18 | Install twine with 19 | 20 | .. code-block:: 21 | 22 | pip install twine 23 | 24 | 25 | Build python package 26 | -------------------- 27 | 28 | First ensure that your `dist/` folder is empty so that you will not attempt to upload a dev version or other packages to the public index. 29 | 30 | Create the source dist and wheel dist by running 31 | 32 | .. code-block:: 33 | 34 | python setup.py sdist bdist_wheel 35 | 36 | The built python packages can be found in `dist/` 37 | 38 | 39 | Submit to testpypi 40 | ------------------ 41 | 42 | It is always good to test out the build first locally so there are no obvious code problems, but also to submit the build to testpypi to verify that the upload works and that you get the version number and `README` section working correctly. 43 | 44 | To upload to `testpypi` run 45 | 46 | ..
code-block:: 47 | 48 | twine upload -r testpypi dist/* 49 | 50 | It will upload everything to https://test.pypi.org/project/redis-py-cluster/ 51 | 52 | 53 | Submit build to public pypi 54 | --------------------------- 55 | 56 | To submit the final package to public official pypi run 57 | 58 | .. code-block:: 59 | 60 | twine upload dist/* 61 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | os: linux 2 | dist: xenial 3 | language: python 4 | cache: pip 5 | python: 6 | - "2.7" 7 | - "3.5" 8 | - "3.6" 9 | - "3.7" 10 | - "3.8" 11 | - "nightly" 12 | services: 13 | - redis 14 | install: 15 | - "if [[ $REDIS_VERSION == '3.0' ]]; then REDIS_VERSION=3.0 make redis-install; fi" 16 | - "if [[ $REDIS_VERSION == '3.2' ]]; then REDIS_VERSION=3.2 make redis-install; fi" 17 | - "if [[ $REDIS_VERSION == '4.0' ]]; then REDIS_VERSION=4.0 make redis-install; fi" 18 | - "if [[ $REDIS_VERSION == '5.0' ]]; then REDIS_VERSION=5.0 make redis-install; fi" 19 | - "if [[ $REDIS_VERSION == '6.0' ]]; then REDIS_VERSION=6.0 make redis-install; fi" 20 | - "if [[ $TEST_PYCODESTYLE == '1' ]]; then pip install pycodestyle; fi" 21 | - pip install -r dev-requirements.txt 22 | - pip install -e . 
23 | - "if [[ $HIREDIS == '1' ]]; then pip install hiredis; fi" 24 | - "pip freeze | grep redis" 25 | - "pip freeze" 26 | env: 27 | # Redis 3.0 & HIREDIS 28 | - HIREDIS=0 REDIS_VERSION=3.0 29 | - HIREDIS=1 REDIS_VERSION=3.0 30 | # Redis 3.2 & HIREDIS 31 | - HIREDIS=0 REDIS_VERSION=3.2 32 | - HIREDIS=1 REDIS_VERSION=3.2 33 | # Redis 4.0 & HIREDIS 34 | - HIREDIS=0 REDIS_VERSION=4.0 35 | - HIREDIS=1 REDIS_VERSION=4.0 36 | # Redis 5.0 & HIREDIS 37 | - HIREDIS=0 REDIS_VERSION=5.0 38 | - HIREDIS=1 REDIS_VERSION=5.0 39 | # Redis 6.0 & HIREDIS 40 | - HIREDIS=0 REDIS_VERSION=6.0 41 | - HIREDIS=1 REDIS_VERSION=6.0 42 | script: 43 | - make start 44 | - coverage erase 45 | - coverage run --source rediscluster -p -m py.test 46 | after_success: 47 | - coverage combine 48 | - coveralls 49 | - "if [[ $TEST_PYCODESTYLE == '1' ]]; then pycodestyle --repeat --show-source --exclude=.venv,.tox,dist,docs,build,*.egg,redis_install .; fi" 50 | jobs: 51 | allow_failures: 52 | - python: "nightly" 53 | - python: 2.7 54 | env: TEST_PYCODESTYLE=1 55 | - python: 3.6 56 | env: TEST_PYCODESTYLE=1 57 | # python 3.7 has to be specified manually in the matrix 58 | # https://github.com/travis-ci/travis-ci/issues/9815 59 | - python: 3.7 60 | dist: xenial 61 | env: TEST_HIREDIS=0 62 | - python: 3.7 63 | dist: xenial 64 | env: TEST_HIREDIS=1 65 | -------------------------------------------------------------------------------- /docs/readonly-mode.rst: -------------------------------------------------------------------------------- 1 | Readonly mode 2 | ============= 3 | 4 | By default, Redis Cluster always returns a MOVED redirection response when accessing a slave node. You can overcome this limitation and scale reads by using READONLY mode: http://redis.io/topics/cluster-spec#scaling-reads-using-slave-nodes 5 | 6 | redis-py-cluster also implements this mode. You can access slave nodes by passing `readonly_mode=True` to the RedisCluster constructor. 7 | 8 | .. 
code-block:: python 9 | 10 | >>> from rediscluster import RedisCluster 11 | >>> startup_nodes = [{"host": "127.0.0.1", "port": "7000"}] 12 | >>> rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True) 13 | >>> rc.set("foo16706", "bar") 14 | >>> rc.set("foo81", "foo") 15 | True 16 | >>> rc_readonly = RedisCluster(startup_nodes=startup_nodes, decode_responses=True, readonly_mode=True) 17 | >>> rc_readonly.get("foo16706") 18 | u'bar' 19 | >>> rc_readonly.get("foo81") 20 | u'foo' 21 | 22 | We can use pipeline via `readonly_mode=True` object. 23 | 24 | .. code-block:: python 25 | 26 | >>> with rc_readonly.pipeline() as readonly_pipe: 27 | ... readonly_pipe.get('foo81') 28 | ... readonly_pipe.get('foo16706') 29 | ... readonly_pipe.execute() 30 | ... 31 | [u'foo', u'bar'] 32 | 33 | But this mode has some downside or limitations. 34 | 35 | - It is possible that you cannot get the latest data from READONLY mode enabled object because Redis implements asynchronous replication. 36 | - **You MUST NOT use SET related operation with READONLY mode enabled object**, otherwise you can possibly get 'Too many Cluster redirections' error because we choose master and its slave nodes randomly. 37 | - You should use get related stuff only. 38 | - Ditto with pipeline, otherwise you can get 'Command # X (XXXX) of pipeline: MOVED' error. 39 | 40 | .. code-block:: python 41 | 42 | >>> rc_readonly = RedisCluster(startup_nodes=startup_nodes, decode_responses=True, readonly_mode=True) 43 | >>> # NO: This works in almost case, but possibly emits Too many Cluster redirections error... 44 | >>> rc_readonly.set('foo', 'bar') 45 | >>> # OK: You should always use get related stuff... 
46 | >>> rc_readonly.get('foo') 47 | -------------------------------------------------------------------------------- /rediscluster/exceptions.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | from redis.exceptions import ( 4 | ResponseError, RedisError, 5 | ) 6 | 7 | 8 | class RedisClusterConfigError(Exception): 9 | """ 10 | """ 11 | pass 12 | 13 | 14 | class RedisClusterException(Exception): 15 | """ 16 | """ 17 | pass 18 | 19 | 20 | class RedisClusterError(Exception): 21 | """ 22 | """ 23 | pass 24 | 25 | 26 | class ClusterDownException(Exception): 27 | """ 28 | """ 29 | pass 30 | 31 | 32 | class ClusterError(RedisError): 33 | """ 34 | """ 35 | pass 36 | 37 | 38 | class ClusterCrossSlotError(ResponseError): 39 | """ 40 | """ 41 | message = "Keys in request don't hash to the same slot" 42 | 43 | 44 | class ClusterDownError(ClusterError, ResponseError): 45 | """ 46 | """ 47 | 48 | def __init__(self, resp): 49 | self.args = (resp, ) 50 | self.message = resp 51 | 52 | 53 | class AskError(ResponseError): 54 | """ 55 | src node: MIGRATING to dst node 56 | get > ASK error 57 | ask dst node > ASKING command 58 | dst node: IMPORTING from src node 59 | asking command only affects next command 60 | any op will be allowed after asking command 61 | """ 62 | 63 | def __init__(self, resp): 64 | """should only redirect to master node""" 65 | self.args = (resp, ) 66 | self.message = resp 67 | slot_id, new_node = resp.split(' ') 68 | host, port = new_node.rsplit(':', 1) 69 | self.slot_id = int(slot_id) 70 | self.node_addr = self.host, self.port = host, int(port) 71 | 72 | 73 | class TryAgainError(ResponseError): 74 | """ 75 | """ 76 | 77 | def __init__(self, *args, **kwargs): 78 | pass 79 | 80 | 81 | class MovedError(AskError): 82 | """ 83 | """ 84 | pass 85 | 86 | 87 | class MasterDownError(ClusterDownError): 88 | """ 89 | """ 90 | pass 91 | 92 | 93 | class SlotNotCoveredError(RedisClusterException): 94 | """ 
95 | This error only happens in the case where the connection pool will try to 96 | fetch what node that is covered by a given slot. 97 | 98 | If this error is raised the client should drop the current node layout and 99 | attempt to reconnect and refresh the node layout again 100 | """ 101 | pass 102 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | import os 5 | 6 | try: 7 | from setuptools import setup 8 | except ImportError: 9 | from distutils.core import setup 10 | 11 | # if you are using vagrant, just delete os.link directly, 12 | # The hard link only saves a little disk space, so you should not care 13 | if os.getenv('USER', '').lower() == 'vagrant': 14 | del os.link 15 | 16 | with open('README.md') as f: 17 | readme = f.read() 18 | with open(os.path.join('docs', 'release-notes.rst')) as f: 19 | history = f.read() 20 | 21 | setup( 22 | name="redis-py-cluster", 23 | version="2.1.3", 24 | description="Library for communicating with Redis Clusters. 
Built on top of redis-py lib", 25 | long_description=readme + '\n\n' + history, 26 | long_description_content_type="text/markdown", 27 | author="Johan Andersson", 28 | author_email="Grokzen@gmail.com", 29 | maintainer='Johan Andersson', 30 | maintainer_email='Grokzen@gmail.com', 31 | packages=["rediscluster"], 32 | url='http://github.com/grokzen/redis-py-cluster', 33 | license='MIT', 34 | install_requires=[ 35 | 'redis>=3.0.0,<4.0.0' 36 | ], 37 | python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4", 38 | extras_require={ 39 | 'hiredis': [ 40 | "hiredis>=0.1.3", 41 | ], 42 | }, 43 | keywords=[ 44 | 'redis', 45 | 'redis cluster', 46 | ], 47 | classifiers=[ 48 | # As from https://pypi.python.org/pypi?%3Aaction=list_classifiers 49 | # 'Development Status :: 1 - Planning', 50 | # 'Development Status :: 2 - Pre-Alpha', 51 | # 'Development Status :: 3 - Alpha', 52 | # 'Development Status :: 4 - Beta', 53 | 'Development Status :: 5 - Production/Stable', 54 | # 'Development Status :: 6 - Mature', 55 | # 'Development Status :: 7 - Inactive', 56 | 'Programming Language :: Python', 57 | 'Programming Language :: Python :: 2', 58 | 'Programming Language :: Python :: 2.7', 59 | 'Programming Language :: Python :: 3', 60 | 'Programming Language :: Python :: 3.5', 61 | 'Programming Language :: Python :: 3.6', 62 | 'Programming Language :: Python :: 3.7', 63 | 'Programming Language :: Python :: 3.8', 64 | 'Environment :: Web Environment', 65 | 'Operating System :: POSIX', 66 | 'License :: OSI Approved :: MIT License', 67 | ] 68 | ) 69 | -------------------------------------------------------------------------------- /docs/benchmarks.rst: -------------------------------------------------------------------------------- 1 | Benchmarks 2 | ========== 3 | 4 | These are a few benchmarks that are designed to test specific parts of the code to demonstrate the performance difference between using this lib and the normal Redis client. 
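The measured operation is essentially a timed chain of `set`/`get` calls. Below is a minimal sketch of that measurement loop; the `FakeClient` class and `timeit_set_get` helper are hypothetical stand-ins (not part of this repo) so the harness can run without a live server. Point the same function at a real `RedisCluster` or `Redis` instance to measure actual throughput.

```python
import time


class FakeClient:
    """In-memory stand-in for a Redis client, so the harness runs without a server."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)


def timeit_set_get(client, num):
    """Run `num` SET/GET pairs against `client` and return operations per second."""
    start = time.perf_counter()
    for i in range(num):
        key = "foo{0}".format(i)
        client.set(key, i)
        client.get(key)
    elapsed = time.perf_counter() - start
    # Each iteration issues two operations (one SET, one GET).
    return (num * 2) / elapsed


if __name__ == "__main__":
    ops = timeit_set_get(FakeClient(), 10000)
    print("{0:.0f} operations per second".format(ops))
```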
5 | 6 | 7 | 8 | Setup benchmarks 9 | ---------------- 10 | 11 | Before running any benchmark you should install this lib in editable mode inside a virtualenv so it can import the `RedisCluster` lib. 12 | 13 | Install with 14 | 15 | .. code-block:: bash 16 | 17 | pip install -e . 18 | 19 | You also need a few redis servers to test against. You must have one cluster with at least one node on port `7001` and you must also have a non-clustered server on port `7007`. 20 | 21 | 22 | 23 | Implemented benchmarks 24 | ---------------------- 25 | 26 | - `simple.py`: This benchmark can be used to measure a simple `set` and `get` operation chain. It also supports running pipelines by adding the flag `--pipeline`. 27 | 28 | 29 | 30 | Run predefined benchmarks 31 | ------------------------- 32 | 33 | These are a set of predefined benchmarks that can be run to measure the performance drop from using this library. 34 | 35 | To run the benchmarks run 36 | 37 | .. code-block:: bash 38 | 39 | make benchmark 40 | 41 | Example output and comparison of different run modes 42 | 43 | .. code-block:: 44 | 45 | -- Running Simple benchmark with Redis lib and non cluster server, 50 concurrent processes and total 50000*2 requests -- 46 | python benchmarks/simple.py --host 127.0.0.1 --timeit --nocluster -c 50 -n 50000 47 | 50.0k SET/GET operations took: 2.45 seconds... 40799.93 operations per second 48 | 49 | -- Running Simple benchmark with RedisCluster lib and cluster server, 50 concurrent processes and total 50000*2 requests -- 50 | python benchmarks/simple.py --host 127.0.0.1 --timeit -c 50 -n 50000 51 | 50.0k SET & GET (each 50%) operations took: 9.51 seconds... 31513.71 operations per second 52 | 53 | -- Running Simple benchmark with pipelines & Redis lib and non cluster server -- 54 | python benchmarks/simple.py --host 127.0.0.1 --timeit --nocluster -c 50 -n 50000 --pipeline 55 | 50.0k SET & GET (each 50%) operations took: 2.1728243827819824 seconds... 
46023.047602201834 operations per second 56 | 57 | -- Running Simple benchmark with RedisCluster lib and cluster server 58 | python benchmarks/simple.py --host 127.0.0.1 --timeit -c 50 -n 50000 --pipeline 59 | 50.0k SET & GET (each 50%) operations took: 1.7181339263916016 seconds... 58202.68051514381 operations per second 60 | -------------------------------------------------------------------------------- /tests/test_monitor.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | from redis._compat import unicode 3 | from .conftest import skip_if_server_version_lt, wait_for_command 4 | 5 | # 3rd party imports 6 | import pytest 7 | 8 | 9 | class TestMonitor(object): 10 | @skip_if_server_version_lt('5.0.0') 11 | @pytest.mark.xfail(reason="Monitor feature not yet implemented") 12 | def test_wait_command_not_found(self, r): 13 | "Make sure the wait_for_command func works when command is not found" 14 | with r.monitor() as m: 15 | response = wait_for_command(r, m, 'nothing') 16 | assert response is None 17 | 18 | @skip_if_server_version_lt('5.0.0') 19 | @pytest.mark.xfail(reason="Monitor feature not yet implemented") 20 | def test_response_values(self, r): 21 | with r.monitor() as m: 22 | r.ping() 23 | response = wait_for_command(r, m, 'PING') 24 | assert isinstance(response['time'], float) 25 | assert response['db'] == 9 26 | assert response['client_type'] in ('tcp', 'unix') 27 | assert isinstance(response['client_address'], unicode) 28 | assert isinstance(response['client_port'], unicode) 29 | assert response['command'] == 'PING' 30 | 31 | @skip_if_server_version_lt('5.0.0') 32 | @pytest.mark.xfail(reason="Monitor feature not yet implemented") 33 | def test_command_with_quoted_key(self, r): 34 | with r.monitor() as m: 35 | r.get('foo"bar') 36 | response = wait_for_command(r, m, 'GET foo"bar') 37 | assert response['command'] == 'GET foo"bar' 38 | 39 | @skip_if_server_version_lt('5.0.0') 40 | 
@pytest.mark.xfail(reason="Monitor feature not yet implemented") 41 | def test_command_with_binary_data(self, r): 42 | with r.monitor() as m: 43 | byte_string = b'foo\x92' 44 | r.get(byte_string) 45 | response = wait_for_command(r, m, 'GET foo\\x92') 46 | assert response['command'] == 'GET foo\\x92' 47 | 48 | @skip_if_server_version_lt('5.0.0') 49 | @pytest.mark.xfail(reason="Monitor feature not yet implemented") 50 | def test_lua_script(self, r): 51 | with r.monitor() as m: 52 | script = 'return redis.call("GET", "foo")' 53 | assert r.eval(script, 0) is None 54 | response = wait_for_command(r, m, 'GET foo') 55 | assert response['command'] == 'GET foo' 56 | assert response['client_type'] == 'lua' 57 | assert response['client_address'] == 'lua' 58 | assert response['client_port'] == '' 59 | -------------------------------------------------------------------------------- /examples/pipeline-readonly-replicas.py: -------------------------------------------------------------------------------- 1 | from rediscluster import RedisCluster 2 | import threading 3 | from time import sleep 4 | 5 | """ 6 | This file will show the difference and how to use the READONLY feature to offload READ specific commands 7 | to replica nodes in your cluster. The script will do two runs with 10 sets of commands each in a threaded environment 8 | both with read_from_replica feature turned off and turned on so you can simulate both cases and test out your code 9 | and ensure that it works before opting in to that feature. 10 | 11 | The absolute best way to show what node is used inside the pipeline is to add a print(node) here 12 | 13 | # pipeline.py 14 | def _send_cluster_command(...): 15 | ... 16 | slot = self._determine_slot(*c.args) 17 | node = self.connection_pool.get_node_by_slot(slot, self.read_from_replicas and c.args[0] in READ_COMMANDS) 18 | print(node) 19 | ... 
20 | 21 | and when you run this test script it will show you what node is used in both cases and the first scenario it should show 22 | only "master" as the node type all commands will be sent to. In the second run with read_from_replica=True it should 23 | be a mix of "master" and "slave". 24 | """ 25 | 26 | 27 | def test_run(read_from_replica): 28 | print(f"########\nStarting test run with read_from_replica={read_from_replica}") 29 | rc = RedisCluster(host="127.0.0.1", port=7000, decode_responses=True, read_from_replicas=read_from_replica) 30 | 31 | print(rc.set("foo1", "bar")) 32 | print(rc.set("foo2", "bar")) 33 | print(rc.set("foo3", "bar")) 34 | print(rc.set("foo4", "bar")) 35 | print(rc.set("foo5", "bar")) 36 | print(rc.set("foo6", "bar")) 37 | print(rc.set("foo7", "bar")) 38 | print(rc.set("foo8", "bar")) 39 | print(rc.set("foo9", "bar")) 40 | 41 | print(rc.get("foo1")) 42 | print(rc.get("foo2")) 43 | print(rc.get("foo3")) 44 | print(rc.get("foo4")) 45 | print(rc.get("foo5")) 46 | print(rc.get("foo6")) 47 | print(rc.get("foo7")) 48 | print(rc.get("foo8")) 49 | print(rc.get("foo9")) 50 | 51 | def thread_func(num): 52 | # sleep(0.1) 53 | pipe = rc.pipeline(read_from_replicas=read_from_replica) 54 | pipe.set(f"foo{num}", "bar") 55 | pipe.get(f"foo{num}") 56 | pipe.get(f"foo{num}") 57 | pipe.get(f"foo{num}") 58 | pipe.get(f"foo{num}") 59 | pipe.get(f"foo{num}") 60 | pipe.get(f"foo{num}") 61 | pipe.get(f"foo{num}") 62 | pipe.get(f"foo{num}") 63 | print(threading.current_thread().getName(), pipe.execute()) 64 | 65 | for i in range(0, 15): 66 | x = threading.Thread(target=thread_func, args=(i,), name=f"{i}") 67 | x.start() 68 | 69 | 70 | test_run(False) 71 | sleep(2) 72 | test_run(True) 73 | -------------------------------------------------------------------------------- /rediscluster/crc.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | import sys 5 | 6 | 
x_mode_m_crc16_lookup = [ 7 | 0x0000, 0x1021, 0x2042, 0x3063, 0x4084, 0x50a5, 0x60c6, 0x70e7, 8 | 0x8108, 0x9129, 0xa14a, 0xb16b, 0xc18c, 0xd1ad, 0xe1ce, 0xf1ef, 9 | 0x1231, 0x0210, 0x3273, 0x2252, 0x52b5, 0x4294, 0x72f7, 0x62d6, 10 | 0x9339, 0x8318, 0xb37b, 0xa35a, 0xd3bd, 0xc39c, 0xf3ff, 0xe3de, 11 | 0x2462, 0x3443, 0x0420, 0x1401, 0x64e6, 0x74c7, 0x44a4, 0x5485, 12 | 0xa56a, 0xb54b, 0x8528, 0x9509, 0xe5ee, 0xf5cf, 0xc5ac, 0xd58d, 13 | 0x3653, 0x2672, 0x1611, 0x0630, 0x76d7, 0x66f6, 0x5695, 0x46b4, 14 | 0xb75b, 0xa77a, 0x9719, 0x8738, 0xf7df, 0xe7fe, 0xd79d, 0xc7bc, 15 | 0x48c4, 0x58e5, 0x6886, 0x78a7, 0x0840, 0x1861, 0x2802, 0x3823, 16 | 0xc9cc, 0xd9ed, 0xe98e, 0xf9af, 0x8948, 0x9969, 0xa90a, 0xb92b, 17 | 0x5af5, 0x4ad4, 0x7ab7, 0x6a96, 0x1a71, 0x0a50, 0x3a33, 0x2a12, 18 | 0xdbfd, 0xcbdc, 0xfbbf, 0xeb9e, 0x9b79, 0x8b58, 0xbb3b, 0xab1a, 19 | 0x6ca6, 0x7c87, 0x4ce4, 0x5cc5, 0x2c22, 0x3c03, 0x0c60, 0x1c41, 20 | 0xedae, 0xfd8f, 0xcdec, 0xddcd, 0xad2a, 0xbd0b, 0x8d68, 0x9d49, 21 | 0x7e97, 0x6eb6, 0x5ed5, 0x4ef4, 0x3e13, 0x2e32, 0x1e51, 0x0e70, 22 | 0xff9f, 0xefbe, 0xdfdd, 0xcffc, 0xbf1b, 0xaf3a, 0x9f59, 0x8f78, 23 | 0x9188, 0x81a9, 0xb1ca, 0xa1eb, 0xd10c, 0xc12d, 0xf14e, 0xe16f, 24 | 0x1080, 0x00a1, 0x30c2, 0x20e3, 0x5004, 0x4025, 0x7046, 0x6067, 25 | 0x83b9, 0x9398, 0xa3fb, 0xb3da, 0xc33d, 0xd31c, 0xe37f, 0xf35e, 26 | 0x02b1, 0x1290, 0x22f3, 0x32d2, 0x4235, 0x5214, 0x6277, 0x7256, 27 | 0xb5ea, 0xa5cb, 0x95a8, 0x8589, 0xf56e, 0xe54f, 0xd52c, 0xc50d, 28 | 0x34e2, 0x24c3, 0x14a0, 0x0481, 0x7466, 0x6447, 0x5424, 0x4405, 29 | 0xa7db, 0xb7fa, 0x8799, 0x97b8, 0xe75f, 0xf77e, 0xc71d, 0xd73c, 30 | 0x26d3, 0x36f2, 0x0691, 0x16b0, 0x6657, 0x7676, 0x4615, 0x5634, 31 | 0xd94c, 0xc96d, 0xf90e, 0xe92f, 0x99c8, 0x89e9, 0xb98a, 0xa9ab, 32 | 0x5844, 0x4865, 0x7806, 0x6827, 0x18c0, 0x08e1, 0x3882, 0x28a3, 33 | 0xcb7d, 0xdb5c, 0xeb3f, 0xfb1e, 0x8bf9, 0x9bd8, 0xabbb, 0xbb9a, 34 | 0x4a75, 0x5a54, 0x6a37, 0x7a16, 0x0af1, 0x1ad0, 0x2ab3, 0x3a92, 35 | 0xfd2e, 0xed0f, 0xdd6c, 0xcd4d, 0xbdaa, 
0xad8b, 0x9de8, 0x8dc9, 36 | 0x7c26, 0x6c07, 0x5c64, 0x4c45, 0x3ca2, 0x2c83, 0x1ce0, 0x0cc1, 37 | 0xef1f, 0xff3e, 0xcf5d, 0xdf7c, 0xaf9b, 0xbfba, 0x8fd9, 0x9ff8, 38 | 0x6e17, 0x7e36, 0x4e55, 0x5e74, 0x2e93, 0x3eb2, 0x0ed1, 0x1ef0 39 | ] 40 | 41 | 42 | def _crc16_py3(data): 43 | """ 44 | """ 45 | crc = 0 46 | for byte in data: 47 | crc = ((crc << 8) & 0xff00) ^ x_mode_m_crc16_lookup[((crc >> 8) & 0xff) ^ byte] 48 | return crc & 0xffff 49 | 50 | 51 | def _crc16_py2(data): 52 | """ 53 | """ 54 | crc = 0 55 | for byte in data: 56 | crc = ((crc << 8) & 0xff00) ^ x_mode_m_crc16_lookup[((crc >> 8) & 0xff) ^ ord(byte)] 57 | return crc & 0xffff 58 | 59 | 60 | if sys.version_info >= (3, 0, 0): 61 | crc16 = _crc16_py3 62 | else: 63 | crc16 = _crc16_py2 64 | -------------------------------------------------------------------------------- /tests/test_encoding_cluster.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | import pytest 3 | 4 | from rediscluster import RedisCluster 5 | 6 | from redis._compat import unichr, unicode 7 | from .conftest import _get_client 8 | 9 | 10 | class TestEncodingCluster(object): 11 | """ 12 | We must import the entire class due to the seperate fixture that uses RedisCluster as client 13 | class instead of the normal Redis instance. 
14 | 15 | FIXME: If possible, monkeypatching TestEncoding class would be preferred but kinda impossible in reality 16 | """ 17 | @pytest.fixture() 18 | def r(self, request): 19 | return _get_client(RedisCluster, request=request, decode_responses=True) 20 | 21 | @pytest.fixture() 22 | def r_no_decode(self, request): 23 | return _get_client( 24 | RedisCluster, 25 | request=request, 26 | decode_responses=False, 27 | ) 28 | 29 | def test_simple_encoding(self, r_no_decode): 30 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 31 | r_no_decode['unicode-string'] = unicode_string.encode('utf-8') 32 | cached_val = r_no_decode['unicode-string'] 33 | assert isinstance(cached_val, bytes) 34 | assert unicode_string == cached_val.decode('utf-8') 35 | 36 | def test_simple_encoding_and_decoding(self, r): 37 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 38 | r['unicode-string'] = unicode_string 39 | cached_val = r['unicode-string'] 40 | assert isinstance(cached_val, unicode) 41 | assert unicode_string == cached_val 42 | 43 | def test_memoryview_encoding(self, r_no_decode): 44 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 45 | unicode_string_view = memoryview(unicode_string.encode('utf-8')) 46 | r_no_decode['unicode-string-memoryview'] = unicode_string_view 47 | cached_val = r_no_decode['unicode-string-memoryview'] 48 | # The cached value won't be a memoryview because it's a copy from Redis 49 | assert isinstance(cached_val, bytes) 50 | assert unicode_string == cached_val.decode('utf-8') 51 | 52 | def test_memoryview_encoding_and_decoding(self, r): 53 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 54 | unicode_string_view = memoryview(unicode_string.encode('utf-8')) 55 | r['unicode-string-memoryview'] = unicode_string_view 56 | cached_val = r['unicode-string-memoryview'] 57 | assert isinstance(cached_val, unicode) 58 | assert unicode_string == cached_val 59 | 60 | def test_list_encoding(self, r): 61 | unicode_string = unichr(3456) + 'abcd' + 
unichr(3421) 62 | result = [unicode_string, unicode_string, unicode_string] 63 | r.rpush('a', *result) 64 | assert r.lrange('a', 0, -1) == result 65 | 66 | 67 | class TestEncodingErrors(object): 68 | def test_ignore(self, request): 69 | r = _get_client(RedisCluster, request=request, decode_responses=True, 70 | encoding_errors='ignore') 71 | r.set('a', b'foo\xff') 72 | assert r.get('a') == 'foo' 73 | 74 | def test_replace(self, request): 75 | r = _get_client(RedisCluster, request=request, decode_responses=True, 76 | encoding_errors='replace') 77 | r.set('a', b'foo\xff') 78 | assert r.get('a') == 'foo\ufffd' 79 | -------------------------------------------------------------------------------- /docs/pubsub.rst: -------------------------------------------------------------------------------- 1 | Pubsub 2 | ====== 3 | 4 | After testing pubsub in cluster mode one big problem was discovered with the `PUBLISH` command. 5 | 6 | According to the current official redis documentation on `PUBLISH`:: 7 | 8 | Integer reply: the number of clients that received the message. 9 | 10 | It was initially assumed that if we had clients connected to different nodes in the cluster it would still report back the correct number of clients that received the message. 11 | 12 | However after some testing of this command it was discovered that it would only report the number of clients that have subscribed on the same server the `PUBLISH` command was executed on. 13 | 14 | Because of this, if there is some functionality that relies on an exact and correct number of clients that listen/subscribed to a specific channel it will be broken or behave wrong. 
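To make the counting caveat concrete, here is a small hypothetical simulation (plain dicts standing in for cluster nodes, no real Redis involved) of why the `PUBLISH` integer reply undercounts in a cluster:

```python
# Hypothetical simulation of the PUBLISH counting problem described above.
# Each "node" only tracks its own locally registered subscribers, so PUBLISH
# executed on one node reports only that node's subscriber count, not the
# cluster-wide total.

node_a_subscribers = {"news": 3}  # 3 clients subscribed via node A
node_b_subscribers = {"news": 2}  # 2 clients subscribed via node B


def publish_on(node_subscribers, channel):
    # Mirrors the server behaviour: the integer reply counts only the
    # subscribers registered on the node the command was executed on.
    return node_subscribers.get(channel, 0)


# Executed on node A this reports 3, even though 5 clients are
# subscribed to "news" across the whole cluster.
print(publish_on(node_a_subscribers, "news"))
```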
15 | 16 | Currently the only known workarounds are to: 17 | 18 | - Ignore the returned value 19 | - Have all clients talk to the same server 20 | - Use a non clustered redis server for pubsub operations 21 | 22 | Discussion on this topic can be found here: https://groups.google.com/forum/?hl=sv#!topic/redis-db/BlwSOYNBUl8 23 | 24 | 25 | 26 | Scalability issues 27 | ------------------ 28 | 29 | The following part is from this discussion https://groups.google.com/forum/?hl=sv#!topic/redis-db/B0_fvfDWLGM and it describes the scalability issue that pubsub has and the performance that goes with it when used in a cluster environment. 30 | 31 | according to [1] and [2] PubSub works by broadcasting every publish to every other 32 | Redis Cluster node. This limits the PubSub throughput to the bisection bandwidth 33 | of the underlying network infrastructure divided by the number of nodes times 34 | message size. So if a typical message has 1KB, the cluster has 10 nodes and 35 | bandwidth is 1 GBit/s, throughput is already limited to 12.5K RPS. If we increase 36 | the message size to 5 KB and the number of nodes to 50, we only get 500 RPS 37 | much less than a single Redis instance could service (>100K RPS), while putting 38 | maximum pressure on the network. PubSub thus scales linearly wrt. to the cluster size, 39 | but in the negative direction! 40 | 41 | 42 | 43 | How pubsub works in RedisCluster 44 | -------------------------------- 45 | 46 | In release `1.2.0` the pubsub code was reworked to work like this. 47 | 48 | For `PUBLISH` and `SUBSCRIBE` commands: 49 | 50 | - The channel name is hashed and the keyslot is determined. 51 | - Determine the node that handles the keyslot. 52 | - Send the command to that node. 53 | 54 | The old solution was that all pubsub connections would talk to the same node all the time. This would ensure that the commands would work. 
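The hashing step above can be sketched with the standard library. This is a rough equivalent of the routing logic, not the library's actual code path: `binascii.crc_hqx` computes the same CRC16 (polynomial 0x1021, initial value 0) that the lookup table in `rediscluster/crc.py` implements.

```python
import binascii

REDIS_CLUSTER_HASH_SLOTS = 16384


def keyslot(key: bytes) -> int:
    """Return the cluster keyslot a key (or channel name) hashes to."""
    # Hash-tag rule: if the key contains a non-empty {...} section, only
    # that section is hashed, so related keys land in the same slot.
    start = key.find(b"{")
    if start > -1:
        end = key.find(b"}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    # crc_hqx with init 0 is the CRC16 (XMODEM/CCITT) used by Redis Cluster.
    return binascii.crc_hqx(key, 0) % REDIS_CLUSTER_HASH_SLOTS


# A channel name routes exactly like a key: compute its slot, then the
# client looks up which node serves that slot and sends PUBLISH/SUBSCRIBE there.
slot = keyslot(b"my-channel")
```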
55 | 56 | This new solution is probably future proof, and a similar solution will probably be used when `redis` fixes the scalability issues. 57 | 58 | 59 | 60 | Known limitations with pubsub 61 | ----------------------------- 62 | 63 | Pattern subscribe and publish do not work properly, because if we hash a pattern like `fo*` we will get a keyslot for that string, but there is an endless number of possible channel names based on that pattern that we can't know in advance. This feature is not blocked, but using these commands is not recommended right now. 64 | 65 | The implemented solution will only work if other clients use/adopt the same behaviour. If some other client behaves differently, there might be problems with `PUBLISH` and `SUBSCRIBE` commands behaving wrong. 66 | 67 | 68 | 69 | Other solutions 70 | --------------- 71 | 72 | The simplest solution is to have a separate non clustered redis instance and use a regular `Redis` client for your pubsub code. It is not recommended to use pubsub until `redis` fixes the implementation in the server itself. 73 | -------------------------------------------------------------------------------- /benchmarks/simple.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # -*- coding:utf-8 -*- 3 | """ 4 | Usage: 5 | redis-cluster-benchmark.py [--host ] [-p ] [-n ] [-c ] [--nocluster] [--timeit] [--pipeline] [--resetlastkey] [-h] [--version] 6 | 7 | Options: 8 | --host Redis server to test against [default: 127.0.0.1] 9 | -p Port on redis server [default: 7000] 10 | -n Request number [default: 100000] 11 | -c Concurrent client number [default: 1] 12 | --nocluster If flag is set then Redis will be used instead of cluster lib 13 | --timeit Run a mini benchmark to test performance 14 | --pipeline Only usable with --timeit flag. Runs SET/GET inside pipelines. 
15 | --resetlastkey Reset __last__ key 16 | -h --help Output this help and exit 17 | --version Output version and exit 18 | """ 19 | 20 | import time 21 | from multiprocessing import Process 22 | # 3rd party imports 23 | from docopt import docopt 24 | 25 | 26 | def loop(rc, reset_last_key=None): 27 | """ 28 | Regular debug loop that can be used to test how redis behaves during changes in the cluster. 29 | """ 30 | if reset_last_key: 31 | rc.set("__last__", 0) 32 | 33 | last = False 34 | while last is False: 35 | try: 36 | last = rc.get("__last__") 37 | last = 0 if not last else int(last) 38 | print("starting at foo{0}".format(last)) 39 | except Exception as e: 40 | print("error {0}".format(e)) 41 | time.sleep(1) 42 | 43 | for i in range(last, 1000000000): # noqa 44 | try: 45 | print("SET foo{0} {1}".format(i, i)) 46 | rc.set("foo{0}".format(i), i) 47 | got = rc.get("foo{0}".format(i)) 48 | print("GET foo{0} {1}".format(i, got)) 49 | rc.set("__last__", i) 50 | except Exception as e: 51 | print("error {0}".format(e)) 52 | 53 | time.sleep(0.05) 54 | 55 | 56 | def timeit(rc, num): 57 | """ 58 | Time how long it take to run a number of set/get:s 59 | """ 60 | for i in range(0, num//2): # noqa 61 | s = "foo{0}".format(i) 62 | rc.set(s, i) 63 | rc.get(s) 64 | 65 | 66 | def timeit_pipeline(rc, num): 67 | """ 68 | Time how long it takes to run a number of set/get:s inside a cluster pipeline 69 | """ 70 | for i in range(0, num//2): # noqa 71 | s = "foo{0}".format(i) 72 | p = rc.pipeline() 73 | p.set(s, i) 74 | p.get(s) 75 | p.execute() 76 | 77 | 78 | if __name__ == "__main__": 79 | args = docopt(__doc__, version="0.3.1") 80 | startup_nodes = [{"host": args['--host'], "port": args['-p']}] 81 | 82 | if not args["--nocluster"]: 83 | from rediscluster import RedisCluster 84 | rc = RedisCluster(startup_nodes=startup_nodes, max_connections=32, socket_timeout=0.1, decode_responses=True) 85 | else: 86 | from redis import Redis 87 | rc = Redis(host=args["--host"], port=args["-p"], 
socket_timeout=0.1, decode_responses=True) 88 | # create specified number processes 89 | processes = [] 90 | single_request = int(args["-n"]) // int(args["-c"]) 91 | for j in range(int(args["-c"])): 92 | if args["--timeit"]: 93 | if args["--pipeline"]: 94 | p = Process(target=timeit_pipeline, args=(rc, single_request)) 95 | else: 96 | p = Process(target=timeit, args=(rc, single_request)) 97 | else: 98 | p = Process(target=loop, args=(rc, args["--resetlastkey"])) 99 | processes.append(p) 100 | t1 = time.time() 101 | for p in processes: 102 | p.start() 103 | for p in processes: 104 | p.join() 105 | t2 = time.time() - t1 106 | print("Tested {0}k SET & GET (each 50%) operations took: {1} seconds... {2} operations per second".format(int(args["-n"]) / 1000, t2, int(args["-n"]) / t2 * 2)) 107 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # redis-py-cluster EOL 2 | 3 | In the upstream package *redis-py* that this library extends, this code base has been ported into the main branch as of version `4.1.0 (Dec 26, 2021)`. That basically ends the need for this package if you are using any version after that release, as cluster support is native there. If you are upgrading your redis-py version you should plan in time to migrate out from this package into their package. The move to the first released version should be seamless, with very few and small changes required. This means that the release `2.1.x` is the very last major release of this package. Small support releases may still happen if needed to sort out some critical issue here, but this is not expected, as the development time spent on this package in the last few years has been very low. This repo will not be put into real github Archive mode, but it should be considered to be in an archive state. 
4 | 5 | I want to give a few big thanks to some of the people that have provided many contributions, work, time and effort into making this project into what it is today. First is one of the main contributors, 72Squared, and his team, who helped build many of the core features, tried out new and untested code, and provided many optimizations. The team over at AWS for putting in the time, effort and skill to port this over to `redis-py`. The team at RedisLabs for all of their support and time in creating a fantastic redis community over the last few years. Antirez for making the reference client that this repo was written and based on, and for making one of my favorite databases in the ecosystem. And last, all the contributions to and use of this repo by the entire community. 6 | 7 | 8 | # redis-py-cluster 9 | 10 | This library provides a client for redis cluster, which was added in redis 3.0. 11 | 12 | This project is a port of `redis-rb-cluster` by antirez, with a lot of added functionality. The original source can be found at https://github.com/antirez/redis-rb-cluster 13 | 14 | [![Build Status](https://travis-ci.org/Grokzen/redis-py-cluster.svg?branch=master)](https://travis-ci.org/Grokzen/redis-py-cluster) [![Coverage Status](https://coveralls.io/repos/Grokzen/redis-py-cluster/badge.png)](https://coveralls.io/r/Grokzen/redis-py-cluster) [![PyPI version](https://badge.fury.io/py/redis-py-cluster.svg)](http://badge.fury.io/py/redis-py-cluster) 15 | 16 | The branch `master` will always contain the latest unstable/development code that has been merged from Pull Requests. Use the latest commit from the master branch at your own risk; there are no guarantees of compatibility or stability for non tagged commits on the master branch. Only tagged releases on the `master` branch are considered stable for use. 17 | 18 | 19 | # Python 2 Compatibility Note 20 | 21 | This library follows the announced change from our upstream package redis-py. 
Due to this, 22 | we will follow the same python 2.7 deprecation timeline as stated there. 23 | 24 | redis-py-cluster 2.1.x will be the last major version release that supports Python 2.7. 25 | The 2.1.x line will continue to get bug fixes and security patches that 26 | support Python 2 until August 1, 2020. redis-py-cluster 3.0.x will be the next major 27 | version and will require Python 3.5+. 28 | 29 | 30 | # Documentation 31 | 32 | All documentation can be found at https://redis-py-cluster.readthedocs.io/en/master 33 | 34 | This Readme contains a reduced version of the full documentation. 35 | 36 | Upgrading instructions between each released version can be found [here](docs/upgrading.rst) 37 | 38 | The changelog for the next release and all older releases can be found [here](docs/release-notes.rst) 39 | 40 | 41 | 42 | ## Installation 43 | 44 | Latest stable release from PyPI 45 | 46 | ``` 47 | $ pip install redis-py-cluster 48 | ``` 49 | 50 | This major version of `redis-py-cluster` supports `redis-py >=3.0.0, <4.0.0`. 51 | 52 | 53 | 54 | ## Usage example 55 | 56 | Small sample script that shows how to get started with RedisCluster. It can also be found in [examples/basic.py](examples/basic.py) 57 | 58 | ```python 59 | >>> from rediscluster import RedisCluster 60 | 61 | >>> # Requires at least one node for cluster discovery. Multiple nodes are recommended.
62 | >>> startup_nodes = [{"host": "127.0.0.1", "port": "7000"}, {"host": "127.0.0.1", "port": "7001"}] 63 | >>> rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True) 64 | 65 | >>> # Or you can use the simpler format of providing one node, the same way as with a Redis() instance 66 | >>> rc = RedisCluster(host="127.0.0.1", port=7000, decode_responses=True) 67 | 68 | >>> rc.set("foo", "bar") 69 | True 70 | >>> print(rc.get("foo")) 71 | 'bar' 72 | ``` 73 | 74 | 75 | 76 | ## License & Authors 77 | 78 | Copyright (c) 2013-2021 Johan Andersson 79 | 80 | MIT (See docs/License.txt file) 81 | 82 | The license should be the same as redis-py (https://github.com/andymccurdy/redis-py) 83 | -------------------------------------------------------------------------------- /tests/test_encoding.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | import pytest 3 | import redis 4 | 5 | from redis._compat import unichr, unicode 6 | from redis.connection import Connection 7 | from .conftest import _get_client 8 | 9 | 10 | class TestEncoding(object): 11 | @pytest.fixture() 12 | def r(self, request): 13 | return _get_client(redis.Redis, request=request, decode_responses=True) 14 | 15 | @pytest.fixture() 16 | def r_no_decode(self, request): 17 | return _get_client( 18 | redis.Redis, 19 | request=request, 20 | decode_responses=False, 21 | ) 22 | 23 | @pytest.mark.skip(reason="Cluster specific override") 24 | def test_simple_encoding(self, r_no_decode): 25 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 26 | r_no_decode['unicode-string'] = unicode_string.encode('utf-8') 27 | cached_val = r_no_decode['unicode-string'] 28 | assert isinstance(cached_val, bytes) 29 | assert unicode_string == cached_val.decode('utf-8') 30 | 31 | @pytest.mark.skip(reason="Cluster specific override") 32 | def test_simple_encoding_and_decoding(self, r): 33 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 34 |
r['unicode-string'] = unicode_string 35 | cached_val = r['unicode-string'] 36 | assert isinstance(cached_val, unicode) 37 | assert unicode_string == cached_val 38 | 39 | @pytest.mark.skip(reason="Cluster specific override") 40 | def test_memoryview_encoding(self, r_no_decode): 41 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 42 | unicode_string_view = memoryview(unicode_string.encode('utf-8')) 43 | r_no_decode['unicode-string-memoryview'] = unicode_string_view 44 | cached_val = r_no_decode['unicode-string-memoryview'] 45 | # The cached value won't be a memoryview because it's a copy from Redis 46 | assert isinstance(cached_val, bytes) 47 | assert unicode_string == cached_val.decode('utf-8') 48 | 49 | @pytest.mark.skip(reason="Cluster specific override") 50 | def test_memoryview_encoding_and_decoding(self, r): 51 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 52 | unicode_string_view = memoryview(unicode_string.encode('utf-8')) 53 | r['unicode-string-memoryview'] = unicode_string_view 54 | cached_val = r['unicode-string-memoryview'] 55 | assert isinstance(cached_val, unicode) 56 | assert unicode_string == cached_val 57 | 58 | @pytest.mark.skip(reason="Cluster specific override") 59 | def test_list_encoding(self, r): 60 | unicode_string = unichr(3456) + 'abcd' + unichr(3421) 61 | result = [unicode_string, unicode_string, unicode_string] 62 | r.rpush('a', *result) 63 | assert r.lrange('a', 0, -1) == result 64 | 65 | 66 | class TestEncodingErrors(object): 67 | @pytest.mark.skip(reason="Cluster specific override") 68 | def test_ignore(self, request): 69 | r = _get_client(redis.Redis, request=request, decode_responses=True, 70 | encoding_errors='ignore') 71 | r.set('a', b'foo\xff') 72 | assert r.get('a') == 'foo' 73 | 74 | @pytest.mark.skip(reason="Cluster specific override") 75 | def test_replace(self, request): 76 | r = _get_client(redis.Redis, request=request, decode_responses=True, 77 | encoding_errors='replace') 78 | r.set('a', b'foo\xff') 79 | 
assert r.get('a') == 'foo\ufffd' 80 | 81 | 82 | class TestMemoryviewsAreNotPacked(object): 83 | def test_memoryviews_are_not_packed(self): 84 | c = Connection() 85 | arg = memoryview(b'some_arg') 86 | arg_list = ['SOME_COMMAND', arg] 87 | cmd = c.pack_command(*arg_list) 88 | assert cmd[1] is arg 89 | cmds = c.pack_commands([arg_list, arg_list]) 90 | assert cmds[1] is arg 91 | assert cmds[3] is arg 92 | 93 | 94 | class TestCommandsAreNotEncoded(object): 95 | @pytest.fixture() 96 | def r(self, request): 97 | return _get_client(redis.Redis, request=request, encoding='utf-16') 98 | 99 | def test_basic_command(self, r): 100 | r.set('hello', 'world') 101 | 102 | 103 | class TestInvalidUserInput(object): 104 | def test_boolean_fails(self, r): 105 | with pytest.raises(redis.DataError): 106 | r.set('a', True) 107 | 108 | def test_none_fails(self, r): 109 | with pytest.raises(redis.DataError): 110 | r.set('a', None) 111 | 112 | def test_user_type_fails(self, r): 113 | class Foo(object): 114 | def __str__(self): 115 | return 'Foo' 116 | 117 | def __unicode__(self): 118 | return 'Foo' 119 | 120 | with pytest.raises(redis.DataError): 121 | r.set('a', Foo()) 122 | -------------------------------------------------------------------------------- /tests/test_utils.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | from __future__ import with_statement 5 | 6 | # rediscluster imports 7 | from rediscluster.exceptions import RedisClusterException 8 | from rediscluster.utils import ( 9 | string_keys_to_dict, 10 | dict_merge, 11 | blocked_command, 12 | merge_result, 13 | first_key, 14 | parse_cluster_slots, 15 | ) 16 | 17 | # 3rd party imports 18 | import pytest 19 | from redis._compat import unicode 20 | 21 | 22 | def test_parse_cluster_slots(): 23 | """ 24 | Example raw output from redis cluster. Output is from a redis 3.2.x node 25 | that includes the id in the response.
The tests below that do not include the id 26 | validate that the code is compatible with redis versions that do not contain 27 | that value in the response from the server. 28 | 29 | 127.0.0.1:10000> cluster slots 30 | 1) 1) (integer) 5461 31 | 2) (integer) 10922 32 | 3) 1) "10.0.0.1" 33 | 2) (integer) 10000 34 | 3) "3588b4cf9fc72d57bb262a024747797ead0cf7ea" 35 | 4) 1) "10.0.0.4" 36 | 2) (integer) 10000 37 | 3) "a72c02c7d85f4ec3145ab2c411eefc0812aa96b0" 38 | 2) 1) (integer) 10923 39 | 2) (integer) 16383 40 | 3) 1) "10.0.0.2" 41 | 2) (integer) 10000 42 | 3) "ffd36d8d7cb10d813f81f9662a835f6beea72677" 43 | 4) 1) "10.0.0.5" 44 | 2) (integer) 10000 45 | 3) "5c15b69186017ddc25ebfac81e74694fc0c1a160" 46 | 3) 1) (integer) 0 47 | 2) (integer) 5460 48 | 3) 1) "10.0.0.3" 49 | 2) (integer) 10000 50 | 3) "069cda388c7c41c62abe892d9e0a2d55fbf5ffd5" 51 | 4) 1) "10.0.0.6" 52 | 2) (integer) 10000 53 | 3) "dc152a08b4cf1f2a0baf775fb86ad0938cb907dc" 54 | """ 55 | mock_response = [ 56 | [0, 5460, ['172.17.0.2', 7000], ['172.17.0.2', 7003]], 57 | [5461, 10922, ['172.17.0.2', 7001], ['172.17.0.2', 7004]], 58 | [10923, 16383, ['172.17.0.2', 7002], ['172.17.0.2', 7005]] 59 | ] 60 | parse_cluster_slots(mock_response) 61 | 62 | extended_mock_response = [ 63 | [0, 5460, ['172.17.0.2', 7000, 'ffd36d8d7cb10d813f81f9662a835f6beea72677'], ['172.17.0.2', 7003, '5c15b69186017ddc25ebfac81e74694fc0c1a160']], 64 | [5461, 10922, ['172.17.0.2', 7001, '069cda388c7c41c62abe892d9e0a2d55fbf5ffd5'], ['172.17.0.2', 7004, 'dc152a08b4cf1f2a0baf775fb86ad0938cb907dc']], 65 | [10923, 16383, ['172.17.0.2', 7002, '3588b4cf9fc72d57bb262a024747797ead0cf7ea'], ['172.17.0.2', 7005, 'a72c02c7d85f4ec3145ab2c411eefc0812aa96b0']] 66 | ] 67 | 68 | parse_cluster_slots(extended_mock_response) 69 | 70 | mock_binary_response = [ 71 | [0, 5460, [b'172.17.0.2', 7000], [b'172.17.0.2', 7003]], 72 | [5461, 10922, [b'172.17.0.2', 7001], [b'172.17.0.2', 7004]], 73 | [10923, 16383, [b'172.17.0.2', 7002], [b'172.17.0.2', 7005]] 74 |
] 75 | parse_cluster_slots(mock_binary_response) 76 | 77 | extended_mock_binary_response = [ 78 | [0, 5460, [b'172.17.0.2', 7000, b'ffd36d8d7cb10d813f81f9662a835f6beea72677'], [b'172.17.0.2', 7003, b'5c15b69186017ddc25ebfac81e74694fc0c1a160']], 79 | [5461, 10922, [b'172.17.0.2', 7001, b'069cda388c7c41c62abe892d9e0a2d55fbf5ffd5'], [b'172.17.0.2', 7004, b'dc152a08b4cf1f2a0baf775fb86ad0938cb907dc']], 80 | [10923, 16383, [b'172.17.0.2', 7002, b'3588b4cf9fc72d57bb262a024747797ead0cf7ea'], [b'172.17.0.2', 7005, b'a72c02c7d85f4ec3145ab2c411eefc0812aa96b0']] 81 | ] 82 | 83 | extended_mock_parsed = { 84 | (0, 5460): {'master': ('172.17.0.2', 7000), 'slaves': [('172.17.0.2', 7003)]}, 85 | (5461, 10922): {'master': ('172.17.0.2', 7001), 86 | 'slaves': [('172.17.0.2', 7004)]}, 87 | (10923, 16383): {'master': ('172.17.0.2', 7002), 88 | 'slaves': [('172.17.0.2', 7005)]} 89 | } 90 | 91 | assert parse_cluster_slots(extended_mock_binary_response) == extended_mock_parsed 92 | 93 | 94 | def test_string_keys_to(): 95 | def mock_true(): 96 | return True 97 | assert string_keys_to_dict(["FOO", "BAR"], mock_true) == {"FOO": mock_true, "BAR": mock_true} 98 | 99 | 100 | def test_dict_merge(): 101 | a = {"a": 1} 102 | b = {"b": 2} 103 | c = {"c": 3} 104 | assert dict_merge(a, b, c) == {"a": 1, "b": 2, "c": 3} 105 | 106 | 107 | def test_dict_merge_value_error(): 108 | with pytest.raises(ValueError): 109 | dict_merge([]) 110 | 111 | 112 | def test_blocked_command(): 113 | with pytest.raises(RedisClusterException) as ex: 114 | blocked_command(None, "SET") 115 | assert unicode(ex.value) == "Command: SET is blocked in redis cluster mode" 116 | 117 | 118 | def test_merge_result(): 119 | assert merge_result("foobar", {"a": [1, 2, 3], "b": [4, 5, 6]}) == [1, 2, 3, 4, 5, 6] 120 | assert merge_result("foobar", {"a": [1, 2, 3], "b": [1, 2, 3]}) == [1, 2, 3] 121 | 122 | 123 | def test_merge_result_value_error(): 124 | with pytest.raises(ValueError): 125 | merge_result("foobar", []) 126 | 127 | 128 | 
def test_first_key(): 129 | assert first_key("foobar", {"foo": 1}) == 1 130 | 131 | with pytest.raises(RedisClusterException) as ex: 132 | first_key("foobar", {"foo": 1, "bar": 2}) 133 | assert unicode(ex.value).startswith("More then 1 result from command: foobar") 134 | 135 | 136 | def test_first_key_value_error(): 137 | with pytest.raises(ValueError): 138 | first_key("foobar", None) 139 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | 2 | # Pull Request 3 | 4 | For bug fixes you should provide some information about how to reproduce the problem so that it can be verified that the new code solves the bug. 5 | 6 | All CI tests must pass (Travis-CI). 7 | 8 | Follow the code quality standards described in this file. 9 | 10 | You are responsible for ensuring the code is mergeable and for fixing any issues that can occur if other code was merged before your code. 11 | 12 | Always ensure docs are up to date based on your changes. If docs are missing and you think they should exist, you are responsible for writing them. 13 | 14 | For all PRs you should do/include the following: 15 | - A line about the change in the `CHANGES` file. Add it in the section `Next release`; create the section if needed. 16 | - If you change something already implemented, for example adding/removing an argument, you should add a line in `docs/Upgrading.md` describing how to migrate existing code from the old code to the new. Add it in the section `Next release`; create the section if needed. 17 | - Add yourself to the `docs/Authors` file (this is optional) 18 | 19 | 20 | 21 | # Code standard 22 | 23 | In general, you should follow the established PEP 8 coding standard (https://www.python.org/dev/peps/pep-0008/), but with the following exceptions/changes. 24 | 25 | - The default max line length (80) should not be followed religiously. Instead try to not exceed ~140 characters.
26 | Use the `flake8` tool to ensure you have good code quality. 27 | - Try to document as much as possible in the method docstring and avoid documentation inside the code. Code should describe itself as much as possible. 28 | - Follow the `KISS` rule and `Make it work first, optimize later` 29 | - When indenting, try to indent with json style. For example: 30 | ``` 31 | # Do not use this style 32 | from foo import (bar, qwe, rty, 33 | foobar, barfoo) 34 | 35 | print("foobar {barfoo} {qwerty}".format(barfoo=foo, 36 | qwerty=bar)) 37 | ``` 38 | 39 | ``` 40 | # Use this style instead 41 | from foo import ( 42 | bar, qwe, rty, 43 | foobar, barfoo, 44 | ) 45 | 46 | print("foobar {barfoo} {qwerty}".format( 47 | barfoo=foo, qwerty=bar)) 48 | ``` 49 | 50 | 51 | 52 | # Documentation 53 | 54 | This project currently uses RST files and sphinx to build the documentation and to allow for it to be hosted on ReadTheDocs. 55 | 56 | To test your documentation changes you must first install sphinx and sphinx-reload to render and view the docs files on your local machine before committing them to this repo. 57 | 58 | Install the dependencies inside a python virtualenv 59 | 60 | ``` 61 | pip install sphinx sphinx-reload 62 | ``` 63 | 64 | To start the local webserver and render the docs folder, run from the root of this project 65 | 66 | ``` 67 | sphinx-reload docs/ 68 | ``` 69 | 70 | It will open the rendered website in your browser automatically. 71 | 72 | At some point in the future the docs format will change from RST to MkDocs. 73 | 74 | 75 | 76 | # Tests 77 | 78 | I (Johan/Grokzen) have been explicitly allowed (by andymccurdy) to use all test code that already exists inside the `redis-py` lib. If possible you should reuse code that exists in there. 79 | 80 | All code should aim to have 100% test coverage. This is just a target and not a requirement. 81 | 82 | All new features must implement tests to show that they work as intended.
83 | 84 | All implemented tests must pass on all supported python versions. The list of supported versions can be found in `README.md`. 85 | 86 | All tests should be assumed to work against the test environment that is implemented when running in `travis-ci`. Currently that means 6 nodes in the cluster, 3 masters, 3 slaves, using ports `7000-7005`, and the node on port `7000` must be accessible on `127.0.0.1` 87 | 88 | 89 | ## Testing strategy and how to implement cluster specific tests 90 | 91 | The test suite combines the old upstream tests from redis-py with the cluster specific and unique tests that are needed to validate cluster functionality. This has been designed to improve the speed at which tests are updated from upstream as new redis-py releases are made, and to make it easier to port them into the cluster variant. 92 | 93 | How do you implement a test for this code? 94 | 95 | The simplest case is a new cluster-only/specific test that has nothing to do with the upstream redis-py package. If the test is related to, or could be classified as belonging to, one of the already existing test files that is mirrored from redis-py, then you should put this new test in the `..._cluster.py` version of the same file. 96 | 97 | If you need to make some kind of cluster unique adjustment to a test mirrored from redis-py upstream, then do the following. In the mirrored file, for example `test_commands.py`, you add the decorator `@skip_for_no_cluster_impl()` to the method you want to modify. Then you copy the entire method and add it to the same class/method structure, but inside the cluster specific version of the test file. In this example you would put it in `test_commands_cluster.py`. Copy the entire test method and keep it as similar as possible to make it easier to update in the future in case there are changes in upstream redis-py tests.
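As a rough sketch of this pattern (the decorator name comes from this repo's test helpers, but the implementation below is a hypothetical stand-in written for illustration, not the actual helper code), a skip marker can be built on `unittest.SkipTest`, which both unittest and pytest understand, so the mirrored test is reported as skipped rather than silently disappearing:

```python
import functools
import unittest


def skip_for_no_cluster_impl():
    """Hypothetical stand-in: mark a mirrored upstream test as overridden
    by (or unsupported in) the cluster test suite."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Raising SkipTest makes the runner report a skip instead of
            # running the upstream body.
            raise unittest.SkipTest(
                "Overridden in the *_cluster.py variant of this test file"
            )
        return wrapper
    return decorator


# In the mirrored file (e.g. test_commands.py) the upstream test is marked:
@skip_for_no_cluster_impl()
def test_get_and_set():
    raise AssertionError("upstream body, never reached when skipped")
```

The cluster-adjusted copy of `test_get_and_set` would then live in the `..._cluster.py` variant of the file, kept as close to the upstream body as possible.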
98 | 99 | In the case where some command or feature is not supported natively, or it is decided that it will not be supported by this library, you should block any tests from the upstream redis-py package that deal with that feature with the same decorator, `@skip_for_no_cluster_impl()`. This will mark them to not be run during tests. This is also a good indicator for users of this library of which features are not supported or do not really have a good cluster implementation. 100 | -------------------------------------------------------------------------------- /docs/client.rst: -------------------------------------------------------------------------------- 1 | RedisCluster client configuration options 2 | ========================================= 3 | 4 | This chapter describes all the configuration options and flags that can be sent into the RedisCluster class instance. 5 | 6 | Each option is described in a separate topic explaining how it works and what it does. Only options that behave differently compared to the options that redis-py already provides, or new options that are cluster specific, are described here. To find out what options redis-py provides, please consult the documentation and/or git repo for that project. 7 | 8 | 9 | 10 | Host port remapping 11 | ------------------- 12 | 13 | This option exists to enable the client to fix a problem where the redis-server internally tracks a different ip:port compared to what your clients would like to connect to. 14 | 15 | A simple example to describe this problem is if you start a redis cluster through docker on your local machine. If we assume that you start the docker image grokzen/redis-cluster, 16 | when the redis cluster is initialized it will track the docker network IP for each node in the cluster. 17 | 18 | For example this could be 172.18.0.2. The problem is that a client running outside docker on your local machine will be told by the redis cluster that each node is reachable on the ip 172.18.0.2.
19 | But in some cases this IP is not available on your host system. To solve this we need a remapping table where we can tell this client: if you get back 172.18.0.2 from your cluster, then you should remap it to localhost instead. 20 | When the client does this it can connect to and reach all nodes in your cluster. 21 | 22 | 23 | Remapping works off a rules list. Each rule is a dictionary of the form shown below 24 | 25 | .. code-block:: 26 | 27 | { 28 | 'from_host': ...,  # String 29 | 'from_port': ...,  # Integer 30 | 'to_host': ...,  # String 31 | 'to_port': ...  # Integer 32 | } 33 | 34 | 35 | Remapping properties: 36 | 37 | - This host_port_remap feature will not work on the startup_nodes, so you still need to put in a valid and reachable set of startup nodes. 38 | - The remapping logic treats the host_port_remap list as a "rules list" and only the first matching remapping entry will be applied 39 | - A remapping rule may contain just a host mapping or just a port mapping, but both sides of the mapping (i.e. from_host and to_host, or from_port and to_port) are required for either 40 | - If both from_host and from_port are specified, then both will be used to decide if a remapping rule applies 41 | 42 | Examples of valid rules: 43 | 44 | .. code-block:: python 45 | 46 | {'from_host': "1.2.3.4", 'from_port': 1000, 'to_host': "2.2.2.2", 'to_port': 2000} 47 | 48 | {'from_host': "1.1.1.1", 'to_host': "127.0.0.1"} 49 | 50 | {'from_port': 1000, 'to_port': 2000} 51 | 52 | 53 | Example scripts: 54 | 55 | ..
code-block:: python 56 | 57 | from rediscluster import RedisCluster 58 | 59 | startup_nodes = [{"host": "127.0.0.1", "port": "7000"}] 60 | 61 | rc = RedisCluster( 62 | startup_nodes=startup_nodes, 63 | decode_responses=True, 64 | host_port_remap=[ 65 | { 66 | 'from_host': '172.18.0.2', 67 | 'from_port': 7000, 68 | 'to_host': 'localhost', 69 | 'to_port': 7000, 70 | }, 71 | { 72 | 'from_host': '172.22.0.1', 73 | 'from_port': 7000, 74 | 'to_host': 'localhost', 75 | 'to_port': 7000, 76 | }, 77 | ] 78 | ) 79 | 80 | ## Debug output to show the client config/setup after the client has been initialized. 81 | ## It should point to localhost:7000 for those nodes. 82 | print(rc.connection_pool.nodes.nodes) 83 | 84 | ## Test that the client can still send and receive data from the nodes after the remap has been done 85 | print(rc.set('foo', 'bar')) 86 | 87 | This feature is also useful in cases such as when one is trying to access an AWS ElastiCache cluster secured by Stunnel (https://www.stunnel.org/) 88 | 89 | ..
code-block:: python 90 | 91 | from rediscluster import RedisCluster 92 | 93 | startup_nodes = [ 94 | {"host": "127.0.0.1", "port": "17000"}, 95 | {"host": "127.0.0.1", "port": "17001"}, 96 | {"host": "127.0.0.1", "port": "17002"}, 97 | {"host": "127.0.0.1", "port": "17003"}, 98 | {"host": "127.0.0.1", "port": "17004"}, 99 | {"host": "127.0.0.1", "port": "17005"} 100 | ] 101 | 102 | host_port_remap=[ 103 | {'from_host': '41.1.3.1', 'from_port': 6379, 'to_host': '127.0.0.1', 'to_port': 17000}, 104 | {'from_host': '41.1.3.5', 'from_port': 6379, 'to_host': '127.0.0.1', 'to_port': 17001}, 105 | {'from_host': '41.1.4.2', 'from_port': 6379, 'to_host': '127.0.0.1', 'to_port': 17002}, 106 | {'from_host': '50.0.1.7', 'from_port': 6379, 'to_host': '127.0.0.1', 'to_port': 17003}, 107 | {'from_host': '50.0.7.3', 'from_port': 6379, 'to_host': '127.0.0.1', 'to_port': 17004}, 108 | {'from_host': '32.0.1.1', 'from_port': 6379, 'to_host': '127.0.0.1', 'to_port': 17005} 109 | ] 110 | 111 | 112 | # Note: decode_responses must be set to True when used with python3 113 | rc = RedisCluster( 114 | startup_nodes=startup_nodes, 115 | host_port_remap=host_port_remap, 116 | decode_responses=True, 117 | ssl=True, 118 | ssl_cert_reqs=None, 119 | # Needed for Elasticache Clusters 120 | skip_full_coverage_check=True) 121 | 122 | 123 | print(rc.connection_pool.nodes.nodes) 124 | print(rc.ping()) 125 | print(rc.set('foo', 'bar')) 126 | print(rc.get('foo')) 127 | -------------------------------------------------------------------------------- /tests/test_multiprocessing_cluster.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import multiprocessing 3 | import contextlib 4 | 5 | import rediscluster 6 | from rediscluster.connection import ClusterConnection, ClusterConnectionPool 7 | from redis.exceptions import ConnectionError 8 | 9 | from .conftest import _get_client 10 | 11 | 12 | @contextlib.contextmanager 13 | def exit_callback(callback, 
*args): 14 | try: 15 | yield 16 | finally: 17 | callback(*args) 18 | 19 | 20 | class TestMultiprocessing(object): 21 | """ 22 | Cluster: tests must use the cluster specific connection class and client class 23 | to make tests valid for a cluster case. 24 | """ 25 | # Test connection sharing between forks. 26 | # See issue #1085 for details. 27 | 28 | # use a multi-connection client as that's the only type that is 29 | # actually fork/process-safe 30 | @pytest.fixture() 31 | def r(self, request): 32 | return _get_client( 33 | rediscluster.RedisCluster, 34 | request=request, 35 | single_connection_client=False) 36 | 37 | def test_close_connection_in_child(self): 38 | """ 39 | A connection owned by a parent and closed by a child doesn't 40 | destroy the file descriptors so a parent can still use it. 41 | """ 42 | conn = ClusterConnection(port=7000) 43 | conn.send_command('ping') 44 | assert conn.read_response() == b'PONG' 45 | 46 | def target(conn): 47 | conn.send_command('ping') 48 | assert conn.read_response() == b'PONG' 49 | conn.disconnect() 50 | 51 | proc = multiprocessing.Process(target=target, args=(conn,)) 52 | proc.start() 53 | proc.join(3) 54 | assert proc.exitcode == 0 55 | 56 | # The connection was created in the parent but disconnected in the 57 | # child. The child called socket.close() but did not call 58 | # socket.shutdown() because it wasn't the "owning" process. 59 | # Therefore the connection still works in the parent. 60 | conn.send_command('ping') 61 | assert conn.read_response() == b'PONG' 62 | 63 | def test_close_connection_in_parent(self): 64 | """ 65 | A connection owned by a parent is unusable by a child if the parent 66 | (the owning process) closes the connection. 67 | """ 68 | conn = ClusterConnection(port=7000) 69 | conn.send_command('ping') 70 | assert conn.read_response() == b'PONG' 71 | 72 | def target(conn, ev): 73 | ev.wait() 74 | # the parent closed the connection.
because it also created the 75 | # connection, the connection is shutdown and the child 76 | # cannot use it. 77 | with pytest.raises(ConnectionError): 78 | conn.send_command('ping') 79 | 80 | ev = multiprocessing.Event() 81 | proc = multiprocessing.Process(target=target, args=(conn, ev)) 82 | proc.start() 83 | 84 | conn.disconnect() 85 | ev.set() 86 | 87 | proc.join(3) 88 | assert proc.exitcode == 0 89 | 90 | @pytest.mark.parametrize('max_connections', [1, 2, None]) 91 | def test_pool(self, max_connections): 92 | """ 93 | A child will create its own connections when using a pool created 94 | by a parent. 95 | """ 96 | pool = ClusterConnectionPool.from_url('redis://localhost:7000', 97 | max_connections=max_connections) 98 | 99 | conn = pool.get_random_connection() 100 | main_conn_pid = conn.pid 101 | with exit_callback(pool.release, conn): 102 | conn.send_command('ping') 103 | assert conn.read_response() == b'PONG' 104 | 105 | def target(pool): 106 | with exit_callback(pool.disconnect): 107 | conn = pool.get_random_connection() 108 | assert conn.pid != main_conn_pid 109 | with exit_callback(pool.release, conn): 110 | assert conn.send_command('ping') is None 111 | assert conn.read_response() == b'PONG' 112 | 113 | proc = multiprocessing.Process(target=target, args=(pool,)) 114 | proc.start() 115 | proc.join(3) 116 | assert proc.exitcode == 0 117 | 118 | # Check that connection is still alive after fork process has exited 119 | # and disconnected the connections in its pool 120 | conn = pool.get_random_connection() 121 | with exit_callback(pool.release, conn): 122 | assert conn.send_command('ping') is None 123 | assert conn.read_response() == b'PONG' 124 | 125 | @pytest.mark.parametrize('max_connections', [1, 2, None]) 126 | def test_close_pool_in_main(self, max_connections): 127 | """ 128 | A child process that uses the same pool as its parent isn't affected 129 | when the parent disconnects all connections within the pool. 
130 | """ 131 | pool = ClusterConnectionPool.from_url('redis://localhost:7000', 132 | max_connections=max_connections) 133 | 134 | conn = pool.get_random_connection() 135 | assert conn.send_command('ping') is None 136 | assert conn.read_response() == b'PONG' 137 | 138 | def target(pool, disconnect_event): 139 | conn = pool.get_random_connection() 140 | with exit_callback(pool.release, conn): 141 | assert conn.send_command('ping') is None 142 | assert conn.read_response() == b'PONG' 143 | disconnect_event.wait() 144 | assert conn.send_command('ping') is None 145 | assert conn.read_response() == b'PONG' 146 | 147 | ev = multiprocessing.Event() 148 | 149 | proc = multiprocessing.Process(target=target, args=(pool, ev)) 150 | proc.start() 151 | 152 | pool.disconnect() 153 | ev.set() 154 | proc.join(3) 155 | assert proc.exitcode == 0 156 | -------------------------------------------------------------------------------- /tests/test_scripting.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | from __future__ import unicode_literals 5 | 6 | # rediscluster imports 7 | from rediscluster.exceptions import RedisClusterException 8 | 9 | # 3rd party imports 10 | from redis import exceptions 11 | import pytest 12 | 13 | 14 | multiply_script = """ 15 | local value = redis.call('GET', KEYS[1]) 16 | value = tonumber(value) 17 | return value * ARGV[1]""" 18 | 19 | msgpack_hello_script = """ 20 | local message = cmsgpack.unpack(ARGV[1]) 21 | local name = message['name'] 22 | return "hello " .. name 23 | """ 24 | msgpack_hello_script_broken = """ 25 | local message = cmsgpack.unpack(ARGV[1]) 26 | local names = message['name'] 27 | return "hello " .. 
name 28 | """ 29 | 30 | 31 | class TestScripting(object): 32 | @pytest.fixture(autouse=True) 33 | def reset_scripts(self, r): 34 | r.script_flush() 35 | 36 | def test_eval(self, r): 37 | r.set('a', 2) 38 | # 2 * 3 == 6 39 | assert r.eval(multiply_script, 1, 'a', 3) == 6 40 | 41 | def test_eval_same_slot(self, r): 42 | r.set('A{foo}', 2) 43 | r.set('B{foo}', 4) 44 | # 2 * 4 == 8 45 | 46 | script = """ 47 | local value = redis.call('GET', KEYS[1]) 48 | local value2 = redis.call('GET', KEYS[2]) 49 | return value * value2 50 | """ 51 | result = r.eval(script, 2, 'A{foo}', 'B{foo}') 52 | assert result == 8 53 | 54 | def test_eval_crossslot(self, r): 55 | """ 56 | This test assumes that {foo} and {bar} will not go to the same 57 | server when used. In 3 masters + 3 slaves config this should pass. 58 | """ 59 | r.set('A{foo}', 2) 60 | r.set('B{bar}', 4) 61 | # 2 * 4 == 8 62 | 63 | script = """ 64 | local value = redis.call('GET', KEYS[1]) 65 | local value2 = redis.call('GET', KEYS[2]) 66 | return value * value2 67 | """ 68 | with pytest.raises(RedisClusterException): 69 | r.eval(script, 2, 'A{foo}', 'B{bar}') 70 | 71 | def test_evalsha(self, r): 72 | r.set('a', 2) 73 | sha = r.script_load(multiply_script) 74 | # 2 * 3 == 6 75 | assert r.evalsha(sha, 1, 'a', 3) == 6 76 | 77 | def test_evalsha_script_not_loaded(self, r): 78 | r.set('a', 2) 79 | sha = r.script_load(multiply_script) 80 | # remove the script from Redis's cache 81 | r.script_flush() 82 | with pytest.raises(exceptions.NoScriptError): 83 | r.evalsha(sha, 1, 'a', 3) 84 | 85 | def test_script_loading(self, r): 86 | # get the sha, then clear the cache 87 | sha = r.script_load(multiply_script) 88 | r.script_flush() 89 | assert r.script_exists(sha) == [False] 90 | r.script_load(multiply_script) 91 | assert r.script_exists(sha) == [True] 92 | 93 | def test_script_object(self, r): 94 | r.set('a', 2) 95 | multiply = r.register_script(multiply_script) 96 | precalculated_sha = multiply.sha 97 | assert precalculated_sha 98 
| assert r.script_exists(multiply.sha) == [False] 99 | # Test second evalsha block (after NoScriptError) 100 | assert multiply(keys=['a'], args=[3]) == 6 101 | # At this point, the script should be loaded 102 | assert r.script_exists(multiply.sha) == [True] 103 | # Test that the precalculated sha matches the one from redis 104 | assert multiply.sha == precalculated_sha 105 | # Test first evalsha block 106 | assert multiply(keys=['a'], args=[3]) == 6 107 | 108 | @pytest.mark.xfail(reason="Script object not supported in cluster") 109 | def test_script_object_in_pipeline(self, r): 110 | multiply = r.register_script(multiply_script) 111 | precalculated_sha = multiply.sha 112 | assert precalculated_sha 113 | pipe = r.pipeline() 114 | pipe.set('a', 2) 115 | pipe.get('a') 116 | multiply(keys=['a'], args=[3], client=pipe) 117 | assert r.script_exists(multiply.sha) == [False] 118 | # [SET worked, GET 'a', result of multiple script] 119 | assert pipe.execute() == [True, b'2', 6] 120 | # The script should have been loaded by pipe.execute() 121 | assert r.script_exists(multiply.sha) == [True] 122 | # The precalculated sha should have been the correct one 123 | assert multiply.sha == precalculated_sha 124 | 125 | # purge the script from redis's cache and re-run the pipeline 126 | # the multiply script should be reloaded by pipe.execute() 127 | r.script_flush() 128 | pipe = r.pipeline() 129 | pipe.set('a', 2) 130 | pipe.get('a') 131 | multiply(keys=['a'], args=[3], client=pipe) 132 | assert r.script_exists(multiply.sha) == [False] 133 | # [SET worked, GET 'a', result of multiple script] 134 | assert pipe.execute() == [True, b'2', 6] 135 | assert r.script_exists(multiply.sha) == [True] 136 | 137 | @pytest.mark.xfail(reason="LUA is not supported in cluster") 138 | def test_eval_msgpack_pipeline_error_in_lua(self, r): 139 | msgpack_hello = r.register_script(msgpack_hello_script) 140 | assert msgpack_hello.sha 141 | 142 | pipe = r.pipeline() 143 | 144 | # avoiding a dependency to 
msgpack, this is the output of 145 | # msgpack.dumps({"name": "Joe"}) 146 | msgpack_message_1 = b'\x81\xa4name\xa3Joe' 147 | 148 | msgpack_hello(args=[msgpack_message_1], client=pipe) 149 | 150 | assert r.script_exists(msgpack_hello.sha) == [False] 151 | assert pipe.execute()[0] == b'hello Joe' 152 | assert r.script_exists(msgpack_hello.sha) == [True] 153 | 154 | msgpack_hello_broken = r.register_script(msgpack_hello_script_broken) 155 | 156 | msgpack_hello_broken(args=[msgpack_message_1], client=pipe) 157 | with pytest.raises(exceptions.ResponseError) as excinfo: 158 | pipe.execute() 159 | assert excinfo.type == exceptions.ResponseError 160 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | Welcome to redis-py-cluster's documentation! 2 | ============================================ 3 | 4 | This project is a port of `redis-rb-cluster` by antirez, with a lot of added functionality. 5 | 6 | The original source can be found at https://github.com/antirez/redis-rb-cluster. 7 | 8 | The source code for this project is `available on github`_. 9 | 10 | .. _available on github: http://github.com/grokzen/redis-py-cluster 11 | 12 | 13 | 14 | Installation 15 | ------------ 16 | 17 | Latest stable release from pypi 18 | 19 | .. code-block:: bash 20 | 21 | $ pip install redis-py-cluster 22 | 23 | or from source code 24 | 25 | .. code-block:: bash 26 | 27 | $ python setup.py install 28 | 29 | 30 | 31 | Basic usage example 32 | ------------------- 33 | 34 | Small sample script that shows how to get started with RedisCluster. It can also be found in the file `examples/basic.py`. 35 | 36 | Additional code examples of more advanced functionality can be found in the `examples/` folder in the source code git repo. 37 | 38 | ..
code-block:: python 39 | 40 | >>> from rediscluster import RedisCluster 41 | 42 | >>> # Requires at least one node for cluster discovery. Multiple nodes are recommended. 43 | >>> startup_nodes = [{"host": "127.0.0.1", "port": "7000"}] 44 | 45 | >>> # Note: See note on Python 3 for decode_responses behaviour 46 | >>> rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True) 47 | 48 | >>> rc.set("foo", "bar") 49 | True 50 | >>> print(rc.get("foo")) 51 | bar 52 | 53 | .. note:: Python 3 54 | 55 | Since Python 3 changed to Unicode strings from Python 2's ASCII, the return type of *most* commands will be binary strings, 56 | unless the class is instantiated with the option ``decode_responses=True``. 57 | 58 | In this case, the responses will be Python 3 strings (Unicode). 59 | 60 | For the init argument `decode_responses`, when set to False, redis-py-cluster will not attempt to decode the responses it receives. 61 | 62 | In Python 3, this means the responses will be of type `bytes`. In Python 2, they will be native strings (`str`). 63 | 64 | If `decode_responses` is set to True, for Python 3 responses will be `str`, for Python 2 they will be `unicode`. 65 | 66 | 67 | 68 | Library Dependencies 69 | -------------------- 70 | 71 | Although the goal is to support all major versions of redis-py in the 3.x.x track, there is no guarantee that every version will work. 72 | 73 | It is always recommended to use the latest version of the dependencies of this project. 74 | 75 | - Redis-py: 'redis>=3.0.0,<4.0.0' is required in this major version of this cluster lib. 76 | - Optional Python: hiredis >= `0.2.0`. Older versions might work but are not tested. 77 | - A working Redis cluster based on version `>=3.0.0` is required. 78 | 79 | 80 | 81 | Supported python versions 82 | ------------------------- 83 | 84 | Python versions should follow the same supported python versions as specified by the upstream package `redis-py`, based on which major version(s) of it are supported.
85 | 86 | If this library supports more than one major version line of `redis-py`, then the supported python versions must include the python versions supported by all of those major version lines. 87 | 88 | - 2.7 89 | - 3.5 90 | - 3.6 91 | - 3.7 92 | - 3.8 93 | 94 | 95 | Python 2 Compatibility Note 96 | ########################### 97 | 98 | This library follows the announced change from our upstream package redis-py. Due to this, 99 | we will follow the same python 2.7 deprecation timeline as stated there. 100 | 101 | redis-py-cluster 2.1.x will be the last major version release that supports Python 2.7. 102 | The 2.1.x line will continue to get bug fixes and security patches that 103 | support Python 2 until August 1, 2020. redis-py-cluster 3.0.x will be the next major 104 | version and will require Python 3.5+. 105 | 106 | 107 | 108 | Regarding duplicate package name on pypi 109 | ---------------------------------------- 110 | 111 | It has been found that the python module name used in this library (rediscluster) is already shared with a similar but older project. 112 | 113 | This lib will `NOT` change the naming of the module to something else to prevent collisions between the libs. 114 | 115 | My reasoning for this is the following: 116 | 117 | - Changing the namespace is a major task and probably should only be done in a complete rewrite of the lib, or if the lib had plans for a version 2.0.0 where this kind of backwards incompatibility could be introduced. 118 | - This project is more up to date; the last merged PR in the other project was 3 years ago. 119 | - This project aims to implement support for the cluster solution introduced in Redis 3.0+. The other lib does not have that yet; it implements almost the same cluster solution, but much more of it on the client side. 120 | - The two libs are not compatible to run at the same time, even if the names did not collide. It is not recommended to run both in the same python interpreter.
121 | 122 | An issue has been raised in each repository to have tracking of the problem. 123 | 124 | redis-py-cluster: https://github.com/Grokzen/redis-py-cluster/issues/150 125 | 126 | rediscluster: https://github.com/salimane/rediscluster-py/issues/11 127 | 128 | 129 | 130 | The Usage Guide 131 | --------------- 132 | 133 | .. _cluster_docs: 134 | 135 | .. toctree:: 136 | :maxdepth: 2 137 | :glob: 138 | :caption: Usage guide 139 | 140 | client 141 | commands 142 | limitations-and-differences 143 | pipelines 144 | pubsub 145 | readonly-mode 146 | logging 147 | 148 | 149 | .. _setup_and_performance: 150 | 151 | .. toctree:: 152 | :maxdepth: 2 153 | :glob: 154 | :caption: Setup and performance 155 | 156 | cluster-setup 157 | benchmarks 158 | 159 | 160 | 161 | The Community Guide 162 | ------------------- 163 | 164 | .. _community_guide: 165 | 166 | .. toctree:: 167 | :maxdepth: 2 168 | :glob: 169 | :caption: Community Guide 170 | 171 | project-status 172 | testing 173 | development 174 | upgrading 175 | release-process 176 | release-notes 177 | authors 178 | license 179 | disclaimer 180 | -------------------------------------------------------------------------------- /tests/test_multiprocessing.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import multiprocessing 3 | import contextlib 4 | 5 | import redis 6 | from redis.connection import Connection, ConnectionPool 7 | from redis.exceptions import ConnectionError 8 | 9 | from .conftest import _get_client 10 | 11 | 12 | @contextlib.contextmanager 13 | def exit_callback(callback, *args): 14 | try: 15 | yield 16 | finally: 17 | callback(*args) 18 | 19 | 20 | class TestMultiprocessing(object): 21 | # Test connection sharing between forks. 22 | # See issue #1085 for details. 
23 | 24 | # use a multi-connection client as that's the only type that is 25 | # actually fork/process-safe 26 | @pytest.fixture() 27 | def r(self, request): 28 | return _get_client( 29 | redis.Redis, 30 | request=request, 31 | single_connection_client=False) 32 | 33 | @pytest.mark.skip(reason="Cluster specific override") 34 | def test_close_connection_in_child(self): 35 | """ 36 | A connection owned by a parent and closed by a child doesn't 37 | destroy the file descriptors so a parent can still use it. 38 | """ 39 | conn = Connection() 40 | conn.send_command('ping') 41 | assert conn.read_response() == b'PONG' 42 | 43 | def target(conn): 44 | conn.send_command('ping') 45 | assert conn.read_response() == b'PONG' 46 | conn.disconnect() 47 | 48 | proc = multiprocessing.Process(target=target, args=(conn,)) 49 | proc.start() 50 | proc.join(3) 51 | assert proc.exitcode == 0 52 | 53 | # The connection was created in the parent but disconnected in the 54 | # child. The child called socket.close() but did not call 55 | # socket.shutdown() because it wasn't the "owning" process. 56 | # Therefore the connection still works in the parent. 57 | conn.send_command('ping') 58 | assert conn.read_response() == b'PONG' 59 | 60 | @pytest.mark.skip(reason="Cluster specific override") 61 | def test_close_connection_in_parent(self): 62 | """ 63 | A connection owned by a parent is unusable by a child if the parent 64 | (the owning process) closes the connection. 65 | """ 66 | conn = Connection() 67 | conn.send_command('ping') 68 | assert conn.read_response() == b'PONG' 69 | 70 | def target(conn, ev): 71 | ev.wait() 72 | # the parent closed the connection. because it also created the 73 | # connection, the connection is shutdown and the child 74 | # cannot use it. 
75 | with pytest.raises(ConnectionError): 76 | conn.send_command('ping') 77 | 78 | ev = multiprocessing.Event() 79 | proc = multiprocessing.Process(target=target, args=(conn, ev)) 80 | proc.start() 81 | 82 | conn.disconnect() 83 | ev.set() 84 | 85 | proc.join(3) 86 | assert proc.exitcode == 0 87 | 88 | @pytest.mark.parametrize('max_connections', [1, 2, None]) 89 | @pytest.mark.skip(reason="Cluster specific override") 90 | def test_pool(self, max_connections): 91 | """ 92 | A child will create its own connections when using a pool created 93 | by a parent. 94 | """ 95 | pool = ConnectionPool.from_url('redis://localhost', 96 | max_connections=max_connections) 97 | 98 | conn = pool.get_connection('ping') 99 | main_conn_pid = conn.pid 100 | with exit_callback(pool.release, conn): 101 | conn.send_command('ping') 102 | assert conn.read_response() == b'PONG' 103 | 104 | def target(pool): 105 | with exit_callback(pool.disconnect): 106 | conn = pool.get_connection('ping') 107 | assert conn.pid != main_conn_pid 108 | with exit_callback(pool.release, conn): 109 | assert conn.send_command('ping') is None 110 | assert conn.read_response() == b'PONG' 111 | 112 | proc = multiprocessing.Process(target=target, args=(pool,)) 113 | proc.start() 114 | proc.join(3) 115 | assert proc.exitcode == 0 116 | 117 | # Check that connection is still alive after fork process has exited 118 | # and disconnected the connections in its pool 119 | conn = pool.get_connection('ping') 120 | with exit_callback(pool.release, conn): 121 | assert conn.send_command('ping') is None 122 | assert conn.read_response() == b'PONG' 123 | 124 | @pytest.mark.parametrize('max_connections', [1, 2, None]) 125 | @pytest.mark.skip(reason="Cluster specific override") 126 | def test_close_pool_in_main(self, max_connections): 127 | """ 128 | A child process that uses the same pool as its parent isn't affected 129 | when the parent disconnects all connections within the pool. 
130 | """ 131 | pool = ConnectionPool.from_url('redis://localhost', 132 | max_connections=max_connections) 133 | 134 | conn = pool.get_connection('ping') 135 | assert conn.send_command('ping') is None 136 | assert conn.read_response() == b'PONG' 137 | 138 | def target(pool, disconnect_event): 139 | conn = pool.get_connection('ping') 140 | with exit_callback(pool.release, conn): 141 | assert conn.send_command('ping') is None 142 | assert conn.read_response() == b'PONG' 143 | disconnect_event.wait() 144 | assert conn.send_command('ping') is None 145 | assert conn.read_response() == b'PONG' 146 | 147 | ev = multiprocessing.Event() 148 | 149 | proc = multiprocessing.Process(target=target, args=(pool, ev)) 150 | proc.start() 151 | 152 | pool.disconnect() 153 | ev.set() 154 | proc.join(3) 155 | assert proc.exitcode == 0 156 | 157 | def test_redis_client(self, r): 158 | "A redis client created in a parent can also be used in a child" 159 | assert r.ping() is True 160 | 161 | def target(client): 162 | assert client.ping() is True 163 | del client 164 | 165 | proc = multiprocessing.Process(target=target, args=(r,)) 166 | proc.start() 167 | proc.join(3) 168 | assert proc.exitcode == 0 169 | 170 | assert r.ping() is True 171 | -------------------------------------------------------------------------------- /rediscluster/utils.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | from socket import gethostbyaddr 3 | 4 | # rediscluster imports 5 | from .exceptions import RedisClusterException 6 | 7 | # 3rd party imports 8 | from redis._compat import basestring, nativestr 9 | 10 | 11 | def bool_ok(response, *args, **kwargs): 12 | """ 13 | Borrowed from redis._compat becuase that method to not support extra arguments 14 | when used in a cluster environment. 
15 | """ 16 | return nativestr(response) == 'OK' 17 | 18 | 19 | def string_keys_to_dict(key_strings, callback): 20 | """ 21 | Maps each string in `key_strings` to `callback` function 22 | and return as a dict. 23 | """ 24 | return dict.fromkeys(key_strings, callback) 25 | 26 | 27 | def dict_merge(*dicts): 28 | """ 29 | Merge all provided dicts into 1 dict. 30 | """ 31 | merged = {} 32 | 33 | for d in dicts: 34 | if not isinstance(d, dict): 35 | raise ValueError('Value should be of dict type') 36 | else: 37 | merged.update(d) 38 | 39 | return merged 40 | 41 | 42 | def blocked_command(self, command): 43 | """ 44 | Raises a `RedisClusterException` mentioning the command is blocked. 45 | """ 46 | raise RedisClusterException("Command: {0} is blocked in redis cluster mode".format(command)) 47 | 48 | 49 | def merge_result(command, res): 50 | """ 51 | Merge all items in `res` into a list. 52 | 53 | This command is used when sending a command to multiple nodes 54 | and they result from each node should be merged into a single list. 55 | """ 56 | if not isinstance(res, dict): 57 | raise ValueError('Value should be of dict type') 58 | 59 | result = set([]) 60 | 61 | for _, v in res.items(): 62 | for value in v: 63 | result.add(value) 64 | 65 | return list(result) 66 | 67 | 68 | def first_key(command, res): 69 | """ 70 | Returns the first result for the given command. 71 | 72 | If more then 1 result is returned then a `RedisClusterException` is raised. 
73 | """ 74 | if not isinstance(res, dict): 75 | raise ValueError('Value should be of dict type') 76 | 77 | if len(res.keys()) != 1: 78 | raise RedisClusterException("More then 1 result from command: {0}".format(command)) 79 | 80 | return list(res.values())[0] 81 | 82 | 83 | def nslookup(node_ip): 84 | """ 85 | """ 86 | if ':' not in node_ip: 87 | return gethostbyaddr(node_ip)[0] 88 | 89 | ip, port = node_ip.split(':') 90 | 91 | return '{0}:{1}'.format(gethostbyaddr(ip)[0], port) 92 | 93 | 94 | def parse_cluster_slots(resp, **options): 95 | """ 96 | """ 97 | current_host = options.get('current_host', '') 98 | 99 | def fix_server(*args): 100 | return (nativestr(args[0]) or current_host, args[1]) 101 | 102 | slots = {} 103 | for slot in resp: 104 | start, end, master = slot[:3] 105 | slaves = slot[3:] 106 | slots[start, end] = { 107 | 'master': fix_server(*master), 108 | 'slaves': [fix_server(*slave) for slave in slaves], 109 | } 110 | 111 | return slots 112 | 113 | 114 | def parse_cluster_nodes(resp, **options): 115 | """ 116 | @see: http://redis.io/commands/cluster-nodes # string 117 | @see: http://redis.io/commands/cluster-slaves # list of string 118 | """ 119 | resp = nativestr(resp) 120 | current_host = options.get('current_host', '') 121 | 122 | def parse_slots(s): 123 | slots, migrations = [], [] 124 | for r in s.split(' '): 125 | if '->-' in r: 126 | slot_id, dst_node_id = r[1:-1].split('->-', 1) 127 | migrations.append({ 128 | 'slot': int(slot_id), 129 | 'node_id': dst_node_id, 130 | 'state': 'migrating' 131 | }) 132 | elif '-<-' in r: 133 | slot_id, src_node_id = r[1:-1].split('-<-', 1) 134 | migrations.append({ 135 | 'slot': int(slot_id), 136 | 'node_id': src_node_id, 137 | 'state': 'importing' 138 | }) 139 | elif '-' in r: 140 | start, end = r.split('-') 141 | slots.extend(range(int(start), int(end) + 1)) 142 | else: 143 | slots.append(int(r)) 144 | 145 | return slots, migrations 146 | 147 | if isinstance(resp, basestring): 148 | resp = resp.splitlines() 
149 | 150 | nodes = [] 151 | for line in resp: 152 | parts = line.split(' ', 8) 153 | self_id, addr, flags, master_id, ping_sent, \ 154 | pong_recv, config_epoch, link_state = parts[:8] 155 | 156 | host, ports = addr.rsplit(':', 1) 157 | port, _, cluster_port = ports.partition('@') 158 | 159 | node = { 160 | 'id': self_id, 161 | 'host': host or current_host, 162 | 'port': int(port), 163 | 'cluster-bus-port': int(cluster_port) if cluster_port else 10000 + int(port), 164 | 'flags': tuple(flags.split(',')), 165 | 'master': master_id if master_id != '-' else None, 166 | 'ping-sent': int(ping_sent), 167 | 'pong-recv': int(pong_recv), 168 | 'link-state': link_state, 169 | 'slots': [], 170 | 'migrations': [], 171 | } 172 | 173 | if len(parts) >= 9: 174 | slots, migrations = parse_slots(parts[8]) 175 | node['slots'], node['migrations'] = tuple(slots), migrations 176 | 177 | nodes.append(node) 178 | 179 | return nodes 180 | 181 | 182 | def parse_pubsub_channels(command, res, **options): 183 | """ 184 | Result callback, handles different return types 185 | switchable by the `aggregate` flag. 186 | """ 187 | aggregate = options.get('aggregate', True) 188 | if not aggregate: 189 | return res 190 | return merge_result(command, res) 191 | 192 | 193 | def parse_pubsub_numpat(command, res, **options): 194 | """ 195 | Result callback, handles different return types 196 | switchable by the `aggregate` flag. 197 | """ 198 | aggregate = options.get('aggregate', True) 199 | if not aggregate: 200 | return res 201 | 202 | numpat = 0 203 | for node, node_numpat in res.items(): 204 | numpat += node_numpat 205 | return numpat 206 | 207 | 208 | def parse_pubsub_numsub(command, res, **options): 209 | """ 210 | Result callback, handles different return types 211 | switchable by the `aggregate` flag. 
212 | """ 213 | aggregate = options.get('aggregate', True) 214 | if not aggregate: 215 | return res 216 | 217 | numsub_d = dict() 218 | for _, numsub_tups in res.items(): 219 | for channel, numsubbed in numsub_tups: 220 | try: 221 | numsub_d[channel] += numsubbed 222 | except KeyError: 223 | numsub_d[channel] = numsubbed 224 | 225 | ret_numsub = [] 226 | for channel, numsub in numsub_d.items(): 227 | ret_numsub.append((channel, numsub)) 228 | return ret_numsub 229 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS = 6 | SPHINXBUILD = sphinx-build 7 | PAPER = 8 | BUILDDIR = _build 9 | 10 | # User-friendly check for sphinx-build 11 | ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) 12 | $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don\'t have Sphinx installed, grab it from http://sphinx-doc.org/) 13 | endif 14 | 15 | # Internal variables. 16 | PAPEROPT_a4 = -D latex_paper_size=a4 17 | PAPEROPT_letter = -D latex_paper_size=letter 18 | ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 19 | # the i18n builder cannot share the environment and doctrees with the others 20 | I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 
21 | 22 | .PHONY: help 23 | help: 24 | @echo "Please use \`make <target>' where <target> is one of" 25 | @echo " html to make standalone HTML files" 26 | @echo " dirhtml to make HTML files named index.html in directories" 27 | @echo " singlehtml to make a single large HTML file" 28 | @echo " pickle to make pickle files" 29 | @echo " json to make JSON files" 30 | @echo " htmlhelp to make HTML files and a HTML help project" 31 | @echo " qthelp to make HTML files and a qthelp project" 32 | @echo " applehelp to make an Apple Help Book" 33 | @echo " devhelp to make HTML files and a Devhelp project" 34 | @echo " epub to make an epub" 35 | @echo " epub3 to make an epub3" 36 | @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 37 | @echo " latexpdf to make LaTeX files and run them through pdflatex" 38 | @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" 39 | @echo " text to make text files" 40 | @echo " man to make manual pages" 41 | @echo " texinfo to make Texinfo files" 42 | @echo " info to make Texinfo files and run them through makeinfo" 43 | @echo " gettext to make PO message catalogs" 44 | @echo " changes to make an overview of all changed/added/deprecated items" 45 | @echo " xml to make Docutils-native XML files" 46 | @echo " pseudoxml to make pseudoxml-XML files for display purposes" 47 | @echo " linkcheck to check all external links for integrity" 48 | @echo " doctest to run all doctests embedded in the documentation (if enabled)" 49 | @echo " coverage to run coverage check of the documentation (if enabled)" 50 | 51 | .PHONY: clean 52 | clean: 53 | rm -rf $(BUILDDIR)/* 54 | 55 | .PHONY: html 56 | html: 57 | $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 58 | @echo 59 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 60 | 61 | .PHONY: dirhtml 62 | dirhtml: 63 | $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 64 | @echo 65 | @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
66 | 67 | .PHONY: singlehtml 68 | singlehtml: 69 | $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 70 | @echo 71 | @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 72 | 73 | .PHONY: pickle 74 | pickle: 75 | $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 76 | @echo 77 | @echo "Build finished; now you can process the pickle files." 78 | 79 | .PHONY: json 80 | json: 81 | $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 82 | @echo 83 | @echo "Build finished; now you can process the JSON files." 84 | 85 | .PHONY: htmlhelp 86 | htmlhelp: 87 | $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 88 | @echo 89 | @echo "Build finished; now you can run HTML Help Workshop with the" \ 90 | ".hhp project file in $(BUILDDIR)/htmlhelp." 91 | 92 | .PHONY: qthelp 93 | qthelp: 94 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 95 | @echo 96 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 97 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 98 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/redis-py-cluster.qhcp" 99 | @echo "To view the help file:" 100 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/redis-py-cluster.qhc" 101 | 102 | .PHONY: applehelp 103 | applehelp: 104 | $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp 105 | @echo 106 | @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." 107 | @echo "N.B. You won't be able to view it unless you put it in" \ 108 | "~/Library/Documentation/Help or install it in your application" \ 109 | "bundle." 110 | 111 | .PHONY: devhelp 112 | devhelp: 113 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 114 | @echo 115 | @echo "Build finished." 
116 | @echo "To view the help file:" 117 | @echo "# mkdir -p $$HOME/.local/share/devhelp/redis-py-cluster" 118 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/redis-py-cluster" 119 | @echo "# devhelp" 120 | 121 | .PHONY: epub 122 | epub: 123 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 124 | @echo 125 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 126 | 127 | .PHONY: epub3 128 | epub3: 129 | $(SPHINXBUILD) -b epub3 $(ALLSPHINXOPTS) $(BUILDDIR)/epub3 130 | @echo 131 | @echo "Build finished. The epub3 file is in $(BUILDDIR)/epub3." 132 | 133 | .PHONY: latex 134 | latex: 135 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 136 | @echo 137 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 138 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 139 | "(use \`make latexpdf' here to do that automatically)." 140 | 141 | .PHONY: latexpdf 142 | latexpdf: 143 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 144 | @echo "Running LaTeX files through pdflatex..." 145 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 146 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 147 | 148 | .PHONY: latexpdfja 149 | latexpdfja: 150 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 151 | @echo "Running LaTeX files through platex and dvipdfmx..." 152 | $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja 153 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 154 | 155 | .PHONY: text 156 | text: 157 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 158 | @echo 159 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 160 | 161 | .PHONY: man 162 | man: 163 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 164 | @echo 165 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 
166 | 167 | .PHONY: texinfo 168 | texinfo: 169 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 170 | @echo 171 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 172 | @echo "Run \`make' in that directory to run these through makeinfo" \ 173 | "(use \`make info' here to do that automatically)." 174 | 175 | .PHONY: info 176 | info: 177 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 178 | @echo "Running Texinfo files through makeinfo..." 179 | make -C $(BUILDDIR)/texinfo info 180 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 181 | 182 | .PHONY: gettext 183 | gettext: 184 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 185 | @echo 186 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 187 | 188 | .PHONY: changes 189 | changes: 190 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 191 | @echo 192 | @echo "The overview file is in $(BUILDDIR)/changes." 193 | 194 | .PHONY: linkcheck 195 | linkcheck: 196 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 197 | @echo 198 | @echo "Link check complete; look for any errors in the above output " \ 199 | "or in $(BUILDDIR)/linkcheck/output.txt." 200 | 201 | .PHONY: doctest 202 | doctest: 203 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 204 | @echo "Testing of doctests in the sources finished, look at the " \ 205 | "results in $(BUILDDIR)/doctest/output.txt." 206 | 207 | .PHONY: coverage 208 | coverage: 209 | $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage 210 | @echo "Testing of coverage in the sources finished, look at the " \ 211 | "results in $(BUILDDIR)/coverage/python.txt." 212 | 213 | .PHONY: xml 214 | xml: 215 | $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml 216 | @echo 217 | @echo "Build finished. The XML files are in $(BUILDDIR)/xml." 
218 | 219 | .PHONY: pseudoxml 220 | pseudoxml: 221 | $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml 222 | @echo 223 | @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 224 | -------------------------------------------------------------------------------- /tests/test_lock.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import time 3 | 4 | from rediscluster import RedisCluster 5 | 6 | from redis.exceptions import LockError, LockNotOwnedError 7 | from redis.lock import Lock 8 | from .conftest import _get_client 9 | 10 | 11 | class TestLock(object): 12 | @pytest.fixture() 13 | def r_decoded(self, request): 14 | """ 15 | Helper function modified for RedisCluster usage to make tests work 16 | """ 17 | return _get_client(RedisCluster, request=request, decode_responses=True) 18 | 19 | def get_lock(self, redis, *args, **kwargs): 20 | kwargs['lock_class'] = Lock 21 | return redis.lock(*args, **kwargs) 22 | 23 | def test_lock(self, r): 24 | lock = self.get_lock(r, 'foo') 25 | assert lock.acquire(blocking=False) 26 | assert r.get('foo') == lock.local.token 27 | assert r.ttl('foo') == -1 28 | lock.release() 29 | assert r.get('foo') is None 30 | 31 | def test_lock_token(self, r): 32 | lock = self.get_lock(r, 'foo') 33 | self._test_lock_token(r, lock) 34 | 35 | def test_lock_token_thread_local_false(self, r): 36 | lock = self.get_lock(r, 'foo', thread_local=False) 37 | self._test_lock_token(r, lock) 38 | 39 | def _test_lock_token(self, r, lock): 40 | assert lock.acquire(blocking=False, token='test') 41 | assert r.get('foo') == b'test' 42 | assert lock.local.token == b'test' 43 | assert r.ttl('foo') == -1 44 | lock.release() 45 | assert r.get('foo') is None 46 | assert lock.local.token is None 47 | 48 | def test_locked(self, r): 49 | lock = self.get_lock(r, 'foo') 50 | assert lock.locked() is False 51 | lock.acquire(blocking=False) 52 | assert lock.locked() is True 53 | 
lock.release() 54 | assert lock.locked() is False 55 | 56 | def _test_owned(self, client): 57 | lock = self.get_lock(client, 'foo') 58 | assert lock.owned() is False 59 | lock.acquire(blocking=False) 60 | assert lock.owned() is True 61 | lock.release() 62 | assert lock.owned() is False 63 | 64 | lock2 = self.get_lock(client, 'foo') 65 | assert lock.owned() is False 66 | assert lock2.owned() is False 67 | lock2.acquire(blocking=False) 68 | assert lock.owned() is False 69 | assert lock2.owned() is True 70 | lock2.release() 71 | assert lock.owned() is False 72 | assert lock2.owned() is False 73 | 74 | def test_owned(self, r): 75 | self._test_owned(r) 76 | 77 | def test_owned_with_decoded_responses(self, r_decoded): 78 | self._test_owned(r_decoded) 79 | 80 | def test_competing_locks(self, r): 81 | lock1 = self.get_lock(r, 'foo') 82 | lock2 = self.get_lock(r, 'foo') 83 | assert lock1.acquire(blocking=False) 84 | assert not lock2.acquire(blocking=False) 85 | lock1.release() 86 | assert lock2.acquire(blocking=False) 87 | assert not lock1.acquire(blocking=False) 88 | lock2.release() 89 | 90 | def test_timeout(self, r): 91 | lock = self.get_lock(r, 'foo', timeout=10) 92 | assert lock.acquire(blocking=False) 93 | assert 8 < r.ttl('foo') <= 10 94 | lock.release() 95 | 96 | def test_float_timeout(self, r): 97 | lock = self.get_lock(r, 'foo', timeout=9.5) 98 | assert lock.acquire(blocking=False) 99 | assert 8 < r.pttl('foo') <= 9500 100 | lock.release() 101 | 102 | def test_blocking_timeout(self, r): 103 | lock1 = self.get_lock(r, 'foo') 104 | assert lock1.acquire(blocking=False) 105 | bt = 0.2 106 | sleep = 0.05 107 | lock2 = self.get_lock(r, 'foo', sleep=sleep, blocking_timeout=bt) 108 | start = time.time() 109 | assert not lock2.acquire() 110 | # The elapsed duration should be less than the total blocking_timeout 111 | assert bt > (time.time() - start) > bt - sleep 112 | lock1.release() 113 | 114 | def test_context_manager(self, r): 115 | # blocking_timeout prevents a 
deadlock if the lock can't be acquired 116 | # for some reason 117 | with self.get_lock(r, 'foo', blocking_timeout=0.2) as lock: 118 | assert r.get('foo') == lock.local.token 119 | assert r.get('foo') is None 120 | 121 | def test_context_manager_raises_when_locked_not_acquired(self, r): 122 | r.set('foo', 'bar') 123 | with pytest.raises(LockError): 124 | with self.get_lock(r, 'foo', blocking_timeout=0.1): 125 | pass 126 | 127 | def test_high_sleep_small_blocking_timeout(self, r): 128 | lock1 = self.get_lock(r, 'foo') 129 | assert lock1.acquire(blocking=False) 130 | sleep = 60 131 | bt = 1 132 | lock2 = self.get_lock(r, 'foo', sleep=sleep, blocking_timeout=bt) 133 | start = time.time() 134 | assert not lock2.acquire() 135 | # the elapsed time is less than the blocking_timeout as the lock is 136 | # unattainable given the sleep/blocking_timeout configuration 137 | assert bt > (time.time() - start) 138 | lock1.release() 139 | 140 | def test_releasing_unlocked_lock_raises_error(self, r): 141 | lock = self.get_lock(r, 'foo') 142 | with pytest.raises(LockError): 143 | lock.release() 144 | 145 | def test_releasing_lock_no_longer_owned_raises_error(self, r): 146 | lock = self.get_lock(r, 'foo') 147 | lock.acquire(blocking=False) 148 | # manually change the token 149 | r.set('foo', 'a') 150 | with pytest.raises(LockNotOwnedError): 151 | lock.release() 152 | # even though we errored, the token is still cleared 153 | assert lock.local.token is None 154 | 155 | def test_extend_lock(self, r): 156 | lock = self.get_lock(r, 'foo', timeout=10) 157 | assert lock.acquire(blocking=False) 158 | assert 8000 < r.pttl('foo') <= 10000 159 | assert lock.extend(10) 160 | assert 16000 < r.pttl('foo') <= 20000 161 | lock.release() 162 | 163 | def test_extend_lock_replace_ttl(self, r): 164 | lock = self.get_lock(r, 'foo', timeout=10) 165 | assert lock.acquire(blocking=False) 166 | assert 8000 < r.pttl('foo') <= 10000 167 | assert lock.extend(10, replace_ttl=True) 168 | assert 8000 <
r.pttl('foo') <= 10000 169 | lock.release() 170 | 171 | def test_extend_lock_float(self, r): 172 | lock = self.get_lock(r, 'foo', timeout=10.0) 173 | assert lock.acquire(blocking=False) 174 | assert 8000 < r.pttl('foo') <= 10000 175 | assert lock.extend(10.0) 176 | assert 16000 < r.pttl('foo') <= 20000 177 | lock.release() 178 | 179 | def test_extending_unlocked_lock_raises_error(self, r): 180 | lock = self.get_lock(r, 'foo', timeout=10) 181 | with pytest.raises(LockError): 182 | lock.extend(10) 183 | 184 | def test_extending_lock_with_no_timeout_raises_error(self, r): 185 | lock = self.get_lock(r, 'foo') 186 | assert lock.acquire(blocking=False) 187 | with pytest.raises(LockError): 188 | lock.extend(10) 189 | lock.release() 190 | 191 | def test_extending_lock_no_longer_owned_raises_error(self, r): 192 | lock = self.get_lock(r, 'foo', timeout=10) 193 | assert lock.acquire(blocking=False) 194 | r.set('foo', 'a') 195 | with pytest.raises(LockNotOwnedError): 196 | lock.extend(10) 197 | 198 | def test_reacquire_lock(self, r): 199 | lock = self.get_lock(r, 'foo', timeout=10) 200 | assert lock.acquire(blocking=False) 201 | assert r.pexpire('foo', 5000) 202 | assert r.pttl('foo') <= 5000 203 | assert lock.reacquire() 204 | assert 8000 < r.pttl('foo') <= 10000 205 | lock.release() 206 | 207 | def test_reacquiring_unlocked_lock_raises_error(self, r): 208 | lock = self.get_lock(r, 'foo', timeout=10) 209 | with pytest.raises(LockError): 210 | lock.reacquire() 211 | 212 | def test_reacquiring_lock_with_no_timeout_raises_error(self, r): 213 | lock = self.get_lock(r, 'foo') 214 | assert lock.acquire(blocking=False) 215 | with pytest.raises(LockError): 216 | lock.reacquire() 217 | lock.release() 218 | 219 | def test_reacquiring_lock_no_longer_owned_raises_error(self, r): 220 | lock = self.get_lock(r, 'foo', timeout=10) 221 | assert lock.acquire(blocking=False) 222 | r.set('foo', 'a') 223 | with pytest.raises(LockNotOwnedError): 224 | lock.reacquire() 225 | 226 | 227 | class 
TestLockClassSelection(object): 228 | def test_lock_class_argument(self, r): 229 | class MyLock(object): 230 | def __init__(self, *args, **kwargs): 231 | 232 | pass 233 | lock = r.lock('foo', lock_class=MyLock) 234 | assert type(lock) == MyLock 235 | -------------------------------------------------------------------------------- /docs/upgrading.rst: -------------------------------------------------------------------------------- 1 | Upgrading redis-py-cluster 2 | ========================== 3 | 4 | This document describes what must be done when upgrading between versions to ensure that your code keeps working. 5 | 6 | 2.0.0 --> 2.1.0 7 | --------------- 8 | 9 | The Python 3 version must now be one of 3.5, 3.6, 3.7 or 3.8. 10 | 11 | The exception in the example below now has a new, more specific exception class. The client will first attempt to catch it and resolve the cluster layout on its own. If enough attempts have been made, SlotNotCoveredError will be raised with the same message as before. If you currently catch RedisClusterException, either remove that handler and let the client try to resolve the cluster layout itself, or start catching SlotNotCoveredError instead. This error usually happens during failover if you run with skip_full_coverage_check=True, for example on AWS ElastiCache. 12 | 13 | Example exception:: 14 |     rediscluster.exceptions.RedisClusterException: Slot "6986" not covered by the cluster. "skip_full_coverage_check=True" 15 | 16 | 17 | 1.3.x --> 2.0.0 18 | --------------- 19 | 20 | The upstream redis-py package dependency has been updated to allow any release in the 3.0.x major version line. This means that you must upgrade your dependency from 2.10.6 to the latest 3.0.x release. Several internal components have been updated to reflect the code from 3.0.x. 21 | 22 | Class StrictRedisCluster was renamed to RedisCluster. All usages of this class must be updated. 23 | 24 | Class StrictRedis has been removed to mirror the upstream class structure.
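The retry behaviour described in the 2.1.0 notes above can be sketched in isolation. This is a toy model with hypothetical stand-in exception classes, not the real ones from `rediscluster.exceptions` (whose hierarchy may differ):

```python
# Hypothetical stand-ins for the exception classes described above.
class RedisClusterException(Exception):
    pass

class SlotNotCoveredError(RedisClusterException):
    pass

def execute_with_refresh(command, max_attempts=3):
    """Retry on uncovered slots; give up after max_attempts."""
    for _ in range(max_attempts):
        try:
            return command()
        except SlotNotCoveredError:
            pass  # the real client would refresh the cluster layout here
    raise SlotNotCoveredError('Slot "6986" not covered by the cluster.')

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    raise SlotNotCoveredError()

try:
    execute_with_refresh(flaky)
except SlotNotCoveredError:
    pass
assert calls["n"] == 3  # the client retried before surfacing the error
# Old-style broad handlers keep working because of the subclass relationship:
assert issubclass(SlotNotCoveredError, RedisClusterException)
```

Because the specific error subclasses the broad one in this sketch, existing `except RedisClusterException` handlers continue to catch it during a migration.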
25 | 26 | Class StrictClusterPipeline was renamed to ClusterPipeline. 27 | 28 | Method SORT has been changed back to only allow execution if all keys are in the same slot. There is no more client-side parsing and handling of the keys and values. 29 | 30 | 31 | 1.3.2 --> Next Release 32 | ---------------------- 33 | 34 | If you created the `StrictRedisCluster` (or `RedisCluster`) instance via the `from_url` method and were passing `readonly_mode` to it, the connection pool created will now properly allow selecting read-only slaves from the pool. Previously it always used master nodes only, even in the case of `readonly_mode=True`. Make sure your code doesn't attempt any write commands over connections with `readonly_mode=True`. 35 | 36 | 37 | 1.3.1 --> 1.3.2 38 | --------------- 39 | 40 | If your redis instance is configured with the `CONFIG ...` commands disabled for security reasons, you need to pass `skip_full_coverage_check=True` to the client object. The benefit is that the client class no longer requires the `CONFIG ...` commands to be enabled on the server. A downside is that you can't use that option in your redis server and still use the same feature in this client. 41 | 42 | 43 | 44 | 1.3.0 --> 1.3.1 45 | --------------- 46 | 47 | Method `scan_iter` was rebuilt because it was broken and did not perform as expected. If you are using this method you should be careful with the new implementation and test it thoroughly before using it. The expanded testing for that method indicates it should work without problems. If you find any issues with the new method, please open an issue on GitHub. 48 | 49 | A major refactoring was performed in the pipeline system that improved error handling and reliability of execution. It also simplified the code, making it easier to understand and to continue developing in the future. Because of this major refactoring, you should thoroughly test your pipeline code to ensure that none of it is broken.
50 | 51 | 52 | 53 | 1.2.0 --> Next release 54 | ---------------------- 55 | 56 | Class RedisClusterMgt has been removed. You should use the `CLUSTER ...` methods that exist in the `StrictRedisCluster` client class. 57 | 58 | Method `cluster_delslots` changed its argument specification from `self, node_id, *slots` to `self, *slots` and now automatically determines the node for each slot based on the current cluster structure and where each slot that you want to delete is located. 59 | 60 | Method pfcount no longer has custom logic and exceptions to prevent CROSSSLOT errors. If the method is used with keys in different slots, a regular CROSSSLOT error (rediscluster.exceptions.ClusterCrossSlotError) will be raised. 61 | 62 | 63 | 64 | 1.1.0 --> 1.2.0 65 | --------------- 66 | 67 | Discontinue passing the `pipeline_use_threads` flag to `rediscluster.StrictRedisCluster` or `rediscluster.RedisCluster`. 68 | 69 | Also discontinue passing the `use_threads` flag to the pipeline() method. 70 | 71 | In 1.1.0 and prior, you could use the `pipeline_use_threads` flag to tell the client to perform queries to the different nodes in parallel via threads. We exposed this as a flag because using threads might have been risky and we wanted people to be able to disable it if needed. 72 | 73 | With this release we figured out how to parallelize commands without the need for threads. We write to all the nodes before reading from them, essentially multiplexing the connections (but without the need for complicated socket multiplexing). We found this approach to be faster and more scalable as more nodes are added to the cluster. 74 | 75 | That means we no longer need the `pipeline_use_threads` flag, or the `use_threads` flag that could be passed into the instantiation of the pipeline object itself. 76 | 77 | The logic is greatly simplified and the default behavior now comes with a performance boost and no need for threads.
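The write-first/read-later multiplexing described above can be modelled with stub connections. This is a simplified illustration of the idea, not the client's actual socket code:

```python
class StubConnection:
    """Toy per-node connection that buffers writes until responses are read."""
    def __init__(self, name):
        self.name = name
        self.pending = []

    def send_command(self, cmd):
        self.pending.append(cmd)  # phase 1: write only, no blocking read yet

    def read_response(self):
        # phase 2: drain responses for everything sent earlier
        return ["%s:%s" % (self.name, c) for c in self.pending]

nodes = [StubConnection("node1"), StubConnection("node2")]

# Write every command to its node before reading anything back, so each
# node can work on its batch concurrently without client-side threads.
for conn in nodes:
    conn.send_command("SET")
    conn.send_command("GET")

responses = [conn.read_response() for conn in nodes]
assert responses[0] == ["node1:SET", "node1:GET"]
assert responses[1] == ["node2:SET", "node2:GET"]
```

The key point is ordering: all writes happen before the first read, so no node sits idle waiting for the client to finish a round-trip with another node.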
78 | 79 | Publish and subscribe no longer connects to a single instance. It now hashes the channel name and uses that to determine which node to connect to. More work will be done in the future when `redis-server` improves the pubsub implementation. Please read the pubsub documentation in the `docs/pubsub.rst` file about the problems and limitations of using pubsub in a cluster. 80 | 81 | The Publish and Subscribe commands now use the same connections as any other command. If you are using any pubsub commands you need to test them thoroughly to ensure that your implementation still works. 82 | 83 | To use less strict cluster slot discovery you can add the config option "cluster-require-full-coverage=no" to your redis-server config file, and this client will honour that setting and not fail if not all slots are covered. 84 | 85 | A bug was fixed in `sdiffstore`; if you are using it, verify that your code still works as expected. 86 | 87 | Class RedisClusterMgt is now deprecated and will be removed in the next release, in favor of the cluster commands implemented in the client in this release. 88 | 89 | 90 | 91 | 1.0.0 --> 1.1.0 92 | --------------- 93 | 94 | The following exceptions have been changed/added, and code that uses this client might have to be updated to handle the new classes. 95 | 96 | `raise RedisClusterException("Too many Cluster redirections")` has been changed to `raise ClusterError('TTL exhausted.')` 97 | 98 | `ClusterDownException` has been replaced with `ClusterDownError` 99 | 100 | Added new `AskError` exception class. 101 | 102 | Added new `TryAgainError` exception class. 103 | 104 | Added new `MovedError` exception class. 105 | 106 | Added new `ClusterCrossSlotError` exception class. 107 | 108 | Added optional `max_connections_per_node` parameter to `ClusterConnectionPool`, which changes the behavior of `max_connections` so that it applies per-node rather than across the whole cluster.
The new feature is opt-in, and the existing default behavior is unchanged. Users are recommended to opt in, as the feature fixes two important problems. The first is that some nodes could be starved for connections after `max_connections` is used up by connections to other nodes. The second is that an asymmetric number of connections across nodes makes it challenging to configure file descriptor and redis max-client settings. 109 | 110 | Reinitialization on `MOVED` errors will no longer run on every error but instead on every 111 | 25th error, to avoid excessive cluster reinitialization when used in multiple threads while resharding at the same time. If you want to go back to the old behaviour of reinitializing on every error, pass `reinitialize_steps=1` to the client constructor. If you want to increase or decrease the interval of this new behaviour, set `reinitialize_steps` in the client constructor to the value you want. 112 | 113 | Pipelines in general have received a lot of attention, so if you are using pipelines in your code, test the new code thoroughly before using it to make sure it still works as you expect. 114 | 115 | The entire client code should now be safer to use in a threaded environment. Some race conditions were found and have now been fixed, which should prevent the code from behaving strangely during reshard operations. 116 | 117 | 118 | 119 | 0.2.0 --> 0.3.0 120 | --------------- 121 | 122 | In the `0.3.0` release the name of the client class was changed from `RedisCluster` to `StrictRedisCluster`, and a new implementation of `RedisCluster` was added that is based on the `redis.Redis` class. This was done to enable implementing a cluster-enabled version of the `redis.Redis` class. 123 | 124 | Because of this, all imports and usages of `RedisCluster` must be changed to `StrictRedisCluster` so that existing code keeps working. If this is not done, issues could arise in existing code.
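The every-Nth-error throttling that `reinitialize_steps` controls can be sketched in isolation. This is a simplified model of the behaviour described above, not the client's actual implementation:

```python
class ReinitThrottle:
    """Trigger a cluster re-initialization only on every `steps`-th MOVED error."""
    def __init__(self, steps=25):
        self.steps = steps
        self.count = 0

    def on_moved_error(self):
        self.count += 1
        # Fires on the steps-th, 2*steps-th, ... error seen.
        return self.count % self.steps == 0

throttle = ReinitThrottle(steps=25)
results = [throttle.on_moved_error() for _ in range(50)]
assert results.count(True) == 2  # re-init at the 25th and 50th error only

# steps=1 recovers the old behaviour: re-initialize on every MOVED error.
assert ReinitThrottle(steps=1).on_moved_error() is True
```

Under resharding with many threads, each MOVED error would otherwise trigger a full cluster-layout refresh; counting errors and refreshing only every 25th amortizes that cost.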
125 | 126 | 127 | 128 | 0.1.0 --> 0.2.0 129 | --------------- 130 | 131 | No major changes were made. 132 | -------------------------------------------------------------------------------- /tests/conftest.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | import os 5 | import random 6 | import sys 7 | 8 | # rediscluster imports 9 | from rediscluster import RedisCluster 10 | 11 | # 3rd party imports 12 | import pytest 13 | from distutils.version import StrictVersion 14 | from mock import Mock 15 | from redis import Redis 16 | 17 | # put our path in front so we can be sure we are testing locally, not against the global package 18 | basepath = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) 19 | sys.path.insert(1, basepath) 20 | 21 | # redis 6 release candidates report a version number of 5.9.x. Use this 22 | # constant for skip_if decorators as a placeholder until 6.0.0 is officially 23 | # released 24 | REDIS_6_VERSION = '5.9.0' 25 | 26 | _REDIS_VERSIONS = {} 27 | REDIS_INFO = {} 28 | 29 | default_redis_url = "redis://127.0.0.1:7001" 30 | 31 | 32 | def pytest_addoption(parser): 33 | parser.addoption( 34 | '--redis-url', 35 | default=default_redis_url, 36 | action="store", 37 | help="Redis connection string, defaults to `%(default)s`", 38 | ) 39 | 40 | 41 | def _get_info(redis_url): 42 | """ 43 | Fetch INFO from the node on port 7001; customized for a cluster environment. 44 | """ 45 | client = RedisCluster.from_url(redis_url) 46 | info = client.info() 47 | for node_name, node_data in info.items(): 48 | if '7001' in node_name: 49 | info = node_data 50 | client.connection_pool.disconnect() 51 | return info 52 | 53 | 54 | def pytest_sessionstart(session): 55 | redis_url = session.config.getoption("--redis-url") 56 | info = _get_info(redis_url) 57 | version = info["redis_version"] 58 | arch_bits = info["arch_bits"] 59 | REDIS_INFO["version"] = version 60 | REDIS_INFO["arch_bits"] = arch_bits 61 | 62 | 63 | def
get_version(**kwargs): 64 | params = {'host': 'localhost', 'port': 7000} 65 | params.update(kwargs) 66 | key = '%s:%s' % (params['host'], params['port']) 67 | if key not in _REDIS_VERSIONS: 68 | client = RedisCluster(**params) 69 | 70 | # INFO returns data for every node, but we only care about the node on port 7000 71 | client_info = client.info() 72 | for client_id, client_data in client_info.items(): 73 | if '7000' in client_id: 74 | _REDIS_VERSIONS[key] = client_data['redis_version'] 75 | 76 | client.connection_pool.disconnect() 77 | return _REDIS_VERSIONS[key] 78 | 79 | 80 | def _get_client(cls, request=None, **kwargs): 81 | params = {'host': 'localhost', 'port': 7000} 82 | params.update(kwargs) 83 | client = cls(**params) 84 | client.flushdb() 85 | if request: 86 | def teardown(): 87 | client.flushdb() 88 | client.connection_pool.disconnect() 89 | request.addfinalizer(teardown) 90 | return client 91 | 92 | 93 | def _init_client(request, cls=None, **kwargs): 94 | """Create a test client and register a teardown that flushes and disconnects it.""" 95 | 96 | client = _get_client(cls=cls, **kwargs) 97 | client.flushdb() 98 | if request: 99 | def teardown(): 100 | client.flushdb() 101 | client.connection_pool.disconnect() 102 | request.addfinalizer(teardown) 103 | return client 104 | 105 | 106 | def _init_mgt_client(request, cls=None, **kwargs): 107 | """Create a management test client and register a disconnect teardown.""" 108 | 109 | client = _get_client(cls=cls, **kwargs) 110 | if request: 111 | def teardown(): 112 | client.connection_pool.disconnect() 113 | request.addfinalizer(teardown) 114 | return client 115 | 116 | 117 | def skip_for_no_cluster_impl(): 118 | return pytest.mark.skipif(True, reason="Cluster has no working implementation for this test") 119 | 120 | 121 | def skip_if_not_password_protected_nodes(): 122 | """Skip unless the TEST_PASSWORD_PROTECTED environment variable is set.""" 123 | 124 | return pytest.mark.skipif('TEST_PASSWORD_PROTECTED' not in os.environ, reason="") 125 | 126 | 127 | def skip_if_server_version_lt(min_version): 128 | check = StrictVersion(get_version()) < StrictVersion(min_version) 129 | return pytest.mark.skipif(check, reason="") 130 | 131 |
132 | def skip_if_server_version_gte(min_version): 133 | check = StrictVersion(get_version()) >= StrictVersion(min_version) 134 | return pytest.mark.skipif(check, reason="") 135 | 136 | 137 | def skip_if_redis_py_version_lt(min_version): 138 | """Skip if the installed redis-py version is older than min_version.""" 139 | 140 | import redis 141 | version = redis.__version__ 142 | if StrictVersion(version) < StrictVersion(min_version): 143 | return pytest.mark.skipif(True, reason="") 144 | return pytest.mark.skipif(False, reason="") 145 | 146 | 147 | @pytest.fixture() 148 | def o(request, *args, **kwargs): 149 | """ 150 | Create a RedisCluster instance with decode_responses set to True. 151 | """ 152 | return _init_client(request, cls=RedisCluster, decode_responses=True, **kwargs) 153 | 154 | 155 | @pytest.fixture() 156 | def r(request, *args, **kwargs): 157 | """ 158 | Create a RedisCluster instance with default settings. 159 | """ 160 | return _init_client(request, cls=RedisCluster, **kwargs) 161 | 162 | 163 | @pytest.fixture() 164 | def ro(request, *args, **kwargs): 165 | """ 166 | Create a RedisCluster instance with read-only mode enabled. 167 | """ 168 | params = {'readonly_mode': True} 169 | params.update(kwargs) 170 | return _init_client(request, cls=RedisCluster, **params) 171 | 172 | 173 | @pytest.fixture() 174 | def s(*args, **kwargs): 175 | """ 176 | Create a RedisCluster instance with 'init_slot_cache' set to False. 177 | """ 178 | s = _get_client(RedisCluster, init_slot_cache=False, **kwargs) 179 | assert s.connection_pool.nodes.slots == {} 180 | assert s.connection_pool.nodes.nodes == {} 181 | return s 182 | 183 | 184 | @pytest.fixture() 185 | def t(*args, **kwargs): 186 | """ 187 | Create a regular Redis object instance. 188 | """ 189 | return Redis(*args, **kwargs) 190 | 191 | 192 | @pytest.fixture() 193 | def sr(request, *args, **kwargs): 194 | """ 195 | Returns an instance of RedisCluster with reinitialize_steps=1. 196 | """ 197 | return _init_client(request, reinitialize_steps=1, cls=RedisCluster, **kwargs) 198 | 199 | 200 | def
_gen_cluster_mock_resp(r, response): 201 | mock_connection_pool = Mock() 202 | connection = Mock() 203 | response = response 204 | connection.read_response.return_value = response 205 | mock_connection_pool.get_connection.return_value = connection 206 | r.connection_pool = mock_connection_pool 207 | return r 208 | 209 | 210 | @pytest.fixture() 211 | def mock_cluster_resp_ok(request, **kwargs): 212 | r = _get_client(RedisCluster, request, **kwargs) 213 | return _gen_cluster_mock_resp(r, 'OK') 214 | 215 | 216 | @pytest.fixture() 217 | def mock_cluster_resp_int(request, **kwargs): 218 | r = _get_client(RedisCluster, request, **kwargs) 219 | return _gen_cluster_mock_resp(r, '2') 220 | 221 | 222 | @pytest.fixture() 223 | def mock_cluster_resp_info(request, **kwargs): 224 | r = _get_client(RedisCluster, request, **kwargs) 225 | response = ( 226 | 'cluster_state:ok\r\ncluster_slots_assigned:16384\r\n' 227 | 'cluster_slots_ok:16384\r\ncluster_slots_pfail:0\r\n' 228 | 'cluster_slots_fail:0\r\ncluster_known_nodes:7\r\n' 229 | 'cluster_size:3\r\ncluster_current_epoch:7\r\n' 230 | 'cluster_my_epoch:2\r\ncluster_stats_messages_sent:170262\r\n' 231 | 'cluster_stats_messages_received:105653\r\n' 232 | ) 233 | return _gen_cluster_mock_resp(r, response) 234 | 235 | 236 | @pytest.fixture() 237 | def mock_cluster_resp_nodes(request, **kwargs): 238 | r = _get_client(RedisCluster, request, **kwargs) 239 | response = ( 240 | 'c8253bae761cb1ecb2b61857d85dfe455a0fec8b 172.17.0.7:7006 ' 241 | 'slave aa90da731f673a99617dfe930306549a09f83a6b 0 ' 242 | '1447836263059 5 connected\n' 243 | '9bd595fe4821a0e8d6b99d70faa660638a7612b3 172.17.0.7:7008 ' 244 | 'master - 0 1447836264065 0 connected\n' 245 | 'aa90da731f673a99617dfe930306549a09f83a6b 172.17.0.7:7003 ' 246 | 'myself,master - 0 0 2 connected 5461-10922\n' 247 | '1df047e5a594f945d82fc140be97a1452bcbf93e 172.17.0.7:7007 ' 248 | 'slave 19efe5a631f3296fdf21a5441680f893e8cc96ec 0 ' 249 | '1447836262556 3 connected\n' 250 | 
'4ad9a12e63e8f0207025eeba2354bcf4c85e5b22 172.17.0.7:7005 ' 251 | 'master - 0 1447836262555 7 connected 0-5460\n' 252 | '19efe5a631f3296fdf21a5441680f893e8cc96ec 172.17.0.7:7004 ' 253 | 'master - 0 1447836263562 3 connected 10923-16383\n' 254 | 'fbb23ed8cfa23f17eaf27ff7d0c410492a1093d6 172.17.0.7:7002 ' 255 | 'master,fail - 1447829446956 1447829444948 1 disconnected\n' 256 | ) 257 | return _gen_cluster_mock_resp(r, response) 258 | 259 | 260 | @pytest.fixture() 261 | def mock_cluster_resp_slaves(request, **kwargs): 262 | r = _get_client(RedisCluster, request, **kwargs) 263 | response = ("['1df047e5a594f945d82fc140be97a1452bcbf93e 172.17.0.7:7007 " 264 | "slave 19efe5a631f3296fdf21a5441680f893e8cc96ec 0 " 265 | "1447836789290 3 connected']") 266 | return _gen_cluster_mock_resp(r, response) 267 | 268 | 269 | def skip_unless_arch_bits(arch_bits): 270 | return pytest.mark.skipif( 271 | REDIS_INFO["arch_bits"] != arch_bits, 272 | reason="server is not {}-bit".format(arch_bits), 273 | ) 274 | 275 | 276 | def wait_for_command(client, monitor, command): 277 | # issue a command with a key name that's local to this process. 
278 | # if we find a command with our key before the command we're waiting 279 | # for, something went wrong 280 | redis_version = REDIS_INFO["version"] 281 | if StrictVersion(redis_version) >= StrictVersion('5.0.0'): 282 | id_str = str(client.client_id()) 283 | else: 284 | id_str = '%08x' % random.randrange(2**32) 285 | key = '__REDIS-PY-%s__' % id_str 286 | client.get(key) 287 | while True: 288 | monitor_response = monitor.next_command() 289 | if command in monitor_response['command']: 290 | return monitor_response 291 | if key in monitor_response['command']: 292 | return None 293 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # redis-py-cluster documentation build configuration file, created by 4 | # sphinx-quickstart on Tue Mar 29 23:29:46 2016. 5 | # 6 | # This file is execfile()d with the current directory set to its 7 | # containing dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 14 | 15 | import sys 16 | import os 17 | 18 | # Custom RTD sphinx theme 19 | import sphinx_rtd_theme 20 | 21 | # If extensions (or modules to document with autodoc) are in another directory, 22 | # add these directories to sys.path here. If the directory is relative to the 23 | # documentation root, use os.path.abspath to make it absolute, like shown here. 24 | #sys.path.insert(0, os.path.abspath('.')) 25 | 26 | # -- General configuration ------------------------------------------------ 27 | 28 | # If your documentation needs a minimal Sphinx version, state it here. 29 | #needs_sphinx = '1.0' 30 | 31 | # Add any Sphinx extension module names here, as strings. 
They can be 32 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 33 | # ones. 34 | extensions = [ 35 | "sphinx_rtd_theme", 36 | ] 37 | 38 | # Add any paths that contain templates here, relative to this directory. 39 | templates_path = ['_templates'] 40 | 41 | # The suffix(es) of source filenames. 42 | # You can specify multiple suffix as a list of string: 43 | # source_suffix = ['.rst', '.md'] 44 | source_suffix = '.rst' 45 | 46 | # The encoding of source files. 47 | #source_encoding = 'utf-8-sig' 48 | 49 | # The master toctree document. 50 | master_doc = 'index' 51 | 52 | # General information about the project. 53 | project = u'redis-py-cluster' 54 | copyright = u'2013-2020, Johan Andersson' 55 | author = u'Johan Andersson' 56 | 57 | # The version info for the project you're documenting, acts as replacement for 58 | # |version| and |release|, also used in various other places throughout the 59 | # built documents. 60 | # 61 | # The short X.Y version. 62 | version = u'2.1.3' 63 | # The full version, including alpha/beta/rc tags. 64 | release = u'2.1.3' 65 | 66 | # The language for content autogenerated by Sphinx. Refer to documentation 67 | # for a list of supported languages. 68 | # 69 | # This is also used if you do content translation via gettext catalogs. 70 | # Usually you set "language" from the command line for these cases. 71 | language = None 72 | 73 | # There are two options for replacing |today|: either, you set today to some 74 | # non-false value, then it is used: 75 | #today = '' 76 | # Else, today_fmt is used as the format for a strftime call. 77 | #today_fmt = '%B %d, %Y' 78 | 79 | # List of patterns, relative to source directory, that match files and 80 | # directories to ignore when looking for source files. 
81 | # This patterns also effect to html_static_path and html_extra_path 82 | exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store'] 83 | 84 | # The reST default role (used for this markup: `text`) to use for all 85 | # documents. 86 | #default_role = None 87 | 88 | # If true, '()' will be appended to :func: etc. cross-reference text. 89 | #add_function_parentheses = True 90 | 91 | # If true, the current module name will be prepended to all description 92 | # unit titles (such as .. function::). 93 | #add_module_names = True 94 | 95 | # If true, sectionauthor and moduleauthor directives will be shown in the 96 | # output. They are ignored by default. 97 | #show_authors = False 98 | 99 | # The name of the Pygments (syntax highlighting) style to use. 100 | pygments_style = 'sphinx' 101 | 102 | # A list of ignored prefixes for module index sorting. 103 | #modindex_common_prefix = [] 104 | 105 | # If true, keep warnings as "system message" paragraphs in the built documents. 106 | #keep_warnings = False 107 | 108 | # If true, `todo` and `todoList` produce output, else they produce nothing. 109 | todo_include_todos = False 110 | 111 | 112 | # -- Options for HTML output ---------------------------------------------- 113 | 114 | # The theme to use for HTML and HTML Help pages. See the documentation for 115 | # a list of builtin themes. 116 | html_theme = 'sphinx_rtd_theme' 117 | 118 | # Theme options are theme-specific and customize the look and feel of a theme 119 | # further. For a list of options available for each theme, see the 120 | # documentation. 121 | #html_theme_options = {} 122 | 123 | # Add any paths that contain custom themes here, relative to this directory. 124 | #html_theme_path = [] 125 | 126 | # The name for this set of Sphinx documents. 127 | # " v documentation" by default. 128 | #html_title = u'redis-py-cluster v1.2.0' 129 | 130 | # A shorter title for the navigation bar. Default is the same as html_title. 
131 | #html_short_title = None 132 | 133 | # The name of an image file (relative to this directory) to place at the top 134 | # of the sidebar. 135 | #html_logo = None 136 | 137 | # The name of an image file (relative to this directory) to use as a favicon of 138 | # the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 139 | # pixels large. 140 | #html_favicon = None 141 | 142 | # Add any paths that contain custom static files (such as style sheets) here, 143 | # relative to this directory. They are copied after the builtin static files, 144 | # so a file named "default.css" will overwrite the builtin "default.css". 145 | html_static_path = ['_static'] 146 | 147 | # Add any extra paths that contain custom files (such as robots.txt or 148 | # .htaccess) here, relative to this directory. These files are copied 149 | # directly to the root of the documentation. 150 | #html_extra_path = [] 151 | 152 | # If not None, a 'Last updated on:' timestamp is inserted at every page 153 | # bottom, using the given strftime format. 154 | # The empty string is equivalent to '%b %d, %Y'. 155 | #html_last_updated_fmt = None 156 | 157 | # If true, SmartyPants will be used to convert quotes and dashes to 158 | # typographically correct entities. 159 | #html_use_smartypants = True 160 | 161 | # Custom sidebar templates, maps document names to template names. 162 | #html_sidebars = {} 163 | 164 | # Additional templates that should be rendered to pages, maps page names to 165 | # template names. 166 | #html_additional_pages = {} 167 | 168 | # If false, no module index is generated. 169 | #html_domain_indices = True 170 | 171 | # If false, no index is generated. 172 | #html_use_index = True 173 | 174 | # If true, the index is split into individual pages for each letter. 175 | #html_split_index = False 176 | 177 | # If true, links to the reST sources are added to the pages. 
178 | #html_show_sourcelink = True 179 | 180 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 181 | #html_show_sphinx = True 182 | 183 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 184 | #html_show_copyright = True 185 | 186 | # If true, an OpenSearch description file will be output, and all pages will 187 | # contain a tag referring to it. The value of this option must be the 188 | # base URL from which the finished HTML is served. 189 | #html_use_opensearch = '' 190 | 191 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 192 | #html_file_suffix = None 193 | 194 | # Language to be used for generating the HTML full-text search index. 195 | # Sphinx supports the following languages: 196 | # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' 197 | # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh' 198 | #html_search_language = 'en' 199 | 200 | # A dictionary with options for the search language support, empty by default. 201 | # 'ja' uses this config value. 202 | # 'zh' user can custom change `jieba` dictionary path. 203 | #html_search_options = {'type': 'default'} 204 | 205 | # The name of a javascript file (relative to the configuration directory) that 206 | # implements a search results scorer. If empty, the default will be used. 207 | #html_search_scorer = 'scorer.js' 208 | 209 | # Output file base name for HTML help builder. 210 | htmlhelp_basename = 'redis-py-clusterdoc' 211 | 212 | # -- Options for LaTeX output --------------------------------------------- 213 | 214 | latex_elements = { 215 | # The paper size ('letterpaper' or 'a4paper'). 216 | #'papersize': 'letterpaper', 217 | 218 | # The font size ('10pt', '11pt' or '12pt'). 219 | #'pointsize': '10pt', 220 | 221 | # Additional stuff for the LaTeX preamble. 222 | #'preamble': '', 223 | 224 | # Latex figure (float) alignment 225 | #'figure_align': 'htbp', 226 | } 227 | 228 | # Grouping the document tree into LaTeX files. 
List of tuples 229 | # (source start file, target name, title, 230 | # author, documentclass [howto, manual, or own class]). 231 | latex_documents = [ 232 | (master_doc, 'redis-py-cluster.tex', u'redis-py-cluster Documentation', 233 | u'Johan Andersson', 'manual'), 234 | ] 235 | 236 | # The name of an image file (relative to this directory) to place at the top of 237 | # the title page. 238 | #latex_logo = None 239 | 240 | # For "manual" documents, if this is true, then toplevel headings are parts, 241 | # not chapters. 242 | #latex_use_parts = False 243 | 244 | # If true, show page references after internal links. 245 | #latex_show_pagerefs = False 246 | 247 | # If true, show URL addresses after external links. 248 | #latex_show_urls = False 249 | 250 | # Documents to append as an appendix to all manuals. 251 | #latex_appendices = [] 252 | 253 | # If false, no module index is generated. 254 | #latex_domain_indices = True 255 | 256 | 257 | # -- Options for manual page output --------------------------------------- 258 | 259 | # One entry per manual page. List of tuples 260 | # (source start file, name, description, authors, manual section). 261 | man_pages = [ 262 | (master_doc, 'redis-py-cluster', u'redis-py-cluster Documentation', 263 | [author], 1) 264 | ] 265 | 266 | # If true, show URL addresses after external links. 267 | #man_show_urls = False 268 | 269 | 270 | # -- Options for Texinfo output ------------------------------------------- 271 | 272 | # Grouping the document tree into Texinfo files. List of tuples 273 | # (source start file, target name, title, author, 274 | # dir menu entry, description, category) 275 | texinfo_documents = [ 276 | (master_doc, 'redis-py-cluster', u'redis-py-cluster Documentation', 277 | author, 'redis-py-cluster', 'One line description of project.', 278 | 'Miscellaneous'), 279 | ] 280 | 281 | # Documents to append as an appendix to all manuals. 282 | #texinfo_appendices = [] 283 | 284 | # If false, no module index is generated. 
285 | #texinfo_domain_indices = True 286 | 287 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 288 | #texinfo_show_urls = 'footnote' 289 | 290 | # If true, do not generate a @detailmenu in the "Top" node's menu. 291 | #texinfo_no_detailmenu = False 292 | -------------------------------------------------------------------------------- /tests/test_pipeline.py: -------------------------------------------------------------------------------- 1 | from __future__ import unicode_literals 2 | import pytest 3 | 4 | import redis 5 | from redis._compat import unichr, unicode 6 | from .conftest import wait_for_command, skip_if_server_version_lt, skip_for_no_cluster_impl 7 | 8 | 9 | class TestPipeline(object): 10 | def test_pipeline_is_true(self, r): 11 | "Ensure pipeline instances are not false-y" 12 | with r.pipeline() as pipe: 13 | assert pipe 14 | 15 | def test_pipeline(self, r): 16 | with r.pipeline() as pipe: 17 | (pipe.set('a', 'a1') 18 | .get('a') 19 | .zadd('z', {'z1': 1}) 20 | .zadd('z', {'z2': 4}) 21 | .zincrby('z', 1, 'z1') 22 | .zrange('z', 0, 5, withscores=True)) 23 | assert pipe.execute() == \ 24 | [ 25 | True, 26 | b'a1', 27 | True, 28 | True, 29 | 2.0, 30 | [(b'z1', 2.0), (b'z2', 4)], 31 | ] 32 | 33 | def test_pipeline_memoryview(self, r): 34 | with r.pipeline() as pipe: 35 | (pipe.set('a', memoryview(b'a1')) 36 | .get('a')) 37 | assert pipe.execute() == [True, b'a1'] 38 | 39 | def test_pipeline_length(self, r): 40 | with r.pipeline() as pipe: 41 | # Initially empty. 42 | assert len(pipe) == 0 43 | 44 | # Fill 'er up! 45 | pipe.set('a', 'a1').set('b', 'b1').set('c', 'c1') 46 | assert len(pipe) == 3 47 | 48 | # Execute calls reset(), so empty once again. 
49 | pipe.execute() 50 | assert len(pipe) == 0 51 | 52 | def test_pipeline_no_transaction(self, r): 53 | with r.pipeline(transaction=False) as pipe: 54 | pipe.set('a', 'a1').set('b', 'b1').set('c', 'c1') 55 | assert pipe.execute() == [True, True, True] 56 | assert r['a'] == b'a1' 57 | assert r['b'] == b'b1' 58 | assert r['c'] == b'c1' 59 | 60 | @skip_for_no_cluster_impl() 61 | def test_pipeline_no_transaction_watch(self, r): 62 | r['a'] = 0 63 | 64 | with r.pipeline(transaction=False) as pipe: 65 | pipe.watch('a') 66 | a = pipe.get('a') 67 | 68 | pipe.multi() 69 | pipe.set('a', int(a) + 1) 70 | assert pipe.execute() == [True] 71 | 72 | @skip_for_no_cluster_impl() 73 | def test_pipeline_no_transaction_watch_failure(self, r): 74 | r['a'] = 0 75 | 76 | with r.pipeline(transaction=False) as pipe: 77 | pipe.watch('a') 78 | a = pipe.get('a') 79 | 80 | r['a'] = 'bad' 81 | 82 | pipe.multi() 83 | pipe.set('a', int(a) + 1) 84 | 85 | with pytest.raises(redis.WatchError): 86 | pipe.execute() 87 | 88 | assert r['a'] == b'bad' 89 | 90 | def test_exec_error_in_response(self, r): 91 | """ 92 | an invalid pipeline command at exec time adds the exception instance 93 | to the list of returned values 94 | """ 95 | r['c'] = 'a' 96 | with r.pipeline() as pipe: 97 | pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) 98 | result = pipe.execute(raise_on_error=False) 99 | 100 | assert result[0] 101 | assert r['a'] == b'1' 102 | assert result[1] 103 | assert r['b'] == b'2' 104 | 105 | # we can't lpush to a key that's a string value, so this should 106 | # be a ResponseError exception 107 | assert isinstance(result[2], redis.ResponseError) 108 | assert r['c'] == b'a' 109 | 110 | # since this isn't a transaction, the other commands after the 111 | # error are still executed 112 | assert result[3] 113 | assert r['d'] == b'4' 114 | 115 | # make sure the pipe was restored to a working state 116 | assert pipe.set('z', 'zzz').execute() == [True] 117 | assert r['z'] == b'zzz' 118 | 119 | def 
test_exec_error_raised(self, r): 120 | r['c'] = 'a' 121 | with r.pipeline() as pipe: 122 | pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) 123 | with pytest.raises(redis.ResponseError) as ex: 124 | pipe.execute() 125 | assert unicode(ex.value).startswith('Command # 3 (LPUSH c 3) of ' 126 | 'pipeline caused error: ') 127 | 128 | # make sure the pipe was restored to a working state 129 | assert pipe.set('z', 'zzz').execute() == [True] 130 | assert r['z'] == b'zzz' 131 | 132 | @skip_for_no_cluster_impl() 133 | def test_transaction_with_empty_error_command(self, r): 134 | """ 135 | Commands with custom EMPTY_ERROR functionality return their default 136 | values in the pipeline no matter the raise_on_error preference 137 | """ 138 | for error_switch in (True, False): 139 | with r.pipeline() as pipe: 140 | pipe.set('a', 1).mget([]).set('c', 3) 141 | result = pipe.execute(raise_on_error=error_switch) 142 | 143 | assert result[0] 144 | assert result[1] == [] 145 | assert result[2] 146 | 147 | @skip_for_no_cluster_impl() 148 | def test_pipeline_with_empty_error_command(self, r): 149 | """ 150 | Commands with custom EMPTY_ERROR functionality return their default 151 | values in the pipeline no matter the raise_on_error preference 152 | """ 153 | for error_switch in (True, False): 154 | with r.pipeline(transaction=False) as pipe: 155 | pipe.set('a', 1).mget([]).set('c', 3) 156 | result = pipe.execute(raise_on_error=error_switch) 157 | 158 | assert result[0] 159 | assert result[1] == [] 160 | assert result[2] 161 | 162 | def test_parse_error_raised(self, r): 163 | with r.pipeline() as pipe: 164 | # the zrem is invalid because we don't pass any keys to it 165 | pipe.set('a', 1).zrem('b').set('b', 2) 166 | with pytest.raises(redis.ResponseError) as ex: 167 | pipe.execute() 168 | 169 | assert unicode(ex.value).startswith('Command # 2 (ZREM b) of ' 170 | 'pipeline caused error: ') 171 | 172 | # make sure the pipe was restored to a working state 173 | assert pipe.set('z', 
'zzz').execute() == [True] 174 | assert r['z'] == b'zzz' 175 | 176 | @skip_for_no_cluster_impl() 177 | def test_parse_error_raised_transaction(self, r): 178 | with r.pipeline() as pipe: 179 | pipe.multi() 180 | # the zrem is invalid because we don't pass any keys to it 181 | pipe.set('a', 1).zrem('b').set('b', 2) 182 | with pytest.raises(redis.ResponseError) as ex: 183 | pipe.execute() 184 | 185 | assert unicode(ex.value).startswith('Command # 2 (ZREM b) of ' 186 | 'pipeline caused error: ') 187 | 188 | # make sure the pipe was restored to a working state 189 | assert pipe.set('z', 'zzz').execute() == [True] 190 | assert r['z'] == b'zzz' 191 | 192 | @skip_for_no_cluster_impl() 193 | def test_watch_succeed(self, r): 194 | r['a'] = 1 195 | r['b'] = 2 196 | 197 | with r.pipeline() as pipe: 198 | pipe.watch('a', 'b') 199 | assert pipe.watching 200 | a_value = pipe.get('a') 201 | b_value = pipe.get('b') 202 | assert a_value == b'1' 203 | assert b_value == b'2' 204 | pipe.multi() 205 | 206 | pipe.set('c', 3) 207 | assert pipe.execute() == [True] 208 | assert not pipe.watching 209 | 210 | @skip_for_no_cluster_impl() 211 | def test_watch_failure(self, r): 212 | r['a'] = 1 213 | r['b'] = 2 214 | 215 | with r.pipeline() as pipe: 216 | pipe.watch('a', 'b') 217 | r['b'] = 3 218 | pipe.multi() 219 | pipe.get('a') 220 | with pytest.raises(redis.WatchError): 221 | pipe.execute() 222 | 223 | assert not pipe.watching 224 | 225 | @skip_for_no_cluster_impl() 226 | def test_watch_failure_in_empty_transaction(self, r): 227 | r['a'] = 1 228 | r['b'] = 2 229 | 230 | with r.pipeline() as pipe: 231 | pipe.watch('a', 'b') 232 | r['b'] = 3 233 | pipe.multi() 234 | with pytest.raises(redis.WatchError): 235 | pipe.execute() 236 | 237 | assert not pipe.watching 238 | 239 | @skip_for_no_cluster_impl() 240 | def test_unwatch(self, r): 241 | r['a'] = 1 242 | r['b'] = 2 243 | 244 | with r.pipeline() as pipe: 245 | pipe.watch('a', 'b') 246 | r['b'] = 3 247 | pipe.unwatch() 248 | assert not 
pipe.watching 249 | pipe.get('a') 250 | assert pipe.execute() == [b'1'] 251 | 252 | @skip_for_no_cluster_impl() 253 | def test_watch_exec_no_unwatch(self, r): 254 | r['a'] = 1 255 | r['b'] = 2 256 | 257 | with r.monitor() as m: 258 | with r.pipeline() as pipe: 259 | pipe.watch('a', 'b') 260 | assert pipe.watching 261 | a_value = pipe.get('a') 262 | b_value = pipe.get('b') 263 | assert a_value == b'1' 264 | assert b_value == b'2' 265 | pipe.multi() 266 | pipe.set('c', 3) 267 | assert pipe.execute() == [True] 268 | assert not pipe.watching 269 | 270 | unwatch_command = wait_for_command(r, m, 'UNWATCH') 271 | assert unwatch_command is None, "should not send UNWATCH" 272 | 273 | @skip_for_no_cluster_impl() 274 | def test_watch_reset_unwatch(self, r): 275 | r['a'] = 1 276 | 277 | with r.monitor() as m: 278 | with r.pipeline() as pipe: 279 | pipe.watch('a') 280 | assert pipe.watching 281 | pipe.reset() 282 | assert not pipe.watching 283 | 284 | unwatch_command = wait_for_command(r, m, 'UNWATCH') 285 | assert unwatch_command is not None 286 | assert unwatch_command['command'] == 'UNWATCH' 287 | 288 | @skip_for_no_cluster_impl() 289 | def test_transaction_callable(self, r): 290 | r['a'] = 1 291 | r['b'] = 2 292 | has_run = [] 293 | 294 | def my_transaction(pipe): 295 | a_value = pipe.get('a') 296 | assert a_value in (b'1', b'2') 297 | b_value = pipe.get('b') 298 | assert b_value == b'2' 299 | 300 | # silly run-once code... incr's "a" so WatchError should be raised 301 | # forcing this all to run again. 
this should incr "a" once to "2" 302 | if not has_run: 303 | r.incr('a') 304 | has_run.append('it has') 305 | 306 | pipe.multi() 307 | pipe.set('c', int(a_value) + int(b_value)) 308 | 309 | result = r.transaction(my_transaction, 'a', 'b') 310 | assert result == [True] 311 | assert r['c'] == b'4' 312 | 313 | @skip_for_no_cluster_impl() 314 | def test_transaction_callable_returns_value_from_callable(self, r): 315 | def callback(pipe): 316 | # No need to do anything here since we only want the return value 317 | return 'a' 318 | 319 | res = r.transaction(callback, 'my-key', value_from_callable=True) 320 | assert res == 'a' 321 | 322 | def test_exec_error_in_no_transaction_pipeline(self, r): 323 | r['a'] = 1 324 | with r.pipeline(transaction=False) as pipe: 325 | pipe.llen('a') 326 | pipe.expire('a', 100) 327 | 328 | with pytest.raises(redis.ResponseError) as ex: 329 | pipe.execute() 330 | 331 | assert unicode(ex.value).startswith('Command # 1 (LLEN a) of ' 332 | 'pipeline caused error: ') 333 | 334 | assert r['a'] == b'1' 335 | 336 | def test_exec_error_in_no_transaction_pipeline_unicode_command(self, r): 337 | key = unichr(3456) + 'abcd' + unichr(3421) 338 | r[key] = 1 339 | with r.pipeline(transaction=False) as pipe: 340 | pipe.llen(key) 341 | pipe.expire(key, 100) 342 | 343 | with pytest.raises(redis.ResponseError) as ex: 344 | pipe.execute() 345 | 346 | expected = unicode('Command # 1 (LLEN %s) of pipeline caused ' 347 | 'error: ') % key 348 | assert unicode(ex.value).startswith(expected) 349 | 350 | assert r[key] == b'1' 351 | 352 | @skip_if_server_version_lt('3.2.0') 353 | def test_pipeline_with_bitfield(self, r): 354 | with r.pipeline() as pipe: 355 | pipe.set('a', '1') 356 | bf = pipe.bitfield('b') 357 | pipe2 = (bf 358 | .set('u8', 8, 255) 359 | .get('u8', 0) 360 | .get('u4', 8) # 1111 361 | .get('u4', 12) # 1111 362 | .get('u4', 13) # 1110 363 | .execute()) 364 | pipe.get('a') 365 | response = pipe.execute() 366 | 367 | assert pipe == pipe2 368 | assert 
response == [True, [0, 0, 15, 15, 14], b'1'] 369 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | PATH := ./redis-git/src:${PATH} 2 | 3 | # CLUSTER REDIS NODES 4 | define REDIS_CLUSTER_NODE1_CONF 5 | daemonize yes 6 | port 7000 7 | cluster-node-timeout 5000 8 | pidfile /tmp/redis_cluster_node1.pid 9 | logfile /tmp/redis_cluster_node1.log 10 | save "" 11 | appendonly no 12 | cluster-enabled yes 13 | cluster-config-file /tmp/redis_cluster_node1.conf 14 | endef 15 | 16 | define REDIS_CLUSTER_NODE2_CONF 17 | daemonize yes 18 | port 7001 19 | cluster-node-timeout 5000 20 | pidfile /tmp/redis_cluster_node2.pid 21 | logfile /tmp/redis_cluster_node2.log 22 | save "" 23 | appendonly no 24 | cluster-enabled yes 25 | cluster-config-file /tmp/redis_cluster_node2.conf 26 | endef 27 | 28 | define REDIS_CLUSTER_NODE3_CONF 29 | daemonize yes 30 | port 7002 31 | cluster-node-timeout 5000 32 | pidfile /tmp/redis_cluster_node3.pid 33 | logfile /tmp/redis_cluster_node3.log 34 | save "" 35 | appendonly no 36 | cluster-enabled yes 37 | cluster-config-file /tmp/redis_cluster_node3.conf 38 | endef 39 | 40 | define REDIS_CLUSTER_NODE4_CONF 41 | daemonize yes 42 | port 7003 43 | cluster-node-timeout 5000 44 | pidfile /tmp/redis_cluster_node4.pid 45 | logfile /tmp/redis_cluster_node4.log 46 | save "" 47 | appendonly no 48 | cluster-enabled yes 49 | cluster-config-file /tmp/redis_cluster_node4.conf 50 | endef 51 | 52 | define REDIS_CLUSTER_NODE5_CONF 53 | daemonize yes 54 | port 7004 55 | cluster-node-timeout 5000 56 | pidfile /tmp/redis_cluster_node5.pid 57 | logfile /tmp/redis_cluster_node5.log 58 | save "" 59 | appendonly no 60 | cluster-enabled yes 61 | cluster-config-file /tmp/redis_cluster_node5.conf 62 | endef 63 | 64 | define REDIS_CLUSTER_NODE6_CONF 65 | daemonize yes 66 | port 7005 67 | cluster-node-timeout 5000 68 | pidfile /tmp/redis_cluster_node6.pid 
69 | logfile /tmp/redis_cluster_node6.log 70 | save "" 71 | appendonly no 72 | cluster-enabled yes 73 | cluster-config-file /tmp/redis_cluster_node6.conf 74 | endef 75 | 76 | define REDIS_CLUSTER_NODE7_CONF 77 | daemonize yes 78 | port 7006 79 | cluster-node-timeout 5000 80 | pidfile /tmp/redis_cluster_node7.pid 81 | logfile /tmp/redis_cluster_node7.log 82 | save "" 83 | appendonly no 84 | cluster-enabled yes 85 | cluster-config-file /tmp/redis_cluster_node7.conf 86 | endef 87 | 88 | define REDIS_CLUSTER_NODE8_CONF 89 | daemonize yes 90 | port 7007 91 | cluster-node-timeout 5000 92 | pidfile /tmp/redis_cluster_node8.pid 93 | logfile /tmp/redis_cluster_node8.log 94 | save "" 95 | appendonly no 96 | cluster-enabled yes 97 | cluster-config-file /tmp/redis_cluster_node8.conf 98 | endef 99 | 100 | 101 | # CLUSTER REDIS PASSWORD PROTECTED NODES 102 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE1_CONF 103 | daemonize yes 104 | port 7100 105 | cluster-node-timeout 5000 106 | pidfile /tmp/redis_cluster_password_protected_node1.pid 107 | logfile /tmp/redis_cluster_password_protected_node1.log 108 | save "" 109 | masterauth password_is_protected 110 | requirepass password_is_protected 111 | appendonly no 112 | cluster-enabled yes 113 | cluster-config-file /tmp/redis_cluster_password_protected_node1.conf 114 | endef 115 | 116 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE2_CONF 117 | daemonize yes 118 | port 7101 119 | cluster-node-timeout 5000 120 | pidfile /tmp/redis_cluster_password_protected_node2.pid 121 | logfile /tmp/redis_cluster_password_protected_node2.log 122 | save "" 123 | masterauth password_is_protected 124 | requirepass password_is_protected 125 | appendonly no 126 | cluster-enabled yes 127 | cluster-config-file /tmp/redis_cluster_password_protected_node2.conf 128 | endef 129 | 130 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE3_CONF 131 | daemonize yes 132 | port 7102 133 | cluster-node-timeout 5000 134 | pidfile /tmp/redis_cluster_password_protected_node3.pid 
135 | logfile /tmp/redis_cluster_password_protected_node3.log 136 | save "" 137 | masterauth password_is_protected 138 | requirepass password_is_protected 139 | appendonly no 140 | cluster-enabled yes 141 | cluster-config-file /tmp/redis_cluster_password_protected_node3.conf 142 | endef 143 | 144 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE4_CONF 145 | daemonize yes 146 | port 7103 147 | cluster-node-timeout 5000 148 | pidfile /tmp/redis_cluster_password_protected_node4.pid 149 | logfile /tmp/redis_cluster_password_protected_node4.log 150 | save "" 151 | masterauth password_is_protected 152 | requirepass password_is_protected 153 | appendonly no 154 | cluster-enabled yes 155 | cluster-config-file /tmp/redis_cluster_password_protected_node4.conf 156 | endef 157 | 158 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE5_CONF 159 | daemonize yes 160 | port 7104 161 | cluster-node-timeout 5000 162 | pidfile /tmp/redis_cluster_password_protected_node5.pid 163 | logfile /tmp/redis_cluster_password_protected_node5.log 164 | save "" 165 | masterauth password_is_protected 166 | requirepass password_is_protected 167 | appendonly no 168 | cluster-enabled yes 169 | cluster-config-file /tmp/redis_cluster_password_protected_node5.conf 170 | endef 171 | 172 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE6_CONF 173 | daemonize yes 174 | port 7105 175 | cluster-node-timeout 5000 176 | pidfile /tmp/redis_cluster_password_protected_node6.pid 177 | logfile /tmp/redis_cluster_password_protected_node6.log 178 | save "" 179 | masterauth password_is_protected 180 | requirepass password_is_protected 181 | appendonly no 182 | cluster-enabled yes 183 | cluster-config-file /tmp/redis_cluster_password_protected_node6.conf 184 | endef 185 | 186 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE7_CONF 187 | daemonize yes 188 | port 7106 189 | cluster-node-timeout 5000 190 | pidfile /tmp/redis_cluster_password_protected_node7.pid 191 | logfile /tmp/redis_cluster_password_protected_node7.log 192 | save "" 
193 | masterauth password_is_protected 194 | requirepass password_is_protected 195 | appendonly no 196 | cluster-enabled yes 197 | cluster-config-file /tmp/redis_cluster_password_protected_node7.conf 198 | endef 199 | 200 | define REDIS_CLUSTER_PASSWORD_PROTECTED_NODE8_CONF 201 | daemonize yes 202 | port 7107 203 | cluster-node-timeout 5000 204 | pidfile /tmp/redis_cluster_password_protected_node8.pid 205 | logfile /tmp/redis_cluster_password_protected_node8.log 206 | save "" 207 | masterauth password_is_protected 208 | requirepass password_is_protected 209 | appendonly no 210 | cluster-enabled yes 211 | cluster-config-file /tmp/redis_cluster_password_protected_node8.conf 212 | endef 213 | 214 | ifndef REDIS_TRIB_RB 215 | REDIS_TRIB_RB=tests/redis-trib.rb 216 | endif 217 | 218 | ifndef REDIS_VERSION 219 | REDIS_VERSION=5.0.9 220 | endif 221 | 222 | export REDIS_CLUSTER_NODE1_CONF 223 | export REDIS_CLUSTER_NODE2_CONF 224 | export REDIS_CLUSTER_NODE3_CONF 225 | export REDIS_CLUSTER_NODE4_CONF 226 | export REDIS_CLUSTER_NODE5_CONF 227 | export REDIS_CLUSTER_NODE6_CONF 228 | export REDIS_CLUSTER_NODE7_CONF 229 | export REDIS_CLUSTER_NODE8_CONF 230 | 231 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE1_CONF 232 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE2_CONF 233 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE3_CONF 234 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE4_CONF 235 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE5_CONF 236 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE6_CONF 237 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE7_CONF 238 | export REDIS_CLUSTER_PASSWORD_PROTECTED_NODE8_CONF 239 | 240 | help: 241 | @echo "Please use 'make <target>' where <target> is one of" 242 | @echo " clean remove temporary files created by build tools" 243 | @echo " cleanmeta removes all META-* and egg-info/ files created by build tools" 244 | @echo " cleancov remove all files related to coverage reports" 245 | @echo " cleanall all the above + tmp files from development tools" 246 | @echo " 
test run test suite" 247 | @echo " sdist make a source distribution" 248 | @echo " bdist make an egg distribution" 249 | @echo " install install package" 250 | @echo " benchmark runs all benchmarks. assumes nodes running on port 7001 and 7007" 251 | @echo " *** CI Commands ***" 252 | @echo " start starts a test redis cluster" 253 | @echo " stop stop all started redis nodes (Started via 'make start' only affected)" 254 | @echo " cleanup cleanup files after running a test cluster" 255 | @echo " test starts/activates the test cluster nodes and runs tox test" 256 | @echo " tox run all tox environments and combine coverage report after" 257 | @echo " redis-install checkout latest redis commit --> build --> install ruby dependencies" 258 | 259 | clean: 260 | -rm -f MANIFEST 261 | -rm -rf dist/ 262 | -rm -rf build/ 263 | 264 | cleancov: 265 | -rm -rf htmlcov/ 266 | -coverage combine 267 | -coverage erase 268 | 269 | cleanmeta: 270 | -rm -rf redis_py_cluster.egg-info/ 271 | 272 | cleanall: clean cleancov cleanmeta 273 | -find . -type f -name "*~" -exec rm -f "{}" \; 274 | -find . -type f -name "*.orig" -exec rm -f "{}" \; 275 | -find . -type f -name "*.rej" -exec rm -f "{}" \; 276 | -find . -type f -name "*.pyc" -exec rm -f "{}" \; 277 | -find . 
-type f -name "*.parse-index" -exec rm -f "{}" \; 278 | 279 | sdist: cleanmeta 280 | python setup.py sdist 281 | 282 | bdist: cleanmeta 283 | python setup.py bdist_egg 284 | 285 | install: 286 | python setup.py install 287 | 288 | start: cleanup 289 | echo "$$REDIS_CLUSTER_NODE1_CONF" | redis-server - 290 | echo "$$REDIS_CLUSTER_NODE2_CONF" | redis-server - 291 | echo "$$REDIS_CLUSTER_NODE3_CONF" | redis-server - 292 | echo "$$REDIS_CLUSTER_NODE4_CONF" | redis-server - 293 | echo "$$REDIS_CLUSTER_NODE5_CONF" | redis-server - 294 | echo "$$REDIS_CLUSTER_NODE6_CONF" | redis-server - 295 | echo "$$REDIS_CLUSTER_NODE7_CONF" | redis-server - 296 | echo "$$REDIS_CLUSTER_NODE8_CONF" | redis-server - 297 | 298 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE1_CONF" | redis-server - 299 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE2_CONF" | redis-server - 300 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE3_CONF" | redis-server - 301 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE4_CONF" | redis-server - 302 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE5_CONF" | redis-server - 303 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE6_CONF" | redis-server - 304 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE7_CONF" | redis-server - 305 | echo "$$REDIS_CLUSTER_PASSWORD_PROTECTED_NODE8_CONF" | redis-server - 306 | 307 | sleep 5 308 | echo "yes" | ruby $(REDIS_TRIB_RB) create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 309 | 310 | sleep 5 311 | echo "yes" | ruby $(REDIS_TRIB_RB) create --replicas 1 --password password_is_protected 127.0.0.1:7100 127.0.0.1:7101 127.0.0.1:7102 127.0.0.1:7103 127.0.0.1:7104 127.0.0.1:7105 312 | 313 | sleep 5 314 | 315 | cleanup: 316 | - rm -vf /tmp/redis_cluster_node*.conf 2>/dev/null 317 | - rm -vf /tmp/redis_cluster_password_protected_node*.conf 2>/dev/null 318 | - rm dump.rdb appendonly.aof - 2>/dev/null 319 | 320 | stop: 321 | kill `cat /tmp/redis_cluster_node1.pid` || true 322 | kill 
`cat /tmp/redis_cluster_node2.pid` || true 323 | kill `cat /tmp/redis_cluster_node3.pid` || true 324 | kill `cat /tmp/redis_cluster_node4.pid` || true 325 | kill `cat /tmp/redis_cluster_node5.pid` || true 326 | kill `cat /tmp/redis_cluster_node6.pid` || true 327 | kill `cat /tmp/redis_cluster_node7.pid` || true 328 | kill `cat /tmp/redis_cluster_node8.pid` || true 329 | 330 | kill `cat /tmp/redis_cluster_password_protected_node1.pid` || true 331 | kill `cat /tmp/redis_cluster_password_protected_node2.pid` || true 332 | kill `cat /tmp/redis_cluster_password_protected_node3.pid` || true 333 | kill `cat /tmp/redis_cluster_password_protected_node4.pid` || true 334 | kill `cat /tmp/redis_cluster_password_protected_node5.pid` || true 335 | kill `cat /tmp/redis_cluster_password_protected_node6.pid` || true 336 | kill `cat /tmp/redis_cluster_password_protected_node7.pid` || true 337 | kill `cat /tmp/redis_cluster_password_protected_node8.pid` || true 338 | 339 | rm -f /tmp/redis_cluster_node1.conf 340 | rm -f /tmp/redis_cluster_node2.conf 341 | rm -f /tmp/redis_cluster_node3.conf 342 | rm -f /tmp/redis_cluster_node4.conf 343 | rm -f /tmp/redis_cluster_node5.conf 344 | rm -f /tmp/redis_cluster_node6.conf 345 | rm -f /tmp/redis_cluster_node7.conf 346 | rm -f /tmp/redis_cluster_node8.conf 347 | 348 | rm -f /tmp/redis_cluster_password_protected_node1.conf 349 | rm -f /tmp/redis_cluster_password_protected_node2.conf 350 | rm -f /tmp/redis_cluster_password_protected_node3.conf 351 | rm -f /tmp/redis_cluster_password_protected_node4.conf 352 | rm -f /tmp/redis_cluster_password_protected_node5.conf 353 | rm -f /tmp/redis_cluster_password_protected_node6.conf 354 | rm -f /tmp/redis_cluster_password_protected_node7.conf 355 | rm -f /tmp/redis_cluster_password_protected_node8.conf 356 | 357 | test: 358 | make start 359 | make tox 360 | make stop 361 | 362 | tox: 363 | coverage erase 364 | tox 365 | TEST_PASSWORD_PROTECTED=1 tox 366 | coverage combine 367 | coverage report 368 | 369 | 
clone-redis: 370 | [ ! -e redis-git ] && git clone https://github.com/antirez/redis.git redis-git || true 371 | cd redis-git && git checkout $(REDIS_VERSION) 372 | 373 | redis-install: 374 | make clone-redis 375 | make -C redis-git -j4 376 | gem install redis 377 | sleep 3 378 | 379 | benchmark: 380 | @echo "" 381 | @echo " -- Running Simple benchmark with Redis lib and non cluster server --" 382 | python benchmarks/simple.py --port 7007 --timeit --nocluster 383 | @echo "" 384 | @echo " -- Running Simple benchmark with RedisCluster lib and cluster server --" 385 | python benchmarks/simple.py --port 7001 --timeit 386 | @echo "" 387 | @echo " -- Running Simple benchmark with pipelines & Redis lib and non cluster server --" 388 | python benchmarks/simple.py --port 7007 --timeit --pipeline --nocluster 389 | @echo "" 390 | @echo " -- Running Simple benchmark with RedisCluster lib and cluster server" 391 | python benchmarks/simple.py --port 7001 --timeit --pipeline 392 | 393 | ptp: 394 | python ptp-debug.py 395 | 396 | .PHONY: test 397 | -------------------------------------------------------------------------------- /tests/test_pipeline_cluster.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | 3 | # python std lib 4 | from __future__ import unicode_literals 5 | import re 6 | 7 | # rediscluster imports 8 | from rediscluster.client import RedisCluster 9 | from rediscluster.connection import ClusterConnectionPool, ClusterReadOnlyConnectionPool 10 | from rediscluster.exceptions import RedisClusterException 11 | from tests.conftest import _get_client 12 | 13 | # 3rd party imports 14 | import pytest 15 | from mock import patch 16 | from redis._compat import unicode 17 | from redis.exceptions import ResponseError, ConnectionError 18 | 19 | 20 | class TestPipeline(object): 21 | """ 22 | """ 23 | 24 | def test_pipeline_eval(self, r): 25 | with r.pipeline(transaction=False) as pipe: 26 | pipe.eval("return 
{KEYS[1],KEYS[2],ARGV[1],ARGV[2]}", 2, "A{foo}", "B{foo}", "first", "second") 27 | res = pipe.execute()[0] 28 | assert res[0] == b'A{foo}' 29 | assert res[1] == b'B{foo}' 30 | assert res[2] == b'first' 31 | assert res[3] == b'second' 32 | 33 | def test_blocked_methods(self, r): 34 | """ 35 | Currently some method calls on a Cluster pipeline 36 | are blocked when used in cluster mode. 37 | They may be implemented in the future. 38 | """ 39 | pipe = r.pipeline(transaction=False) 40 | with pytest.raises(RedisClusterException): 41 | pipe.multi() 42 | 43 | with pytest.raises(RedisClusterException): 44 | pipe.immediate_execute_command() 45 | 46 | with pytest.raises(RedisClusterException): 47 | pipe._execute_transaction(None, None, None) 48 | 49 | with pytest.raises(RedisClusterException): 50 | pipe.load_scripts() 51 | 52 | with pytest.raises(RedisClusterException): 53 | pipe.watch() 54 | 55 | with pytest.raises(RedisClusterException): 56 | pipe.unwatch() 57 | 58 | with pytest.raises(RedisClusterException): 59 | pipe.script_load_for_pipeline(None) 60 | 61 | with pytest.raises(RedisClusterException): 62 | pipe.transaction(None) 63 | 64 | def test_blocked_arguments(self, r): 65 | """ 66 | Currently some arguments are blocked when used in cluster mode. 67 | They may be implemented in the future.
68 | """ 69 | with pytest.raises(RedisClusterException) as ex: 70 | r.pipeline(transaction=True) 71 | 72 | assert unicode(ex.value).startswith("transaction is deprecated in cluster mode") 73 | 74 | with pytest.raises(RedisClusterException) as ex: 75 | r.pipeline(shard_hint=True) 76 | 77 | assert unicode(ex.value).startswith("shard_hint is deprecated in cluster mode") 78 | 79 | def test_redis_cluster_pipeline(self): 80 | """ 81 | Test that we can use a pipeline with the RedisCluster class 82 | """ 83 | r = _get_client(RedisCluster) 84 | with r.pipeline(transaction=False) as pipe: 85 | pipe.get("foobar") 86 | 87 | def test_mget_disabled(self, r): 88 | with r.pipeline(transaction=False) as pipe: 89 | with pytest.raises(RedisClusterException): 90 | pipe.mget(['a']) 91 | 92 | def test_mset_disabled(self, r): 93 | with r.pipeline(transaction=False) as pipe: 94 | with pytest.raises(RedisClusterException): 95 | pipe.mset({'a': 1, 'b': 2}) 96 | 97 | def test_rename_disabled(self, r): 98 | with r.pipeline(transaction=False) as pipe: 99 | with pytest.raises(RedisClusterException): 100 | pipe.rename('a', 'b') 101 | 102 | def test_renamenx_disabled(self, r): 103 | with r.pipeline(transaction=False) as pipe: 104 | with pytest.raises(RedisClusterException): 105 | pipe.renamenx('a', 'b') 106 | 107 | def test_delete_single(self, r): 108 | r['a'] = 1 109 | with r.pipeline(transaction=False) as pipe: 110 | pipe.delete('a') 111 | assert pipe.execute() == [1] 112 | 113 | def test_multi_delete_unsupported(self, r): 114 | with r.pipeline(transaction=False) as pipe: 115 | r['a'] = 1 116 | r['b'] = 2 117 | with pytest.raises(RedisClusterException): 118 | pipe.delete('a', 'b') 119 | 120 | def test_brpoplpush_disabled(self, r): 121 | with r.pipeline(transaction=False) as pipe: 122 | with
pytest.raises(RedisClusterException): 128 | pipe.rpoplpush() 129 | 130 | def test_sort_disabled(self, r): 131 | with r.pipeline(transaction=False) as pipe: 132 | with pytest.raises(RedisClusterException): 133 | pipe.sort() 134 | 135 | def test_sdiff_disabled(self, r): 136 | with r.pipeline(transaction=False) as pipe: 137 | with pytest.raises(RedisClusterException): 138 | pipe.sdiff() 139 | 140 | def test_sdiffstore_disabled(self, r): 141 | with r.pipeline(transaction=False) as pipe: 142 | with pytest.raises(RedisClusterException): 143 | pipe.sdiffstore() 144 | 145 | def test_sinter_disabled(self, r): 146 | with r.pipeline(transaction=False) as pipe: 147 | with pytest.raises(RedisClusterException): 148 | pipe.sinter() 149 | 150 | def test_sinterstore_disabled(self, r): 151 | with r.pipeline(transaction=False) as pipe: 152 | with pytest.raises(RedisClusterException): 153 | pipe.sinterstore() 154 | 155 | def test_smove_disabled(self, r): 156 | with r.pipeline(transaction=False) as pipe: 157 | with pytest.raises(RedisClusterException): 158 | pipe.smove() 159 | 160 | def test_sunion_disabled(self, r): 161 | with r.pipeline(transaction=False) as pipe: 162 | with pytest.raises(RedisClusterException): 163 | pipe.sunion() 164 | 165 | def test_sunionstore_disabled(self, r): 166 | with r.pipeline(transaction=False) as pipe: 167 | with pytest.raises(RedisClusterException): 168 | pipe.sunionstore() 169 | 170 | def test_pfmerge_disabled(self, r): 171 | with r.pipeline(transaction=False) as pipe: 172 | with pytest.raises(RedisClusterException): 173 | pipe.pfmerge() 174 | 175 | def test_multi_key_operation_with_shared_shards(self, r): 176 | pipe = r.pipeline(transaction=False) 177 | pipe.set('a{foo}', 1) 178 | pipe.set('b{foo}', 2) 179 | pipe.set('c{foo}', 3) 180 | pipe.set('bar', 4) 181 | pipe.set('bazz', 5) 182 | pipe.get('a{foo}') 183 | pipe.get('b{foo}') 184 | pipe.get('c{foo}') 185 | pipe.get('bar') 186 | pipe.get('bazz') 187 | res = pipe.execute() 188 | assert res == [True,
True, True, True, True, b'1', b'2', b'3', b'4', b'5'] 189 | 190 | @pytest.mark.xfail(reason="perform_execute_pipeline is not used any longer") 191 | def test_connection_error(self, r): 192 | test = self 193 | test._calls = [] 194 | 195 | def perform_execute_pipeline(pipe): 196 | if not test._calls: 197 | e = ConnectionError('test') 198 | test._calls.append({'exception': e}) 199 | return [e] 200 | result = pipe.execute(raise_on_error=False) 201 | test._calls.append({'result': result}) 202 | return result 203 | 204 | pipe = r.pipeline(transaction=False) 205 | orig_perform_execute_pipeline = pipe.perform_execute_pipeline 206 | pipe.perform_execute_pipeline = perform_execute_pipeline 207 | 208 | try: 209 | pipe.set('foo', 1) 210 | res = pipe.execute() 211 | assert res == [True] 212 | assert isinstance(test._calls[0]['exception'], ConnectionError) 213 | if len(test._calls) == 2: 214 | assert test._calls[1] == {'result': [True]} 215 | else: 216 | assert isinstance(test._calls[1]['result'][0], ResponseError) 217 | assert test._calls[2] == {'result': [True]} 218 | finally: 219 | pipe.perform_execute_pipeline = orig_perform_execute_pipeline 220 | del test._calls 221 | 222 | @pytest.mark.xfail(reason="perform_execute_pipeline is not used any longer") 223 | def test_asking_error(self, r): 224 | test = self 225 | test._calls = [] 226 | 227 | def perform_execute_pipeline(pipe): 228 | if not test._calls: 229 | 230 | e = ResponseError("ASK {0} 127.0.0.1:7003".format(r.keyslot('foo'))) 231 | test._calls.append({'exception': e}) 232 | return [e, e] 233 | result = pipe.execute(raise_on_error=False) 234 | test._calls.append({'result': result}) 235 | return result 236 | 237 | pipe = r.pipeline(transaction=False) 238 | orig_perform_execute_pipeline = pipe.perform_execute_pipeline 239 | pipe.perform_execute_pipeline = perform_execute_pipeline 240 | 241 | try: 242 | pipe.set('foo', 1) 243 | pipe.get('foo') 244 | res = pipe.execute() 245 | assert res == [True, b'1'] 246 | assert
isinstance(test._calls[0]['exception'], ResponseError) 247 | assert re.match("ASK", str(test._calls[0]['exception'])) 248 | assert isinstance(test._calls[1]['result'][0], ResponseError) 249 | assert re.match("MOVED", str(test._calls[1]['result'][0])) 250 | assert test._calls[2] == {'result': [True, b'1']} 251 | finally: 252 | pipe.perform_execute_pipeline = orig_perform_execute_pipeline 253 | del test._calls 254 | 255 | def test_empty_stack(self, r): 256 | """ 257 | If pipeline is executed with no commands it should 258 | return an empty list. 259 | """ 260 | p = r.pipeline() 261 | result = p.execute() 262 | assert result == [] 263 | 264 | 265 | class TestReadOnlyPipeline(object): 266 | 267 | def test_pipeline_readonly(self, r, ro): 268 | """ 269 | In readonly mode, we support get related commands only. 270 | """ 271 | r.set('foo71', 'a1') # we assume this key is set on 127.0.0.1:7001 272 | r.zadd('foo88', {'z1': 1}) # we assume this key is set on 127.0.0.1:7002 273 | r.zadd('foo88', {'z2': 4}) 274 | 275 | with ro.pipeline() as readonly_pipe: 276 | readonly_pipe.get('foo71').zrange('foo88', 0, 5, withscores=True) 277 | assert readonly_pipe.execute() == [ 278 | b'a1', 279 | [(b'z1', 1.0), (b'z2', 4)], 280 | ] 281 | 282 | def assert_moved_redirection_on_slave(self, connection_pool_cls, cluster_obj): 283 | with patch.object(connection_pool_cls, 'get_node_by_slot') as return_slave_mock: 284 | with patch.object(ClusterConnectionPool, 'get_master_node_by_slot') as return_master_mock: 285 | def get_mock_node(role, port): 286 | return { 287 | 'name': '127.0.0.1:{0}'.format(port), 288 | 'host': '127.0.0.1', 289 | 'port': port, 290 | 'server_type': role, 291 | } 292 | 293 | return_slave_mock.return_value = get_mock_node('slave', 7005) 294 | return_master_mock.return_value = get_mock_node('slave', 7001) 295 | 296 | with cluster_obj.pipeline() as pipe: 297 | # we assume this key is set on 127.0.0.1:7001(7004) 298 | assert pipe.get('foo87').get('foo88').execute() == [None, None] 299 |
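The `test_asking_error` case above builds redirection errors of the fixed form `ASK <slot> <host>:<port>` (and the client later sees `MOVED <slot> <host>:<port>`). As an illustrative sketch only, not the client's actual implementation, such an error string can be parsed like this:

```python
def parse_redirection(message):
    """Parse a MOVED/ASK redirection string, e.g. 'MOVED 12182 127.0.0.1:7002'.

    Returns a (kind, slot, host, port) tuple. Hypothetical helper for
    illustration; the real client handles these errors in its execute loop.
    """
    kind, slot, addr = message.split()
    # rsplit on the last ':' so hosts containing colons are not broken up
    host, port = addr.rsplit(':', 1)
    return kind, int(slot), host, int(port)
```

For example, `parse_redirection('ASK 866 127.0.0.1:7003')` yields `('ASK', 866, '127.0.0.1', 7003)`, which tells the client to retry that one command against the named node after sending `ASKING`.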
300 | def test_moved_redirection_on_slave_with_default(self): 301 | """ 302 | On Pipeline, we redirected once and finally get from master with 303 | readonly client when data is completely moved. 304 | """ 305 | self.assert_moved_redirection_on_slave( 306 | ClusterConnectionPool, 307 | RedisCluster(host="127.0.0.1", port=7000, reinitialize_steps=1) 308 | ) 309 | 310 | def test_moved_redirection_on_slave_with_readonly_mode_client(self): 311 | """ 312 | Ditto with READONLY mode. 313 | """ 314 | self.assert_moved_redirection_on_slave( 315 | ClusterReadOnlyConnectionPool, 316 | RedisCluster(host="127.0.0.1", port=7000, readonly_mode=True, reinitialize_steps=1) 317 | ) 318 | 319 | def test_access_correct_slave_with_readonly_mode_client(self, sr): 320 | """ 321 | Test that the client can get value normally with readonly mode 322 | when we connect to correct slave. 323 | """ 324 | 325 | # we assume this key is set on 127.0.0.1:7001 326 | sr.set('foo87', 'foo') 327 | sr.set('foo88', 'bar') 328 | import time 329 | time.sleep(1) 330 | 331 | with patch.object(ClusterReadOnlyConnectionPool, 'get_node_by_slot') as return_slave_mock: 332 | return_slave_mock.return_value = { 333 | 'name': '127.0.0.1:7004', 334 | 'host': '127.0.0.1', 335 | 'port': 7004, 336 | 'server_type': 'slave', 337 | } 338 | 339 | master_value = { 340 | 'host': '127.0.0.1', 341 | 'name': '127.0.0.1:7001', 342 | 'port': 7001, 343 | 'server_type': 'master', 344 | } 345 | with patch.object(ClusterConnectionPool, 'get_master_node_by_slot', return_value=master_value): 346 | readonly_client = RedisCluster(host="127.0.0.1", port=7000, readonly_mode=True) 347 | with readonly_client.pipeline() as readonly_pipe: 348 | assert readonly_pipe.get('foo88').get('foo87').execute() == [b'bar', b'foo'] 349 | -------------------------------------------------------------------------------- /docs/release-notes.rst: -------------------------------------------------------------------------------- 1 | Release Notes 2 | ============= 
3 | 4 | 2.1.3 (May 30 2021) 5 | ------------------- 6 | 7 | * Add example script pipeline-readonly-replicas.py to show how to use replica nodes to offload read commands from primary node 8 | * max_connections now defaults to 50 in ClusterBlockingConnectionPool to avoid issue with infinite loop in queue mechanism 9 | * Using read replica for read commands inside pipeline is now better supported. Feature might be unstable, use at your own risk. 10 | * Fixed an issue where in some cases when ConnectionError was raised, a non-existing connection was attempted to be disconnected, causing a secondary exception to be raised. 11 | 12 | 2.1.2 (Apr 18 2021) 13 | ------------------- 14 | 15 | * Fixed bug where "from rediscluster import *" would not work correctly 16 | 17 | 2.1.1 (Apr 18 2021) 18 | ------------------- 19 | 20 | * ClusterPipeline is now exposed when doing "from rediscluster import *" 21 | * Fix issue where connection would be None in some cases when connection pool fails to initialize 22 | * Ported in a fix from redis-py where it now checks if a connection is ready or not before returning the connection for usage 23 | * ClusterFailover command option is no longer mandatory but optional, as intended 24 | * Fixed "SLOWLOG GET" kwarg command where it failed on decode_responses 25 | * BaseException is now caught when executing commands and it will disconnect the connection before raising the exception. 26 | * Log the exception on ResponseError when doing the initial connection to the startup_nodes instances 27 | 28 | 2.1.0 (Sept 26, 2020) 29 | --------------------- 30 | 31 | * Add new config option for Client and Pipeline classes to control how many attempts will be made before bailing out from a ClusterDownError. 32 | Use "cluster_down_retry_attempts=" when creating the client class to control this behaviour.
33 | * Updated redis-py compatible version to support any version in the major versions 3.0.x, 3.1.x, 3.2.x, 3.3.x, 3.4.x, 3.5.x (#326) 34 | It is always recommended to use the latest version of redis-py to avoid issues and compatibility problems. 35 | * Fixed bug preventing reinitialization after getting MOVED errors 36 | * Add testing of redis-server 6.0 versions to travis and unit tests 37 | * Add python 2.7 compatibility note about deprecation and upcoming changes in python 2.7 support for this lib 38 | * Updated tests and cluster tests versions of the same methods to latest tests from upstream redis-py package 39 | * Reorganized tests and how cluster specific tests are written and run against the upstream version of the same test to make it easier 40 | and much faster to update and keep them in sync over time going into the future (#368) 41 | * Python 3.5.x or higher is now required if running on a python 3 version 42 | * Removed the monkeypatching of RedisCluster, ClusterPubSub & ClusterPipeline class names into the "redis" python package namespace during runtime. 43 | They are now exposed in the "rediscluster" namespace to mimic the same feature from redis-py 44 | * cluster_down_retry_attempts can now be configured to any value when creating RedisCluster instance 45 | * Creating RedisCluster from unix socket url:s has been disabled 46 | * Patch the from_url method to use the correct cluster version of the same Connection class 47 | * ConnectionError and TimeoutError are now handled separately in the main execute loop to better handle each case (#363) 48 | * Update scan_iter custom cluster implementation 49 | * Improve description_format handling for connection classes to simplify how they work 50 | * Implement new connection pool ClusterBlockingConnectionPool (#347) 51 | * Nodemanager initialize should now handle usernames properly (#365) 52 | * PubSub tests have all been disabled 53 | * New feature, host_port_remap.
Send in a remapping configuration to RedisCluster instance where the nodes configuration received from the redis cluster can be altered to allow for connection in certain circumstances. See new section in client.rst in docs/ for usage example. 54 | * When a slot is not covered by the cluster, it will now raise SlotNotCoveredError instead of the old generic RedisClusterException. The client will now attempt to rebuild the cluster layout a few times before giving up and raising that exception to the user. (#350) 55 | * CLIENT SETNAME is now possible to use from the client instance. For setting the name for all connections from the client by default, see issue #802 in redis-py repo for the change that was implemented in redis-py 3.4.0. 56 | * Rewrote implemented commands documentation to mimic the redis.io commands documentation and describe each command and any additional implementation that has been made. 57 | * Added RTD theme to the rendered output when running the documentation in local dev mode. 58 | * Added some basic logging to the client that should make it easier to debug and track down minor issues around the main execution loop. See docs/logging.rst for an implementation example for your own code. 59 | * Separated some of the exception handling inside the main execution loop to get more fine-grained control over what to do at certain errors. 60 | 61 | 62 | 2.0.0 (Aug 12, 2019) 63 | -------------------- 64 | 65 | Specific changes to redis-py-cluster are mentioned below. 66 | 67 | * Update entire code base to now support all redis-py versions in the 3.0.x version line. Any future redis-py version will be supported at a later time. 68 | * Major update to all tests to mirror the code of the same tests from redis-py 69 | * Dropped support for the 2.10.6 redis-py release.
70 | * Add pycodestyle lint validation check to travis-ci runs to check for proper linting before accepting PR:s 71 | * Class StrictRedisCluster was renamed to RedisCluster 72 | * Class StrictRedis has been removed to mirror upstream class structure 73 | * Class StrictClusterPipeline was renamed to ClusterPipeline 74 | * Fixed travis-ci tests not running properly on python 3.7 75 | * Fixed documentation regarding threads in pipelines 76 | * Update list of command callbacks and parsers. Added in "CLIENT ID" 77 | * Removed custom implementation of SORT and reverted back to using the same-slot mechanism for that command. 78 | * Added better exception message to get_master_node_by_slot command to help the user understand the error. 79 | * Improved the exception object message parsing when running on python3 80 | 81 | 82 | 1.3.6 (Nov 16, 2018) 83 | -------------------- 84 | 85 | * Pin upstream redis-py package to release 2.10.6 to avoid issues with incompatible version 3.0.0 86 | 87 | 88 | 1.3.5 (July 22, 2018) 89 | --------------------- 90 | 91 | * Add Redis 4 compatibility fix to CLUSTER NODES command (See issue #217) 92 | * Fixed bug with command "CLUSTER GETKEYSINSLOT" that was throwing exceptions 93 | * Added new method cluster_get_keys_in_slot() to client 94 | * Fixed bug with `StrictRedisCluster.from_url` that was ignoring the `readonly_mode` parameter 95 | * NodeManager will now ignore nodes showing cluster errors when initializing the cluster 96 | * Fix bug where RedisCluster wouldn't refresh the cluster table when executing commands on specific nodes 97 | * Add redis 5.0 to travis-ci tests 98 | * Change default redis version from 3.0.7 to 4.0.10 99 | * Increase accepted ranges of dependencies specified in dev-requirements.txt 100 | * Several major and minor documentation updates and tweaks 101 | * Add example script "from_url_password_protected.py" 102 | * command "CLUSTER GETKEYSINSLOT" now returns a list and not an int 103 | * Improve support for ssl
connections 104 | * Retry on Timeout errors when doing cluster discovery 105 | * Added new error class "MasterDownError" 106 | * Updated requirements for dependency of redis-py to latest version 107 | 108 | 1.3.4 (Mar 5, 2017) 109 | ------------------- 110 | 111 | * Package is now built as a wheel and source package when releases are built. 112 | * Fixed issues with some key types in `NodeManager.keyslot()`. 113 | * Add support for `PUBSUB` subcommands `CHANNELS`, `NUMSUB [arg] [args...]` and `NUMPAT`. 114 | * Add method `set_result_callback(command, callback)` allowing the default reply callbacks to be changed, in the same way `set_response_callback(command, callback)` inherited from Redis-Py does for responses. 115 | * Node manager now honors the defined max_connections variable so connections that are emitted from that class use the same variable. 116 | * Fixed a bug in cluster detection when running on python 3.x and decode_responses=False was used. 117 | Data back from redis for the cluster structure is now converted no matter what encoding the data you want to set/get later is using. 118 | * Add SSLClusterConnection for connecting over TLS/SSL to Redis Cluster 119 | * Add new option to make the nodemanager follow the cluster when nodes move around by avoiding querying the original list of startup nodes that was provided 120 | when the client object was first created. This could make the client handle drifting clusters on for example AWS easier, but there is a higher risk of the client talking to 121 | the wrong group of nodes during a split-brain event if the cluster is not consistent. This feature is EXPERIMENTAL, use it with care. 122 | 123 | 1.3.3 (Dec 15, 2016) 124 | -------------------- 125 | 126 | * Remove print statement that was faultily committed into release 1.3.2 and caused logs to fill up with unwanted data.
127 | 128 | 1.3.2 (Nov 27, 2016) 129 | -------------------- 130 | 131 | * Fix a bug where from_url was not possible to use without passing in additional variables. Now it works as the same method from redis-py. 132 | Note that the same rules that are currently in place for passing ip addresses/dns names into the startup_nodes variable apply the same way through 133 | the from_url method. 134 | * Added options to skip full coverage check. This flag is useful when the CONFIG redis command is disabled by the server. 135 | * Fixed a bug where method *CLUSTER SLOTS* would break in newer redis versions where node id is included in the response. Method is now compatible with both old and new redis versions. 136 | 137 | 138 | 1.3.1 (Oct 13, 2016) 139 | -------------------- 140 | 141 | * Rebuilt broken method scan_iter. Previous tests were too small to detect the problem, but it is now corrected to work on a bigger dataset during the test of that method. (korvus81, Grokzen, RedWhiteMiko) 142 | * Errors in pipeline that should be retried, like connection errors, moved errors and ask errors, now fall back to single operation logic in StrictRedisCluster.execute_command. (72squared). 143 | * Moved reinitialize_steps and counter into nodemanager so it can be correctly counted across pipeline operations (72squared). 144 | 145 | 146 | 1.3.0 (Sep 11, 2016) 147 | -------------------- 148 | 149 | * Removed RedisClusterMgt class and file 150 | * Fixed a bug when using pipelines with RedisCluster class (Ozahata) 151 | * Bump redis-server during travis tests to 3.0.7 152 | * Added docs about same module name in another python redis cluster project. 153 | * Fix a bug when a connection was to be tracked for a node but the node either does not yet exist or 154 | was removed because resharding was done in another thread. (ashishbaghudana) 155 | * Fixed a bug with "CLUSTER ..."
commands when a node_id argument was needed and the return type 156 | was supposed to be converted to bool with bool_ok in redis._compat. 157 | * Add back gitter chat room link 158 | * Add new client commands 159 | - cluster_reset_all_nodes 160 | * Command cluster_delslots now determines what cluster shard each slot is on and sends each slot deletion 161 | command to the correct node. Command has changed argument spec (Read Upgrading.rst for details) 162 | * Fixed a bug when hashing the key: if it was a python 3 byte string it would be routed to the wrong slot in the cluster (fossilet, Grokzen) 163 | * Fixed a bug where reinitializing the nodemanager would use the old nodes_cache instead of the new one that was just parsed (monklof) 164 | 165 | 166 | 1.2.0 (Apr 09, 2016) 167 | -------------------- 168 | 169 | * Drop maintained support for python 3.2. 170 | * Remove Vagrant file in favor of repo maintained by 72squared 171 | * Add Support for password protected cluster (etng) 172 | * Removed assertion from code (gmolight) 173 | * Fixed a bug where a regular connection pool was allocated with each StrictRedisCluster instance. 174 | * Rework pfcount to now work as expected when all arguments point to the same hashslot 175 | * New code and important changes from redis-py 2.10.5 have been added to the codebase. 176 | * Removed the need for threads inside of pipeline. We write the packed commands to all nodes before reading the responses, which gives us even better performance than threads, especially as we add more nodes to the cluster. 177 | * Allow passing in a custom connection pool 178 | * Provide default max_connections value for ClusterConnectionPool *(2**31)* 179 | * Travis now tests both redis 3.0.x and 3.2.x 180 | * Add simple ptpdb debug script to make it easier to test the client 181 | * Fix a bug in sdiffstore (mt3925) 182 | * Fix a bug with scan_iter where duplicate keys would be returned during iteration 183 | * Implement all "CLUSTER ..."
commands as methods in the client class 184 | * Client now follows the server side setting 'cluster-require-full-coverage=yes/no' (baranbartu) 185 | * Change the pubsub implementation (PUBLISH/SUBSCRIBE commands) from using one single node to now determine the hashslot for the channel name and use that to connect to 186 | a node in the cluster. Other clients that do not use this pattern will not be fully compatible with this client. A known limitation is pattern 187 | subscription, which does not work properly because a pattern can't know all the possible channel names in advance. 188 | * Convert all docs to ReadTheDocs 189 | * Rework connection pool logic to be more similar to redis-py. This also fixes an issue with pubsub where connections 190 | were never released back to the pool of available connections. 191 | 192 | 1.1.0 (Oct 27, 2015) 193 | -------------------- 194 | 195 | * Refactored exception handling and exception classes. 196 | * Added READONLY mode support, scales reads using slave nodes. 197 | * Fix __repr__ for ClusterConnectionPool and ClusterReadOnlyConnectionPool 198 | * Add max_connections_per_node parameter to ClusterConnectionPool so that max_connections parameter is calculated per-node rather than across the whole cluster. 199 | * Improve thread safety of get_connection_by_slot and get_connection_by_node methods (iandyh) 200 | * Improved error handling when sending commands to all nodes, e.g. info. Now the connection takes retry_on_timeout as an option and retries once when there is a timeout. (iandyh) 201 | * Added support for SCRIPT LOAD, SCRIPT FLUSH, SCRIPT EXISTS and EVALSHA commands. (alisaifee) 202 | * Improve thread safety to avoid exceptions when running one client object inside multiple threads and doing resharding of the 203 | cluster at the same time. 204 | * Fix ASKING error handling so now it really sends ASKING to next node during a reshard operation. This improvement was also made to pipelined commands.
205 | * Improved thread safety in pipelined commands, along with a better explanation of the logic inside pipelining with code comments. 206 | 207 | 1.0.0 (Jun 10, 2015) 208 | -------------------- 209 | 210 | * No change to anything, just a bump to 1.0.0 because the lib is now considered stable/production ready. 211 | 212 | 0.3.0 (Jun 9, 2015) 213 | ------------------- 214 | 215 | * simple benchmark now uses docopt for cli parsing 216 | * New make target to run some benchmarks 'make benchmark' 217 | * simple benchmark now supports pipeline tests 218 | * Renamed RedisCluster --> StrictRedisCluster 219 | * Implement backwards compatible redis.Redis class in cluster mode. It was named RedisCluster and everyone updating from 0.2.0 to 0.3.0 should consult docs/Upgrading.md for instructions on how to change your code. 220 | * Added comprehensive documentation regarding pipelines 221 | * Meta retrieval commands (slots, nodes, info) for Redis Cluster. (iandyh) 222 | 223 | 0.2.0 (Dec 26, 2014) 224 | -------------------- 225 | 226 | * Moved pipeline code into new file. 227 | * Code now uses a proper cluster connection pool class that handles 228 | all nodes and connections similar to how redis-py does. 229 | * Better support for pubsub. All clients will now talk to the same server because 230 | pubsub commands do not work reliably if they talk to a random server in the cluster. 231 | * Better result callbacks and node routing support. No more ugly decorators. 232 | * Fix keyslot command when using non ascii characters. 233 | * Add bitpos support, redis-py 2.10.2 or higher required. 234 | * Fixed a bug where vagrant users could not build the package via shared folder. 235 | * Better support for CLUSTERDOWN error. (Neuront) 236 | * Parallel pipeline execution using threads. (72squared) 237 | * Added vagrant support for testing and development.
(72squared) 238 | * Improve stability of client during resharding operations (72squared) 239 | 240 | 0.1.0 (Sep 29, 2014) 241 | ------------------- 242 | 243 | * Initial release 244 | * First release uploaded to pypi 245 | --------------------------------------------------------------------------------
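The 1.2.0 notes above describe the threadless pipeline strategy: group the buffered commands by the node that owns each key's slot, write every batch first, then read all responses back and stitch them into submission order. The grouping step can be sketched as follows (helper names are hypothetical, not the library's actual API):

```python
from collections import defaultdict

def batch_by_node(commands, node_for_key):
    """Group pipeline commands per node, remembering each command's original
    position so responses can be reassembled in submission order.

    `commands` is a list of (command, key) pairs; `node_for_key` maps a key
    to whatever identifies its owning node (both are illustrative).
    """
    batches = defaultdict(list)
    for position, (command, key) in enumerate(commands):
        batches[node_for_key(key)].append((position, command, key))
    return dict(batches)
```

With this shape, the client can write each node's batch over its own connection, read the replies, and place each reply at its recorded position, which is what removed the need for one thread per node.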