├── docs ├── _static │ └── .keep ├── _templates │ └── .keep ├── index.rst ├── make.bat ├── Makefile └── conf.py ├── tests ├── __init__.py ├── test_encoding.py ├── conftest.py ├── test_scripting.py ├── test_lock.py ├── test_sentinel.py ├── test_pipeline.py ├── test_pubsub.py └── test_connection_pool.py ├── benchmarks ├── __init__.py ├── socket_read_size.py ├── base.py ├── command_packer_benchmark.py └── basic_operations.py ├── vagrant ├── .bash_profile ├── bootstrap.sh ├── sentinel-configs │ ├── 001-1 │ ├── 002-2 │ └── 003-3 ├── redis-configs │ ├── 001-master │ └── 002-slave ├── build_redis.sh ├── Vagrantfile ├── redis_init_script ├── sentinel_init_script ├── install_redis.sh ├── install_sentinel.sh └── redis_vars.sh ├── INSTALL ├── .gitignore ├── MANIFEST.in ├── setup.cfg ├── RELEASE ├── tox.ini ├── .travis.yml ├── redis ├── utils.py ├── __init__.py ├── exceptions.py ├── _compat.py ├── lock.py └── sentinel.py ├── LICENSE ├── setup.py ├── CHANGES └── README.rst /docs/_static/.keep: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /benchmarks/__init__.py: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /docs/_templates/.keep: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /vagrant/.bash_profile: -------------------------------------------------------------------------------- 1 | PATH=$PATH:/home/vagrant/redis/bin 2 | -------------------------------------------------------------------------------- /vagrant/bootstrap.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # need make to build redis 4 | sudo apt-get install make 5 | -------------------------------------------------------------------------------- /INSTALL: -------------------------------------------------------------------------------- 1 | 2 | Please use 3 | python setup.py install 4 | 5 | and report errors to Andy McCurdy (sedrik@gmail.com) 6 | 7 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | redis.egg-info 3 | build/ 4 | dist/ 5 | dump.rdb 6 | /.tox 7 | _build 8 | vagrant/.vagrant 9 | .python-version 10 | .cache 11 | .eggs 12 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include CHANGES 2 | include INSTALL 3 | include LICENSE 4 | include README.rst 5 | exclude __pycache__ 6 | recursive-include tests * 7 | recursive-exclude tests *.pyc 8 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [pep8] 2 | show-source = 1 3 | exclude = .venv,.tox,dist,docs,build,*.egg 4 | 5 | [bdist_wheel] 6 | universal = 1 7 | 8 | [metadata] 9 | license_file = LICENSE 10 | -------------------------------------------------------------------------------- 
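The vagrant configs that follow define one Redis master (port 6379), one slave (6380), and three sentinels (26379-26381) monitoring the master under the name mymaster. A minimal sketch of talking to that topology with this library's Sentinel support (the localhost addresses and timeout value are illustrative assumptions, not taken from the repo):

    from redis.sentinel import Sentinel

    # the three sentinel ports defined in vagrant/sentinel-configs below
    sentinel = Sentinel([('localhost', 26379),
                         ('localhost', 26380),
                         ('localhost', 26381)], socket_timeout=0.5)
    master = sentinel.master_for('mymaster', socket_timeout=0.5)
    master.set('foo', 'bar')   # writes go to the current master
    slave = sentinel.slave_for('mymaster', socket_timeout=0.5)
    slave.get('foo')           # reads may be served by a slave

master_for and slave_for return clients whose connection pools ask the sentinels for the current address when connecting, so a failover is meant to be picked up on a subsequent command rather than by reconnecting by hand.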
/vagrant/sentinel-configs/001-1: -------------------------------------------------------------------------------- 1 | pidfile /var/run/sentinel-1.pid 2 | port 26379 3 | daemonize yes 4 | 5 | # short timeout for sentinel tests 6 | sentinel down-after-milliseconds mymaster 500 7 | -------------------------------------------------------------------------------- /vagrant/sentinel-configs/002-2: -------------------------------------------------------------------------------- 1 | pidfile /var/run/sentinel-2.pid 2 | port 26380 3 | daemonize yes 4 | 5 | # short timeout for sentinel tests 6 | sentinel down-after-milliseconds mymaster 500 7 | -------------------------------------------------------------------------------- /vagrant/sentinel-configs/003-3: -------------------------------------------------------------------------------- 1 | pidfile /var/run/sentinel-3.pid 2 | port 26381 3 | daemonize yes 4 | 5 | # short timeout for sentinel tests 6 | sentinel down-after-milliseconds mymaster 500 7 | -------------------------------------------------------------------------------- /vagrant/redis-configs/001-master: -------------------------------------------------------------------------------- 1 | pidfile /var/run/redis-master.pid 2 | bind * 3 | port 6379 4 | daemonize yes 5 | unixsocket /tmp/redis_master.sock 6 | unixsocketperm 777 7 | dbfilename master.rdb 8 | dir /home/vagrant/redis/backups 9 | -------------------------------------------------------------------------------- /vagrant/redis-configs/002-slave: -------------------------------------------------------------------------------- 1 | pidfile /var/run/redis-slave.pid 2 | bind * 3 | port 6380 4 | daemonize yes 5 | unixsocket /tmp/redis-slave.sock 6 | unixsocketperm 777 7 | dbfilename slave.rdb 8 | dir /home/vagrant/redis/backups 9 | 10 | slaveof 127.0.0.1 6379 11 | -------------------------------------------------------------------------------- /RELEASE: -------------------------------------------------------------------------------- 1 | Release Process 2 | =============== 3 | 4 | 1. Make sure all tests pass. 5 | 2. Make sure CHANGES is up to date. 6 | 3. Update redis.__init__.__version__ and commit 7 | 4. git tag <version> 8 | 5. git push --tags 9 | 6. rm dist/* && python setup.py sdist bdist_wheel && twine upload dist/* 10 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | minversion = 1.8 3 | envlist = {py26,py27,py32,py33,py34,py35,py36}-{plain,hiredis}, pep8 4 | 5 | [testenv] 6 | deps = 7 | pytest==2.9.2 8 | mock==2.0.0 9 | hiredis: hiredis >= 0.1.3 10 | commands = py.test {posargs} 11 | 12 | [testenv:pep8] 13 | basepython = python2.6 14 | deps = pep8 15 | commands = pep8 16 | skipsdist = true 17 | skip_install = true 18 | -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | .. redis-py documentation master file, created by 2 | sphinx-quickstart on Thu Jul 28 13:55:57 2011. 3 | You can adapt this file completely to your liking, but it should at least 4 | contain the root `toctree` directive. 5 | 6 | Welcome to redis-py's documentation! 7 | ==================================== 8 | 9 | Indices and tables 10 | ------------------ 11 | 12 | * :ref:`genindex` 13 | * :ref:`modindex` 14 | * :ref:`search` 15 | 16 | Contents: 17 | --------- 18 | 19 | .. toctree:: 20 | :maxdepth: 2 21 | 22 | ..
automodule:: redis 23 | :members: 24 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: python 2 | cache: pip 3 | python: 4 | - "3.6" 5 | - "3.5" 6 | - "3.4" 7 | - "3.3" 8 | - "2.7" 9 | - "2.6" 10 | services: 11 | - redis-server 12 | env: 13 | - TEST_HIREDIS=0 14 | - TEST_HIREDIS=1 15 | install: 16 | - pip install -e . 17 | - "if [[ $TEST_PEP8 == '1' ]]; then pip install pep8; fi" 18 | - "if [[ $TEST_HIREDIS == '1' ]]; then pip install hiredis; fi" 19 | script: "if [[ $TEST_PEP8 == '1' ]]; then pep8 --repeat --show-source --exclude=.venv,.tox,dist,docs,build,*.egg .; else python setup.py test; fi" 20 | matrix: 21 | include: 22 | - python: "2.7" 23 | env: TEST_PEP8=1 24 | - python: "3.6" 25 | env: TEST_PEP8=1 26 | -------------------------------------------------------------------------------- /vagrant/build_redis.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | source /home/vagrant/redis-py/vagrant/redis_vars.sh 4 | 5 | pushd /home/vagrant 6 | 7 | uninstall_all_sentinel_instances 8 | uninstall_all_redis_instances 9 | 10 | # create a clean directory for redis 11 | rm -rf $REDIS_DIR 12 | mkdir -p $REDIS_BIN_DIR 13 | mkdir -p $REDIS_CONF_DIR 14 | mkdir -p $REDIS_SAVE_DIR 15 | 16 | # download, unpack and build redis 17 | mkdir -p $REDIS_DOWNLOAD_DIR 18 | cd $REDIS_DOWNLOAD_DIR 19 | rm -f $REDIS_PACKAGE 20 | rm -rf $REDIS_BUILD_DIR 21 | wget http://download.redis.io/releases/$REDIS_PACKAGE 22 | tar zxvf $REDIS_PACKAGE 23 | cd $REDIS_BUILD_DIR 24 | make 25 | cp src/redis-server $REDIS_DIR/bin 26 | cp src/redis-cli $REDIS_DIR/bin 27 | cp src/redis-sentinel $REDIS_DIR/bin 28 | 29 | popd 30 | -------------------------------------------------------------------------------- /redis/utils.py: -------------------------------------------------------------------------------- 1 | from contextlib import contextmanager 2 | 3 | 4 | try: 5 | import hiredis 6 | HIREDIS_AVAILABLE = True 7 | except ImportError: 8 | HIREDIS_AVAILABLE = False 9 | 10 | 11 | def from_url(url, db=None, **kwargs): 12 | """ 13 | Returns an active Redis client generated from the given database URL. 14 | 15 | If no database id is specified, attempts to extract one from the 16 | path component of the URL. 17 | """ 18 | from redis.client import Redis 19 | return Redis.from_url(url, db, **kwargs) 20 | 21 | 22 | @contextmanager 23 | def pipeline(redis_obj): 24 | p = redis_obj.pipeline() 25 | yield p 26 | p.execute() 27 | 28 | 29 | class dummy(object): 30 | """ 31 | Instances of this class can be used as an attribute container.
32 | """ 33 | pass 34 | -------------------------------------------------------------------------------- /redis/__init__.py: -------------------------------------------------------------------------------- 1 | from redis.client import Redis, StrictRedis 2 | from redis.connection import ( 3 | BlockingConnectionPool, 4 | ConnectionPool, 5 | Connection, 6 | SSLConnection, 7 | UnixDomainSocketConnection 8 | ) 9 | from redis.utils import from_url 10 | from redis.exceptions import ( 11 | AuthenticationError, 12 | BusyLoadingError, 13 | ConnectionError, 14 | DataError, 15 | InvalidResponse, 16 | PubSubError, 17 | ReadOnlyError, 18 | RedisError, 19 | ResponseError, 20 | TimeoutError, 21 | WatchError 22 | ) 23 | 24 | 25 | __version__ = '2.10.6' 26 | VERSION = tuple(map(int, __version__.split('.'))) 27 | 28 | __all__ = [ 29 | 'Redis', 'StrictRedis', 'ConnectionPool', 'BlockingConnectionPool', 30 | 'Connection', 'SSLConnection', 'UnixDomainSocketConnection', 'from_url', 31 | 'AuthenticationError', 'BusyLoadingError', 'ConnectionError', 'DataError', 32 | 'InvalidResponse', 'PubSubError', 'ReadOnlyError', 'RedisError', 33 | 'ResponseError', 'TimeoutError', 'WatchError' 34 | ] 35 | -------------------------------------------------------------------------------- /benchmarks/socket_read_size.py: -------------------------------------------------------------------------------- 1 | from redis.connection import PythonParser, HiredisParser 2 | from base import Benchmark 3 | 4 | 5 | class SocketReadBenchmark(Benchmark): 6 | 7 | ARGUMENTS = ( 8 | { 9 | 'name': 'parser', 10 | 'values': [PythonParser, HiredisParser] 11 | }, 12 | { 13 | 'name': 'value_size', 14 | 'values': [10, 100, 1000, 10000, 100000, 1000000, 10000000, 15 | 100000000] 16 | }, 17 | { 18 | 'name': 'read_size', 19 | 'values': [4096, 8192, 16384, 32768, 65536, 131072] 20 | } 21 | ) 22 | 23 | def setup(self, value_size, read_size, parser): 24 | r = self.get_client(parser_class=parser, 25 | socket_read_size=read_size) 26 | r.set('benchmark', 'a' * value_size) 27 | 28 | def run(self, value_size, read_size, parser): 29 | r = self.get_client() 30 | r.get('benchmark') 31 | 32 | 33 | if __name__ == '__main__': 34 | SocketReadBenchmark().run_benchmark() 35 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2012 Andy McCurdy 2 | 3 | Permission is hereby granted, free of charge, to any person 4 | obtaining a copy of this software and associated documentation 5 | files (the "Software"), to deal in the Software without 6 | restriction, including without limitation the rights to use, 7 | copy, modify, merge, publish, distribute, sublicense, and/or sell 8 | copies of the Software, and to permit persons to whom the 9 | Software is furnished to do so, subject to the following 10 | conditions: 11 | 12 | The above copyright notice and this permission notice shall be 13 | included in all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES 17 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND 18 | NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT 19 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, 20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING 21 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR 22 | OTHER DEALINGS IN THE SOFTWARE. 23 | -------------------------------------------------------------------------------- /vagrant/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # Vagrantfile API/syntax version. Don't touch unless you know what you're doing! 5 | VAGRANTFILE_API_VERSION = "2" 6 | 7 | Vagrant.configure(VAGRANTFILE_API_VERSION) do |config| 8 | # ubuntu 64bit image 9 | config.vm.box = "hashicorp/precise64" 10 | 11 | # map the root of redis-py to /home/vagrant/redis-py 12 | config.vm.synced_folder "../", "/home/vagrant/redis-py" 13 | 14 | # install the redis server 15 | config.vm.provision :shell, :path => "bootstrap.sh" 16 | config.vm.provision :shell, :path => "build_redis.sh" 17 | config.vm.provision :shell, :path => "install_redis.sh" 18 | config.vm.provision :shell, :path => "install_sentinel.sh" 19 | config.vm.provision :file, :source => ".bash_profile", :destination => "/home/vagrant/.bash_profile" 20 | 21 | # setup forwarded ports 22 | config.vm.network "forwarded_port", guest: 6379, host: 6379 23 | config.vm.network "forwarded_port", guest: 6380, host: 6380 24 | config.vm.network "forwarded_port", guest: 26379, host: 26379 25 | config.vm.network "forwarded_port", guest: 26380, host: 26380 26 | config.vm.network "forwarded_port", guest: 26381, host: 26381 27 | end 28 | -------------------------------------------------------------------------------- /tests/test_encoding.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | 4 | from redis._compat import unichr, u, unicode 5 | from .conftest import r as _redis_client 6 | 7 | 8 | class TestEncoding(object): 9 | @pytest.fixture() 10 | def r(self, request): 11 | return _redis_client(request=request, decode_responses=True) 12 | 13 | def test_simple_encoding(self, r): 14 | unicode_string = unichr(3456) + u('abcd') + unichr(3421) 15 | r['unicode-string'] = unicode_string 16 | cached_val = r['unicode-string'] 17 | assert isinstance(cached_val, unicode) 18 | assert unicode_string == cached_val 19 | 20 | def test_list_encoding(self, r): 21 | unicode_string = unichr(3456) + u('abcd') + unichr(3421) 22 | result = [unicode_string, unicode_string, unicode_string] 23 | r.rpush('a', *result) 24 | assert r.lrange('a', 0, -1) == result 25 | 26 | def test_object_value(self, r): 27 | unicode_string = unichr(3456) + u('abcd') + unichr(3421) 28 | r['unicode-string'] = Exception(unicode_string) 29 | cached_val = r['unicode-string'] 30 | assert isinstance(cached_val, unicode) 31 | assert unicode_string == cached_val 32 | 33 | 34 | class TestCommandsAndTokensArentEncoded(object): 35 | @pytest.fixture() 36 | def r(self, request): 37 | return _redis_client(request=request, encoding='utf-16') 38 | 39 | def test_basic_command(self, r): 40 | r.set('hello', 'world') 41 | -------------------------------------------------------------------------------- /vagrant/redis_init_script: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ### BEGIN INIT INFO 4 | # Provides: redis-server 5 | # Required-Start: $syslog 6 | # Required-Stop: $syslog 7 | # Default-Start: 2 3 4 5 8 | # 
Default-Stop: 0 1 6 9 | # Short-Description: Start redis-server at boot time 10 | # Description: Control redis-server. 11 | ### END INIT INFO 12 | 13 | REDISPORT={{ PORT }} 14 | PIDFILE=/var/run/{{ PROCESS_NAME }}.pid 15 | CONF=/home/vagrant/redis/conf/{{ PROCESS_NAME }}.conf 16 | 17 | EXEC=/home/vagrant/redis/bin/redis-server 18 | CLIEXEC=/home/vagrant/redis/bin/redis-cli 19 | 20 | case "$1" in 21 | start) 22 | if [ -f $PIDFILE ] 23 | then 24 | echo "$PIDFILE exists, process is already running or crashed" 25 | else 26 | echo "Starting Redis server..." 27 | $EXEC $CONF 28 | fi 29 | ;; 30 | stop) 31 | if [ ! -f $PIDFILE ] 32 | then 33 | echo "$PIDFILE does not exist, process is not running" 34 | else 35 | PID=$(cat $PIDFILE) 36 | echo "Stopping ..." 37 | $CLIEXEC -p $REDISPORT shutdown 38 | while [ -x /proc/${PID} ] 39 | do 40 | echo "Waiting for Redis to shutdown ..." 41 | sleep 1 42 | done 43 | echo "Redis stopped" 44 | fi 45 | ;; 46 | *) 47 | echo "Please use start or stop as first argument" 48 | ;; 49 | esac 50 | -------------------------------------------------------------------------------- /redis/exceptions.py: -------------------------------------------------------------------------------- 1 | "Core exceptions raised by the Redis client" 2 | from redis._compat import unicode 3 | 4 | 5 | class RedisError(Exception): 6 | pass 7 | 8 | 9 | # python 2.5 doesn't implement Exception.__unicode__. Add it here to all 10 | # our exception types 11 | if not hasattr(RedisError, '__unicode__'): 12 | def __unicode__(self): 13 | if isinstance(self.args[0], unicode): 14 | return self.args[0] 15 | return unicode(self.args[0]) 16 | RedisError.__unicode__ = __unicode__ 17 | 18 | 19 | class AuthenticationError(RedisError): 20 | pass 21 | 22 | 23 | class ConnectionError(RedisError): 24 | pass 25 | 26 | 27 | class TimeoutError(RedisError): 28 | pass 29 | 30 | 31 | class BusyLoadingError(ConnectionError): 32 | pass 33 | 34 | 35 | class InvalidResponse(RedisError): 36 | pass 37 | 38 | 39 | class ResponseError(RedisError): 40 | pass 41 | 42 | 43 | class DataError(RedisError): 44 | pass 45 | 46 | 47 | class PubSubError(RedisError): 48 | pass 49 | 50 | 51 | class WatchError(RedisError): 52 | pass 53 | 54 | 55 | class NoScriptError(ResponseError): 56 | pass 57 | 58 | 59 | class ExecAbortError(ResponseError): 60 | pass 61 | 62 | 63 | class ReadOnlyError(ResponseError): 64 | pass 65 | 66 | 67 | class LockError(RedisError, ValueError): 68 | "Errors acquiring or releasing a lock" 69 | # NOTE: For backwards compatibility, this class derives from ValueError. 70 | # This was originally chosen to behave like threading.Lock. 71 | pass 72 | -------------------------------------------------------------------------------- /vagrant/sentinel_init_script: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ### BEGIN INIT INFO 4 | # Provides: redis-sentinel 5 | # Required-Start: $syslog 6 | # Required-Stop: $syslog 7 | # Default-Start: 2 3 4 5 8 | # Default-Stop: 0 1 6 9 | # Short-Description: Start redis-sentinel at boot time 10 | # Description: Control redis-sentinel.
11 | ### END INIT INFO 12 | 13 | SENTINELPORT={{ PORT }} 14 | PIDFILE=/var/run/{{ PROCESS_NAME }}.pid 15 | CONF=/home/vagrant/redis/conf/{{ PROCESS_NAME }}.conf 16 | 17 | EXEC=/home/vagrant/redis/bin/redis-sentinel 18 | CLIEXEC=/home/vagrant/redis/bin/redis-cli 19 | 20 | case "$1" in 21 | start) 22 | if [ -f $PIDFILE ] 23 | then 24 | echo "$PIDFILE exists, process is already running or crashed" 25 | else 26 | echo "Starting Redis Sentinel..." 27 | $EXEC $CONF 28 | fi 29 | ;; 30 | stop) 31 | if [ ! -f $PIDFILE ] 32 | then 33 | echo "$PIDFILE does not exist, process is not running" 34 | else 35 | PID=$(cat $PIDFILE) 36 | echo "Stopping ..." 37 | $CLIEXEC -p $SENTINELPORT shutdown 38 | while [ -x /proc/${PID} ] 39 | do 40 | echo "Waiting for Sentinel to shutdown ..." 41 | sleep 1 42 | done 43 | echo "Sentinel stopped" 44 | fi 45 | ;; 46 | *) 47 | echo "Please use start or stop as first argument" 48 | ;; 49 | esac 50 | -------------------------------------------------------------------------------- /vagrant/install_redis.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | source /home/vagrant/redis-py/vagrant/redis_vars.sh 4 | 5 | for filename in `ls $VAGRANT_REDIS_CONF_DIR`; do 6 | # cuts the order prefix off of the filename, e.g. 001-master -> master 7 | PROCESS_NAME=redis-`echo $filename | cut -f 2- -d -` 8 | echo "======================================" 9 | echo "INSTALLING REDIS SERVER: $PROCESS_NAME" 10 | echo "======================================" 11 | 12 | # make sure the instance is uninstalled (it should be already) 13 | uninstall_instance $PROCESS_NAME 14 | 15 | # base config 16 | mkdir -p $REDIS_CONF_DIR 17 | cp $REDIS_BUILD_DIR/redis.conf $REDIS_CONF_DIR/$PROCESS_NAME.conf 18 | # override config values from file 19 | cat $VAGRANT_REDIS_CONF_DIR/$filename >> $REDIS_CONF_DIR/$PROCESS_NAME.conf 20 | 21 | # replace placeholder variables in init.d script 22 | cp $VAGRANT_DIR/redis_init_script /etc/init.d/$PROCESS_NAME 23 | sed -i "s/{{ PROCESS_NAME }}/$PROCESS_NAME/g" /etc/init.d/$PROCESS_NAME 24 | # need to read the config file to find out what port this instance will run on 25 | port=`grep port $VAGRANT_REDIS_CONF_DIR/$filename | cut -f 2 -d " "` 26 | sed -i "s/{{ PORT }}/$port/g" /etc/init.d/$PROCESS_NAME 27 | chmod 755 /etc/init.d/$PROCESS_NAME 28 | 29 | # and tell update-rc.d about it 30 | update-rc.d $PROCESS_NAME defaults 98 31 | 32 | # save the $PROCESS_NAME into installed instances file 33 | echo $PROCESS_NAME >> $REDIS_INSTALLED_INSTANCES_FILE 34 | 35 | # start redis 36 | /etc/init.d/$PROCESS_NAME start 37 | done 38 | -------------------------------------------------------------------------------- /benchmarks/base.py: -------------------------------------------------------------------------------- 1 | import functools 2 | import itertools 3 | import redis 4 | import sys 5 | import timeit 6 | from redis._compat import izip 7 | 8 | 9 | class Benchmark(object): 10 | ARGUMENTS = () 11 | 12 | def __init__(self): 13 | self._client = None 14 | 15 | def get_client(self, **kwargs): 16 | # eventually make this more robust and take optional args from 17 | # argparse 18 | if self._client is None or kwargs: 19 | defaults = { 20 | 'db': 9 21 | } 22 | defaults.update(kwargs) 23 | pool = redis.ConnectionPool(**defaults) 24 | self._client = redis.StrictRedis(connection_pool=pool) 25 | return self._client 26 | 27 | def setup(self, **kwargs): 28 | pass 29 | 30 | def run(self, **kwargs): 31 | pass 32 | 33 | def run_benchmark(self):
34 | group_names = [group['name'] for group in self.ARGUMENTS] 35 | group_values = [group['values'] for group in self.ARGUMENTS] 36 | for value_set in itertools.product(*group_values): 37 | pairs = list(izip(group_names, value_set)) 38 | arg_string = ', '.join(['%s=%s' % (p[0], p[1]) for p in pairs]) 39 | sys.stdout.write('Benchmark: %s... ' % arg_string) 40 | sys.stdout.flush() 41 | kwargs = dict(pairs) 42 | setup = functools.partial(self.setup, **kwargs) 43 | run = functools.partial(self.run, **kwargs) 44 | t = timeit.timeit(stmt=run, setup=setup, number=1000) 45 | sys.stdout.write('%f\n' % t) 46 | sys.stdout.flush() 47 | -------------------------------------------------------------------------------- /vagrant/install_sentinel.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | source /home/vagrant/redis-py/vagrant/redis_vars.sh 4 | 5 | for filename in `ls $VAGRANT_SENTINEL_CONF_DIR`; do 6 | # cuts the order prefix off of the filename, e.g. 001-1 -> 1 7 | PROCESS_NAME=sentinel-`echo $filename | cut -f 2- -d -` 8 | echo "=========================================" 9 | echo "INSTALLING SENTINEL SERVER: $PROCESS_NAME" 10 | echo "=========================================" 11 | 12 | # make sure the instance is uninstalled (it should be already) 13 | uninstall_instance $PROCESS_NAME 14 | 15 | # base config 16 | mkdir -p $REDIS_CONF_DIR 17 | cp $REDIS_BUILD_DIR/sentinel.conf $REDIS_CONF_DIR/$PROCESS_NAME.conf 18 | # override config values from file 19 | cat $VAGRANT_SENTINEL_CONF_DIR/$filename >> $REDIS_CONF_DIR/$PROCESS_NAME.conf 20 | 21 | # replace placeholder variables in init.d script 22 | cp $VAGRANT_DIR/sentinel_init_script /etc/init.d/$PROCESS_NAME 23 | sed -i "s/{{ PROCESS_NAME }}/$PROCESS_NAME/g" /etc/init.d/$PROCESS_NAME 24 | # need to read the config file to find out what port this instance will run on 25 | port=`grep port $VAGRANT_SENTINEL_CONF_DIR/$filename | cut -f 2 -d " "` 26 | sed -i "s/{{ PORT }}/$port/g" /etc/init.d/$PROCESS_NAME 27 | chmod 755 /etc/init.d/$PROCESS_NAME 28 | 29 | # and tell update-rc.d about it 30 | update-rc.d $PROCESS_NAME defaults 99 31 | 32 | # save the $PROCESS_NAME into installed instances file 33 | echo $PROCESS_NAME >> $SENTINEL_INSTALLED_INSTANCES_FILE 34 | 35 | # start sentinel 36 | /etc/init.d/$PROCESS_NAME start 37 | done 38 | -------------------------------------------------------------------------------- /vagrant/redis_vars.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | VAGRANT_DIR=/home/vagrant/redis-py/vagrant 4 | VAGRANT_REDIS_CONF_DIR=$VAGRANT_DIR/redis-configs 5 | VAGRANT_SENTINEL_CONF_DIR=$VAGRANT_DIR/sentinel-configs 6 | REDIS_VERSION=3.2.0 7 | REDIS_DOWNLOAD_DIR=/home/vagrant/redis-downloads 8 | REDIS_PACKAGE=redis-$REDIS_VERSION.tar.gz 9 | REDIS_BUILD_DIR=$REDIS_DOWNLOAD_DIR/redis-$REDIS_VERSION 10 | REDIS_DIR=/home/vagrant/redis 11 | REDIS_BIN_DIR=$REDIS_DIR/bin 12 | REDIS_CONF_DIR=$REDIS_DIR/conf 13 | REDIS_SAVE_DIR=$REDIS_DIR/backups 14 | REDIS_INSTALLED_INSTANCES_FILE=$REDIS_DIR/redis-instances 15 | SENTINEL_INSTALLED_INSTANCES_FILE=$REDIS_DIR/sentinel-instances 16 | 17 | function uninstall_instance() { 18 | # Expects $1 to be the init.d filename, e.g.
redis-nodename or 19 | # sentinel-nodename 20 | 21 | if [ -a /etc/init.d/$1 ]; then 22 | 23 | echo "======================================" 24 | echo "UNINSTALLING REDIS SERVER: $1" 25 | echo "======================================" 26 | 27 | /etc/init.d/$1 stop 28 | update-rc.d -f $1 remove 29 | rm -f /etc/init.d/$1 30 | fi; 31 | rm -f $REDIS_CONF_DIR/$1.conf 32 | } 33 | 34 | function uninstall_all_redis_instances() { 35 | if [ -a $REDIS_INSTALLED_INSTANCES_FILE ]; then 36 | cat $REDIS_INSTALLED_INSTANCES_FILE | while read line; do 37 | uninstall_instance $line; 38 | done; 39 | fi 40 | } 41 | 42 | function uninstall_all_sentinel_instances() { 43 | if [ -a $SENTINEL_INSTALLED_INSTANCES_FILE ]; then 44 | cat $SENTINEL_INSTALLED_INSTANCES_FILE | while read line; do 45 | uninstall_instance $line; 46 | done; 47 | fi 48 | } 49 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import sys 4 | 5 | from redis import __version__ 6 | 7 | try: 8 | from setuptools import setup 9 | from setuptools.command.test import test as TestCommand 10 | 11 | class PyTest(TestCommand): 12 | def finalize_options(self): 13 | TestCommand.finalize_options(self) 14 | self.test_args = [] 15 | self.test_suite = True 16 | 17 | def run_tests(self): 18 | # import here, because outside the eggs aren't loaded 19 | import pytest 20 | errno = pytest.main(self.test_args) 21 | sys.exit(errno) 22 | 23 | except ImportError: 24 | 25 | from distutils.core import setup 26 | 27 | def PyTest(x): 28 | x 29 | 30 | f = open(os.path.join(os.path.dirname(__file__), 'README.rst')) 31 | long_description = f.read() 32 | f.close() 33 | 34 | setup( 35 | name='redis', 36 | version=__version__, 37 | description='Python client for Redis key-value store', 38 | long_description=long_description, 39 | url='http://github.com/andymccurdy/redis-py', 40 | author='Andy McCurdy', 41 | author_email='sedrik@gmail.com', 42 | maintainer='Andy McCurdy', 43 | maintainer_email='sedrik@gmail.com', 44 | keywords=['Redis', 'key-value store'], 45 | license='MIT', 46 | packages=['redis'], 47 | extras_require={ 48 | 'hiredis': [ 49 | "hiredis>=0.1.3", 50 | ], 51 | }, 52 | tests_require=[ 53 | 'mock', 54 | 'pytest>=2.5.0', 55 | ], 56 | cmdclass={'test': PyTest}, 57 | classifiers=[ 58 | 'Development Status :: 5 - Production/Stable', 59 | 'Environment :: Console', 60 | 'Intended Audience :: Developers', 61 | 'License :: OSI Approved :: MIT License', 62 | 'Operating System :: OS Independent', 63 | 'Programming Language :: Python', 64 | 'Programming Language :: Python :: 2', 65 | 'Programming Language :: Python :: 2.6', 66 | 'Programming Language :: Python :: 2.7', 67 | 'Programming Language :: Python :: 3', 68 | 'Programming Language :: Python :: 3.3', 69 | 'Programming Language :: Python :: 3.4', 70 | 'Programming Language :: Python :: 3.5', 71 | 'Programming Language :: Python :: 3.6', 72 | ] 73 | ) 74 | -------------------------------------------------------------------------------- /benchmarks/command_packer_benchmark.py: -------------------------------------------------------------------------------- 1 | import socket 2 | import sys 3 | from redis.connection import (Connection, ConnectionError, SYM_STAR, 4 | SYM_DOLLAR, SYM_EMPTY, SYM_CRLF, b) 5 | from redis._compat import imap 6 | from base import Benchmark 7 | 8 | 9 | class StringJoiningConnection(Connection): 10 | def send_packed_command(self, command): 11 | "Send an
already packed command to the Redis server" 12 | if not self._sock: 13 | self.connect() 14 | try: 15 | self._sock.sendall(command) 16 | except socket.error: 17 | e = sys.exc_info()[1] 18 | self.disconnect() 19 | if len(e.args) == 1: 20 | _errno, errmsg = 'UNKNOWN', e.args[0] 21 | else: 22 | _errno, errmsg = e.args 23 | raise ConnectionError("Error %s while writing to socket. %s." % 24 | (_errno, errmsg)) 25 | except: 26 | self.disconnect() 27 | raise 28 | 29 | def pack_command(self, *args): 30 | "Pack a series of arguments into a valid Redis command" 31 | args_output = SYM_EMPTY.join([ 32 | SYM_EMPTY.join((SYM_DOLLAR, b(str(len(k))), SYM_CRLF, k, SYM_CRLF)) 33 | for k in imap(self.encode, args)]) 34 | output = SYM_EMPTY.join( 35 | (SYM_STAR, b(str(len(args))), SYM_CRLF, args_output)) 36 | return output 37 | 38 | 39 | class ListJoiningConnection(Connection): 40 | def send_packed_command(self, command): 41 | if not self._sock: 42 | self.connect() 43 | try: 44 | if isinstance(command, str): 45 | command = [command] 46 | for item in command: 47 | self._sock.sendall(item) 48 | except socket.error: 49 | e = sys.exc_info()[1] 50 | self.disconnect() 51 | if len(e.args) == 1: 52 | _errno, errmsg = 'UNKNOWN', e.args[0] 53 | else: 54 | _errno, errmsg = e.args 55 | raise ConnectionError("Error %s while writing to socket. %s." % 56 | (_errno, errmsg)) 57 | except: 58 | self.disconnect() 59 | raise 60 | 61 | def pack_command(self, *args): 62 | output = [] 63 | buff = SYM_EMPTY.join( 64 | (SYM_STAR, b(str(len(args))), SYM_CRLF)) 65 | 66 | for k in imap(self.encode, args): 67 | if len(buff) > 6000 or len(k) > 6000: 68 | buff = SYM_EMPTY.join( 69 | (buff, SYM_DOLLAR, b(str(len(k))), SYM_CRLF)) 70 | output.append(buff) 71 | output.append(k) 72 | buff = SYM_CRLF 73 | else: 74 | buff = SYM_EMPTY.join((buff, SYM_DOLLAR, b(str(len(k))), 75 | SYM_CRLF, k, SYM_CRLF)) 76 | output.append(buff) 77 | return output 78 | 79 | 80 | class CommandPackerBenchmark(Benchmark): 81 | 82 | ARGUMENTS = ( 83 | { 84 | 'name': 'connection_class', 85 | 'values': [StringJoiningConnection, ListJoiningConnection] 86 | }, 87 | { 88 | 'name': 'value_size', 89 | 'values': [10, 100, 1000, 10000, 100000, 1000000, 10000000, 90 | 100000000] 91 | }, 92 | ) 93 | 94 | def setup(self, connection_class, value_size): 95 | self.get_client(connection_class=connection_class) 96 | 97 | def run(self, connection_class, value_size): 98 | r = self.get_client() 99 | x = 'a' * value_size 100 | r.set('benchmark', x) 101 | 102 | 103 | if __name__ == '__main__': 104 | CommandPackerBenchmark().run_benchmark() 105 | -------------------------------------------------------------------------------- /tests/conftest.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | import redis 3 | from mock import Mock 4 | 5 | from distutils.version import StrictVersion 6 | 7 | 8 | _REDIS_VERSIONS = {} 9 | 10 | 11 | def get_version(**kwargs): 12 | params = {'host': 'localhost', 'port': 6379, 'db': 9} 13 | params.update(kwargs) 14 | key = '%s:%s' % (params['host'], params['port']) 15 | if key not in _REDIS_VERSIONS: 16 | client = redis.Redis(**params) 17 | _REDIS_VERSIONS[key] = client.info()['redis_version'] 18 | client.connection_pool.disconnect() 19 | return _REDIS_VERSIONS[key] 20 | 21 | 22 | def _get_client(cls, request=None, **kwargs): 23 | params = {'host': 'localhost', 'port': 6379, 'db': 9} 24 | params.update(kwargs) 25 | client = cls(**params) 26 | client.flushdb() 27 | if request: 28 | def teardown(): 29 | 
client.flushdb() 30 | client.connection_pool.disconnect() 31 | request.addfinalizer(teardown) 32 | return client 33 | 34 | 35 | def skip_if_server_version_lt(min_version): 36 | check = StrictVersion(get_version()) < StrictVersion(min_version) 37 | return pytest.mark.skipif(check, reason="") 38 | 39 | 40 | def skip_if_server_version_gte(min_version): 41 | check = StrictVersion(get_version()) >= StrictVersion(min_version) 42 | return pytest.mark.skipif(check, reason="") 43 | 44 | 45 | @pytest.fixture() 46 | def r(request, **kwargs): 47 | return _get_client(redis.Redis, request, **kwargs) 48 | 49 | 50 | @pytest.fixture() 51 | def sr(request, **kwargs): 52 | return _get_client(redis.StrictRedis, request, **kwargs) 53 | 54 | 55 | def _gen_cluster_mock_resp(r, response): 56 | mock_connection_pool = Mock() 57 | connection = Mock() 58 | response = response 59 | connection.read_response.return_value = response 60 | mock_connection_pool.get_connection.return_value = connection 61 | r.connection_pool = mock_connection_pool 62 | return r 63 | 64 | 65 | @pytest.fixture() 66 | def mock_cluster_resp_ok(request, **kwargs): 67 | r = _get_client(redis.Redis, request, **kwargs) 68 | return _gen_cluster_mock_resp(r, 'OK') 69 | 70 | 71 | @pytest.fixture() 72 | def mock_cluster_resp_int(request, **kwargs): 73 | r = _get_client(redis.Redis, request, **kwargs) 74 | return _gen_cluster_mock_resp(r, '2') 75 | 76 | 77 | @pytest.fixture() 78 | def mock_cluster_resp_info(request, **kwargs): 79 | r = _get_client(redis.Redis, request, **kwargs) 80 | response = ('cluster_state:ok\r\ncluster_slots_assigned:16384\r\n' 81 | 'cluster_slots_ok:16384\r\ncluster_slots_pfail:0\r\n' 82 | 'cluster_slots_fail:0\r\ncluster_known_nodes:7\r\n' 83 | 'cluster_size:3\r\ncluster_current_epoch:7\r\n' 84 | 'cluster_my_epoch:2\r\ncluster_stats_messages_sent:170262\r\n' 85 | 'cluster_stats_messages_received:105653\r\n') 86 | return _gen_cluster_mock_resp(r, response) 87 | 88 | 89 | @pytest.fixture() 90 | def mock_cluster_resp_nodes(request, **kwargs): 91 | r = _get_client(redis.Redis, request, **kwargs) 92 | response = ('c8253bae761cb1ecb2b61857d85dfe455a0fec8b 172.17.0.7:7006 ' 93 | 'slave aa90da731f673a99617dfe930306549a09f83a6b 0 ' 94 | '1447836263059 5 connected\n' 95 | '9bd595fe4821a0e8d6b99d70faa660638a7612b3 172.17.0.7:7008 ' 96 | 'master - 0 1447836264065 0 connected\n' 97 | 'aa90da731f673a99617dfe930306549a09f83a6b 172.17.0.7:7003 ' 98 | 'myself,master - 0 0 2 connected 5461-10922\n' 99 | '1df047e5a594f945d82fc140be97a1452bcbf93e 172.17.0.7:7007 ' 100 | 'slave 19efe5a631f3296fdf21a5441680f893e8cc96ec 0 ' 101 | '1447836262556 3 connected\n' 102 | '4ad9a12e63e8f0207025eeba2354bcf4c85e5b22 172.17.0.7:7005 ' 103 | 'master - 0 1447836262555 7 connected 0-5460\n' 104 | '19efe5a631f3296fdf21a5441680f893e8cc96ec 172.17.0.7:7004 ' 105 | 'master - 0 1447836263562 3 connected 10923-16383\n' 106 | 'fbb23ed8cfa23f17eaf27ff7d0c410492a1093d6 172.17.0.7:7002 ' 107 | 'master,fail - 1447829446956 1447829444948 1 disconnected\n' 108 | ) 109 | return _gen_cluster_mock_resp(r, response) 110 | 111 | 112 | @pytest.fixture() 113 | def mock_cluster_resp_slaves(request, **kwargs): 114 | r = _get_client(redis.Redis, request, **kwargs) 115 | response = ("['1df047e5a594f945d82fc140be97a1452bcbf93e 172.17.0.7:7007 " 116 | "slave 19efe5a631f3296fdf21a5441680f893e8cc96ec 0 " 117 | "1447836789290 3 connected']") 118 | return _gen_cluster_mock_resp(r, response) 119 | -------------------------------------------------------------------------------- 
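The conftest helpers above are consumed as decorators and fixtures by the test modules that follow. A hypothetical sketch of that usage (the test names and version number are illustrative, and the second test assumes the client's CLUSTER FAILOVER response callback maps the canned 'OK' to True):

    @skip_if_server_version_lt('2.6.0')
    def test_only_on_modern_servers(r):
        # 'r' is the flushed db-9 redis.Redis client from the fixture above
        assert r.set('key', 'value')

    def test_cluster_failover_mocked(mock_cluster_resp_ok):
        # the Mock-backed connection pool hands back 'OK' without a server
        assert mock_cluster_resp_ok.cluster('failover') is True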
/tests/test_scripting.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | 4 | from redis import exceptions 5 | from redis._compat import b 6 | 7 | 8 | multiply_script = """ 9 | local value = redis.call('GET', KEYS[1]) 10 | value = tonumber(value) 11 | return value * ARGV[1]""" 12 | 13 | msgpack_hello_script = """ 14 | local message = cmsgpack.unpack(ARGV[1]) 15 | local name = message['name'] 16 | return "hello " .. name 17 | """ 18 | msgpack_hello_script_broken = """ 19 | local message = cmsgpack.unpack(ARGV[1]) 20 | local names = message['name'] 21 | return "hello " .. name 22 | """ 23 | 24 | 25 | class TestScripting(object): 26 | @pytest.fixture(autouse=True) 27 | def reset_scripts(self, r): 28 | r.script_flush() 29 | 30 | def test_eval(self, r): 31 | r.set('a', 2) 32 | # 2 * 3 == 6 33 | assert r.eval(multiply_script, 1, 'a', 3) == 6 34 | 35 | def test_evalsha(self, r): 36 | r.set('a', 2) 37 | sha = r.script_load(multiply_script) 38 | # 2 * 3 == 6 39 | assert r.evalsha(sha, 1, 'a', 3) == 6 40 | 41 | def test_evalsha_script_not_loaded(self, r): 42 | r.set('a', 2) 43 | sha = r.script_load(multiply_script) 44 | # remove the script from Redis's cache 45 | r.script_flush() 46 | with pytest.raises(exceptions.NoScriptError): 47 | r.evalsha(sha, 1, 'a', 3) 48 | 49 | def test_script_loading(self, r): 50 | # get the sha, then clear the cache 51 | sha = r.script_load(multiply_script) 52 | r.script_flush() 53 | assert r.script_exists(sha) == [False] 54 | r.script_load(multiply_script) 55 | assert r.script_exists(sha) == [True] 56 | 57 | def test_script_object(self, r): 58 | r.set('a', 2) 59 | multiply = r.register_script(multiply_script) 60 | precalculated_sha = multiply.sha 61 | assert precalculated_sha 62 | assert r.script_exists(multiply.sha) == [False] 63 | # Test second evalsha block (after NoScriptError) 64 | assert multiply(keys=['a'], args=[3]) == 6 65 | # At this point, the script should be loaded 66 | assert r.script_exists(multiply.sha) == [True] 67 | # Test that the precalculated sha matches the one from redis 68 | assert multiply.sha == precalculated_sha 69 | # Test first evalsha block 70 | assert multiply(keys=['a'], args=[3]) == 6 71 | 72 | def test_script_object_in_pipeline(self, r): 73 | multiply = r.register_script(multiply_script) 74 | precalculated_sha = multiply.sha 75 | assert precalculated_sha 76 | pipe = r.pipeline() 77 | pipe.set('a', 2) 78 | pipe.get('a') 79 | multiply(keys=['a'], args=[3], client=pipe) 80 | assert r.script_exists(multiply.sha) == [False] 81 | # [SET worked, GET 'a', result of multiply script] 82 | assert pipe.execute() == [True, b('2'), 6] 83 | # The script should have been loaded by pipe.execute() 84 | assert r.script_exists(multiply.sha) == [True] 85 | # The precalculated sha should have been the correct one 86 | assert multiply.sha == precalculated_sha 87 | 88 | # purge the script from redis's cache and re-run the pipeline 89 | # the multiply script should be reloaded by pipe.execute() 90 | r.script_flush() 91 | pipe = r.pipeline() 92 | pipe.set('a', 2) 93 | pipe.get('a') 94 | multiply(keys=['a'], args=[3], client=pipe) 95 | assert r.script_exists(multiply.sha) == [False] 96 | # [SET worked, GET 'a', result of multiply script] 97 | assert pipe.execute() == [True, b('2'), 6] 98 | assert r.script_exists(multiply.sha) == [True] 99 | 100 | def test_eval_msgpack_pipeline_error_in_lua(self, r): 101 | msgpack_hello = r.register_script(msgpack_hello_script) 102 | 
assert msgpack_hello.sha 103 | 104 | pipe = r.pipeline() 105 | 106 | # avoiding a dependency on msgpack, this is the output of 107 | # msgpack.dumps({"name": "Joe"}) 108 | msgpack_message_1 = b'\x81\xa4name\xa3Joe' 109 | 110 | msgpack_hello(args=[msgpack_message_1], client=pipe) 111 | 112 | assert r.script_exists(msgpack_hello.sha) == [False] 113 | assert pipe.execute()[0] == b'hello Joe' 114 | assert r.script_exists(msgpack_hello.sha) == [True] 115 | 116 | msgpack_hello_broken = r.register_script(msgpack_hello_script_broken) 117 | 118 | msgpack_hello_broken(args=[msgpack_message_1], client=pipe) 119 | with pytest.raises(exceptions.ResponseError) as excinfo: 120 | pipe.execute() 121 | assert excinfo.type == exceptions.ResponseError 122 | -------------------------------------------------------------------------------- /docs/make.bat: -------------------------------------------------------------------------------- 1 | @ECHO OFF 2 | 3 | REM Command file for Sphinx documentation 4 | 5 | if "%SPHINXBUILD%" == "" ( 6 | set SPHINXBUILD=sphinx-build 7 | ) 8 | set BUILDDIR=_build 9 | set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . 10 | set I18NSPHINXOPTS=%SPHINXOPTS% . 11 | if NOT "%PAPER%" == "" ( 12 | set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% 13 | set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% 14 | ) 15 | 16 | if "%1" == "" goto help 17 | 18 | if "%1" == "help" ( 19 | :help 20 | echo.Please use `make ^<target^>` where ^<target^> is one of 21 | echo. html to make standalone HTML files 22 | echo. dirhtml to make HTML files named index.html in directories 23 | echo. singlehtml to make a single large HTML file 24 | echo. pickle to make pickle files 25 | echo. json to make JSON files 26 | echo. htmlhelp to make HTML files and a HTML help project 27 | echo. qthelp to make HTML files and a qthelp project 28 | echo. devhelp to make HTML files and a Devhelp project 29 | echo. epub to make an epub 30 | echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter 31 | echo. text to make text files 32 | echo. man to make manual pages 33 | echo. texinfo to make Texinfo files 34 | echo. gettext to make PO message catalogs 35 | echo. changes to make an overview over all changed/added/deprecated items 36 | echo. linkcheck to check all external links for integrity 37 | echo. doctest to run all doctests embedded in the documentation if enabled 38 | goto end 39 | ) 40 | 41 | if "%1" == "clean" ( 42 | for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i 43 | del /q /s %BUILDDIR%\* 44 | goto end 45 | ) 46 | 47 | if "%1" == "html" ( 48 | %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html 49 | if errorlevel 1 exit /b 1 50 | echo. 51 | echo.Build finished. The HTML pages are in %BUILDDIR%/html. 52 | goto end 53 | ) 54 | 55 | if "%1" == "dirhtml" ( 56 | %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml 57 | if errorlevel 1 exit /b 1 58 | echo. 59 | echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 60 | goto end 61 | ) 62 | 63 | if "%1" == "singlehtml" ( 64 | %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml 65 | if errorlevel 1 exit /b 1 66 | echo. 67 | echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. 68 | goto end 69 | ) 70 | 71 | if "%1" == "pickle" ( 72 | %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle 73 | if errorlevel 1 exit /b 1 74 | echo. 75 | echo.Build finished; now you can process the pickle files.
76 | goto end 77 | ) 78 | 79 | if "%1" == "json" ( 80 | %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json 81 | if errorlevel 1 exit /b 1 82 | echo. 83 | echo.Build finished; now you can process the JSON files. 84 | goto end 85 | ) 86 | 87 | if "%1" == "htmlhelp" ( 88 | %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp 89 | if errorlevel 1 exit /b 1 90 | echo. 91 | echo.Build finished; now you can run HTML Help Workshop with the ^ 92 | .hhp project file in %BUILDDIR%/htmlhelp. 93 | goto end 94 | ) 95 | 96 | if "%1" == "qthelp" ( 97 | %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp 98 | if errorlevel 1 exit /b 1 99 | echo. 100 | echo.Build finished; now you can run "qcollectiongenerator" with the ^ 101 | .qhcp project file in %BUILDDIR%/qthelp, like this: 102 | echo.^> qcollectiongenerator %BUILDDIR%\qthelp\redis-py.qhcp 103 | echo.To view the help file: 104 | echo.^> assistant -collectionFile %BUILDDIR%\qthelp\redis-py.qhc 105 | goto end 106 | ) 107 | 108 | if "%1" == "devhelp" ( 109 | %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp 110 | if errorlevel 1 exit /b 1 111 | echo. 112 | echo.Build finished. 113 | goto end 114 | ) 115 | 116 | if "%1" == "epub" ( 117 | %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub 118 | if errorlevel 1 exit /b 1 119 | echo. 120 | echo.Build finished. The epub file is in %BUILDDIR%/epub. 121 | goto end 122 | ) 123 | 124 | if "%1" == "latex" ( 125 | %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex 126 | if errorlevel 1 exit /b 1 127 | echo. 128 | echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. 129 | goto end 130 | ) 131 | 132 | if "%1" == "text" ( 133 | %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text 134 | if errorlevel 1 exit /b 1 135 | echo. 136 | echo.Build finished. The text files are in %BUILDDIR%/text. 137 | goto end 138 | ) 139 | 140 | if "%1" == "man" ( 141 | %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man 142 | if errorlevel 1 exit /b 1 143 | echo. 144 | echo.Build finished. The manual pages are in %BUILDDIR%/man. 145 | goto end 146 | ) 147 | 148 | if "%1" == "texinfo" ( 149 | %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo 150 | if errorlevel 1 exit /b 1 151 | echo. 152 | echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. 153 | goto end 154 | ) 155 | 156 | if "%1" == "gettext" ( 157 | %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale 158 | if errorlevel 1 exit /b 1 159 | echo. 160 | echo.Build finished. The message catalogs are in %BUILDDIR%/locale. 161 | goto end 162 | ) 163 | 164 | if "%1" == "changes" ( 165 | %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes 166 | if errorlevel 1 exit /b 1 167 | echo. 168 | echo.The overview file is in %BUILDDIR%/changes. 169 | goto end 170 | ) 171 | 172 | if "%1" == "linkcheck" ( 173 | %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck 174 | if errorlevel 1 exit /b 1 175 | echo. 176 | echo.Link check complete; look for any errors in the above output ^ 177 | or in %BUILDDIR%/linkcheck/output.txt. 178 | goto end 179 | ) 180 | 181 | if "%1" == "doctest" ( 182 | %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest 183 | if errorlevel 1 exit /b 1 184 | echo. 185 | echo.Testing of doctests in the sources finished, look at the ^ 186 | results in %BUILDDIR%/doctest/output.txt.
187 | goto end 188 | ) 189 | 190 | :end 191 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 5 | SPHINXOPTS    = 6 | SPHINXBUILD   = sphinx-build 7 | PAPER         = 8 | BUILDDIR      = _build 9 | 10 | # Internal variables. 11 | PAPEROPT_a4     = -D latex_paper_size=a4 12 | PAPEROPT_letter = -D latex_paper_size=letter 13 | ALLSPHINXOPTS   = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 14 | # the i18n builder cannot share the environment and doctrees with the others 15 | I18NSPHINXOPTS  = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . 16 | 17 | .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext 18 | 19 | help: 20 | @echo "Please use \`make <target>' where <target> is one of" 21 | @echo "  html       to make standalone HTML files" 22 | @echo "  dirhtml    to make HTML files named index.html in directories" 23 | @echo "  singlehtml to make a single large HTML file" 24 | @echo "  pickle     to make pickle files" 25 | @echo "  json       to make JSON files" 26 | @echo "  htmlhelp   to make HTML files and a HTML help project" 27 | @echo "  qthelp     to make HTML files and a qthelp project" 28 | @echo "  devhelp    to make HTML files and a Devhelp project" 29 | @echo "  epub       to make an epub" 30 | @echo "  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter" 31 | @echo "  latexpdf   to make LaTeX files and run them through pdflatex" 32 | @echo "  text       to make text files" 33 | @echo "  man        to make manual pages" 34 | @echo "  texinfo    to make Texinfo files" 35 | @echo "  info       to make Texinfo files and run them through makeinfo" 36 | @echo "  gettext    to make PO message catalogs" 37 | @echo "  changes    to make an overview of all changed/added/deprecated items" 38 | @echo "  linkcheck  to check all external links for integrity" 39 | @echo "  doctest    to run all doctests embedded in the documentation (if enabled)" 40 | 41 | clean: 42 | 	-rm -rf $(BUILDDIR)/* 43 | 44 | html: 45 | 	$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html 46 | 	@echo 47 | 	@echo "Build finished. The HTML pages are in $(BUILDDIR)/html." 48 | 49 | dirhtml: 50 | 	$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml 51 | 	@echo 52 | 	@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." 53 | 54 | singlehtml: 55 | 	$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml 56 | 	@echo 57 | 	@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." 58 | 59 | pickle: 60 | 	$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle 61 | 	@echo 62 | 	@echo "Build finished; now you can process the pickle files." 63 | 64 | json: 65 | 	$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json 66 | 	@echo 67 | 	@echo "Build finished; now you can process the JSON files." 68 | 69 | htmlhelp: 70 | 	$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp 71 | 	@echo 72 | 	@echo "Build finished; now you can run HTML Help Workshop with the" \ 73 | 	".hhp project file in $(BUILDDIR)/htmlhelp."
74 | 75 | qthelp: 76 | $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp 77 | @echo 78 | @echo "Build finished; now you can run "qcollectiongenerator" with the" \ 79 | ".qhcp project file in $(BUILDDIR)/qthelp, like this:" 80 | @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/redis-py.qhcp" 81 | @echo "To view the help file:" 82 | @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/redis-py.qhc" 83 | 84 | devhelp: 85 | $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp 86 | @echo 87 | @echo "Build finished." 88 | @echo "To view the help file:" 89 | @echo "# mkdir -p $$HOME/.local/share/devhelp/redis-py" 90 | @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/redis-py" 91 | @echo "# devhelp" 92 | 93 | epub: 94 | $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub 95 | @echo 96 | @echo "Build finished. The epub file is in $(BUILDDIR)/epub." 97 | 98 | latex: 99 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 100 | @echo 101 | @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." 102 | @echo "Run \`make' in that directory to run these through (pdf)latex" \ 103 | "(use \`make latexpdf' here to do that automatically)." 104 | 105 | latexpdf: 106 | $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex 107 | @echo "Running LaTeX files through pdflatex..." 108 | $(MAKE) -C $(BUILDDIR)/latex all-pdf 109 | @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." 110 | 111 | text: 112 | $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text 113 | @echo 114 | @echo "Build finished. The text files are in $(BUILDDIR)/text." 115 | 116 | man: 117 | $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man 118 | @echo 119 | @echo "Build finished. The manual pages are in $(BUILDDIR)/man." 120 | 121 | texinfo: 122 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 123 | @echo 124 | @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." 125 | @echo "Run \`make' in that directory to run these through makeinfo" \ 126 | "(use \`make info' here to do that automatically)." 127 | 128 | info: 129 | $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo 130 | @echo "Running Texinfo files through makeinfo..." 131 | make -C $(BUILDDIR)/texinfo info 132 | @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." 133 | 134 | gettext: 135 | $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale 136 | @echo 137 | @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." 138 | 139 | changes: 140 | $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes 141 | @echo 142 | @echo "The overview file is in $(BUILDDIR)/changes." 143 | 144 | linkcheck: 145 | $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck 146 | @echo 147 | @echo "Link check complete; look for any errors in the above output " \ 148 | "or in $(BUILDDIR)/linkcheck/output.txt." 149 | 150 | doctest: 151 | $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest 152 | @echo "Testing of doctests in the sources finished, look at the " \ 153 | "results in $(BUILDDIR)/doctest/output.txt." 
154 | -------------------------------------------------------------------------------- /tests/test_lock.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | import time 4 | 5 | from redis.exceptions import LockError, ResponseError 6 | from redis.lock import Lock, LuaLock 7 | 8 | 9 | class TestLock(object): 10 | lock_class = Lock 11 | 12 | def get_lock(self, redis, *args, **kwargs): 13 | kwargs['lock_class'] = self.lock_class 14 | return redis.lock(*args, **kwargs) 15 | 16 | def test_lock(self, sr): 17 | lock = self.get_lock(sr, 'foo') 18 | assert lock.acquire(blocking=False) 19 | assert sr.get('foo') == lock.local.token 20 | assert sr.ttl('foo') == -1 21 | lock.release() 22 | assert sr.get('foo') is None 23 | 24 | def test_competing_locks(self, sr): 25 | lock1 = self.get_lock(sr, 'foo') 26 | lock2 = self.get_lock(sr, 'foo') 27 | assert lock1.acquire(blocking=False) 28 | assert not lock2.acquire(blocking=False) 29 | lock1.release() 30 | assert lock2.acquire(blocking=False) 31 | assert not lock1.acquire(blocking=False) 32 | lock2.release() 33 | 34 | def test_timeout(self, sr): 35 | lock = self.get_lock(sr, 'foo', timeout=10) 36 | assert lock.acquire(blocking=False) 37 | assert 8 < sr.ttl('foo') <= 10 38 | lock.release() 39 | 40 | def test_float_timeout(self, sr): 41 | lock = self.get_lock(sr, 'foo', timeout=9.5) 42 | assert lock.acquire(blocking=False) 43 | assert 8000 < sr.pttl('foo') <= 9500 44 | lock.release() 45 | 46 | def test_blocking_timeout(self, sr): 47 | lock1 = self.get_lock(sr, 'foo') 48 | assert lock1.acquire(blocking=False) 49 | lock2 = self.get_lock(sr, 'foo', blocking_timeout=0.2) 50 | start = time.time() 51 | assert not lock2.acquire() 52 | assert (time.time() - start) > 0.2 53 | lock1.release() 54 | 55 | def test_context_manager(self, sr): 56 | # blocking_timeout prevents a deadlock if the lock can't be acquired 57 | # for some reason 58 | with self.get_lock(sr, 'foo', blocking_timeout=0.2) as lock: 59 | assert sr.get('foo') == lock.local.token 60 | assert sr.get('foo') is None 61 | 62 | def test_high_sleep_raises_error(self, sr): 63 | "If sleep is higher than timeout, it should raise an error" 64 | with pytest.raises(LockError): 65 | self.get_lock(sr, 'foo', timeout=1, sleep=2) 66 | 67 | def test_releasing_unlocked_lock_raises_error(self, sr): 68 | lock = self.get_lock(sr, 'foo') 69 | with pytest.raises(LockError): 70 | lock.release() 71 | 72 | def test_releasing_lock_no_longer_owned_raises_error(self, sr): 73 | lock = self.get_lock(sr, 'foo') 74 | lock.acquire(blocking=False) 75 | # manually change the token 76 | sr.set('foo', 'a') 77 | with pytest.raises(LockError): 78 | lock.release() 79 | # even though we errored, the token is still cleared 80 | assert lock.local.token is None 81 | 82 | def test_extend_lock(self, sr): 83 | lock = self.get_lock(sr, 'foo', timeout=10) 84 | assert lock.acquire(blocking=False) 85 | assert 8000 < sr.pttl('foo') <= 10000 86 | assert lock.extend(10) 87 | assert 16000 < sr.pttl('foo') <= 20000 88 | lock.release() 89 | 90 | def test_extend_lock_float(self, sr): 91 | lock = self.get_lock(sr, 'foo', timeout=10.0) 92 | assert lock.acquire(blocking=False) 93 | assert 8000 < sr.pttl('foo') <= 10000 94 | assert lock.extend(10.0) 95 | assert 16000 < sr.pttl('foo') <= 20000 96 | lock.release() 97 | 98 | def test_extending_unlocked_lock_raises_error(self, sr): 99 | lock = self.get_lock(sr, 'foo', timeout=10) 100 | with pytest.raises(LockError): 101 | 
lock.extend(10) 102 | 103 | def test_extending_lock_with_no_timeout_raises_error(self, sr): 104 | lock = self.get_lock(sr, 'foo') 105 | assert lock.acquire(blocking=False) 106 | with pytest.raises(LockError): 107 | lock.extend(10) 108 | lock.release() 109 | 110 | def test_extending_lock_no_longer_owned_raises_error(self, sr): 111 | lock = self.get_lock(sr, 'foo') 112 | assert lock.acquire(blocking=False) 113 | sr.set('foo', 'a') 114 | with pytest.raises(LockError): 115 | lock.extend(10) 116 | 117 | 118 | class TestLuaLock(TestLock): 119 | lock_class = LuaLock 120 | 121 | 122 | class TestLockClassSelection(object): 123 | def test_lock_class_argument(self, sr): 124 | lock = sr.lock('foo', lock_class=Lock) 125 | assert type(lock) == Lock 126 | lock = sr.lock('foo', lock_class=LuaLock) 127 | assert type(lock) == LuaLock 128 | 129 | def test_cached_lualock_flag(self, sr): 130 | try: 131 | sr._use_lua_lock = True 132 | lock = sr.lock('foo') 133 | assert type(lock) == LuaLock 134 | finally: 135 | sr._use_lua_lock = None 136 | 137 | def test_cached_lock_flag(self, sr): 138 | try: 139 | sr._use_lua_lock = False 140 | lock = sr.lock('foo') 141 | assert type(lock) == Lock 142 | finally: 143 | sr._use_lua_lock = None 144 | 145 | def test_lua_compatible_server(self, sr, monkeypatch): 146 | @classmethod 147 | def mock_register(cls, redis): 148 | return 149 | monkeypatch.setattr(LuaLock, 'register_scripts', mock_register) 150 | try: 151 | lock = sr.lock('foo') 152 | assert type(lock) == LuaLock 153 | assert sr._use_lua_lock is True 154 | finally: 155 | sr._use_lua_lock = None 156 | 157 | def test_lua_unavailable(self, sr, monkeypatch): 158 | @classmethod 159 | def mock_register(cls, redis): 160 | raise ResponseError() 161 | monkeypatch.setattr(LuaLock, 'register_scripts', mock_register) 162 | try: 163 | lock = sr.lock('foo') 164 | assert type(lock) == Lock 165 | assert sr._use_lua_lock is False 166 | finally: 167 | sr._use_lua_lock = None 168 | -------------------------------------------------------------------------------- /benchmarks/basic_operations.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | import redis 3 | import time 4 | import sys 5 | from functools import wraps 6 | from argparse import ArgumentParser 7 | 8 | if sys.version_info[0] == 3: 9 | long = int 10 | 11 | 12 | def parse_args(): 13 | parser = ArgumentParser() 14 | parser.add_argument('-n', 15 | type=int, 16 | help='Total number of requests (default 100000)', 17 | default=100000) 18 | parser.add_argument('-P', 19 | type=int, 20 | help=('Pipeline requests.' 
21 | ' Default 1 (no pipeline).'), 22 | default=1) 23 | parser.add_argument('-s', 24 | type=int, 25 | help='Data size of SET/GET value in bytes (default 2)', 26 | default=2) 27 | 28 | args = parser.parse_args() 29 | return args 30 | 31 | 32 | def run(): 33 | args = parse_args() 34 | r = redis.StrictRedis() 35 | r.flushall() 36 | set_str(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 37 | set_int(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 38 | get_str(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 39 | get_int(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 40 | incr(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 41 | lpush(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 42 | lrange_300(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 43 | lpop(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 44 | hmset(conn=r, num=args.n, pipeline_size=args.P, data_size=args.s) 45 | 46 | 47 | def timer(func): 48 | @wraps(func) 49 | def wrapper(*args, **kwargs): 50 | start = time.clock() 51 | ret = func(*args, **kwargs) 52 | duration = time.clock() - start 53 | if 'num' in kwargs: 54 | count = kwargs['num'] 55 | else: 56 | count = args[1] 57 | print('{0} - {1} Requests'.format(func.__name__, count)) 58 | print('Duration = {}'.format(duration)) 59 | print('Rate = {}'.format(count/duration)) 60 | print('') 61 | return ret 62 | return wrapper 63 | 64 | 65 | @timer 66 | def set_str(conn, num, pipeline_size, data_size): 67 | if pipeline_size > 1: 68 | conn = conn.pipeline() 69 | 70 | format_str = '{:0<%d}' % data_size 71 | set_data = format_str.format('a') 72 | for i in range(num): 73 | conn.set('set_str:%d' % i, set_data) 74 | if pipeline_size > 1 and i % pipeline_size == 0: 75 | conn.execute() 76 | 77 | if pipeline_size > 1: 78 | conn.execute() 79 | 80 | 81 | @timer 82 | def set_int(conn, num, pipeline_size, data_size): 83 | if pipeline_size > 1: 84 | conn = conn.pipeline() 85 | 86 | format_str = '{:0<%d}' % data_size 87 | set_data = int(format_str.format('1')) 88 | for i in range(num): 89 | conn.set('set_int:%d' % i, set_data) 90 | if pipeline_size > 1 and i % pipeline_size == 0: 91 | conn.execute() 92 | 93 | if pipeline_size > 1: 94 | conn.execute() 95 | 96 | 97 | @timer 98 | def get_str(conn, num, pipeline_size, data_size): 99 | if pipeline_size > 1: 100 | conn = conn.pipeline() 101 | 102 | for i in range(num): 103 | conn.get('set_str:%d' % i) 104 | if pipeline_size > 1 and i % pipeline_size == 0: 105 | conn.execute() 106 | 107 | if pipeline_size > 1: 108 | conn.execute() 109 | 110 | 111 | @timer 112 | def get_int(conn, num, pipeline_size, data_size): 113 | if pipeline_size > 1: 114 | conn = conn.pipeline() 115 | 116 | for i in range(num): 117 | conn.get('set_int:%d' % i) 118 | if pipeline_size > 1 and i % pipeline_size == 0: 119 | conn.execute() 120 | 121 | if pipeline_size > 1: 122 | conn.execute() 123 | 124 | 125 | @timer 126 | def incr(conn, num, pipeline_size, *args, **kwargs): 127 | if pipeline_size > 1: 128 | conn = conn.pipeline() 129 | 130 | for i in range(num): 131 | conn.incr('incr_key') 132 | if pipeline_size > 1 and i % pipeline_size == 0: 133 | conn.execute() 134 | 135 | if pipeline_size > 1: 136 | conn.execute() 137 | 138 | 139 | @timer 140 | def lpush(conn, num, pipeline_size, data_size): 141 | if pipeline_size > 1: 142 | conn = conn.pipeline() 143 | 144 | format_str = '{:0<%d}' % data_size 145 | set_data = int(format_str.format('1')) 146 | for i in range(num): 147 | conn.lpush('lpush_key', 
set_data) 148 | if pipeline_size > 1 and i % pipeline_size == 0: 149 | conn.execute() 150 | 151 | if pipeline_size > 1: 152 | conn.execute() 153 | 154 | 155 | @timer 156 | def lrange_300(conn, num, pipeline_size, data_size): 157 | if pipeline_size > 1: 158 | conn = conn.pipeline() 159 | 160 | for i in range(num): 161 | conn.lrange('lpush_key', i, i+300) 162 | if pipeline_size > 1 and i % pipeline_size == 0: 163 | conn.execute() 164 | 165 | if pipeline_size > 1: 166 | conn.execute() 167 | 168 | 169 | @timer 170 | def lpop(conn, num, pipeline_size, data_size): 171 | if pipeline_size > 1: 172 | conn = conn.pipeline() 173 | for i in range(num): 174 | conn.lpop('lpush_key') 175 | if pipeline_size > 1 and i % pipeline_size == 0: 176 | conn.execute() 177 | if pipeline_size > 1: 178 | conn.execute() 179 | 180 | 181 | @timer 182 | def hmset(conn, num, pipeline_size, data_size): 183 | if pipeline_size > 1: 184 | conn = conn.pipeline() 185 | 186 | set_data = {'str_value': 'string', 187 | 'int_value': 123456, 188 | 'long_value': long(123456), 189 | 'float_value': 123456.0} 190 | for i in range(num): 191 | conn.hmset('hmset_key', set_data) 192 | if pipeline_size > 1 and i % pipeline_size == 0: 193 | conn.execute() 194 | 195 | if pipeline_size > 1: 196 | conn.execute() 197 | 198 | if __name__ == '__main__': 199 | run() 200 | -------------------------------------------------------------------------------- /redis/_compat.py: -------------------------------------------------------------------------------- 1 | """Internal module for Python 2 backwards compatibility.""" 2 | import errno 3 | import sys 4 | 5 | try: 6 | InterruptedError = InterruptedError 7 | except NameError:  # Python 2 has no InterruptedError 8 | InterruptedError = OSError 9 | 10 | # For Python older than 3.5, retry EINTR. 11 | if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and 12 | sys.version_info[1] < 5): 13 | # Adapted from https://bugs.python.org/review/23863/patch/14532/54418 14 | import socket 15 | import time 16 | import errno 17 | 18 | from select import select as _select 19 | 20 | def select(rlist, wlist, xlist, timeout): 21 | while True: 22 | try: 23 | return _select(rlist, wlist, xlist, timeout) 24 | except InterruptedError as e: 25 | # Python 2 does not define InterruptedError, instead 26 | # try to catch an OSError with errno == EINTR == 4. 27 | if getattr(e, 'errno', None) == getattr(errno, 'EINTR', 4): 28 | continue 29 | raise 30 | 31 | # Wrapper for handling interruptible system calls. 32 | def _retryable_call(s, func, *args, **kwargs): 33 | # Some modules (SSL) use the _fileobject wrapper directly and 34 | # implement a smaller portion of the socket interface, thus we 35 | # need to let them continue to do so. 36 | timeout, deadline = None, 0.0 37 | attempted = False 38 | try: 39 | timeout = s.gettimeout() 40 | except AttributeError: 41 | pass 42 | 43 | if timeout: 44 | deadline = time.time() + timeout 45 | 46 | try: 47 | while True: 48 | if attempted and timeout: 49 | now = time.time() 50 | if now >= deadline: 51 | raise socket.error(errno.EWOULDBLOCK, "timed out") 52 | else: 53 | # Overwrite the timeout on the socket object 54 | # to take into account elapsed time. 55 | s.settimeout(deadline - now) 56 | try: 57 | attempted = True 58 | return func(*args, **kwargs) 59 | except socket.error as e: 60 | if e.args[0] == errno.EINTR: 61 | continue 62 | raise 63 | finally: 64 | # Set the existing timeout back for future 65 | # calls.
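# Note: the settimeout() call above shrinks the socket timeout toward
# the deadline on every EINTR retry, so restoring the caller's original
# timeout here keeps the socket usable for subsequent calls.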
66 | if timeout: 67 | s.settimeout(timeout) 68 | 69 | def recv(sock, *args, **kwargs): 70 | return _retryable_call(sock, sock.recv, *args, **kwargs) 71 | 72 | def recv_into(sock, *args, **kwargs): 73 | return _retryable_call(sock, sock.recv_into, *args, **kwargs) 74 | 75 | else: # Python 3.5 and above automatically retry EINTR 76 | from select import select 77 | 78 | def recv(sock, *args, **kwargs): 79 | return sock.recv(*args, **kwargs) 80 | 81 | def recv_into(sock, *args, **kwargs): 82 | return sock.recv_into(*args, **kwargs) 83 | 84 | if sys.version_info[0] < 3: 85 | from urllib import unquote 86 | from urlparse import parse_qs, urlparse 87 | from itertools import imap, izip 88 | from string import letters as ascii_letters 89 | from Queue import Queue 90 | try: 91 | from cStringIO import StringIO as BytesIO 92 | except ImportError: 93 | from StringIO import StringIO as BytesIO 94 | 95 | # special unicode handling for python2 to avoid UnicodeDecodeError 96 | def safe_unicode(obj, *args): 97 | """ return the unicode representation of obj """ 98 | try: 99 | return unicode(obj, *args) 100 | except UnicodeDecodeError: 101 | # obj is byte string 102 | ascii_text = str(obj).encode('string_escape') 103 | return unicode(ascii_text) 104 | 105 | def iteritems(x): 106 | return x.iteritems() 107 | 108 | def iterkeys(x): 109 | return x.iterkeys() 110 | 111 | def itervalues(x): 112 | return x.itervalues() 113 | 114 | def nativestr(x): 115 | return x if isinstance(x, str) else x.encode('utf-8', 'replace') 116 | 117 | def u(x): 118 | return x.decode() 119 | 120 | def b(x): 121 | return x 122 | 123 | def next(x): 124 | return x.next() 125 | 126 | def byte_to_chr(x): 127 | return x 128 | 129 | unichr = unichr 130 | xrange = xrange 131 | basestring = basestring 132 | unicode = unicode 133 | bytes = str 134 | long = long 135 | else: 136 | from urllib.parse import parse_qs, unquote, urlparse 137 | from io import BytesIO 138 | from string import ascii_letters 139 | from queue import Queue 140 | 141 | def iteritems(x): 142 | return iter(x.items()) 143 | 144 | def iterkeys(x): 145 | return iter(x.keys()) 146 | 147 | def itervalues(x): 148 | return iter(x.values()) 149 | 150 | def byte_to_chr(x): 151 | return chr(x) 152 | 153 | def nativestr(x): 154 | return x if isinstance(x, str) else x.decode('utf-8', 'replace') 155 | 156 | def u(x): 157 | return x 158 | 159 | def b(x): 160 | return x.encode('latin-1') if not isinstance(x, bytes) else x 161 | 162 | next = next 163 | unichr = chr 164 | imap = map 165 | izip = zip 166 | xrange = range 167 | basestring = str 168 | unicode = str 169 | safe_unicode = str 170 | bytes = bytes 171 | long = int 172 | 173 | try: # Python 3 174 | from queue import LifoQueue, Empty, Full 175 | except ImportError: 176 | from Queue import Empty, Full 177 | try: # Python 2.6 - 2.7 178 | from Queue import LifoQueue 179 | except ImportError: # Python 2.5 180 | from Queue import Queue 181 | # From the Python 2.7 lib. Python 2.5 already extracted the core 182 | # methods to aid implementing different queue organisations. 183 | 184 | class LifoQueue(Queue): 185 | "Override queue methods to implement a last-in first-out queue."
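    # redis-py's BlockingConnectionPool stores idle connections in a
    # LifoQueue so the most recently released connection is handed out
    # first, keeping a small working set of sockets warm rather than
    # cycling through every connection in the pool.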
186 | 187 | def _init(self, maxsize): 188 | self.maxsize = maxsize 189 | self.queue = [] 190 | 191 | def _qsize(self, len=len): 192 | return len(self.queue) 193 | 194 | def _put(self, item): 195 | self.queue.append(item) 196 | 197 | def _get(self): 198 | return self.queue.pop() 199 | -------------------------------------------------------------------------------- /tests/test_sentinel.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | 4 | from redis import exceptions 5 | from redis.sentinel import (Sentinel, SentinelConnectionPool, 6 | MasterNotFoundError, SlaveNotFoundError) 7 | from redis._compat import next 8 | import redis.sentinel 9 | 10 | 11 | class SentinelTestClient(object): 12 | def __init__(self, cluster, id): 13 | self.cluster = cluster 14 | self.id = id 15 | 16 | def sentinel_masters(self): 17 | self.cluster.connection_error_if_down(self) 18 | self.cluster.timeout_if_down(self) 19 | return {self.cluster.service_name: self.cluster.master} 20 | 21 | def sentinel_slaves(self, master_name): 22 | self.cluster.connection_error_if_down(self) 23 | self.cluster.timeout_if_down(self) 24 | if master_name != self.cluster.service_name: 25 | return [] 26 | return self.cluster.slaves 27 | 28 | 29 | class SentinelTestCluster(object): 30 | def __init__(self, service_name='mymaster', ip='127.0.0.1', port=6379): 31 | self.clients = {} 32 | self.master = { 33 | 'ip': ip, 34 | 'port': port, 35 | 'is_master': True, 36 | 'is_sdown': False, 37 | 'is_odown': False, 38 | 'num-other-sentinels': 0, 39 | } 40 | self.service_name = service_name 41 | self.slaves = [] 42 | self.nodes_down = set() 43 | self.nodes_timeout = set() 44 | 45 | def connection_error_if_down(self, node): 46 | if node.id in self.nodes_down: 47 | raise exceptions.ConnectionError 48 | 49 | def timeout_if_down(self, node): 50 | if node.id in self.nodes_timeout: 51 | raise exceptions.TimeoutError 52 | 53 | def client(self, host, port, **kwargs): 54 | return SentinelTestClient(self, (host, port)) 55 | 56 | 57 | @pytest.fixture() 58 | def cluster(request): 59 | def teardown(): 60 | redis.sentinel.StrictRedis = saved_StrictRedis 61 | cluster = SentinelTestCluster() 62 | saved_StrictRedis = redis.sentinel.StrictRedis 63 | redis.sentinel.StrictRedis = cluster.client 64 | request.addfinalizer(teardown) 65 | return cluster 66 | 67 | 68 | @pytest.fixture() 69 | def sentinel(request, cluster): 70 | return Sentinel([('foo', 26379), ('bar', 26379)]) 71 | 72 | 73 | def test_discover_master(sentinel): 74 | address = sentinel.discover_master('mymaster') 75 | assert address == ('127.0.0.1', 6379) 76 | 77 | 78 | def test_discover_master_error(sentinel): 79 | with pytest.raises(MasterNotFoundError): 80 | sentinel.discover_master('xxx') 81 | 82 | 83 | def test_discover_master_sentinel_down(cluster, sentinel): 84 | # Put first sentinel 'foo' down 85 | cluster.nodes_down.add(('foo', 26379)) 86 | address = sentinel.discover_master('mymaster') 87 | assert address == ('127.0.0.1', 6379) 88 | # 'bar' is now first sentinel 89 | assert sentinel.sentinels[0].id == ('bar', 26379) 90 | 91 | 92 | def test_discover_master_sentinel_timeout(cluster, sentinel): 93 | # Put first sentinel 'foo' down 94 | cluster.nodes_timeout.add(('foo', 26379)) 95 | address = sentinel.discover_master('mymaster') 96 | assert address == ('127.0.0.1', 6379) 97 | # 'bar' is now first sentinel 98 | assert sentinel.sentinels[0].id == ('bar', 26379) 99 | 100 | 101 | def 
test_master_min_other_sentinels(cluster): 102 | sentinel = Sentinel([('foo', 26379)], min_other_sentinels=1) 103 | # min_other_sentinels 104 | with pytest.raises(MasterNotFoundError): 105 | sentinel.discover_master('mymaster') 106 | cluster.master['num-other-sentinels'] = 2 107 | address = sentinel.discover_master('mymaster') 108 | assert address == ('127.0.0.1', 6379) 109 | 110 | 111 | def test_master_odown(cluster, sentinel): 112 | cluster.master['is_odown'] = True 113 | with pytest.raises(MasterNotFoundError): 114 | sentinel.discover_master('mymaster') 115 | 116 | 117 | def test_master_sdown(cluster, sentinel): 118 | cluster.master['is_sdown'] = True 119 | with pytest.raises(MasterNotFoundError): 120 | sentinel.discover_master('mymaster') 121 | 122 | 123 | def test_discover_slaves(cluster, sentinel): 124 | assert sentinel.discover_slaves('mymaster') == [] 125 | 126 | cluster.slaves = [ 127 | {'ip': 'slave0', 'port': 1234, 'is_odown': False, 'is_sdown': False}, 128 | {'ip': 'slave1', 'port': 1234, 'is_odown': False, 'is_sdown': False}, 129 | ] 130 | assert sentinel.discover_slaves('mymaster') == [ 131 | ('slave0', 1234), ('slave1', 1234)] 132 | 133 | # slave0 -> ODOWN 134 | cluster.slaves[0]['is_odown'] = True 135 | assert sentinel.discover_slaves('mymaster') == [ 136 | ('slave1', 1234)] 137 | 138 | # slave1 -> SDOWN 139 | cluster.slaves[1]['is_sdown'] = True 140 | assert sentinel.discover_slaves('mymaster') == [] 141 | 142 | cluster.slaves[0]['is_odown'] = False 143 | cluster.slaves[1]['is_sdown'] = False 144 | 145 | # node0 -> DOWN 146 | cluster.nodes_down.add(('foo', 26379)) 147 | assert sentinel.discover_slaves('mymaster') == [ 148 | ('slave0', 1234), ('slave1', 1234)] 149 | cluster.nodes_down.clear() 150 | 151 | # node0 -> TIMEOUT 152 | cluster.nodes_timeout.add(('foo', 26379)) 153 | assert sentinel.discover_slaves('mymaster') == [ 154 | ('slave0', 1234), ('slave1', 1234)] 155 | 156 | 157 | def test_master_for(cluster, sentinel): 158 | master = sentinel.master_for('mymaster', db=9) 159 | assert master.ping() 160 | assert master.connection_pool.master_address == ('127.0.0.1', 6379) 161 | 162 | # Use internal connection check 163 | master = sentinel.master_for('mymaster', db=9, check_connection=True) 164 | assert master.ping() 165 | 166 | 167 | def test_slave_for(cluster, sentinel): 168 | cluster.slaves = [ 169 | {'ip': '127.0.0.1', 'port': 6379, 170 | 'is_odown': False, 'is_sdown': False}, 171 | ] 172 | slave = sentinel.slave_for('mymaster', db=9) 173 | assert slave.ping() 174 | 175 | 176 | def test_slave_for_slave_not_found_error(cluster, sentinel): 177 | cluster.master['is_odown'] = True 178 | slave = sentinel.slave_for('mymaster', db=9) 179 | with pytest.raises(SlaveNotFoundError): 180 | slave.ping() 181 | 182 | 183 | def test_slave_round_robin(cluster, sentinel): 184 | cluster.slaves = [ 185 | {'ip': 'slave0', 'port': 6379, 'is_odown': False, 'is_sdown': False}, 186 | {'ip': 'slave1', 'port': 6379, 'is_odown': False, 'is_sdown': False}, 187 | ] 188 | pool = SentinelConnectionPool('mymaster', sentinel) 189 | rotator = pool.rotate_slaves() 190 | assert next(rotator) in (('slave0', 6379), ('slave1', 6379)) 191 | assert next(rotator) in (('slave0', 6379), ('slave1', 6379)) 192 | # Fallback to master 193 | assert next(rotator) == ('127.0.0.1', 6379) 194 | with pytest.raises(SlaveNotFoundError): 195 | next(rotator) 196 | -------------------------------------------------------------------------------- /tests/test_pipeline.py: 
-------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | 4 | import redis 5 | from redis._compat import b, u, unichr, unicode 6 | 7 | 8 | class TestPipeline(object): 9 | def test_pipeline(self, r): 10 | with r.pipeline() as pipe: 11 | pipe.set('a', 'a1').get('a').zadd('z', z1=1).zadd('z', z2=4) 12 | pipe.zincrby('z', 'z1').zrange('z', 0, 5, withscores=True) 13 | assert pipe.execute() == \ 14 | [ 15 | True, 16 | b('a1'), 17 | True, 18 | True, 19 | 2.0, 20 | [(b('z1'), 2.0), (b('z2'), 4)], 21 | ] 22 | 23 | def test_pipeline_length(self, r): 24 | with r.pipeline() as pipe: 25 | # Initially empty. 26 | assert len(pipe) == 0 27 | assert not pipe 28 | 29 | # Fill 'er up! 30 | pipe.set('a', 'a1').set('b', 'b1').set('c', 'c1') 31 | assert len(pipe) == 3 32 | assert pipe 33 | 34 | # Execute calls reset(), so empty once again. 35 | pipe.execute() 36 | assert len(pipe) == 0 37 | assert not pipe 38 | 39 | def test_pipeline_no_transaction(self, r): 40 | with r.pipeline(transaction=False) as pipe: 41 | pipe.set('a', 'a1').set('b', 'b1').set('c', 'c1') 42 | assert pipe.execute() == [True, True, True] 43 | assert r['a'] == b('a1') 44 | assert r['b'] == b('b1') 45 | assert r['c'] == b('c1') 46 | 47 | def test_pipeline_no_transaction_watch(self, r): 48 | r['a'] = 0 49 | 50 | with r.pipeline(transaction=False) as pipe: 51 | pipe.watch('a') 52 | a = pipe.get('a') 53 | 54 | pipe.multi() 55 | pipe.set('a', int(a) + 1) 56 | assert pipe.execute() == [True] 57 | 58 | def test_pipeline_no_transaction_watch_failure(self, r): 59 | r['a'] = 0 60 | 61 | with r.pipeline(transaction=False) as pipe: 62 | pipe.watch('a') 63 | a = pipe.get('a') 64 | 65 | r['a'] = 'bad' 66 | 67 | pipe.multi() 68 | pipe.set('a', int(a) + 1) 69 | 70 | with pytest.raises(redis.WatchError): 71 | pipe.execute() 72 | 73 | assert r['a'] == b('bad') 74 | 75 | def test_exec_error_in_response(self, r): 76 | """ 77 | an invalid pipeline command at exec time adds the exception instance 78 | to the list of returned values 79 | """ 80 | r['c'] = 'a' 81 | with r.pipeline() as pipe: 82 | pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) 83 | result = pipe.execute(raise_on_error=False) 84 | 85 | assert result[0] 86 | assert r['a'] == b('1') 87 | assert result[1] 88 | assert r['b'] == b('2') 89 | 90 | # we can't lpush to a key that's a string value, so this should 91 | # be a ResponseError exception 92 | assert isinstance(result[2], redis.ResponseError) 93 | assert r['c'] == b('a') 94 | 95 | # since this isn't a transaction, the other commands after the 96 | # error are still executed 97 | assert result[3] 98 | assert r['d'] == b('4') 99 | 100 | # make sure the pipe was restored to a working state 101 | assert pipe.set('z', 'zzz').execute() == [True] 102 | assert r['z'] == b('zzz') 103 | 104 | def test_exec_error_raised(self, r): 105 | r['c'] = 'a' 106 | with r.pipeline() as pipe: 107 | pipe.set('a', 1).set('b', 2).lpush('c', 3).set('d', 4) 108 | with pytest.raises(redis.ResponseError) as ex: 109 | pipe.execute() 110 | assert unicode(ex.value).startswith('Command # 3 (LPUSH c 3) of ' 111 | 'pipeline caused error: ') 112 | 113 | # make sure the pipe was restored to a working state 114 | assert pipe.set('z', 'zzz').execute() == [True] 115 | assert r['z'] == b('zzz') 116 | 117 | def test_parse_error_raised(self, r): 118 | with r.pipeline() as pipe: 119 | # the zrem is invalid because we don't pass any keys to it 120 | pipe.set('a', 1).zrem('b').set('b', 2) 121 | with 
pytest.raises(redis.ResponseError) as ex: 122 | pipe.execute() 123 | 124 | assert unicode(ex.value).startswith('Command # 2 (ZREM b) of ' 125 | 'pipeline caused error: ') 126 | 127 | # make sure the pipe was restored to a working state 128 | assert pipe.set('z', 'zzz').execute() == [True] 129 | assert r['z'] == b('zzz') 130 | 131 | def test_watch_succeed(self, r): 132 | r['a'] = 1 133 | r['b'] = 2 134 | 135 | with r.pipeline() as pipe: 136 | pipe.watch('a', 'b') 137 | assert pipe.watching 138 | a_value = pipe.get('a') 139 | b_value = pipe.get('b') 140 | assert a_value == b('1') 141 | assert b_value == b('2') 142 | pipe.multi() 143 | 144 | pipe.set('c', 3) 145 | assert pipe.execute() == [True] 146 | assert not pipe.watching 147 | 148 | def test_watch_failure(self, r): 149 | r['a'] = 1 150 | r['b'] = 2 151 | 152 | with r.pipeline() as pipe: 153 | pipe.watch('a', 'b') 154 | r['b'] = 3 155 | pipe.multi() 156 | pipe.get('a') 157 | with pytest.raises(redis.WatchError): 158 | pipe.execute() 159 | 160 | assert not pipe.watching 161 | 162 | def test_unwatch(self, r): 163 | r['a'] = 1 164 | r['b'] = 2 165 | 166 | with r.pipeline() as pipe: 167 | pipe.watch('a', 'b') 168 | r['b'] = 3 169 | pipe.unwatch() 170 | assert not pipe.watching 171 | pipe.get('a') 172 | assert pipe.execute() == [b('1')] 173 | 174 | def test_transaction_callable(self, r): 175 | r['a'] = 1 176 | r['b'] = 2 177 | has_run = [] 178 | 179 | def my_transaction(pipe): 180 | a_value = pipe.get('a') 181 | assert a_value in (b('1'), b('2')) 182 | b_value = pipe.get('b') 183 | assert b_value == b('2') 184 | 185 | # silly run-once code... incr's "a" so WatchError should be raised 186 | # forcing this all to run again. this should incr "a" once to "2" 187 | if not has_run: 188 | r.incr('a') 189 | has_run.append('it has') 190 | 191 | pipe.multi() 192 | pipe.set('c', int(a_value) + int(b_value)) 193 | 194 | result = r.transaction(my_transaction, 'a', 'b') 195 | assert result == [True] 196 | assert r['c'] == b('4') 197 | 198 | def test_exec_error_in_no_transaction_pipeline(self, r): 199 | r['a'] = 1 200 | with r.pipeline(transaction=False) as pipe: 201 | pipe.llen('a') 202 | pipe.expire('a', 100) 203 | 204 | with pytest.raises(redis.ResponseError) as ex: 205 | pipe.execute() 206 | 207 | assert unicode(ex.value).startswith('Command # 1 (LLEN a) of ' 208 | 'pipeline caused error: ') 209 | 210 | assert r['a'] == b('1') 211 | 212 | def test_exec_error_in_no_transaction_pipeline_unicode_command(self, r): 213 | key = unichr(3456) + u('abcd') + unichr(3421) 214 | r[key] = 1 215 | with r.pipeline(transaction=False) as pipe: 216 | pipe.llen(key) 217 | pipe.expire(key, 100) 218 | 219 | with pytest.raises(redis.ResponseError) as ex: 220 | pipe.execute() 221 | 222 | expected = unicode('Command # 1 (LLEN %s) of pipeline caused ' 223 | 'error: ') % key 224 | assert unicode(ex.value).startswith(expected) 225 | 226 | assert r[key] == b('1') 227 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | # -*- coding: utf-8 -*- 2 | # 3 | # redis-py documentation build configuration file, created by 4 | # sphinx-quickstart on Fri Feb 8 00:47:08 2013. 5 | # 6 | # This file is execfile()d with the current directory set to its containing 7 | # dir. 8 | # 9 | # Note that not all possible configuration values are present in this 10 | # autogenerated file. 
11 | # 12 | # All configuration values have a default; values that are commented out 13 | # serve to show the default. 14 | 15 | import os 16 | import sys 17 | 18 | # If extensions (or modules to document with autodoc) are in another directory, 19 | # add these directories to sys.path here. If the directory is relative to the 20 | # documentation root, use os.path.abspath to make it absolute, like shown here. 21 | #sys.path.insert(0, os.path.abspath('.')) 22 | sys.path.append(os.path.abspath(os.path.pardir)) 23 | 24 | # -- General configuration ---------------------------------------------------- 25 | 26 | # If your documentation needs a minimal Sphinx version, state it here. 27 | #needs_sphinx = '1.0' 28 | 29 | # Add any Sphinx extension module names here, as strings. They can be 30 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones. 31 | extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.viewcode'] 32 | 33 | # Add any paths that contain templates here, relative to this directory. 34 | templates_path = ['_templates'] 35 | 36 | # The suffix of source filenames. 37 | source_suffix = '.rst' 38 | 39 | # The encoding of source files. 40 | #source_encoding = 'utf-8-sig' 41 | 42 | # The master toctree document. 43 | master_doc = 'index' 44 | 45 | # General information about the project. 46 | project = u'redis-py' 47 | copyright = u'2016, Andy McCurdy' 48 | 49 | # The version info for the project you're documenting, acts as replacement for 50 | # |version| and |release|, also used in various other places throughout the 51 | # built documents. 52 | # 53 | # The short X.Y version. 54 | version = '2.10.5' 55 | # The full version, including alpha/beta/rc tags. 56 | release = '2.10.5' 57 | 58 | # The language for content autogenerated by Sphinx. Refer to documentation 59 | # for a list of supported languages. 60 | #language = None 61 | 62 | # There are two options for replacing |today|: either, you set today to some 63 | # non-false value, then it is used: 64 | #today = '' 65 | # Else, today_fmt is used as the format for a strftime call. 66 | #today_fmt = '%B %d, %Y' 67 | 68 | # List of patterns, relative to source directory, that match files and 69 | # directories to ignore when looking for source files. 70 | exclude_patterns = ['_build'] 71 | 72 | # The reST default role (used for this markup: `text`) to use for all 73 | # documents. 74 | #default_role = None 75 | 76 | # If true, '()' will be appended to :func: etc. cross-reference text. 77 | #add_function_parentheses = True 78 | 79 | # If true, the current module name will be prepended to all description 80 | # unit titles (such as .. function::). 81 | #add_module_names = True 82 | 83 | # If true, sectionauthor and moduleauthor directives will be shown in the 84 | # output. They are ignored by default. 85 | #show_authors = False 86 | 87 | # The name of the Pygments (syntax highlighting) style to use. 88 | pygments_style = 'sphinx' 89 | 90 | # A list of ignored prefixes for module index sorting. 91 | #modindex_common_prefix = [] 92 | 93 | 94 | # -- Options for HTML output -------------------------------------------------- 95 | 96 | # The theme to use for HTML and HTML Help pages. See the documentation for 97 | # a list of builtin themes. 98 | html_theme = 'default' 99 | 100 | # Theme options are theme-specific and customize the look and feel of a theme 101 | # further. For a list of options available for each theme, see the 102 | # documentation. 
103 | #html_theme_options = {} 104 | 105 | # Add any paths that contain custom themes here, relative to this directory. 106 | #html_theme_path = [] 107 | 108 | # The name for this set of Sphinx documents. If None, it defaults to 109 | # "<project> v<release> documentation". 110 | #html_title = None 111 | 112 | # A shorter title for the navigation bar. Default is the same as html_title. 113 | #html_short_title = None 114 | 115 | # The name of an image file (relative to this directory) to place at the top 116 | # of the sidebar. 117 | #html_logo = None 118 | 119 | # The name of an image file (within the static path) to use as favicon of the 120 | # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 121 | # pixels large. 122 | #html_favicon = None 123 | 124 | # Add any paths that contain custom static files (such as style sheets) here, 125 | # relative to this directory. They are copied after the builtin static files, 126 | # so a file named "default.css" will overwrite the builtin "default.css". 127 | html_static_path = ['_static'] 128 | 129 | # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, 130 | # using the given strftime format. 131 | #html_last_updated_fmt = '%b %d, %Y' 132 | 133 | # If true, SmartyPants will be used to convert quotes and dashes to 134 | # typographically correct entities. 135 | #html_use_smartypants = True 136 | 137 | # Custom sidebar templates, maps document names to template names. 138 | #html_sidebars = {} 139 | 140 | # Additional templates that should be rendered to pages, maps page names to 141 | # template names. 142 | #html_additional_pages = {} 143 | 144 | # If false, no module index is generated. 145 | #html_domain_indices = True 146 | 147 | # If false, no index is generated. 148 | #html_use_index = True 149 | 150 | # If true, the index is split into individual pages for each letter. 151 | #html_split_index = False 152 | 153 | # If true, links to the reST sources are added to the pages. 154 | #html_show_sourcelink = True 155 | 156 | # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. 157 | #html_show_sphinx = True 158 | 159 | # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. 160 | #html_show_copyright = True 161 | 162 | # If true, an OpenSearch description file will be output, and all pages will 163 | # contain a <link> tag referring to it. The value of this option must be the 164 | # base URL from which the finished HTML is served. 165 | #html_use_opensearch = '' 166 | 167 | # This is the file name suffix for HTML files (e.g. ".xhtml"). 168 | #html_file_suffix = None 169 | 170 | # Output file base name for HTML help builder. 171 | htmlhelp_basename = 'redis-pydoc' 172 | 173 | 174 | # -- Options for LaTeX output ------------------------------------------------- 175 | 176 | latex_elements = { 177 | # The paper size ('letterpaper' or 'a4paper'). 178 | #'papersize': 'letterpaper', 179 | 180 | # The font size ('10pt', '11pt' or '12pt'). 181 | #'pointsize': '10pt', 182 | 183 | # Additional stuff for the LaTeX preamble. 184 | #'preamble': '', 185 | } 186 | 187 | # Grouping the document tree into LaTeX files. List of tuples 188 | # (source start file, target name, title, author, documentclass 189 | # [howto/manual]). 190 | latex_documents = [ 191 | ('index', 'redis-py.tex', u'redis-py Documentation', 192 | u'Andy McCurdy', 'manual'), 193 | ] 194 | 195 | # The name of an image file (relative to this directory) to place at the top of 196 | # the title page.
197 | #latex_logo = None 198 | 199 | # For "manual" documents, if this is true, then toplevel headings are parts, 200 | # not chapters. 201 | #latex_use_parts = False 202 | 203 | # If true, show page references after internal links. 204 | #latex_show_pagerefs = False 205 | 206 | # If true, show URL addresses after external links. 207 | #latex_show_urls = False 208 | 209 | # Documents to append as an appendix to all manuals. 210 | #latex_appendices = [] 211 | 212 | # If false, no module index is generated. 213 | #latex_domain_indices = True 214 | 215 | 216 | # -- Options for manual page output ------------------------------------------- 217 | 218 | # One entry per manual page. List of tuples 219 | # (source start file, name, description, authors, manual section). 220 | man_pages = [ 221 | ('index', 'redis-py', u'redis-py Documentation', 222 | [u'Andy McCurdy'], 1) 223 | ] 224 | 225 | # If true, show URL addresses after external links. 226 | #man_show_urls = False 227 | 228 | 229 | # -- Options for Texinfo output ----------------------------------------------- 230 | 231 | # Grouping the document tree into Texinfo files. List of tuples 232 | # (source start file, target name, title, author, 233 | # dir menu entry, description, category) 234 | texinfo_documents = [ 235 | ('index', 'redis-py', u'redis-py Documentation', 236 | u'Andy McCurdy', 'redis-py', 237 | 'One line description of project.', 'Miscellaneous'), 238 | ] 239 | 240 | # Documents to append as an appendix to all manuals. 241 | #texinfo_appendices = [] 242 | 243 | # If false, no module index is generated. 244 | #texinfo_domain_indices = True 245 | 246 | # How to display URL addresses: 'footnote', 'no', or 'inline'. 247 | #texinfo_show_urls = 'footnote' 248 | 249 | epub_title = u'redis-py' 250 | epub_author = u'Andy McCurdy' 251 | epub_publisher = u'Andy McCurdy' 252 | epub_copyright = u'2011, Andy McCurdy' 253 | -------------------------------------------------------------------------------- /redis/lock.py: -------------------------------------------------------------------------------- 1 | import threading 2 | import time as mod_time 3 | import uuid 4 | from redis.exceptions import LockError, WatchError 5 | from redis.utils import dummy 6 | from redis._compat import b 7 | 8 | 9 | class Lock(object): 10 | """ 11 | A shared, distributed Lock. Using Redis for locking allows the Lock 12 | to be shared across processes and/or machines. 13 | 14 | It's left to the user to resolve deadlock issues and make sure 15 | multiple clients play nicely together. 16 | """ 17 | def __init__(self, redis, name, timeout=None, sleep=0.1, 18 | blocking=True, blocking_timeout=None, thread_local=True): 19 | """ 20 | Create a new Lock instance named ``name`` using the Redis client 21 | supplied by ``redis``. 22 | 23 | ``timeout`` indicates a maximum life for the lock. 24 | By default, it will remain locked until release() is called. 25 | ``timeout`` can be specified as a float or integer, both representing 26 | the number of seconds to wait. 27 | 28 | ``sleep`` indicates the amount of time to sleep per loop iteration 29 | when the lock is in blocking mode and another client is currently 30 | holding the lock. 31 | 32 | ``blocking`` indicates whether calling ``acquire`` should block until 33 | the lock has been acquired or to fail immediately, causing ``acquire`` 34 | to return False and the lock not being acquired. Defaults to True. 35 | Note this value can be overridden by passing a ``blocking`` 36 | argument to ``acquire``. 
37 | 38 | ``blocking_timeout`` indicates the maximum amount of time in seconds to 39 | spend trying to acquire the lock. A value of ``None`` indicates 40 | that it should keep trying forever. ``blocking_timeout`` can be specified 41 | as a float or integer, both representing the number of seconds to wait. 42 | 43 | ``thread_local`` indicates whether the lock token is placed in 44 | thread-local storage. By default, the token is placed in thread local 45 | storage so that a thread only sees its token, not a token set by 46 | another thread. Consider the following timeline: 47 | 48 | time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds. 49 | thread-1 sets the token to "abc" 50 | time: 1, thread-2 blocks trying to acquire `my-lock` using the 51 | Lock instance. 52 | time: 5, thread-1 has not yet completed. Redis expires the lock 53 | key. 54 | time: 5, thread-2 acquires `my-lock` now that it's available. 55 | thread-2 sets the token to "xyz" 56 | time: 6, thread-1 finishes its work and calls release(). If the 57 | token is *not* stored in thread local storage, then 58 | thread-1 would see the token value as "xyz" and would be 59 | able to successfully release thread-2's lock. 60 | 61 | In some use cases it's necessary to disable thread local storage. For 62 | example, if you have code where one thread acquires a lock and passes 63 | that lock instance to a worker thread to release later. If thread 64 | local storage isn't disabled in this case, the worker thread won't see 65 | the token set by the thread that acquired the lock. Our assumption 66 | is that these cases aren't common and, as such, we default to using 67 | thread local storage. 68 | """ 69 | self.redis = redis 70 | self.name = name 71 | self.timeout = timeout 72 | self.sleep = sleep 73 | self.blocking = blocking 74 | self.blocking_timeout = blocking_timeout 75 | self.thread_local = bool(thread_local) 76 | self.local = threading.local() if self.thread_local else dummy() 77 | self.local.token = None 78 | if self.timeout and self.sleep > self.timeout: 79 | raise LockError("'sleep' must be less than 'timeout'") 80 | 81 | def __enter__(self): 82 | # force blocking, as otherwise the user would have to check whether 83 | # the lock was actually acquired or not. 84 | self.acquire(blocking=True) 85 | return self 86 | 87 | def __exit__(self, exc_type, exc_value, traceback): 88 | self.release() 89 | 90 | def acquire(self, blocking=None, blocking_timeout=None): 91 | """ 92 | Use Redis to hold a shared, distributed lock named ``name``. 93 | Returns True once the lock is acquired. 94 | 95 | If ``blocking`` is False, always return immediately. If the lock 96 | was acquired, return True, otherwise return False. 97 | 98 | ``blocking_timeout`` specifies the maximum number of seconds to 99 | wait trying to acquire the lock.
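        A sketch of typical usage (illustrative; ``client`` stands in for
        any configured Redis client):

            lock = client.lock('resource-key', blocking_timeout=5)
            if lock.acquire():
                try:
                    pass  # do work while holding the lock
                finally:
                    lock.release()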
100 | """ 101 | sleep = self.sleep 102 | token = b(uuid.uuid1().hex) 103 | if blocking is None: 104 | blocking = self.blocking 105 | if blocking_timeout is None: 106 | blocking_timeout = self.blocking_timeout 107 | stop_trying_at = None 108 | if blocking_timeout is not None: 109 | stop_trying_at = mod_time.time() + blocking_timeout 110 | while 1: 111 | if self.do_acquire(token): 112 | self.local.token = token 113 | return True 114 | if not blocking: 115 | return False 116 | if stop_trying_at is not None and mod_time.time() > stop_trying_at: 117 | return False 118 | mod_time.sleep(sleep) 119 | 120 | def do_acquire(self, token): 121 | if self.redis.setnx(self.name, token): 122 | if self.timeout: 123 | # convert to milliseconds 124 | timeout = int(self.timeout * 1000) 125 | self.redis.pexpire(self.name, timeout) 126 | return True 127 | return False 128 | 129 | def release(self): 130 | "Releases the already acquired lock" 131 | expected_token = self.local.token 132 | if expected_token is None: 133 | raise LockError("Cannot release an unlocked lock") 134 | self.local.token = None 135 | self.do_release(expected_token) 136 | 137 | def do_release(self, expected_token): 138 | name = self.name 139 | 140 | def execute_release(pipe): 141 | lock_value = pipe.get(name) 142 | if lock_value != expected_token: 143 | raise LockError("Cannot release a lock that's no longer owned") 144 | pipe.delete(name) 145 | 146 | self.redis.transaction(execute_release, name) 147 | 148 | def extend(self, additional_time): 149 | """ 150 | Adds more time to an already acquired lock. 151 | 152 | ``additional_time`` can be specified as an integer or a float, both 153 | representing the number of seconds to add. 154 | """ 155 | if self.local.token is None: 156 | raise LockError("Cannot extend an unlocked lock") 157 | if self.timeout is None: 158 | raise LockError("Cannot extend a lock with no timeout") 159 | return self.do_extend(additional_time) 160 | 161 | def do_extend(self, additional_time): 162 | pipe = self.redis.pipeline() 163 | pipe.watch(self.name) 164 | lock_value = pipe.get(self.name) 165 | if lock_value != self.local.token: 166 | raise LockError("Cannot extend a lock that's no longer owned") 167 | expiration = pipe.pttl(self.name) 168 | if expiration is None or expiration < 0: 169 | # Redis evicted the lock key between the previous get() and now 170 | # we'll handle this when we call pexpire() 171 | expiration = 0 172 | pipe.multi() 173 | pipe.pexpire(self.name, expiration + int(additional_time * 1000)) 174 | 175 | try: 176 | response = pipe.execute() 177 | except WatchError: 178 | # someone else acquired the lock 179 | raise LockError("Cannot extend a lock that's no longer owned") 180 | if not response[0]: 181 | # pexpire returns False if the key doesn't exist 182 | raise LockError("Cannot extend a lock that's no longer owned") 183 | return True 184 | 185 | 186 | class LuaLock(Lock): 187 | """ 188 | A lock implementation that uses Lua scripts rather than pipelines 189 | and watches. 
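    Because the scripts execute atomically on the server, the
    get-compare-act sequence in release() and extend() cannot interleave
    with other clients, so no WATCH/MULTI retry loop is needed.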
190 | """ 191 | lua_acquire = None 192 | lua_release = None 193 | lua_extend = None 194 | 195 | # KEYS[1] - lock name 196 | # ARGV[1] - token 197 | # ARGV[2] - timeout in milliseconds 198 | # return 1 if lock was acquired, otherwise 0 199 | LUA_ACQUIRE_SCRIPT = """ 200 | if redis.call('setnx', KEYS[1], ARGV[1]) == 1 then 201 | if ARGV[2] ~= '' then 202 | redis.call('pexpire', KEYS[1], ARGV[2]) 203 | end 204 | return 1 205 | end 206 | return 0 207 | """ 208 | 209 | # KEYS[1] - lock name 210 | # ARGV[1] - token 211 | # return 1 if the lock was released, otherwise 0 212 | LUA_RELEASE_SCRIPT = """ 213 | local token = redis.call('get', KEYS[1]) 214 | if not token or token ~= ARGV[1] then 215 | return 0 216 | end 217 | redis.call('del', KEYS[1]) 218 | return 1 219 | """ 220 | 221 | # KEYS[1] - lock name 222 | # ARGV[1] - token 223 | # ARGV[2] - additional milliseconds 224 | # return 1 if the lock's time was extended, otherwise 0 225 | LUA_EXTEND_SCRIPT = """ 226 | local token = redis.call('get', KEYS[1]) 227 | if not token or token ~= ARGV[1] then 228 | return 0 229 | end 230 | local expiration = redis.call('pttl', KEYS[1]) 231 | if not expiration then 232 | expiration = 0 233 | end 234 | if expiration < 0 then 235 | return 0 236 | end 237 | redis.call('pexpire', KEYS[1], expiration + ARGV[2]) 238 | return 1 239 | """ 240 | 241 | def __init__(self, *args, **kwargs): 242 | super(LuaLock, self).__init__(*args, **kwargs) 243 | LuaLock.register_scripts(self.redis) 244 | 245 | @classmethod 246 | def register_scripts(cls, redis): 247 | if cls.lua_acquire is None: 248 | cls.lua_acquire = redis.register_script(cls.LUA_ACQUIRE_SCRIPT) 249 | if cls.lua_release is None: 250 | cls.lua_release = redis.register_script(cls.LUA_RELEASE_SCRIPT) 251 | if cls.lua_extend is None: 252 | cls.lua_extend = redis.register_script(cls.LUA_EXTEND_SCRIPT) 253 | 254 | def do_acquire(self, token): 255 | timeout = self.timeout and int(self.timeout * 1000) or '' 256 | return bool(self.lua_acquire(keys=[self.name], 257 | args=[token, timeout], 258 | client=self.redis)) 259 | 260 | def do_release(self, expected_token): 261 | if not bool(self.lua_release(keys=[self.name], 262 | args=[expected_token], 263 | client=self.redis)): 264 | raise LockError("Cannot release a lock that's no longer owned") 265 | 266 | def do_extend(self, additional_time): 267 | additional_time = int(additional_time * 1000) 268 | if not bool(self.lua_extend(keys=[self.name], 269 | args=[self.local.token, additional_time], 270 | client=self.redis)): 271 | raise LockError("Cannot extend a lock that's no longer owned") 272 | return True 273 | -------------------------------------------------------------------------------- /redis/sentinel.py: -------------------------------------------------------------------------------- 1 | import os 2 | import random 3 | import weakref 4 | 5 | from redis.client import StrictRedis 6 | from redis.connection import ConnectionPool, Connection 7 | from redis.exceptions import (ConnectionError, ResponseError, ReadOnlyError, 8 | TimeoutError) 9 | from redis._compat import iteritems, nativestr, xrange 10 | 11 | 12 | class MasterNotFoundError(ConnectionError): 13 | pass 14 | 15 | 16 | class SlaveNotFoundError(ConnectionError): 17 | pass 18 | 19 | 20 | class SentinelManagedConnection(Connection): 21 | def __init__(self, **kwargs): 22 | self.connection_pool = kwargs.pop('connection_pool') 23 | super(SentinelManagedConnection, self).__init__(**kwargs) 24 | 25 | def __repr__(self): 26 | pool = self.connection_pool 27 | s = '%s<service=%s%%s>' %
(type(self).__name__, pool.service_name) 28 | if self.host: 29 | host_info = ',host=%s,port=%s' % (self.host, self.port) 30 | s = s % host_info 31 | return s 32 | 33 | def connect_to(self, address): 34 | self.host, self.port = address 35 | super(SentinelManagedConnection, self).connect() 36 | if self.connection_pool.check_connection: 37 | self.send_command('PING') 38 | if nativestr(self.read_response()) != 'PONG': 39 | raise ConnectionError('PING failed') 40 | 41 | def connect(self): 42 | if self._sock: 43 | return # already connected 44 | if self.connection_pool.is_master: 45 | self.connect_to(self.connection_pool.get_master_address()) 46 | else: 47 | for slave in self.connection_pool.rotate_slaves(): 48 | try: 49 | return self.connect_to(slave) 50 | except ConnectionError: 51 | continue 52 | raise SlaveNotFoundError # should never be reached 53 | 54 | def read_response(self): 55 | try: 56 | return super(SentinelManagedConnection, self).read_response() 57 | except ReadOnlyError: 58 | if self.connection_pool.is_master: 59 | # When talking to a master, a ReadOnlyError likely 60 | # indicates that the previous master that we're still connected 61 | # to has been demoted to a slave and there's a new master. 62 | # Calling disconnect will force the connection to re-query 63 | # sentinel during the next connect() attempt. 64 | self.disconnect() 65 | raise ConnectionError('The previous master is now a slave') 66 | raise 67 | 68 | 69 | class SentinelConnectionPool(ConnectionPool): 70 | """ 71 | Sentinel-backed connection pool. 72 | 73 | If the ``check_connection`` flag is set to True, SentinelManagedConnection 74 | sends a PING command right after establishing the connection. 75 | """ 76 | 77 | def __init__(self, service_name, sentinel_manager, **kwargs): 78 | kwargs['connection_class'] = kwargs.get( 79 | 'connection_class', SentinelManagedConnection) 80 | self.is_master = kwargs.pop('is_master', True) 81 | self.check_connection = kwargs.pop('check_connection', False) 82 | super(SentinelConnectionPool, self).__init__(**kwargs) 83 | self.connection_kwargs['connection_pool'] = weakref.proxy(self) 84 | self.service_name = service_name 85 | self.sentinel_manager = sentinel_manager 86 | 87 | def __repr__(self): 88 | return "%s<service=%s(%s)>" % ( 89 | type(self).__name__, 90 | self.service_name, 91 | self.is_master and 'master' or 'slave', 92 | ) 93 | 94 | def reset(self): 95 | super(SentinelConnectionPool, self).reset() 96 | self.master_address = None 97 | self.slave_rr_counter = None 98 | 99 | def get_master_address(self): 100 | master_address = self.sentinel_manager.discover_master( 101 | self.service_name) 102 | if self.is_master: 103 | if self.master_address is None: 104 | self.master_address = master_address 105 | elif master_address != self.master_address: 106 | # Master address changed, disconnect all clients in this pool 107 | self.disconnect() 108 | return master_address 109 | 110 | def rotate_slaves(self): 111 | "Round-robin slave balancer" 112 | slaves = self.sentinel_manager.discover_slaves(self.service_name) 113 | if slaves: 114 | if self.slave_rr_counter is None: 115 | self.slave_rr_counter = random.randint(0, len(slaves) - 1) 116 | for _ in xrange(len(slaves)): 117 | self.slave_rr_counter = ( 118 | self.slave_rr_counter + 1) % len(slaves) 119 | slave = slaves[self.slave_rr_counter] 120 | yield slave 121 | # Fallback to the master connection 122 | try: 123 | yield self.get_master_address() 124 | except MasterNotFoundError: 125 | pass 126 | raise SlaveNotFoundError('No slave found for %r' % (self.service_name)) 127 | 128 | def _checkpid(self): 129 | if self.pid != os.getpid(): 130 | self.disconnect() 131 | self.reset() 132 | self.__init__(self.service_name, self.sentinel_manager, 133 | is_master=self.is_master, 134 | check_connection=self.check_connection, 135 | connection_class=self.connection_class, 136 | max_connections=self.max_connections, 137 | **self.connection_kwargs) 138 | 139 | 140 | class Sentinel(object): 141 | """ 142 | Redis Sentinel cluster client 143 | 144 | >>> from redis.sentinel import Sentinel 145 | >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1) 146 | >>> master = sentinel.master_for('mymaster', socket_timeout=0.1) 147 | >>> master.set('foo', 'bar') 148 | >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1) 149 | >>> slave.get('foo') 150 | 'bar' 151 | 152 | ``sentinels`` is a list of sentinel nodes. Each node is represented by 153 | a pair (hostname, port). 154 | 155 | ``min_other_sentinels`` defines a minimum number of peers for a sentinel. 156 | When querying a sentinel, if it doesn't meet this threshold, responses 157 | from that sentinel won't be considered valid. 158 | 159 | ``sentinel_kwargs`` is a dictionary of connection arguments used when 160 | connecting to sentinel instances. Any argument that can be passed to 161 | a normal Redis connection can be specified here. If ``sentinel_kwargs`` is 162 | not specified, any socket_timeout and socket_keepalive options specified 163 | in ``connection_kwargs`` will be used. 164 | 165 | ``connection_kwargs`` are keyword arguments that will be used when 166 | establishing a connection to a Redis server.
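    For example, to use a short timeout when talking to the sentinels while
    keeping a longer timeout for the Redis servers themselves (values are
    illustrative):

    >>> sentinel = Sentinel([('localhost', 26379)],
    ...                     sentinel_kwargs={'socket_timeout': 0.1},
    ...                     socket_timeout=1.0)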
167 | """ 168 | 169 | def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None, 170 | **connection_kwargs): 171 | # if sentinel_kwargs isn't defined, use the socket_* options from 172 | # connection_kwargs 173 | if sentinel_kwargs is None: 174 | sentinel_kwargs = dict([(k, v) 175 | for k, v in iteritems(connection_kwargs) 176 | if k.startswith('socket_') 177 | ]) 178 | self.sentinel_kwargs = sentinel_kwargs 179 | 180 | self.sentinels = [StrictRedis(hostname, port, **self.sentinel_kwargs) 181 | for hostname, port in sentinels] 182 | self.min_other_sentinels = min_other_sentinels 183 | self.connection_kwargs = connection_kwargs 184 | 185 | def __repr__(self): 186 | sentinel_addresses = [] 187 | for sentinel in self.sentinels: 188 | sentinel_addresses.append('%s:%s' % ( 189 | sentinel.connection_pool.connection_kwargs['host'], 190 | sentinel.connection_pool.connection_kwargs['port'], 191 | )) 192 | return '%s<sentinels=[%s]>' % ( 193 | type(self).__name__, 194 | ','.join(sentinel_addresses)) 195 | 196 | def check_master_state(self, state, service_name): 197 | if not state['is_master'] or state['is_sdown'] or state['is_odown']: 198 | return False 199 | # Check that this sentinel sees enough other sentinels 200 | if state['num-other-sentinels'] < self.min_other_sentinels: 201 | return False 202 | return True 203 | 204 | def discover_master(self, service_name): 205 | """ 206 | Asks sentinel servers for the Redis master's address corresponding 207 | to the service labeled ``service_name``. 208 | 209 | Returns a pair (address, port) or raises MasterNotFoundError if no 210 | master is found. 211 | """ 212 | for sentinel_no, sentinel in enumerate(self.sentinels): 213 | try: 214 | masters = sentinel.sentinel_masters() 215 | except (ConnectionError, TimeoutError): 216 | continue 217 | state = masters.get(service_name) 218 | if state and self.check_master_state(state, service_name): 219 | # Put this sentinel at the top of the list 220 | self.sentinels[0], self.sentinels[sentinel_no] = ( 221 | sentinel, self.sentinels[0]) 222 | return state['ip'], state['port'] 223 | raise MasterNotFoundError("No master found for %r" % (service_name,)) 224 | 225 | def filter_slaves(self, slaves): 226 | "Remove slaves that are in an ODOWN or SDOWN state" 227 | slaves_alive = [] 228 | for slave in slaves: 229 | if slave['is_odown'] or slave['is_sdown']: 230 | continue 231 | slaves_alive.append((slave['ip'], slave['port'])) 232 | return slaves_alive 233 | 234 | def discover_slaves(self, service_name): 235 | "Returns a list of alive slaves for service ``service_name``" 236 | for sentinel in self.sentinels: 237 | try: 238 | slaves = sentinel.sentinel_slaves(service_name) 239 | except (ConnectionError, ResponseError, TimeoutError): 240 | continue 241 | slaves = self.filter_slaves(slaves) 242 | if slaves: 243 | return slaves 244 | return [] 245 | 246 | def master_for(self, service_name, redis_class=StrictRedis, 247 | connection_pool_class=SentinelConnectionPool, **kwargs): 248 | """ 249 | Returns a redis client instance for the ``service_name`` master. 250 | 251 | A SentinelConnectionPool class is used to retrieve the master's 252 | address before establishing a new connection. 253 | 254 | NOTE: If the master's address has changed, any cached connections to 255 | the old master are closed. 256 | 257 | By default, clients will be a redis.StrictRedis instance. Specify a 258 | different class to the ``redis_class`` argument if you desire 259 | something different.
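        For example (values illustrative; any connection keyword arguments
        accepted by the client can be passed here):

            master = sentinel.master_for('mymaster', db=0, socket_timeout=0.5)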
260 | 261 | The ``connection_pool_class`` specifies the connection pool to use. 262 | The SentinelConnectionPool will be used by default. 263 | 264 | All other keyword arguments are merged with any connection_kwargs 265 | passed to this class and passed to the connection pool as keyword 266 | arguments to be used to initialize Redis connections. 267 | """ 268 | kwargs['is_master'] = True 269 | connection_kwargs = dict(self.connection_kwargs) 270 | connection_kwargs.update(kwargs) 271 | return redis_class(connection_pool=connection_pool_class( 272 | service_name, self, **connection_kwargs)) 273 | 274 | def slave_for(self, service_name, redis_class=StrictRedis, 275 | connection_pool_class=SentinelConnectionPool, **kwargs): 276 | """ 277 | Returns a redis client instance for the ``service_name`` slave(s). 278 | 279 | A SentinelConnectionPool class is used to retrieve the slave's 280 | address before establishing a new connection. 281 | 282 | By default, clients will be a redis.StrictRedis instance. Specify a 283 | different class to the ``redis_class`` argument if you desire 284 | something different. 285 | 286 | The ``connection_pool_class`` specifies the connection pool to use. 287 | The SentinelConnectionPool will be used by default. 288 | 289 | All other keyword arguments are merged with any connection_kwargs 290 | passed to this class and passed to the connection pool as keyword 291 | arguments to be used to initialize Redis connections. 292 | """ 293 | kwargs['is_master'] = False 294 | connection_kwargs = dict(self.connection_kwargs) 295 | connection_kwargs.update(kwargs) 296 | return redis_class(connection_pool=connection_pool_class( 297 | service_name, self, **connection_kwargs)) 298 | -------------------------------------------------------------------------------- /tests/test_pubsub.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import pytest 3 | import time 4 | 5 | import redis 6 | from redis.exceptions import ConnectionError 7 | from redis._compat import basestring, u, unichr, b 8 | 9 | from .conftest import r as _redis_client 10 | from .conftest import skip_if_server_version_lt 11 | 12 | 13 | def wait_for_message(pubsub, timeout=0.1, ignore_subscribe_messages=False): 14 | now = time.time() 15 | timeout = now + timeout 16 | while now < timeout: 17 | message = pubsub.get_message( 18 | ignore_subscribe_messages=ignore_subscribe_messages) 19 | if message is not None: 20 | return message 21 | time.sleep(0.01) 22 | now = time.time() 23 | return None 24 | 25 | 26 | def make_message(type, channel, data, pattern=None): 27 | return { 28 | 'type': type, 29 | 'pattern': pattern and pattern.encode('utf-8') or None, 30 | 'channel': channel.encode('utf-8'), 31 | 'data': data.encode('utf-8') if isinstance(data, basestring) else data 32 | } 33 | 34 | 35 | def make_subscribe_test_data(pubsub, type): 36 | if type == 'channel': 37 | return { 38 | 'p': pubsub, 39 | 'sub_type': 'subscribe', 40 | 'unsub_type': 'unsubscribe', 41 | 'sub_func': pubsub.subscribe, 42 | 'unsub_func': pubsub.unsubscribe, 43 | 'keys': ['foo', 'bar', u('uni') + unichr(4456) + u('code')] 44 | } 45 | elif type == 'pattern': 46 | return { 47 | 'p': pubsub, 48 | 'sub_type': 'psubscribe', 49 | 'unsub_type': 'punsubscribe', 50 | 'sub_func': pubsub.psubscribe, 51 | 'unsub_func': pubsub.punsubscribe, 52 | 'keys': ['f*', 'b*', u('uni') + unichr(4456) + u('*')] 53 | } 54 | assert False, 'invalid subscribe type: %s' % type 55 | 56 | 57 | class
TestPubSubSubscribeUnsubscribe(object): 58 | 59 | def _test_subscribe_unsubscribe(self, p, sub_type, unsub_type, sub_func, 60 | unsub_func, keys): 61 | for key in keys: 62 | assert sub_func(key) is None 63 | 64 | # should be a message for each channel/pattern we just subscribed to 65 | for i, key in enumerate(keys): 66 | assert wait_for_message(p) == make_message(sub_type, key, i + 1) 67 | 68 | for key in keys: 69 | assert unsub_func(key) is None 70 | 71 | # should be a message for each channel/pattern we just unsubscribed 72 | # from 73 | for i, key in enumerate(keys): 74 | i = len(keys) - 1 - i 75 | assert wait_for_message(p) == make_message(unsub_type, key, i) 76 | 77 | def test_channel_subscribe_unsubscribe(self, r): 78 | kwargs = make_subscribe_test_data(r.pubsub(), 'channel') 79 | self._test_subscribe_unsubscribe(**kwargs) 80 | 81 | def test_pattern_subscribe_unsubscribe(self, r): 82 | kwargs = make_subscribe_test_data(r.pubsub(), 'pattern') 83 | self._test_subscribe_unsubscribe(**kwargs) 84 | 85 | def _test_resubscribe_on_reconnection(self, p, sub_type, unsub_type, 86 | sub_func, unsub_func, keys): 87 | 88 | for key in keys: 89 | assert sub_func(key) is None 90 | 91 | # should be a message for each channel/pattern we just subscribed to 92 | for i, key in enumerate(keys): 93 | assert wait_for_message(p) == make_message(sub_type, key, i + 1) 94 | 95 | # manually disconnect 96 | p.connection.disconnect() 97 | 98 | # calling get_message again reconnects and resubscribes 99 | # note, we may not re-subscribe to channels in exactly the same order 100 | # so we have to do some extra checks to make sure we got them all 101 | messages = [] 102 | for i in range(len(keys)): 103 | messages.append(wait_for_message(p)) 104 | 105 | unique_channels = set() 106 | assert len(messages) == len(keys) 107 | for i, message in enumerate(messages): 108 | assert message['type'] == sub_type 109 | assert message['data'] == i + 1 110 | assert isinstance(message['channel'], bytes) 111 | channel = message['channel'].decode('utf-8') 112 | unique_channels.add(channel) 113 | 114 | assert len(unique_channels) == len(keys) 115 | for channel in unique_channels: 116 | assert channel in keys 117 | 118 | def test_resubscribe_to_channels_on_reconnection(self, r): 119 | kwargs = make_subscribe_test_data(r.pubsub(), 'channel') 120 | self._test_resubscribe_on_reconnection(**kwargs) 121 | 122 | def test_resubscribe_to_patterns_on_reconnection(self, r): 123 | kwargs = make_subscribe_test_data(r.pubsub(), 'pattern') 124 | self._test_resubscribe_on_reconnection(**kwargs) 125 | 126 | def _test_subscribed_property(self, p, sub_type, unsub_type, sub_func, 127 | unsub_func, keys): 128 | 129 | assert p.subscribed is False 130 | sub_func(keys[0]) 131 | # we're now subscribed even though we haven't processed the 132 | # reply from the server just yet 133 | assert p.subscribed is True 134 | assert wait_for_message(p) == make_message(sub_type, keys[0], 1) 135 | # we're still subscribed 136 | assert p.subscribed is True 137 | 138 | # unsubscribe from all channels 139 | unsub_func() 140 | # we're still technically subscribed until we process the 141 | # response messages from the server 142 | assert p.subscribed is True 143 | assert wait_for_message(p) == make_message(unsub_type, keys[0], 0) 144 | # now we're no longer subscribed as no more messages can be delivered 145 | # to any channels we were listening to 146 | assert p.subscribed is False 147 | 148 | # subscribing again flips the flag back 149 | sub_func(keys[0]) 150 | assert 
p.subscribed is True 151 | assert wait_for_message(p) == make_message(sub_type, keys[0], 1) 152 | 153 | # unsubscribe again 154 | unsub_func() 155 | assert p.subscribed is True 156 | # subscribe to another channel before reading the unsubscribe response 157 | sub_func(keys[1]) 158 | assert p.subscribed is True 159 | # read the unsubscribe for key1 160 | assert wait_for_message(p) == make_message(unsub_type, keys[0], 0) 161 | # we're still subscribed to key2, so subscribed should still be True 162 | assert p.subscribed is True 163 | # read the key2 subscribe message 164 | assert wait_for_message(p) == make_message(sub_type, keys[1], 1) 165 | unsub_func() 166 | # haven't read the message yet, so we're still subscribed 167 | assert p.subscribed is True 168 | assert wait_for_message(p) == make_message(unsub_type, keys[1], 0) 169 | # now we're finally unsubscribed 170 | assert p.subscribed is False 171 | 172 | def test_subscribe_property_with_channels(self, r): 173 | kwargs = make_subscribe_test_data(r.pubsub(), 'channel') 174 | self._test_subscribed_property(**kwargs) 175 | 176 | def test_subscribe_property_with_patterns(self, r): 177 | kwargs = make_subscribe_test_data(r.pubsub(), 'pattern') 178 | self._test_subscribed_property(**kwargs) 179 | 180 | def test_ignore_all_subscribe_messages(self, r): 181 | p = r.pubsub(ignore_subscribe_messages=True) 182 | 183 | checks = ( 184 | (p.subscribe, 'foo'), 185 | (p.unsubscribe, 'foo'), 186 | (p.psubscribe, 'f*'), 187 | (p.punsubscribe, 'f*'), 188 | ) 189 | 190 | assert p.subscribed is False 191 | for func, channel in checks: 192 | assert func(channel) is None 193 | assert p.subscribed is True 194 | assert wait_for_message(p) is None 195 | assert p.subscribed is False 196 | 197 | def test_ignore_individual_subscribe_messages(self, r): 198 | p = r.pubsub() 199 | 200 | checks = ( 201 | (p.subscribe, 'foo'), 202 | (p.unsubscribe, 'foo'), 203 | (p.psubscribe, 'f*'), 204 | (p.punsubscribe, 'f*'), 205 | ) 206 | 207 | assert p.subscribed is False 208 | for func, channel in checks: 209 | assert func(channel) is None 210 | assert p.subscribed is True 211 | message = wait_for_message(p, ignore_subscribe_messages=True) 212 | assert message is None 213 | assert p.subscribed is False 214 | 215 | 216 | class TestPubSubMessages(object): 217 | def setup_method(self, method): 218 | self.message = None 219 | 220 | def message_handler(self, message): 221 | self.message = message 222 | 223 | def test_published_message_to_channel(self, r): 224 | p = r.pubsub(ignore_subscribe_messages=True) 225 | p.subscribe('foo') 226 | assert r.publish('foo', 'test message') == 1 227 | 228 | message = wait_for_message(p) 229 | assert isinstance(message, dict) 230 | assert message == make_message('message', 'foo', 'test message') 231 | 232 | def test_published_message_to_pattern(self, r): 233 | p = r.pubsub(ignore_subscribe_messages=True) 234 | p.subscribe('foo') 235 | p.psubscribe('f*') 236 | # 1 to pattern, 1 to channel 237 | assert r.publish('foo', 'test message') == 2 238 | 239 | message1 = wait_for_message(p) 240 | message2 = wait_for_message(p) 241 | assert isinstance(message1, dict) 242 | assert isinstance(message2, dict) 243 | 244 | expected = [ 245 | make_message('message', 'foo', 'test message'), 246 | make_message('pmessage', 'foo', 'test message', pattern='f*') 247 | ] 248 | 249 | assert message1 in expected 250 | assert message2 in expected 251 | assert message1 != message2 252 | 253 | def test_channel_message_handler(self, r): 254 | p = 
r.pubsub(ignore_subscribe_messages=True) 255 | p.subscribe(foo=self.message_handler) 256 | assert r.publish('foo', 'test message') == 1 257 | assert wait_for_message(p) is None 258 | assert self.message == make_message('message', 'foo', 'test message') 259 | 260 | def test_pattern_message_handler(self, r): 261 | p = r.pubsub(ignore_subscribe_messages=True) 262 | p.psubscribe(**{'f*': self.message_handler}) 263 | assert r.publish('foo', 'test message') == 1 264 | assert wait_for_message(p) is None 265 | assert self.message == make_message('pmessage', 'foo', 'test message', 266 | pattern='f*') 267 | 268 | def test_unicode_channel_message_handler(self, r): 269 | p = r.pubsub(ignore_subscribe_messages=True) 270 | channel = u('uni') + unichr(4456) + u('code') 271 | channels = {channel: self.message_handler} 272 | p.subscribe(**channels) 273 | assert r.publish(channel, 'test message') == 1 274 | assert wait_for_message(p) is None 275 | assert self.message == make_message('message', channel, 'test message') 276 | 277 | def test_unicode_pattern_message_handler(self, r): 278 | p = r.pubsub(ignore_subscribe_messages=True) 279 | pattern = u('uni') + unichr(4456) + u('*') 280 | channel = u('uni') + unichr(4456) + u('code') 281 | p.psubscribe(**{pattern: self.message_handler}) 282 | assert r.publish(channel, 'test message') == 1 283 | assert wait_for_message(p) is None 284 | assert self.message == make_message('pmessage', channel, 285 | 'test message', pattern=pattern) 286 | 287 | def test_get_message_without_subscribe(self, r): 288 | p = r.pubsub() 289 | with pytest.raises(RuntimeError) as info: 290 | p.get_message() 291 | expect = ('connection not set: ' 292 | 'did you forget to call subscribe() or psubscribe()?') 293 | assert expect in info.exconly() 294 | 295 | 296 | class TestPubSubAutoDecoding(object): 297 | "These tests only validate that we get unicode values back" 298 | 299 | channel = u('uni') + unichr(4456) + u('code') 300 | pattern = u('uni') + unichr(4456) + u('*') 301 | data = u('abc') + unichr(4458) + u('123') 302 | 303 | def make_message(self, type, channel, data, pattern=None): 304 | return { 305 | 'type': type, 306 | 'channel': channel, 307 | 'pattern': pattern, 308 | 'data': data 309 | } 310 | 311 | def setup_method(self, method): 312 | self.message = None 313 | 314 | def message_handler(self, message): 315 | self.message = message 316 | 317 | @pytest.fixture() 318 | def r(self, request): 319 | return _redis_client(request=request, decode_responses=True) 320 | 321 | def test_channel_subscribe_unsubscribe(self, r): 322 | p = r.pubsub() 323 | p.subscribe(self.channel) 324 | assert wait_for_message(p) == self.make_message('subscribe', 325 | self.channel, 1) 326 | 327 | p.unsubscribe(self.channel) 328 | assert wait_for_message(p) == self.make_message('unsubscribe', 329 | self.channel, 0) 330 | 331 | def test_pattern_subscribe_unsubscribe(self, r): 332 | p = r.pubsub() 333 | p.psubscribe(self.pattern) 334 | assert wait_for_message(p) == self.make_message('psubscribe', 335 | self.pattern, 1) 336 | 337 | p.punsubscribe(self.pattern) 338 | assert wait_for_message(p) == self.make_message('punsubscribe', 339 | self.pattern, 0) 340 | 341 | def test_channel_publish(self, r): 342 | p = r.pubsub(ignore_subscribe_messages=True) 343 | p.subscribe(self.channel) 344 | r.publish(self.channel, self.data) 345 | assert wait_for_message(p) == self.make_message('message', 346 | self.channel, 347 | self.data) 348 | 349 | def test_pattern_publish(self, r): 350 | p = r.pubsub(ignore_subscribe_messages=True) 
351 | p.psubscribe(self.pattern) 352 | r.publish(self.channel, self.data) 353 | assert wait_for_message(p) == self.make_message('pmessage', 354 | self.channel, 355 | self.data, 356 | pattern=self.pattern) 357 | 358 | def test_channel_message_handler(self, r): 359 | p = r.pubsub(ignore_subscribe_messages=True) 360 | p.subscribe(**{self.channel: self.message_handler}) 361 | r.publish(self.channel, self.data) 362 | assert wait_for_message(p) is None 363 | assert self.message == self.make_message('message', self.channel, 364 | self.data) 365 | 366 | # test that we reconnected to the correct channel 367 | p.connection.disconnect() 368 | assert wait_for_message(p) is None # should reconnect 369 | new_data = self.data + u('new data') 370 | r.publish(self.channel, new_data) 371 | assert wait_for_message(p) is None 372 | assert self.message == self.make_message('message', self.channel, 373 | new_data) 374 | 375 | def test_pattern_message_handler(self, r): 376 | p = r.pubsub(ignore_subscribe_messages=True) 377 | p.psubscribe(**{self.pattern: self.message_handler}) 378 | r.publish(self.channel, self.data) 379 | assert wait_for_message(p) is None 380 | assert self.message == self.make_message('pmessage', self.channel, 381 | self.data, 382 | pattern=self.pattern) 383 | 384 | # test that we reconnected to the correct pattern 385 | p.connection.disconnect() 386 | assert wait_for_message(p) is None # should reconnect 387 | new_data = self.data + u('new data') 388 | r.publish(self.channel, new_data) 389 | assert wait_for_message(p) is None 390 | assert self.message == self.make_message('pmessage', self.channel, 391 | new_data, 392 | pattern=self.pattern) 393 | 394 | 395 | class TestPubSubRedisDown(object): 396 | 397 | def test_channel_subscribe(self, r): 398 | r = redis.Redis(host='localhost', port=6390) 399 | p = r.pubsub() 400 | with pytest.raises(ConnectionError): 401 | p.subscribe('foo') 402 | 403 | 404 | class TestPubSubPubSubSubcommands(object): 405 | 406 | @skip_if_server_version_lt('2.8.0') 407 | def test_pubsub_channels(self, r): 408 | p = r.pubsub(ignore_subscribe_messages=True) 409 | p.subscribe('foo', 'bar', 'baz', 'quux') 410 | channels = sorted(r.pubsub_channels()) 411 | assert channels == [b('bar'), b('baz'), b('foo'), b('quux')] 412 | 413 | @skip_if_server_version_lt('2.8.0') 414 | def test_pubsub_numsub(self, r): 415 | p1 = r.pubsub(ignore_subscribe_messages=True) 416 | p1.subscribe('foo', 'bar', 'baz') 417 | p2 = r.pubsub(ignore_subscribe_messages=True) 418 | p2.subscribe('bar', 'baz') 419 | p3 = r.pubsub(ignore_subscribe_messages=True) 420 | p3.subscribe('baz') 421 | 422 | channels = [(b('foo'), 1), (b('bar'), 2), (b('baz'), 3)] 423 | assert channels == r.pubsub_numsub('foo', 'bar', 'baz') 424 | 425 | @skip_if_server_version_lt('2.8.0') 426 | def test_pubsub_numpat(self, r): 427 | p = r.pubsub(ignore_subscribe_messages=True) 428 | p.psubscribe('*oo', '*ar', 'b*z') 429 | assert r.pubsub_numpat() == 3 430 | -------------------------------------------------------------------------------- /tests/test_connection_pool.py: -------------------------------------------------------------------------------- 1 | from __future__ import with_statement 2 | import os 3 | import pytest 4 | import redis 5 | import time 6 | import re 7 | 8 | from threading import Thread 9 | from redis.connection import ssl_available, to_bool 10 | from .conftest import skip_if_server_version_lt 11 | 12 | 13 | class DummyConnection(object): 14 | description_format = "DummyConnection<>" 15 | 16 | def __init__(self, **kwargs): 
17 | self.kwargs = kwargs 18 | self.pid = os.getpid() 19 | 20 | 21 | class TestConnectionPool(object): 22 | def get_pool(self, connection_kwargs=None, max_connections=None, 23 | connection_class=DummyConnection): 24 | connection_kwargs = connection_kwargs or {} 25 | pool = redis.ConnectionPool( 26 | connection_class=connection_class, 27 | max_connections=max_connections, 28 | **connection_kwargs) 29 | return pool 30 | 31 | def test_connection_creation(self): 32 | connection_kwargs = {'foo': 'bar', 'biz': 'baz'} 33 | pool = self.get_pool(connection_kwargs=connection_kwargs) 34 | connection = pool.get_connection('_') 35 | assert isinstance(connection, DummyConnection) 36 | assert connection.kwargs == connection_kwargs 37 | 38 | def test_multiple_connections(self): 39 | pool = self.get_pool() 40 | c1 = pool.get_connection('_') 41 | c2 = pool.get_connection('_') 42 | assert c1 != c2 43 | 44 | def test_max_connections(self): 45 | pool = self.get_pool(max_connections=2) 46 | pool.get_connection('_') 47 | pool.get_connection('_') 48 | with pytest.raises(redis.ConnectionError): 49 | pool.get_connection('_') 50 | 51 | def test_reuse_previously_released_connection(self): 52 | pool = self.get_pool() 53 | c1 = pool.get_connection('_') 54 | pool.release(c1) 55 | c2 = pool.get_connection('_') 56 | assert c1 == c2 57 | 58 | def test_repr_contains_db_info_tcp(self): 59 | connection_kwargs = {'host': 'localhost', 'port': 6379, 'db': 1} 60 | pool = self.get_pool(connection_kwargs=connection_kwargs, 61 | connection_class=redis.Connection) 62 | expected = 'ConnectionPool<Connection<host=localhost,port=6379,db=1>>' 63 | assert repr(pool) == expected 64 | 65 | def test_repr_contains_db_info_unix(self): 66 | connection_kwargs = {'path': '/abc', 'db': 1} 67 | pool = self.get_pool(connection_kwargs=connection_kwargs, 68 | connection_class=redis.UnixDomainSocketConnection) 69 | expected = 'ConnectionPool<UnixDomainSocketConnection<path=/abc,db=1>>' 70 | assert repr(pool) == expected 71 | 72 | 73 | class TestBlockingConnectionPool(object): 74 | def get_pool(self, connection_kwargs=None, max_connections=10, timeout=20): 75 | connection_kwargs = connection_kwargs or {} 76 | pool = redis.BlockingConnectionPool(connection_class=DummyConnection, 77 | max_connections=max_connections, 78 | timeout=timeout, 79 | **connection_kwargs) 80 | return pool 81 | 82 | def test_connection_creation(self): 83 | connection_kwargs = {'foo': 'bar', 'biz': 'baz'} 84 | pool = self.get_pool(connection_kwargs=connection_kwargs) 85 | connection = pool.get_connection('_') 86 | assert isinstance(connection, DummyConnection) 87 | assert connection.kwargs == connection_kwargs 88 | 89 | def test_multiple_connections(self): 90 | pool = self.get_pool() 91 | c1 = pool.get_connection('_') 92 | c2 = pool.get_connection('_') 93 | assert c1 != c2 94 | 95 | def test_connection_pool_blocks_until_timeout(self): 96 | "When out of connections, block for timeout seconds, then raise" 97 | pool = self.get_pool(max_connections=1, timeout=0.1) 98 | pool.get_connection('_') 99 | 100 | start = time.time() 101 | with pytest.raises(redis.ConnectionError): 102 | pool.get_connection('_') 103 | # we should have waited at least 0.1 seconds 104 | assert time.time() - start >= 0.1 105 | 106 | def connection_pool_blocks_until_another_connection_released(self): 107 | """ 108 | When out of connections, block until another connection is released 109 | to the pool 110 | """ 111 | pool = self.get_pool(max_connections=1, timeout=2) 112 | c1 = pool.get_connection('_') 113 | 114 | def target(): 115 | time.sleep(0.1) 116 | pool.release(c1) 117 | 118 |
Thread(target=target).start() 119 | start = time.time() 120 | pool.get_connection('_') 121 | assert time.time() - start >= 0.1 122 | 123 | def test_reuse_previously_released_connection(self): 124 | pool = self.get_pool() 125 | c1 = pool.get_connection('_') 126 | pool.release(c1) 127 | c2 = pool.get_connection('_') 128 | assert c1 == c2 129 | 130 | def test_repr_contains_db_info_tcp(self): 131 | pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 132 | expected = 'ConnectionPool<Connection<host=localhost,port=6379,db=0>>' 133 | assert repr(pool) == expected 134 | 135 | def test_repr_contains_db_info_unix(self): 136 | pool = redis.ConnectionPool( 137 | connection_class=redis.UnixDomainSocketConnection, 138 | path='abc', 139 | db=0, 140 | ) 141 | expected = 'ConnectionPool<UnixDomainSocketConnection<path=abc,db=0>>' 142 | assert repr(pool) == expected 143 | 144 | 145 | class TestConnectionPoolURLParsing(object): 146 | def test_defaults(self): 147 | pool = redis.ConnectionPool.from_url('redis://localhost') 148 | assert pool.connection_class == redis.Connection 149 | assert pool.connection_kwargs == { 150 | 'host': 'localhost', 151 | 'port': 6379, 152 | 'db': 0, 153 | 'password': None, 154 | } 155 | 156 | def test_hostname(self): 157 | pool = redis.ConnectionPool.from_url('redis://myhost') 158 | assert pool.connection_class == redis.Connection 159 | assert pool.connection_kwargs == { 160 | 'host': 'myhost', 161 | 'port': 6379, 162 | 'db': 0, 163 | 'password': None, 164 | } 165 | 166 | def test_quoted_hostname(self): 167 | pool = redis.ConnectionPool.from_url('redis://my %2F host %2B%3D+', 168 | decode_components=True) 169 | assert pool.connection_class == redis.Connection 170 | assert pool.connection_kwargs == { 171 | 'host': 'my / host +=+', 172 | 'port': 6379, 173 | 'db': 0, 174 | 'password': None, 175 | } 176 | 177 | def test_port(self): 178 | pool = redis.ConnectionPool.from_url('redis://localhost:6380') 179 | assert pool.connection_class == redis.Connection 180 | assert pool.connection_kwargs == { 181 | 'host': 'localhost', 182 | 'port': 6380, 183 | 'db': 0, 184 | 'password': None, 185 | } 186 | 187 | def test_password(self): 188 | pool = redis.ConnectionPool.from_url('redis://:mypassword@localhost') 189 | assert pool.connection_class == redis.Connection 190 | assert pool.connection_kwargs == { 191 | 'host': 'localhost', 192 | 'port': 6379, 193 | 'db': 0, 194 | 'password': 'mypassword', 195 | } 196 | 197 | def test_quoted_password(self): 198 | pool = redis.ConnectionPool.from_url( 199 | 'redis://:%2Fmypass%2F%2B word%3D%24+@localhost', 200 | decode_components=True) 201 | assert pool.connection_class == redis.Connection 202 | assert pool.connection_kwargs == { 203 | 'host': 'localhost', 204 | 'port': 6379, 205 | 'db': 0, 206 | 'password': '/mypass/+ word=$+', 207 | } 208 | 209 | def test_db_as_argument(self): 210 | pool = redis.ConnectionPool.from_url('redis://localhost', db='1') 211 | assert pool.connection_class == redis.Connection 212 | assert pool.connection_kwargs == { 213 | 'host': 'localhost', 214 | 'port': 6379, 215 | 'db': 1, 216 | 'password': None, 217 | } 218 | 219 | def test_db_in_path(self): 220 | pool = redis.ConnectionPool.from_url('redis://localhost/2', db='1') 221 | assert pool.connection_class == redis.Connection 222 | assert pool.connection_kwargs == { 223 | 'host': 'localhost', 224 | 'port': 6379, 225 | 'db': 2, 226 | 'password': None, 227 | } 228 | 229 | def test_db_in_querystring(self): 230 | pool = redis.ConnectionPool.from_url('redis://localhost/2?db=3', 231 | db='1') 232 | assert pool.connection_class == redis.Connection 233 | assert
pool.connection_kwargs == { 234 | 'host': 'localhost', 235 | 'port': 6379, 236 | 'db': 3, 237 | 'password': None, 238 | } 239 | 240 | def test_extra_typed_querystring_options(self): 241 | pool = redis.ConnectionPool.from_url( 242 | 'redis://localhost/2?socket_timeout=20&socket_connect_timeout=10' 243 | '&socket_keepalive=&retry_on_timeout=Yes' 244 | ) 245 | 246 | assert pool.connection_class == redis.Connection 247 | assert pool.connection_kwargs == { 248 | 'host': 'localhost', 249 | 'port': 6379, 250 | 'db': 2, 251 | 'socket_timeout': 20.0, 252 | 'socket_connect_timeout': 10.0, 253 | 'retry_on_timeout': True, 254 | 'password': None, 255 | } 256 | 257 | def test_boolean_parsing(self): 258 | for expected, value in ( 259 | (None, None), 260 | (None, ''), 261 | (False, 0), (False, '0'), 262 | (False, 'f'), (False, 'F'), (False, 'False'), 263 | (False, 'n'), (False, 'N'), (False, 'No'), 264 | (True, 1), (True, '1'), 265 | (True, 'y'), (True, 'Y'), (True, 'Yes'), 266 | ): 267 | assert expected is to_bool(value) 268 | 269 | def test_invalid_extra_typed_querystring_options(self): 270 | import warnings 271 | with warnings.catch_warnings(record=True) as warning_log: 272 | redis.ConnectionPool.from_url( 273 | 'redis://localhost/2?socket_timeout=_&' 274 | 'socket_connect_timeout=abc' 275 | ) 276 | # Compare the message values 277 | assert [ 278 | str(m.message) for m in 279 | sorted(warning_log, key=lambda l: str(l.message)) 280 | ] == [ 281 | 'Invalid value for `socket_connect_timeout` in connection URL.', 282 | 'Invalid value for `socket_timeout` in connection URL.', 283 | ] 284 | 285 | def test_extra_querystring_options(self): 286 | pool = redis.ConnectionPool.from_url('redis://localhost?a=1&b=2') 287 | assert pool.connection_class == redis.Connection 288 | assert pool.connection_kwargs == { 289 | 'host': 'localhost', 290 | 'port': 6379, 291 | 'db': 0, 292 | 'password': None, 293 | 'a': '1', 294 | 'b': '2' 295 | } 296 | 297 | def test_calling_from_subclass_returns_correct_instance(self): 298 | pool = redis.BlockingConnectionPool.from_url('redis://localhost') 299 | assert isinstance(pool, redis.BlockingConnectionPool) 300 | 301 | def test_client_creates_connection_pool(self): 302 | r = redis.StrictRedis.from_url('redis://myhost') 303 | assert r.connection_pool.connection_class == redis.Connection 304 | assert r.connection_pool.connection_kwargs == { 305 | 'host': 'myhost', 306 | 'port': 6379, 307 | 'db': 0, 308 | 'password': None, 309 | } 310 | 311 | 312 | class TestConnectionPoolUnixSocketURLParsing(object): 313 | def test_defaults(self): 314 | pool = redis.ConnectionPool.from_url('unix:///socket') 315 | assert pool.connection_class == redis.UnixDomainSocketConnection 316 | assert pool.connection_kwargs == { 317 | 'path': '/socket', 318 | 'db': 0, 319 | 'password': None, 320 | } 321 | 322 | def test_password(self): 323 | pool = redis.ConnectionPool.from_url('unix://:mypassword@/socket') 324 | assert pool.connection_class == redis.UnixDomainSocketConnection 325 | assert pool.connection_kwargs == { 326 | 'path': '/socket', 327 | 'db': 0, 328 | 'password': 'mypassword', 329 | } 330 | 331 | def test_quoted_password(self): 332 | pool = redis.ConnectionPool.from_url( 333 | 'unix://:%2Fmypass%2F%2B word%3D%24+@/socket', 334 | decode_components=True) 335 | assert pool.connection_class == redis.UnixDomainSocketConnection 336 | assert pool.connection_kwargs == { 337 | 'path': '/socket', 338 | 'db': 0, 339 | 'password': '/mypass/+ word=$+', 340 | } 341 | 342 | def test_quoted_path(self): 343 | pool = 
redis.ConnectionPool.from_url( 344 | 'unix://:mypassword@/my%2Fpath%2Fto%2F..%2F+_%2B%3D%24ocket', 345 | decode_components=True) 346 | assert pool.connection_class == redis.UnixDomainSocketConnection 347 | assert pool.connection_kwargs == { 348 | 'path': '/my/path/to/../+_+=$ocket', 349 | 'db': 0, 350 | 'password': 'mypassword', 351 | } 352 | 353 | def test_db_as_argument(self): 354 | pool = redis.ConnectionPool.from_url('unix:///socket', db=1) 355 | assert pool.connection_class == redis.UnixDomainSocketConnection 356 | assert pool.connection_kwargs == { 357 | 'path': '/socket', 358 | 'db': 1, 359 | 'password': None, 360 | } 361 | 362 | def test_db_in_querystring(self): 363 | pool = redis.ConnectionPool.from_url('unix:///socket?db=2', db=1) 364 | assert pool.connection_class == redis.UnixDomainSocketConnection 365 | assert pool.connection_kwargs == { 366 | 'path': '/socket', 367 | 'db': 2, 368 | 'password': None, 369 | } 370 | 371 | def test_extra_querystring_options(self): 372 | pool = redis.ConnectionPool.from_url('unix:///socket?a=1&b=2') 373 | assert pool.connection_class == redis.UnixDomainSocketConnection 374 | assert pool.connection_kwargs == { 375 | 'path': '/socket', 376 | 'db': 0, 377 | 'password': None, 378 | 'a': '1', 379 | 'b': '2' 380 | } 381 | 382 | 383 | class TestSSLConnectionURLParsing(object): 384 | @pytest.mark.skipif(not ssl_available, reason="SSL not installed") 385 | def test_defaults(self): 386 | pool = redis.ConnectionPool.from_url('rediss://localhost') 387 | assert pool.connection_class == redis.SSLConnection 388 | assert pool.connection_kwargs == { 389 | 'host': 'localhost', 390 | 'port': 6379, 391 | 'db': 0, 392 | 'password': None, 393 | } 394 | 395 | @pytest.mark.skipif(not ssl_available, reason="SSL not installed") 396 | def test_cert_reqs_options(self): 397 | import ssl 398 | pool = redis.ConnectionPool.from_url('rediss://?ssl_cert_reqs=none') 399 | assert pool.get_connection('_').cert_reqs == ssl.CERT_NONE 400 | 401 | pool = redis.ConnectionPool.from_url( 402 | 'rediss://?ssl_cert_reqs=optional') 403 | assert pool.get_connection('_').cert_reqs == ssl.CERT_OPTIONAL 404 | 405 | pool = redis.ConnectionPool.from_url( 406 | 'rediss://?ssl_cert_reqs=required') 407 | assert pool.get_connection('_').cert_reqs == ssl.CERT_REQUIRED 408 | 409 | 410 | class TestConnection(object): 411 | def test_on_connect_error(self): 412 | """ 413 | An error in Connection.on_connect should disconnect from the server 414 | see for details: https://github.com/andymccurdy/redis-py/issues/368 415 | """ 416 | # this assumes the Redis server being tested against doesn't have 417 | # 9999 databases ;) 418 | bad_connection = redis.Redis(db=9999) 419 | # an error should be raised on connect 420 | with pytest.raises(redis.RedisError): 421 | bad_connection.info() 422 | pool = bad_connection.connection_pool 423 | assert len(pool._available_connections) == 1 424 | assert not pool._available_connections[0]._sock 425 | 426 | @skip_if_server_version_lt('2.8.8') 427 | def test_busy_loading_disconnects_socket(self, r): 428 | """ 429 | If Redis raises a LOADING error, the connection should be 430 | disconnected and a BusyLoadingError raised 431 | """ 432 | with pytest.raises(redis.BusyLoadingError): 433 | r.execute_command('DEBUG', 'ERROR', 'LOADING fake message') 434 | pool = r.connection_pool 435 | assert len(pool._available_connections) == 1 436 | assert not pool._available_connections[0]._sock 437 | 438 | @skip_if_server_version_lt('2.8.8') 439 | def 
test_busy_loading_from_pipeline_immediate_command(self, r): 440 | """ 441 | BusyLoadingErrors should raise from Pipelines that execute a 442 | command immediately, like WATCH does. 443 | """ 444 | pipe = r.pipeline() 445 | with pytest.raises(redis.BusyLoadingError): 446 | pipe.immediate_execute_command('DEBUG', 'ERROR', 447 | 'LOADING fake message') 448 | pool = r.connection_pool 449 | assert not pipe.connection 450 | assert len(pool._available_connections) == 1 451 | assert not pool._available_connections[0]._sock 452 | 453 | @skip_if_server_version_lt('2.8.8') 454 | def test_busy_loading_from_pipeline(self, r): 455 | """ 456 | BusyLoadingErrors should be raised from a pipeline execution 457 | regardless of the raise_on_error flag. 458 | """ 459 | pipe = r.pipeline() 460 | pipe.execute_command('DEBUG', 'ERROR', 'LOADING fake message') 461 | with pytest.raises(redis.BusyLoadingError): 462 | pipe.execute() 463 | pool = r.connection_pool 464 | assert not pipe.connection 465 | assert len(pool._available_connections) == 1 466 | assert not pool._available_connections[0]._sock 467 | 468 | @skip_if_server_version_lt('2.8.8') 469 | def test_read_only_error(self, r): 470 | "READONLY errors get turned into ReadOnlyError exceptions" 471 | with pytest.raises(redis.ReadOnlyError): 472 | r.execute_command('DEBUG', 'ERROR', 'READONLY blah blah') 473 | 474 | def test_connect_from_url_tcp(self): 475 | connection = redis.Redis.from_url('redis://localhost') 476 | pool = connection.connection_pool 477 | 478 | assert re.match('(.*)<(.*)<(.*)>>', repr(pool)).groups() == ( 479 | 'ConnectionPool', 480 | 'Connection', 481 | 'host=localhost,port=6379,db=0', 482 | ) 483 | 484 | def test_connect_from_url_unix(self): 485 | connection = redis.Redis.from_url('unix:///path/to/socket') 486 | pool = connection.connection_pool 487 | 488 | assert re.match('(.*)<(.*)<(.*)>>', repr(pool)).groups() == ( 489 | 'ConnectionPool', 490 | 'UnixDomainSocketConnection', 491 | 'path=/path/to/socket,db=0', 492 | ) 493 | -------------------------------------------------------------------------------- /CHANGES: -------------------------------------------------------------------------------- 1 | * 2.10.6 2 | * Various performance improvements. Thanks cjsimpson 3 | * Fixed a bug with SRANDMEMBER where 4 | * Added HSTRLEN command. Thanks Alexander Putilin 5 | * Added the TOUCH command. Thanks Anis Jonischkeit 6 | * Remove unnecessary calls to the server when registering Lua scripts. 7 | Thanks Ben Greenberg 8 | * SET's EX and PX arguments now allow values of zero. Thanks huangqiyin 9 | * Added PUBSUB {CHANNELS, NUMPAT, NUMSUB} commands. Thanks Angus Pearson 10 | * PubSub connections that encounter `InterruptedError`s now 11 | retry automatically. Thanks Carlton Gibson and Seth M. Larson 12 | * LPUSH and RPUSH commands run on PyPy now correctly return the number 13 | of items in the list. Thanks Jeong YunWon 14 | * Added support to automatically retry socket EINTR errors. Thanks 15 | Thomas Steinacher 16 | * PubSubWorker threads started with `run_in_thread` are now daemonized 17 | so the thread shuts down when the running process goes away. Thanks 18 | Keith Ainsworth 19 | * Added support for GEO commands. Thanks Pau Freixes, Alex DeBrie and 20 | Abraham Toriz 21 | * Made client construction from URLs smarter. Thanks Tim Savage 22 | * Added support for CLUSTER * commands. Thanks Andy Huang 23 | * The RESTORE command now accepts an optional `replace` boolean.
24 | Thanks Yoshinari Takaoka 25 | * Attempt to connect to a new Sentinel if a TimeoutError occurs. Thanks 26 | Bo Lopker 27 | * Fixed a bug in the client's `__getitem__` where a KeyError would be 28 | raised if the value returned by the server is an empty string. 29 | Thanks Javier Candeira. 30 | * Socket timeouts when connecting to a server are now properly raised 31 | as TimeoutErrors. 32 | * 2.10.5 33 | * Allow URL encoded parameters in Redis URLs. Characters like a "/" can 34 | now be URL encoded and redis-py will correctly decode them. Thanks 35 | Paul Keene. 36 | * Added support for the WAIT command. Thanks https://github.com/eshizhan 37 | * Better shutdown support for the PubSub Worker Thread. It now properly 38 | cleans up the connection, unsubscribes from any channels and patterns 39 | previously subscribed to and consumes any waiting messages on the socket. 40 | * Added the ability to sleep for a brief period in the event of a 41 | WatchError occurring. Thanks Joshua Harlow. 42 | * Fixed a bug with pipeline error reporting when dealing with characters 43 | in error messages that could not be encoded to the connection's 44 | character set. Thanks Hendrik Muhs. 45 | * Fixed a bug in Sentinel connections that would inadvertently connect 46 | to the master when the connection pool resets. Thanks 47 | https://github.com/df3n5 48 | * Better timeout support in Pubsub get_message. Thanks Andy Isaacson. 49 | * Fixed a bug with the HiredisParser that would cause the parser to 50 | get stuck in an endless loop if a specific number of bytes were 51 | delivered from the socket. This fix also increases performance of 52 | parsing large responses from the Redis server. 53 | * Added support for ZREVRANGEBYLEX. 54 | * ConnectionErrors are now raised if Redis refuses a connection due to 55 | the maxclients limit being exceeded. Thanks Roman Karpovich. 56 | * max_connections can now be set when instantiating client instances. 57 | Thanks Ohad Perry. 58 | * 2.10.4 59 | (skipped due to a PyPI snafu) 60 | * 2.10.3 61 | * Fixed a bug with the bytearray support introduced in 2.10.2. Thanks 62 | Josh Owen. 63 | * 2.10.2 64 | * Added support for Hiredis's new bytearray support. Thanks 65 | https://github.com/tzickel 66 | * POSSIBLE BACKWARDS INCOMPATIBLE CHANGE: Fixed a possible race condition 67 | when multiple threads share the same Lock instance with a timeout. Lock 68 | tokens are now stored in thread local storage by default. If you have 69 | code that acquires a lock in one thread and passes that lock instance to 70 | another thread to release it, you need to disable thread local storage. 71 | Refer to the doc strings on the Lock class about the thread_local 72 | argument information. 73 | * Fixed a regression in from_url where "charset" and "errors" weren't 74 | valid options. "encoding" and "encoding_errors" are still accepted 75 | and preferred. 76 | * The "charset" and "errors" options have been deprecated. Passing 77 | either to StrictRedis.__init__ or from_url will still work but will 78 | also emit a DeprecationWarning. Instead use the "encoding" and 79 | "encoding_errors" options. 80 | * Fixed a compatibility bug with Python 3 when the server closes a 81 | connection. 82 | * Added BITPOS command. Thanks https://github.com/jettify. 83 | * Fixed a bug when attempting to send large values to Redis in a Pipeline. 84 | * 2.10.1 85 | * Fixed a bug so that Sentinel connections to a server that's no longer a 86 | master disconnect and reconnect to 87 | the master upon receiving a READONLY error.
88 | * 2.10.0 89 | * Discontinued support for Python 2.5. Upgrade. You'll be happier. 90 | * The HiRedis parser will now properly raise ConnectionErrors. 91 | * Completely refactored PubSub support. Fixes all known PubSub bugs and 92 | adds a bunch of new features. Docs can be found in the README under the 93 | new "Publish / Subscribe" section. 94 | * Added the new HyperLogLog commands (PFADD, PFCOUNT, PFMERGE). Thanks 95 | Pepijn de Vos and Vincent Ohprecio. 96 | * Updated TTL and PTTL commands with Redis 2.8+ semantics. Thanks Markus 97 | Kaiserswerth. 98 | * *SCAN commands now return a long (int on Python3) cursor value rather 99 | than the string representation. This might be slightly backwards 100 | incompatible in code using *SCAN command loops such as 101 | "while cursor != '0':". 102 | * Added extra *SCAN commands that return iterators instead of the normal 103 | [cursor, data] type. Use scan_iter, hscan_iter, sscan_iter, and 104 | zscan_iter for iterators. Thanks Mathieu Longtin. 105 | * Added support for SLOWLOG commands. Thanks Rick van Hattem. 106 | * Added lexicographical commands ZRANGEBYLEX, ZREMRANGEBYLEX, and ZLEXCOUNT 107 | for sorted sets. 108 | * Connection objects now support an optional argument, socket_read_size, 109 | indicating how much data to read during each socket.recv() call. After 110 | benchmarking, increased the default size to 64k, which dramatically 111 | improves performance when fetching large values, such as many results 112 | in a pipeline or a large (>1MB) string value. 113 | * Improved the pack_command and send_packed_command functions to increase 114 | performance when sending large (>1MB) values. 115 | * Sentinel Connections to master servers now detect when a READONLY error 116 | is encountered and disconnect themselves and all other active connections 117 | to the same master so that the new master can be discovered. 118 | * Fixed Sentinel state parsing on Python 3. 119 | * Added support for SENTINEL MONITOR, SENTINEL REMOVE, and SENTINEL SET 120 | commands. Thanks Greg Murphy. 121 | * INFO output that doesn't follow the "key:value" format will now be 122 | appended to a key named "__raw__" in the INFO dictionary. Thanks Pedro 123 | Larroy. 124 | * The "vagrant" directory contains a complete vagrant environment for 125 | redis-py developers. The environment runs a Redis master, a Redis slave, 126 | and 3 Sentinels. Future iterations of the test suite will incorporate 127 | more integration style tests, ensuring things like failover happen 128 | correctly. 129 | * It's now possible to create connection pool instances from a URL. 130 | StrictRedis.from_url() now uses this feature to create a connection pool 131 | instance and use that when creating a new client instance. Thanks 132 | https://github.com/chillipino 133 | * When creating client instances or connection pool instances from a URL, 134 | it's now possible to pass additional options to the connection pool with 135 | querystring arguments. 136 | * Fixed a bug where some encodings (like utf-16) were unusable on Python 3 137 | as command names and literals would get encoded. 138 | * Added an SSLConnection class that allows for secure connections through 139 | stunnel or other means. Construct an SSL connection with the ssl=True 140 | option on client classes, using the rediss:// scheme from a URL, or 141 | by passing the SSLConnection class to a connection pool's 142 | connection_class argument. Thanks https://github.com/oranagra.
143 | * Added a socket_connect_timeout option to control how long to wait while 144 | establishing a TCP connection before timing out. This lets the client 145 | fail fast when attempting to connect to a downed server while keeping 146 | a more lenient timeout for all other socket operations. 147 | * Added TCP Keep-alive support by passing the socket_keepalive=True 148 | option. Finer-grained control can be achieved using the 149 | socket_keepalive_options option which expects a dictionary with any of 150 | the keys (socket.TCP_KEEPIDLE, socket.TCP_KEEPCNT, socket.TCP_KEEPINTVL) 151 | and integers for values. Thanks Yossi Gottlieb. 152 | * Added a `retry_on_timeout` option that controls how socket.timeout errors 153 | are handled. By default it is set to False and will cause the client to 154 | raise a TimeoutError anytime a socket.timeout is encountered. If 155 | `retry_on_timeout` is set to True, the client will retry a command that 156 | timed out once, just like other `socket.error`s. 157 | * Completely refactored the Lock system. There is now a LuaLock class 158 | that's used when the Redis server is capable of running Lua scripts along 159 | with a fallback class for Redis servers < 2.6. The new locks fix several 160 | subtle race conditions that the old lock could face. In addition, a 161 | new method, "extend", is available on lock instances that allows a lock 162 | owner to extend the amount of time they have the lock for. Thanks to 163 | Eli Finkelshteyn and https://github.com/chillipino for contributions. 164 | * 2.9.1 165 | * IPv6 support. Thanks https://github.com/amashinchi 166 | * 2.9.0 167 | * Performance improvement for packing commands when using the PythonParser. 168 | Thanks Guillaume Viot. 169 | * Executing an empty pipeline transaction no longer sends MULTI/EXEC to 170 | the server. Thanks EliFinkelshteyn. 171 | * Errors when authenticating (incorrect password) and selecting a database 172 | now close the socket. 173 | * Full Sentinel support thanks to Vitja Makarov. Thanks! 174 | * Better repr support for client and connection pool instances. Thanks 175 | Mark Roberts. 176 | * Error messages that the server sends to the client are now included 177 | in the client error message. Thanks Sangjin Lim. 178 | * Added the SCAN, SSCAN, HSCAN, and ZSCAN commands. Thanks Jingchao Hu. 179 | * ResponseErrors generated by pipeline execution provide additional context 180 | including the position of the command in the pipeline and the actual 181 | command text that generated the error. 182 | * ConnectionPools now play nicer in threaded environments that fork. Thanks 183 | Christian Joergensen. 184 | * 2.8.0 185 | * redis-py should play better with gevent when a gevent Timeout is raised. 186 | Thanks leifkb. 187 | * Added SENTINEL command. Thanks Anna Janackova. 188 | * Fixed a bug where pipelines could potentially corrupt a connection 189 | if the MULTI command generated a ResponseError. Thanks EliFinkelshteyn 190 | for the report. 191 | * Connections now call socket.shutdown() prior to socket.close() to 192 | ensure communication ends immediately per the note at 193 | http://docs.python.org/2/library/socket.html#socket.socket.close 194 | Thanks to David Martin for pointing this out. 195 | * Lock checks are now based on floats rather than ints. Thanks 196 | Vitja Makarov. 197 | * 2.7.6 198 | * Added CONFIG RESETSTAT command. Thanks Yossi Gottlieb. 199 | * Fixed a bug introduced in 2.7.3 that caused issues with script objects 200 | and pipelines. Thanks Carpentier Pierre-Francois.
201 | * Converted redis-py's test suite to use the awesome py.test library. 202 | * Fixed a bug introduced in 2.7.5 that prevented a ConnectionError from 203 | being raised when the Redis server is LOADING data. 204 | * Added a BusyLoadingError exception that's raised when the Redis server 205 | is starting up and not accepting commands yet. BusyLoadingError 206 | subclasses ConnectionError, which this state previously returned. 207 | Thanks Yossi Gottlieb. 208 | * 2.7.5 209 | * DEL, HDEL and ZREM commands now return the number of keys deleted 210 | instead of just True/False. 211 | * from_url now supports URIs with a port number. Thanks Aaron Westendorf. 212 | * 2.7.4 213 | * Added missing INCRBY method. Thanks Krzysztof Dorosz. 214 | * SET now accepts the EX, PX, NX and XX options from Redis 2.6.12. These 215 | options will generate errors if these options are used when connected 216 | to a Redis server < 2.6.12. Thanks George Yoshida. 217 | * 2.7.3 218 | * Fixed a bug with BRPOPLPUSH and lists with empty strings. 219 | * All empty except: clauses have been replaced to only catch Exception 220 | subclasses. This prevents a KeyboardInterrupt from triggering exception 221 | handlers. Thanks Lucian Branescu Mihaila. 222 | * All exceptions that are the result of redis server errors now share a 223 | common Exception subclass, ServerError. Thanks Matt Robenolt. 224 | * Prevent DISCARD from being called if MULTI wasn't also called. Thanks 225 | Pete Aykroyd. 226 | * SREM now returns an integer indicating the number of items removed from 227 | the set. Thanks http://github.com/ronniekk. 228 | * Fixed a bug with BGSAVE and BGREWRITEAOF response callbacks with Python3. 229 | Thanks Nathan Wan. 230 | * Added CLIENT GETNAME and CLIENT SETNAME commands. 231 | Thanks http://github.com/bitterb. 232 | * It's now possible to use len() on a pipeline instance to determine the 233 | number of commands that will be executed. Thanks Jon Parise. 234 | * Fixed a bug in INFO's parse routine with floating point numbers. Thanks 235 | Ali Onur Uyar. 236 | * Fixed a bug with BITCOUNT to allow `start` and `end` to both be zero. 237 | Thanks Tim Bart. 238 | * The transaction() method now accepts a boolean keyword argument, 239 | value_from_callable. By default, or if False is passed, the transaction() 240 | method will return the value of the pipeline's execution. Otherwise, it 241 | will return whatever func() returns. 242 | * Python3 compatibility fix ensuring we don't try to encode values that 243 | are already bytes(). Thanks Salimane Adjao Moustapha. 244 | * Added PSETEX. Thanks YAMAMOTO Takashi. 245 | * Added a BlockingConnectionPool to limit the number of connections that 246 | can be created. Thanks James Arthur. 247 | * SORT now accepts a `groups` option that, if specified, will return 248 | tuples of n-length, where n is the number of keys specified in the GET 249 | argument. This allows for convenient row-based iteration. Thanks 250 | Ionuț Arțăriși. 251 | * 2.7.2 252 | * Parse errors are now *always* raised on multi/exec pipelines, regardless 253 | of the `raise_on_error` flag. See 254 | https://groups.google.com/forum/?hl=en&fromgroups=#!topic/redis-db/VUiEFT8U8U0 255 | for more info. 256 | * 2.7.1 257 | * Packaged tests with source code 258 | * 2.7.0 259 | * Added BITOP and BITCOUNT commands. Thanks Mark Tozzi. 260 | * Added the TIME command. Thanks Jason Knight. 261 | * Added support for LUA scripting.
Thanks to Angus Peart, Drew Smathers, 262 | Issac Kelly, Louis-Philippe Perron, Sean Bleier, Jeffrey Kaditz, and 263 | Dvir Volk for various patches and contributions to this feature. 264 | * Changed the default error handling in pipelines. By default, the first 265 | error in a pipeline will now be raised. A new parameter to the 266 | pipeline's execute, `raise_on_error`, can be set to False to keep the 267 | old behavior of embedding the exception instances in the result. 268 | * Fixed a bug with pipelines where parse errors won't corrupt the 269 | socket. 270 | * Added the optional `number` argument to SRANDMEMBER for use with 271 | Redis 2.6+ servers. 272 | * Added PEXPIRE/PEXPIREAT/PTTL commands. Thanks Luper Rouch. 273 | * Added INCRBYFLOAT/HINCRBYFLOAT commands. Thanks Nikita Uvarov. 274 | * High precision floating point values won't lose their precision when 275 | being sent to the Redis server. Thanks Jason Oster and Oleg Pudeyev. 276 | * Added CLIENT LIST/CLIENT KILL commands 277 | * 2.6.2 278 | * `from_url` is now available as a classmethod on client classes. Thanks 279 | Jon Parise for the patch. 280 | * Fixed several encoding errors resulting from the Python 3.x support. 281 | * 2.6.1 282 | * Python 3.x support! Big thanks to Alex Grönholm. 283 | * Fixed a bug in the PythonParser's read_response that could hide an error 284 | from the client (#251). 285 | * 2.6.0 286 | * Changed (p)subscribe and (p)unsubscribe to no longer return messages 287 | indicating the channel was subscribed/unsubscribed to. These messages 288 | are available in the listen() loop instead. This is to prevent the 289 | following scenario: 290 | * Client A is subscribed to "foo" 291 | * Client B publishes message to "foo" 292 | * Client A subscribes to channel "bar" at the same time. 293 | Prior to this change, the subscribe() call would return the published 294 | messages on "foo" rather than the subscription confirmation to "bar". 295 | * Added support for GETRANGE, thanks Jean-Philippe Caruana 296 | * A new setting "decode_responses" specifies whether return values from 297 | Redis commands get decoded automatically using the client's charset 298 | value. Thanks to Frankie Dintino for the patch. 299 | * 2.4.13 300 | * redis.from_url() can take a URL representing a Redis connection string 301 | and return a client object. Thanks Kenneth Reitz for the patch. 302 | * 2.4.12 303 | * ConnectionPool is now fork-safe. Thanks Josiah Carlson for the patch. 304 | * 2.4.11 305 | * AuthenticationError will now be correctly raised if an invalid password 306 | is supplied. 307 | * If Hiredis is unavailable, the HiredisParser will raise a RedisError 308 | if selected manually. 309 | * Made the INFO command more tolerant of changes in Redis's formatting. Fix 310 | for #217. 311 | * 2.4.10 312 | * Buffer reads from socket in the PythonParser. Fix for a Windows-specific 313 | bug (#205). 314 | * Added the OBJECT and DEBUG OBJECT commands. 315 | * Added __del__ methods for classes that hold on to resources that need to 316 | be cleaned up. This should prevent resource leakage when these objects 317 | leave scope due to misuse or unhandled exceptions. Thanks David Wolever 318 | for the suggestion. 319 | * Added the ECHO command for completeness. 320 | * Fixed a bug where attempting to subscribe to a PubSub channel of a Redis 321 | server that's down would blow out the stack. Fixes #179 and #195. Thanks 322 | Ovidiu Predescu for the test case.
323 | * StrictRedis's TTL command now returns a -1 when querying a key with no 324 | expiration. The Redis class continues to return None. 325 | * ZADD and SADD now return integer values indicating the number of items 326 | added. Thanks Homer Strong. 327 | * Renamed the base client class to StrictRedis, replacing ZADD and LREM in 328 | favor of their official argument order. The Redis class is now a subclass 329 | of StrictRedis, implementing the legacy redis-py implementations of ZADD 330 | and LREM. Docs have been updated to suggest the use of StrictRedis. 331 | * SETEX in StrictRedis is now compliant with official Redis SETEX command. 332 | The name, value, time implementation moved to "Redis" for backwards 333 | compatibility. 334 | * 2.4.9 335 | * Removed socket retry logic in Connection. It is the responsibility of 336 | the caller to determine if the command is safe and can be retried. Thanks 337 | David Wolever. 338 | * Added some extra guards around various types of exceptions being raised 339 | when sending or parsing data. Thanks David Wolever and Denis Bilenko. 340 | * 2.4.8 341 | * Imported with_statement from __future__ for Python 2.5 compatibility. 342 | * 2.4.7 343 | * Fixed a bug where some connections were not getting released back to the 344 | connection pool after pipeline execution. 345 | * Pipelines can now be used as context managers. This is the preferred way 346 | of use to ensure that connections get cleaned up properly. Thanks 347 | David Wolever. 348 | * Added a convenience method called transaction() on the base Redis class. 349 | This method eliminates much of the boilerplate used when using pipelines 350 | to watch Redis keys. See the documentation for details on usage. 351 | * 2.4.6 352 | * Variadic arguments for SADD, SREM, ZREM, HDEL, LPUSH, and RPUSH. Thanks 353 | Raphaël Vinot. 354 | * (CRITICAL) Fixed an error in the Hiredis parser that occasionally caused 355 | the socket connection to become corrupted and unusable. This became 356 | noticeable once connection pools started to be used. 357 | * ZRANGE, ZREVRANGE, ZRANGEBYSCORE, and ZREVRANGEBYSCORE now take an 358 | additional optional argument, score_cast_func, which is a callable used 359 | to cast the score value in the return type. The default is float. 360 | * Removed the PUBLISH method from the PubSub class. Connections that are 361 | [P]SUBSCRIBEd cannot issue PUBLISH commands, so it doesn't make sense 362 | to have it here. 363 | * Pipelines now contain WATCH and UNWATCH. Calling WATCH or UNWATCH from 364 | the base client class will result in a deprecation warning. After 365 | WATCHing one or more keys, the pipeline will be placed in immediate 366 | execution mode until UNWATCH or MULTI are called. Refer to the new 367 | pipeline docs in the README for more information. Thanks to David Wolever 368 | and Randall Leeds for greatly helping with this. 369 | * 2.4.5 370 | * The PythonParser now works better when reading zero length strings. 371 | * 2.4.4 372 | * Fixed a typo introduced in 2.4.3 373 | * 2.4.3 374 | * Fixed a bug in the UnixDomainSocketConnection caused when trying to 375 | form an error message after a socket error. 376 | * 2.4.2 377 | * Fixed a bug in pipeline that caused an exception while trying to 378 | reconnect after a connection timeout. 379 | * 2.4.1 380 | * Fixed a bug in the PythonParser if disconnect is called before connect. 381 | * 2.4.0 382 | * WARNING: 2.4 contains several backwards incompatible changes. 383 | * Completely refactored Connection objects.
Moved much of the Redis 384 | protocol packing for requests here, and eliminated the nasty dependencies 385 | it had on the client to do AUTH and SELECT commands on connect. 386 | * Connection objects now have a parser attribute. Parsers are responsible 387 | for reading data Redis sends. Two parsers ship with redis-py: a 388 | PythonParser and the HiRedis parser. redis-py will automatically use the 389 | HiRedis parser if you have the Python hiredis module installed, otherwise 390 | it will fall back to the PythonParser. You can force one or the other, or even 391 | an external one by passing the `parser_class` argument to ConnectionPool. 392 | * Added a UnixDomainSocketConnection for users wanting to talk to the Redis 393 | instance running on a local machine only. You can use this connection 394 | by passing it to the `connection_class` argument of the ConnectionPool. 395 | * Connections no longer derive from threading.local. See threading.local 396 | note below. 397 | * ConnectionPool has been completely refactored. The ConnectionPool now 398 | maintains a list of connections. The redis-py client only hangs on to 399 | a ConnectionPool instance, calling get_connection() anytime it needs to 400 | send a command. When get_connection() is called, the command name and 401 | any keys involved in the command are passed as arguments. Subclasses of 402 | ConnectionPool could use this information to identify the shard the keys 403 | belong to and return a connection to it. ConnectionPool also implements 404 | disconnect() to force all connections in the pool to disconnect from 405 | the Redis server. 406 | * redis-py no longer supports the SELECT command. You can still connect to 407 | a specific database by specifying it when instantiating a client instance 408 | or by creating a connection pool. If you need to talk to multiple 409 | databases within your application, you should use a separate client 410 | instance for each database you want to talk to. 411 | * Completely refactored Publish/Subscribe support. The subscribe and listen 412 | commands are no longer available on the redis-py Client class. Instead, 413 | the `pubsub` method returns an instance of the PubSub class which contains 414 | all publish/subscribe support. Note, you can still PUBLISH from the 415 | redis-py client class if you desire. 416 | * Removed support for all previously deprecated commands or options. 417 | * redis-py no longer uses threading.local in any way. Since the Client 418 | class no longer holds on to a connection, it's no longer needed. You can 419 | now pass client instances between threads, and commands run on those 420 | threads will retrieve an available connection from the pool, use it and 421 | release it. It should now be trivial to use redis-py with eventlet or 422 | greenlet. 423 | * ZADD now accepts pairs of value=score keyword arguments. This should help 424 | resolve the long standing #72. The older value and score arguments have 425 | been deprecated in favor of the keyword argument style. 426 | * Client instances now get their own copy of RESPONSE_CALLBACKS. The new 427 | set_response_callback method adds a user defined callback to the instance. 428 | * Support Jython, fixing #97. Thanks to Adam Vandenberg for the patch. 429 | * Using __getitem__ now properly raises a KeyError when the key is not 430 | found. Thanks Ionuț Arțăriși for the patch. 431 | * Newer Redis versions return a LOADING message for some commands while 432 | the database is loading from disk during server start.
This could cause 433 | problems with SELECT. We now force a socket disconnection prior to 434 | raising a ResponseError so subsequent connections have to reconnect and 435 | re-select the appropriate database. Thanks to Benjamin Anderson for 436 | finding this and fixing. 437 | * 2.2.4 438 | * WARNING: Potential backwards incompatible change - Changed order of 439 | parameters of ZREVRANGEBYSCORE to match those of the actual Redis command. 440 | This is only backwards-incompatible if you were passing max and min via 441 | keyword args. If passing by normal args, nothing in user code should have 442 | to change. Thanks Stéphane Angel for the fix. 443 | * Fixed INFO to properly parse the Redis data for both 2.2.x and 444 | 2.3+. Thanks Stéphane Angel for the fix. 445 | * Lock objects now store their timeout value as a float. This allows floats 446 | to be used as timeout values. No changes to existing code required. 447 | * WATCH now supports multiple keys. Thanks Rich Schumacher. 448 | * Broke out some code that was Python 2.4 incompatible. redis-py should 449 | now be usable on 2.4, but this hasn't actually been tested. Thanks 450 | Dan Colish for the patch. 451 | * Optimized some code using izip and islice. Should have a pretty good 452 | speed up on larger data sets. Thanks Dan Colish. 453 | * Better error handling when submitting an empty mapping to HMSET. Thanks 454 | Dan Colish. 455 | * Subscription status is now reset after every (re)connection. 456 | * 2.2.3 457 | * Added support for Hiredis. To use, simply "pip install hiredis" or 458 | "easy_install hiredis". Thanks to Pieter Noordhuis for the hiredis-py 459 | bindings and the patch to redis-py. 460 | * The connection class is chosen based on whether hiredis is installed 461 | or not. To force the use of the PythonConnection, simply create 462 | your own ConnectionPool instance with the connection_class argument 463 | assigned to the PythonConnection class. 464 | * Added missing command ZREVRANGEBYSCORE. Thanks Jay Baird for the patch. 465 | * The INFO command should be parsed correctly on 2.2.x server versions 466 | and is backwards compatible with older versions. Thanks Brett Hoerner. 467 | * 2.2.2 468 | * Fixed a bug in ZREVRANK where retrieving the rank of a value not in 469 | the zset would raise an error. 470 | * Fixed a bug in Connection.send where the errno import was getting 471 | overwritten by a local variable. 472 | * Fixed a bug in SLAVEOF when promoting an existing slave to a master. 473 | * Reverted change of download URL back to redis-VERSION.tar.gz. 2.2.1's 474 | change of this actually broke PyPI for pip installs. Sorry! 475 | * 2.2.1 476 | * Changed archive name to redis-py-VERSION.tar.gz to not conflict 477 | with the Redis server archive. 478 | * 2.2.0 479 | * Implemented SLAVEOF 480 | * Implemented CONFIG as config_get and config_set 481 | * Implemented GETBIT/SETBIT 482 | * Implemented BRPOPLPUSH 483 | * Implemented STRLEN 484 | * Implemented PERSIST 485 | * Implemented SETRANGE 486 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | redis-py 2 | ======== 3 | 4 | The Python interface to the Redis key-value store. 5 | 6 | .. image:: https://secure.travis-ci.org/andymccurdy/redis-py.png?branch=master 7 | :target: http://travis-ci.org/andymccurdy/redis-py 8 | 9 | Installation 10 | ------------ 11 | 12 | redis-py requires a running Redis server.
14 | 15 | To install redis-py, simply: 16 | 17 | .. code-block:: bash 18 | 19 | $ sudo pip install redis 20 | 21 | or alternatively (you really should be using pip though): 22 | 23 | .. code-block:: bash 24 | 25 | $ sudo easy_install redis 26 | 27 | or from source: 28 | 29 | .. code-block:: bash 30 | 31 | $ sudo python setup.py install 32 | 33 | 34 | Getting Started 35 | --------------- 36 | 37 | .. code-block:: pycon 38 | 39 | >>> import redis 40 | >>> r = redis.StrictRedis(host='localhost', port=6379, db=0) 41 | >>> r.set('foo', 'bar') 42 | True 43 | >>> r.get('foo') 44 | 'bar' 45 | 46 | By default, all responses are returned as `bytes` in Python 3 and `str` in 47 | Python 2. The user is responsible for decoding to Python 3 strings or Python 2 48 | unicode objects. 49 | 50 | If **all** string responses from a client should be decoded, the user can 51 | specify `decode_responses=True` to `StrictRedis.__init__`. In this case, any 52 | Redis command that returns a string type will be decoded with the `encoding` 53 | specified. 54 | 55 | API Reference 56 | ------------- 57 | 58 | The `official Redis command documentation <http://redis.io/commands>`_ does a 59 | great job of explaining each command in detail. redis-py exposes two client 60 | classes that implement these commands. The StrictRedis class attempts to adhere 61 | to the official command syntax. There are a few exceptions: 62 | 63 | * **SELECT**: Not implemented. See the explanation in the Thread Safety section 64 | below. 65 | * **DEL**: 'del' is a reserved keyword in the Python syntax. Therefore redis-py 66 | uses 'delete' instead. 67 | * **CONFIG GET|SET**: These are implemented separately as config_get or config_set. 68 | * **MULTI/EXEC**: These are implemented as part of the Pipeline class. The 69 | pipeline is wrapped with the MULTI and EXEC statements by default when it 70 | is executed, which can be disabled by specifying transaction=False. 71 | See more about Pipelines below. 72 | * **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as a separate 73 | class as it places the underlying connection in a state where it can't 74 | execute non-pubsub commands. Calling the pubsub method from the Redis client 75 | will return a PubSub instance where you can subscribe to channels and listen 76 | for messages. You can only call PUBLISH from the Redis client (see 77 | `this comment on issue #151 78 | <https://github.com/andymccurdy/redis-py/issues/151>`_ 79 | for details). 80 | * **SCAN/SSCAN/HSCAN/ZSCAN**: The \*SCAN commands are implemented as they 81 | exist in the Redis documentation. In addition, each command has an equivalent 82 | iterator method. These are purely for convenience so the user doesn't have 83 | to keep track of the cursor while iterating. Use the 84 | scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this behavior. 85 | 86 | In addition to the changes above, the Redis class, a subclass of StrictRedis, 87 | overrides several other commands to provide backwards compatibility with older 88 | versions of redis-py: 89 | 90 | * **LREM**: Order of 'num' and 'value' arguments reversed such that 'num' can 91 | provide a default value of zero. 92 | * **ZADD**: Redis specifies the 'score' argument before 'value'. These were swapped 93 | accidentally when being implemented and not discovered until after people 94 | were already using it. The Redis class expects \*args in the form of: 95 | `name1, score1, name2, score2, ...` (see the example after this list). 96 | * **SETEX**: Order of 'time' and 'value' arguments reversed.
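To make the ZADD difference concrete, here is a minimal sketch (it assumes a running server on localhost; the key and member names are arbitrary):

.. code-block:: pycon

    >>> strict = redis.StrictRedis()
    >>> # StrictRedis follows the official syntax: score before name
    >>> strict.zadd('my-zset', 1.0, 'member1', 2.0, 'member2')
    2
    >>> legacy = redis.Redis()
    >>> # the Redis subclass keeps the legacy order: name before score
    >>> legacy.zadd('my-zset', 'member3', 3.0)
    1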
97 | 98 | 99 | More Detail 100 | ----------- 101 | 102 | Connection Pools 103 | ^^^^^^^^^^^^^^^^ 104 | 105 | Behind the scenes, redis-py uses a connection pool to manage connections to 106 | a Redis server. By default, each Redis instance you create will in turn create 107 | its own connection pool. You can override this behavior and use an existing 108 | connection pool by passing an already created connection pool instance to the 109 | connection_pool argument of the Redis class. You may choose to do this in order 110 | to implement client-side sharding or to have finer-grained control of how 111 | connections are managed. 112 | 113 | .. code-block:: pycon 114 | 115 | >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0) 116 | >>> r = redis.Redis(connection_pool=pool) 117 | 118 | Connections 119 | ^^^^^^^^^^^ 120 | 121 | ConnectionPools manage a set of Connection instances. redis-py ships with two 122 | types of Connections. The default, Connection, is a normal TCP socket based 123 | connection. The UnixDomainSocketConnection allows for clients running on the 124 | same device as the server to connect via a unix domain socket. To use a 125 | UnixDomainSocketConnection, simply pass the unix_socket_path 126 | argument, which is a string containing the path to the unix domain socket file. 127 | Additionally, make sure the unixsocket parameter is defined in your redis.conf 128 | file. It's commented out by default. 129 | 130 | .. code-block:: pycon 131 | 132 | >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock') 133 | 134 | You can create your own Connection subclasses as well. This may be useful if 135 | you want to control the socket behavior within an async framework. To 136 | instantiate a client class using your own connection, you need to create 137 | a connection pool, passing your class to the connection_class argument. 138 | Other keyword parameters you pass to the pool will be passed to the class 139 | specified during initialization. 140 | 141 | .. code-block:: pycon 142 | 143 | >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass, 144 | your_arg='...', ...) 145 | 146 | Parsers 147 | ^^^^^^^ 148 | 149 | Parser classes provide a way to control how responses from the Redis server 150 | are parsed. redis-py ships with two parser classes, the PythonParser and the 151 | HiredisParser. By default, redis-py will attempt to use the HiredisParser if 152 | you have the hiredis module installed and will fall back to the PythonParser 153 | otherwise. 154 | 155 | Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was 156 | kind enough to create Python bindings. Using Hiredis can provide up to a 157 | 10x speed improvement in parsing responses from the Redis server. The 158 | performance increase is most noticeable when retrieving many pieces of data, 159 | such as from LRANGE or SMEMBERS operations. 160 | 161 | Hiredis is available on PyPI, and can be installed via pip or easy_install 162 | just like redis-py. 163 | 164 | .. code-block:: bash 165 | 166 | $ pip install hiredis 167 | 168 | or 169 | 170 | .. code-block:: bash 171 | 172 | $ easy_install hiredis 173 | 174 | Response Callbacks 175 | ^^^^^^^^^^^^^^^^^^ 176 | 177 | The client class uses a set of callbacks to cast Redis responses to the 178 | appropriate Python type. There are a number of these callbacks defined on 179 | the Redis client class in a dictionary called RESPONSE_CALLBACKS. 180 | 181 | Custom callbacks can be added on a per-instance basis using the 182 | set_response_callback method. This method accepts two arguments: a command 183 | name and the callback. Callbacks added in this manner are only valid on the 184 | instance the callback is added to. If you want to define or override a callback 185 | globally, you should make a subclass of the Redis client and add your callback 186 | to its RESPONSE_CALLBACKS class dictionary.
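As an illustration, here is a minimal sketch that post-processes GET responses on a single instance (the uppercasing callback is purely hypothetical):

.. code-block:: pycon

    >>> r = redis.StrictRedis()
    >>> # the callback receives the raw response read from the Redis server
    >>> r.set_response_callback('GET', lambda response: response and response.upper())
    >>> r.set('foo', 'bar')
    True
    >>> r.get('foo')
    'BAR'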
187 | 188 | Response callbacks take at least one parameter: the response from the Redis 189 | server. Keyword arguments may also be accepted in order to further control 190 | how the response is interpreted. These keyword arguments are specified during the 191 | command's call to execute_command. The ZRANGE implementation demonstrates the 192 | use of response callback keyword arguments with its "withscores" argument. 193 | 194 | Thread Safety 195 | ^^^^^^^^^^^^^ 196 | 197 | Redis client instances can safely be shared between threads. Internally, 198 | connection instances are only retrieved from the connection pool during 199 | command execution, and returned to the pool directly after. Command execution 200 | never modifies state on the client instance. 201 | 202 | However, there is one caveat: the Redis SELECT command. The SELECT command 203 | allows you to switch the database currently in use by the connection. That 204 | database remains selected until another is selected or until the connection is 205 | closed. This creates an issue: a connection could be returned to the pool 206 | while still attached to a different database. 207 | 208 | As a result, redis-py does not implement the SELECT command on client 209 | instances. If you use multiple Redis databases within the same application, you 210 | should create a separate client instance (and possibly a separate connection 211 | pool) for each database. 212 | 213 | It is not safe to pass PubSub or Pipeline objects between threads. 214 |
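For example, a single client instance can be shared by several worker threads; here is a minimal sketch (it assumes the 'hits' key starts out unset):

.. code-block:: pycon

    >>> import threading
    >>> r = redis.Redis()
    >>> def incr_worker():
    ...     # each command checks a connection out of the shared pool
    ...     r.incr('hits')
    >>> threads = [threading.Thread(target=incr_worker) for _ in range(5)]
    >>> for t in threads: t.start()
    >>> for t in threads: t.join()
    >>> r.get('hits')
    '5'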
215 | Pipelines 216 | ^^^^^^^^^ 217 | 218 | Pipelines are a subclass of the base Redis class that provides support for 219 | buffering multiple commands to the server in a single request. They can be used 220 | to dramatically increase the performance of groups of commands by reducing the 221 | number of back-and-forth TCP packets between the client and server. 222 | 223 | Pipelines are quite simple to use: 224 | 225 | .. code-block:: pycon 226 | 227 | >>> r = redis.Redis(...) 228 | >>> r.set('bing', 'baz') 229 | >>> # Use the pipeline() method to create a pipeline instance 230 | >>> pipe = r.pipeline() 231 | >>> # The following SET commands are buffered 232 | >>> pipe.set('foo', 'bar') 233 | >>> pipe.get('bing') 234 | >>> # the EXECUTE call sends all buffered commands to the server, returning 235 | >>> # a list of responses, one for each command. 236 | >>> pipe.execute() 237 | [True, 'baz'] 238 | 239 | For ease of use, all commands being buffered into the pipeline return the 240 | pipeline object itself. Therefore calls can be chained like: 241 | 242 | .. code-block:: pycon 243 | 244 | >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute() 245 | [True, True, 6] 246 | 247 | In addition, pipelines can ensure the buffered commands are executed 248 | atomically as a group. This happens by default. If you want to disable the 249 | atomic nature of a pipeline but still want to buffer commands, you can turn 250 | off transactions. 251 | 252 | .. code-block:: pycon 253 | 254 | >>> pipe = r.pipeline(transaction=False) 255 | 256 | A common issue occurs when requiring atomic transactions but needing to 257 | first retrieve values from Redis for use within the transaction. For instance, 258 | let's assume that the INCR command didn't exist and we need to build an atomic 259 | version of INCR in Python. 260 | 261 | The completely naive implementation could GET the value, increment it in 262 | Python, and SET the new value back. However, this is not atomic because 263 | multiple clients could be doing this at the same time, each getting the same 264 | value from GET. 265 | 266 | Enter the WATCH command. WATCH provides the ability to monitor one or more keys 267 | prior to starting a transaction. If any of those keys change prior to the 268 | execution of that transaction, the entire transaction will be canceled and a 269 | WatchError will be raised. To implement our own client-side INCR command, we 270 | could do something like this: 271 | 272 | .. code-block:: pycon 273 | 274 | >>> with r.pipeline() as pipe: 275 | ... while 1: 276 | ... try: 277 | ... # put a WATCH on the key that holds our sequence value 278 | ... pipe.watch('OUR-SEQUENCE-KEY') 279 | ... # after WATCHing, the pipeline is put into immediate execution 280 | ... # mode until we tell it to start buffering commands again. 281 | ... # this allows us to get the current value of our sequence 282 | ... current_value = pipe.get('OUR-SEQUENCE-KEY') 283 | ... next_value = int(current_value) + 1 284 | ... # now we can put the pipeline back into buffered mode with MULTI 285 | ... pipe.multi() 286 | ... pipe.set('OUR-SEQUENCE-KEY', next_value) 287 | ... # and finally, execute the pipeline (the set command) 288 | ... pipe.execute() 289 | ... # if a WatchError wasn't raised during execution, everything 290 | ... # we just did happened atomically. 291 | ... break 292 | ... except WatchError: 293 | ... # another client must have changed 'OUR-SEQUENCE-KEY' between 294 | ... # the time we started WATCHing it and the pipeline's execution. 295 | ... # our best bet is to just retry. 296 | ... continue 297 | 298 | Note that, because the Pipeline must bind to a single connection for the 299 | duration of a WATCH, care must be taken to ensure that the connection is 300 | returned to the connection pool by calling the reset() method. If the 301 | Pipeline is used as a context manager (as in the example above) reset() 302 | will be called automatically. Of course you can do this the manual way by 303 | explicitly calling reset(): 304 | 305 | .. code-block:: pycon 306 | 307 | >>> pipe = r.pipeline() 308 | >>> while 1: 309 | ... try: 310 | ... pipe.watch('OUR-SEQUENCE-KEY') 311 | ... ... 312 | ... pipe.execute() 313 | ... break 314 | ... except WatchError: 315 | ... continue 316 | ... finally: 317 | ... pipe.reset() 318 | 319 | A convenience method named "transaction" exists for handling all the 320 | boilerplate of watching keys and retrying on watch errors. It takes a callable that 321 | should expect a single parameter, a pipeline object, and any number of keys to 322 | be WATCHed. Our client-side INCR command above can be written like this, 323 | which is much easier to read: 324 | 325 | .. code-block:: pycon 326 | 327 | >>> def client_side_incr(pipe): 328 | ... current_value = pipe.get('OUR-SEQUENCE-KEY') 329 | ... next_value = int(current_value) + 1 330 | ... pipe.multi() 331 | ... 
pipe.set('OUR-SEQUENCE-KEY', next_value) 332 | >>> 333 | >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY') 334 | [True] 335 | 336 | Publish / Subscribe 337 | ^^^^^^^^^^^^^^^^^^^ 338 | 339 | redis-py includes a `PubSub` object that subscribes to channels and listens 340 | for new messages. Creating a `PubSub` object is easy. 341 | 342 | .. code-block:: pycon 343 | 344 | >>> r = redis.StrictRedis(...) 345 | >>> p = r.pubsub() 346 | 347 | Once a `PubSub` instance is created, channels and patterns can be subscribed 348 | to. 349 | 350 | .. code-block:: pycon 351 | 352 | >>> p.subscribe('my-first-channel', 'my-second-channel', ...) 353 | >>> p.psubscribe('my-*', ...) 354 | 355 | The `PubSub` instance is now subscribed to those channels/patterns. The 356 | subscription confirmations can be seen by reading messages from the `PubSub` 357 | instance. 358 | 359 | .. code-block:: pycon 360 | 361 | >>> p.get_message() 362 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-second-channel', 'data': 1L} 363 | >>> p.get_message() 364 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-first-channel', 'data': 2L} 365 | >>> p.get_message() 366 | {'pattern': None, 'type': 'psubscribe', 'channel': 'my-*', 'data': 3L} 367 | 368 | Every message read from a `PubSub` instance will be a dictionary with the 369 | following keys. 370 | 371 | * **type**: One of the following: 'subscribe', 'unsubscribe', 'psubscribe', 372 | 'punsubscribe', 'message', 'pmessage' 373 | * **channel**: The channel [un]subscribed to or the channel a message was 374 | published to 375 | * **pattern**: The pattern that matched a published message's channel. Will be 376 | `None` in all cases except for 'pmessage' types. 377 | * **data**: The message data. With [un]subscribe messages, this value will be 378 | the number of channels and patterns the connection is currently subscribed 379 | to. With [p]message messages, this value will be the actual published 380 | message. 381 | 382 | Let's send a message now. 383 | 384 | .. code-block:: pycon 385 | 386 | # the publish method returns the number of matching channel and pattern 387 | # subscriptions. 'my-first-channel' matches both the 'my-first-channel' 388 | # subscription and the 'my-*' pattern subscription, so this message will 389 | # be delivered to 2 channels/patterns 390 | >>> r.publish('my-first-channel', 'some data') 391 | 2 392 | >>> p.get_message() 393 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'} 394 | >>> p.get_message() 395 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': 'my-*', 'type': 'pmessage'} 396 | 397 | Unsubscribing works just like subscribing. If no arguments are passed to 398 | [p]unsubscribe, all channels or patterns will be unsubscribed from. 399 | 400 | .. code-block:: pycon 401 | 402 | >>> p.unsubscribe() 403 | >>> p.punsubscribe('my-*') 404 | >>> p.get_message() 405 | {'channel': 'my-second-channel', 'data': 2L, 'pattern': None, 'type': 'unsubscribe'} 406 | >>> p.get_message() 407 | {'channel': 'my-first-channel', 'data': 1L, 'pattern': None, 'type': 'unsubscribe'} 408 | >>> p.get_message() 409 | {'channel': 'my-*', 'data': 0L, 'pattern': None, 'type': 'punsubscribe'} 410 | 411 | redis-py also allows you to register callback functions to handle published 412 | messages. Message handlers take a single argument, the message, which is a 413 | dictionary just like the examples above.
To subscribe to a channel or pattern 414 | with a message handler, pass the channel or pattern name as a keyword argument 415 | with its value being the callback function. 416 | 417 | When a message is read on a channel or pattern with a message handler, the 418 | message dictionary is created and passed to the message handler. In this case, 419 | a `None` value is returned from get_message() since the message was already 420 | handled. 421 | 422 | .. code-block:: pycon 423 | 424 | >>> def my_handler(message): 425 | ... print 'MY HANDLER: ', message['data'] 426 | >>> p.subscribe(**{'my-channel': my_handler}) 427 | # read the subscribe confirmation message 428 | >>> p.get_message() 429 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-channel', 'data': 1L} 430 | >>> r.publish('my-channel', 'awesome data') 431 | 1 432 | # for the message handler to work, we need to tell the instance to read data. 433 | # this can be done in several ways (read more below). we'll just use 434 | # the familiar get_message() function for now 435 | >>> message = p.get_message() 436 | MY HANDLER: awesome data 437 | # note here that the my_handler callback printed the string above. 438 | # `message` is None because the message was handled by our handler. 439 | >>> print message 440 | None 441 | 442 | If your application is not interested in the (sometimes noisy) 443 | subscribe/unsubscribe confirmation messages, you can ignore them by passing 444 | `ignore_subscribe_messages=True` to `r.pubsub()`. This will cause all 445 | subscribe/unsubscribe messages to be read, but they won't bubble up to your 446 | application. 447 | 448 | .. code-block:: pycon 449 | 450 | >>> p = r.pubsub(ignore_subscribe_messages=True) 451 | >>> p.subscribe('my-channel') 452 | >>> p.get_message() # hides the subscribe message and returns None 453 | >>> r.publish('my-channel', 'my data') 454 | 1 455 | >>> p.get_message() 456 | {'channel': 'my-channel', 'data': 'my data', 'pattern': None, 'type': 'message'} 457 | 458 | There are three different strategies for reading messages. 459 | 460 | The examples above have been using `pubsub.get_message()`. Behind the scenes, 461 | `get_message()` uses the system's 'select' module to quickly poll the 462 | connection's socket. If there's data available to be read, `get_message()` will 463 | read it, format the message and return it or pass it to a message handler. If 464 | there's no data to be read, `get_message()` will immediately return None. This 465 | makes it trivial to integrate into an existing event loop inside your 466 | application. 467 | 468 | .. code-block:: pycon 469 | 470 | >>> while True: 471 | ... message = p.get_message() 472 | ... if message: 473 | ... # do something with the message ... print message 474 | ... time.sleep(0.001) # be nice to the system :) 475 | 476 | Older versions of redis-py only read messages with `pubsub.listen()`. listen() 477 | is a generator that blocks until a message is available. If your application 478 | doesn't need to do anything else but receive and act on messages received from 479 | redis, listen() is an easy way to get up and running. 480 | 481 | .. code-block:: pycon 482 | 483 | >>> for message in p.listen(): 484 | ... # do something with the message ... print message 485 | 486 | The third option runs an event loop in a separate thread. 487 | `pubsub.run_in_thread()` creates a new thread and starts the event loop. The 488 | thread object is returned to the caller of `run_in_thread()`. The caller can 489 | use the `thread.stop()` method to shut down the event loop and thread. Behind 490 | the scenes, this is simply a wrapper around `get_message()` that runs in a 491 | separate thread, essentially creating a tiny non-blocking event loop for you. 492 | `run_in_thread()` takes an optional `sleep_time` argument. If specified, the 493 | event loop will call `time.sleep()` with the value in each iteration of the 494 | loop. 495 | 496 | Note: Since we're running in a separate thread, there's no way to handle 497 | messages that aren't automatically handled with registered message handlers. 498 | Therefore, redis-py prevents you from calling `run_in_thread()` if you're 499 | subscribed to patterns or channels that don't have message handlers attached. 500 | 501 | .. code-block:: pycon 502 | 503 | >>> p.subscribe(**{'my-channel': my_handler}) 504 | >>> thread = p.run_in_thread(sleep_time=0.001) 505 | # the event loop is now running in the background processing messages 506 | # when it's time to shut it down... 507 | >>> thread.stop() 508 | 509 | A PubSub object adheres to the same encoding semantics as the client instance 510 | it was created from. Any channel or pattern that's unicode will be encoded 511 | using the `charset` specified on the client before being sent to Redis. If the 512 | client's `decode_responses` flag is set to False (the default), the 513 | 'channel', 'pattern' and 'data' values in message dictionaries will be byte 514 | strings (str on Python 2, bytes on Python 3). If the client's 515 | `decode_responses` is True, then the 'channel', 'pattern' and 'data' values 516 | will be automatically decoded to unicode strings using the client's `charset`.
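For example, here is a minimal sketch of the decoded mode (the channel name and payload are arbitrary):

.. code-block:: pycon

    >>> r = redis.StrictRedis(decode_responses=True)
    >>> p = r.pubsub(ignore_subscribe_messages=True)
    >>> p.subscribe('my-channel')
    >>> p.get_message()  # reads (and hides) the subscribe confirmation
    >>> r.publish('my-channel', 'hello')
    1
    >>> p.get_message()
    {'channel': u'my-channel', 'data': u'hello', 'pattern': None, 'type': 'message'}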
517 | 518 | PubSub objects remember what channels and patterns they are subscribed to. In 519 | the event of a disconnection such as a network error or timeout, the 520 | PubSub object will re-subscribe to all prior channels and patterns when 521 | reconnecting. Messages that were published while the client was disconnected 522 | cannot be delivered. When you're finished with a PubSub object, call its 523 | `.close()` method to shut down the connection. 524 | 525 | .. code-block:: pycon 526 | 527 | >>> p = r.pubsub() 528 | >>> ... 529 | >>> p.close() 530 | 531 | 532 | The PUBSUB subcommands CHANNELS, NUMSUB and NUMPAT are also 533 | supported: 534 | 535 | .. code-block:: pycon 536 | 537 | >>> r.pubsub_channels() 538 | ['foo', 'bar'] 539 | >>> r.pubsub_numsub('foo', 'bar') 540 | [('foo', 9001), ('bar', 42)] 541 | >>> r.pubsub_numsub('baz') 542 | [('baz', 0)] 543 | >>> r.pubsub_numpat() 544 | 1204 545 | 546 | 547 | Lua Scripting 548 | ^^^^^^^^^^^^^ 549 | 550 | redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are 551 | a number of edge cases that make these commands tedious to use in real world 552 | scenarios. Therefore, redis-py exposes a Script object that makes scripting 553 | much easier to use. 554 | 555 | To create a Script instance, use the `register_script` function on a client 556 | instance, passing the Lua code as the first argument. `register_script` returns 557 | a Script instance that you can use throughout your code. 558 | 559 | The following trivial Lua script accepts two parameters: the name of a key and 560 | a multiplier value. The script fetches the value stored in the key, multiplies 561 | it by the multiplier value and returns the result. 562 | 563 | .. code-block:: pycon 564 | 565 | >>> r = redis.StrictRedis() 566 | >>> lua = """ 567 | ... local value = redis.call('GET', KEYS[1]) 568 | ... value = tonumber(value) 569 | ... 
return value * ARGV[1]""" 570 | >>> multiply = r.register_script(lua) 571 | 572 | `multiply` is now a Script instance that is invoked by calling it like a 573 | function. Script instances accept the following optional arguments: 574 | 575 | * **keys**: A list of key names that the script will access. This becomes the 576 | KEYS list in Lua. 577 | * **args**: A list of argument values. This becomes the ARGV list in Lua. 578 | * **client**: A redis-py Client or Pipeline instance that will invoke the 579 | script. If client isn't specified, the client that initially 580 | created the Script instance (the one that `register_script` was 581 | invoked from) will be used. 582 | 583 | Continuing the example from above: 584 | 585 | .. code-block:: pycon 586 | 587 | >>> r.set('foo', 2) 588 | >>> multiply(keys=['foo'], args=[5]) 589 | 10 590 | 591 | The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is 592 | passed to the script along with the multiplier value of 5. Lua executes the 593 | script and returns the result, 10. 594 | 595 | Script instances can be executed using a different client instance, even one 596 | that points to a completely different Redis server. 597 | 598 | .. code-block:: pycon 599 | 600 | >>> r2 = redis.StrictRedis('redis2.example.com') 601 | >>> r2.set('foo', 3) 602 | >>> multiply(keys=['foo'], args=[5], client=r2) 603 | 15 604 | 605 | The Script object ensures that the Lua script is loaded into Redis's script 606 | cache. In the event of a NOSCRIPT error, it will load the script and retry 607 | executing it. 608 | 609 | Script objects can also be used in pipelines. The pipeline instance should be 610 | passed as the client argument when calling the script. Care is taken to ensure 611 | that the script is registered in Redis's script cache just prior to pipeline 612 | execution. 613 | 614 | .. code-block:: pycon 615 | 616 | >>> pipe = r.pipeline() 617 | >>> pipe.set('foo', 5) 618 | >>> multiply(keys=['foo'], args=[5], client=pipe) 619 | >>> pipe.execute() 620 | [True, 25] 621 | 622 | Sentinel support 623 | ^^^^^^^^^^^^^^^^ 624 | 625 | redis-py can be used together with `Redis Sentinel <http://redis.io/topics/sentinel>`_ 626 | to discover Redis nodes. You need to have at least one Sentinel daemon running 627 | in order to use redis-py's Sentinel support. 628 | 629 | Connecting redis-py to the Sentinel instance(s) is easy. You can use a 630 | Sentinel connection to discover the network addresses of the master and its slaves: 631 | 632 | .. code-block:: pycon 633 | 634 | >>> from redis.sentinel import Sentinel 635 | >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1) 636 | >>> sentinel.discover_master('mymaster') 637 | ('127.0.0.1', 6379) 638 | >>> sentinel.discover_slaves('mymaster') 639 | [('127.0.0.1', 6380)] 640 | 641 | You can also create Redis client connections from a Sentinel instance. You can 642 | connect to either the master (for write operations) or a slave (for read-only 643 | operations). 644 | 645 | .. code-block:: pycon 646 | 647 | >>> master = sentinel.master_for('mymaster', socket_timeout=0.1) 648 | >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1) 649 | >>> master.set('foo', 'bar') 650 | >>> slave.get('foo') 651 | 'bar' 652 | 653 | The master and slave objects are normal StrictRedis instances with their 654 | connection pool bound to the Sentinel instance. When a Sentinel-backed client 655 | attempts to establish a connection, it first queries the Sentinel servers to 656 | determine an appropriate host to connect to. If no server is found, 657 | a MasterNotFoundError or SlaveNotFoundError is raised. Both exceptions are 658 | subclasses of ConnectionError.
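For example, here is a minimal sketch of handling the failure case (the service name 'not-configured' is deliberately one that no Sentinel monitors):

.. code-block:: pycon

    >>> from redis.sentinel import MasterNotFoundError
    >>> try:
    ...     sentinel.discover_master('not-configured')
    ... except MasterNotFoundError:
    ...     # no Sentinel knows a master by this service name
    ...     print 'no master found'
    no master found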
659 | 660 | When trying to connect to a slave, the Sentinel connection pool will 661 | iterate over the list of slaves until it finds one that can be connected to. 662 | If no slaves can be connected to, a connection will be established with the 663 | master. 664 | 665 | See `Guidelines for Redis clients with support for Redis Sentinel 666 | <http://redis.io/topics/sentinel-clients>`_ to learn more about Redis Sentinel. 667 | 668 | Scan Iterators 669 | ^^^^^^^^^^^^^^ 670 | 671 | The \*SCAN commands introduced in Redis 2.8 can be cumbersome to use. While 672 | these commands are fully supported, redis-py also exposes the following methods 673 | that return Python iterators for convenience: `scan_iter`, `hscan_iter`, 674 | `sscan_iter` and `zscan_iter`. 675 | 676 | .. code-block:: pycon 677 | 678 | >>> for key, value in (('A', '1'), ('B', '2'), ('C', '3')): 679 | ... r.set(key, value) 680 | >>> for key in r.scan_iter(): 681 | ... print key, r.get(key) 682 | A 1 683 | B 2 684 | C 3 685 | 686 | Author 687 | ^^^^^^ 688 | 689 | redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com). 690 | It can be found here: http://github.com/andymccurdy/redis-py 691 | 692 | Special thanks to: 693 | 694 | * Ludovico Magnocavallo, author of the original Python Redis client, from 695 | which some of the socket code is still used. 696 | * Alexander Solovyov for ideas on the generic response callback system. 697 | * Paul Hubbard for initial packaging support. 698 | 699 | --------------------------------------------------------------------------------