├── .gitignore
├── .travis.yml
├── LICENSE
├── MANIFEST.in
├── README.md
├── gimel
│   ├── VERSION
│   ├── __init__.py
│   ├── aws_api.py
│   ├── cli.py
│   ├── config.json.template
│   ├── config.py
│   ├── deploy.py
│   ├── gimel.py
│   ├── logger.py
│   └── vendor
│       ├── redis-2.10.5.dist-info
│       │   ├── DESCRIPTION.rst
│       │   ├── METADATA
│       │   ├── RECORD
│       │   ├── WHEEL
│       │   ├── metadata.json
│       │   └── top_level.txt
│       └── redis
│           ├── __init__.py
│           ├── _compat.py
│           ├── client.py
│           ├── connection.py
│           ├── exceptions.py
│           ├── lock.py
│           ├── sentinel.py
│           └── utils.py
├── requirements.in
├── requirements.txt
├── requirements_dev.txt
├── setup.cfg
├── setup.py
└── tox.ini
/.gitignore:
--------------------------------------------------------------------------------
1 | config.json
2 | gimel.zip
3 | *.pyc
4 | dist/**
5 | build/**
6 | gimel.egg-info/**
7 |
8 | .idea/
9 | *.iml
10 | .tox/
11 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | sudo: false
2 | language: python
3 |
4 | matrix:
5 | include:
6 | - python: 3.6
7 | env: TOXENV=packaging
8 | - python: 3.6
9 | env: TOXENV=py3-pep8
10 | - python: 2.7
11 | env: TOXENV=py2-pep8
12 |
13 | cache:
14 | directories:
15 | - $HOME/.cache/pip
16 |
17 | install:
18 | - pip install tox
19 |
20 | script:
21 | - tox
22 |
23 | notifications:
24 | email: false
25 |
26 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (C) 2016 Yoav Aner
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
4 | documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
5 | rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
6 | persons to whom the Software is furnished to do so, subject to the following conditions:
7 |
8 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
9 | Software.
10 |
11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
12 | WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
13 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
14 | OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include README.md
2 | include LICENSE
3 | include gimel/VERSION
4 | include gimel/config.json.template
5 | recursive-include gimel/vendor *
6 |
7 | exclude tox.ini
8 | exclude requirements.in
9 | exclude requirements.txt
10 | exclude requirements_dev.txt
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Gimel
2 |
3 | [![Build Status](https://travis-ci.org/Alephbet/gimel.svg?branch=master)](https://travis-ci.org/Alephbet/gimel)
4 | [![PyPI](https://img.shields.io/pypi/v/gimel.svg)](https://pypi.python.org/pypi/gimel)
5 |
6 | [a Scaleable A/B testing backend in ~100 lines of code (and for free*)](http://blog.gingerlime.com/2016/a-scaleable-ab-testing-backend-in-100-lines-of-code-and-for-free/)
7 |
8 | ## What is it?
9 |
10 | an A/B testing backend using AWS Lambda/API Gateway + Redis
11 |
12 | Key Features:
13 |
14 | * Highly scalable due to the nature of AWS Lambda
15 | * High performance and low memory footprint using Redis HyperLogLog
16 | * Cost Effective
17 | * Easy deployment using `gimel deploy`. No need to twiddle with AWS.
18 |
19 | ## Looking for contributors
20 |
21 | [click here for more info](https://github.com/Alephbet/gimel/issues/2)
22 |
23 | ## What does Gimel mean?
24 |
25 | Gimel (גִּימֵל) is the 3rd letter of the Hebrew alphabet. The letter (ג) also looks visually similar to the Greek Lambda
26 | (λ).
27 |
28 | ## Installation / Quick Start
29 |
30 | You will need a live Redis instance reachable from AWS. Then run:
31 |
32 | ```bash
33 | $ pip install gimel
34 | $ gimel configure
35 | $ gimel deploy
36 | ```
37 |
38 | 
39 |
40 | It will automatically configure your AWS Lambda functions and API Gateway, and produce a JS snippet ready to use
41 | for tracking your experiments.
42 |
43 | ## Architecture
44 |
45 | 
46 |
47 | ### Client
48 |
49 | I suggest looking at [Alephbet](https://github.com/Alephbet/alephbet) for more details, but at a high level, the client runs in the end user's browser. It randomly picks a variant and executes a javascript function to 'activate' it. When a goal is reached -- the user performs a certain action, which also includes the pseudo-goal of *participating* in the experiment -- an event is sent to the backend. An event typically looks something like "experiment ABC, variant red, user participated" or "experiment XYZ, variant blue, checkout goal reached".
50 |
51 | Alephbet might send duplicate events, but each event includes a `uuid` so the backend can de-duplicate it. More on that below.
52 |
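For illustration, a tracking request boils down to a simple GET with a handful of query parameters. A hypothetical sketch (the real URL is printed by `gimel deploy`):

```python
import uuid
import requests  # any HTTP client works; requests is only used for this sketch

requests.get(
    'https://abc123.execute-api.us-east-1.amazonaws.com/prod/track',  # hypothetical URL
    params={
        'namespace': 'alephbet',
        'experiment': 'button-color',
        'variant': 'red',
        'event': 'participate',     # or a goal name, e.g. 'clicked-button'
        'uuid': str(uuid.uuid4()),  # unique per event, used for de-duplication
    })
```
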
53 | ### Data Store - Redis HyperLogLog
54 |
55 | The data store keeps a tally of each event that comes into the system. Counting unique events (de-duplication) is important for keeping an accurate count. One approach would be to store each event as an entry / database row / document and then run some kind of unique count on it. Instead, we can use a nifty algorithm called [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog), which counts unique items without storing each and every one.
56 |
57 | In terms of storage space, a redis HyperLogLog counter has a fixed size of 12KB. This gives us ample space for storing experiment data with a low memory footprint.
58 |
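A minimal sketch of the idea with redis-py, using the `namespace:counters:experiment:goal:variant` key scheme gimel uses (a local redis instance is assumed):

```python
import redis

r = redis.Redis(host='localhost', port=6379)  # assumed local instance for the sketch

key = 'alephbet:counters:button-color:participate:red'
r.pfadd(key, 'uuid-1')  # add an event to the HyperLogLog
r.pfadd(key, 'uuid-1')  # same uuid again -- the unique count doesn't grow
r.pfadd(key, 'uuid-2')
print(r.pfcount(key))   # ~2, within HLL's small standard error
```
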
59 | **NOTE**: there's no free lunch. HLL isn't as accurate, especially with large experiments. [See here](https://github.com/Alephbet/gimel/issues/15) or check out [lamed](https://github.com/Alephbet/lamed) if you're looking for a more accurate, but more memory-hungry option.
60 |
61 | ### Backend - AWS Lambda / API Gateway
62 |
63 | The backend had to take care of a few simple types of requests (a short sketch follows the list):
64 | 
65 | * track an event - receive an (HTTP) request with some JSON data -- experiment name, variant, goal and uuid -- and push it to redis.
66 | * extract the counters for a specific experiment, or for all experiments, into JSON that can be presented on the dashboard.
67 |
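Because API Gateway maps the querystring into the Lambda `event` dict (see `REQUEST_TEMPLATE` in `deploy.py`), the handlers can also be exercised locally, assuming your `config.json` points at a reachable redis:

```python
from gimel import gimel

# simulate the API Gateway -> Lambda event for a track request
gimel.track({'experiment': 'button-color',
             'variant': 'red',
             'event': 'participate',
             'uuid': 'some-unique-uuid'}, context=None)

# aggregated results for all experiments in the default namespace
print(gimel.all({}, context=None))
```
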
68 | ### Dashboard
69 |
70 | New! Access your dashboard with `gimel dashboard`.
71 |
72 | 
73 |
74 | ## How does tracking work?
75 |
76 | Check out [Alephbet](https://github.com/Alephbet/alephbet).
77 |
78 | ## Command Reference
79 |
80 | * `gimel --help` - prints a help screen.
81 | * `gimel configure` - opens your editor so you can edit the config.json file. Use it to update your redis settings.
82 | * `gimel preflight` - runs preflight checks to make sure you have access to AWS, redis etc.
83 | * `gimel deploy` - deploys the code and configs to AWS automatically.
84 |
85 | ## Advanced
86 |
87 | ### custom API endpoints
88 |
89 | If you want to use different API endpoints, you can add your own `extra_wiring` into the `config.json` file (e.g. using
90 | `gimel configure`).
91 |
92 | For example, this will add a `.../prod/my_tracking_endpoint` URL pointing to the `gimel-track` lambda:
93 |
94 | ```json
95 | {
96 | "redis": {
97 | ...
98 | },
99 | "extra_wiring": [
100 | {
101 | "lambda": {
102 | "FunctionName": "gimel-track",
103 | "Handler": "gimel.track",
104 | "MemorySize": 128,
105 | "Timeout": 3
106 | },
107 | "api_gateway": {
108 | "pathPart": "my_tracking_endpoint",
109 | "method": {
110 | "httpMethod": "GET",
111 | "apiKeyRequired": false,
112 | "requestParameters": {
113 | "method.request.querystring.namespace": false,
114 | "method.request.querystring.experiment": false,
115 | "method.request.querystring.variant": false,
116 | "method.request.querystring.event": false,
117 | "method.request.querystring.uuid": false
118 | }
119 | }
120 | }
121 | }
122 | ]
123 | }
124 | ```
125 |
126 | see [WIRING](https://github.com/Alephbet/gimel/blob/52830737835119692f3a3c157fe090adabf58150/gimel/deploy.py#L81)
127 |
128 | ## Privacy, Ad-blockers (GDPR etc)
129 |
130 | Gimel provides a backend for A/B test experiment data. The data is aggregated and does *not* contain any personal information at all. It merely stores the total number of actions for one variant against another.
131 |
132 | As such, Gimel should meet privacy requirements of GDPR and similar privacy regulations.
133 |
134 | Nevertheless, important disclaimers:
135 |
136 | * I am not a lawyer, and it's entirely up to you if and how you decide to use Gimel. Please check with your local regulations and get legal advice to decide on your own.
137 | * Some ad-blockers are extra vigilant and will block requests with the `track` keyword in the URL. Track requests to Gimel might therefore be blocked by default. As the library author, I make no attempt to conceal the fact that a form of tracking is necessary to run A/B tests, even if I believe it respects privacy.
138 | * Users who decide to use Gimel can, if they wish, assign a different endpoint that might get past ad-blockers, but that's entirely up to them. See [custom API endpoints](#custom-api-endpoints) for how this can be achieved.
139 | * As with almost any tool, it can be used for good or evil. Some A/B tests can be seen as manipulative, unfair or otherwise illegitimate. Again, use your own moral compass to decide whether or not it's ok to use A/B testing, or specific A/B tests.
140 |
141 | ## License
142 |
143 | Gimel is distributed under the MIT license. All 3rd party libraries and components are distributed under their
144 | respective license terms.
145 |
146 | ```
147 | Copyright (C) 2016 Yoav Aner
148 |
149 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
150 | documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
151 | rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
152 | persons to whom the Software is furnished to do so, subject to the following conditions:
153 |
154 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
155 | Software.
156 |
157 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
158 | WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
159 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
160 | OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
161 | ```
162 |
163 |
--------------------------------------------------------------------------------
/gimel/VERSION:
--------------------------------------------------------------------------------
1 | 1.5.0
2 |
--------------------------------------------------------------------------------
/gimel/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | __version__ = open(
4 | os.path.join(os.path.dirname(__file__), 'VERSION')).read().strip()
5 |
--------------------------------------------------------------------------------
/gimel/aws_api.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import botocore.session
3 | import jmespath
4 | from functools import partial
5 |
6 |
7 | def boto_session():
8 | return boto3.session.Session()
9 |
10 |
11 | def aws(service, action, **kwargs):
12 | client = boto_session().client(service)
13 |     query = kwargs.pop('query', None)  # optional JMESPath expression applied to the result
14 | if client.can_paginate(action):
15 | paginator = client.get_paginator(action)
16 | result = paginator.paginate(**kwargs).build_full_result()
17 | else:
18 | result = getattr(client, action)(**kwargs)
19 | if query:
20 | result = jmespath.compile(query).search(result)
21 | return result
22 |
23 |
24 | def region():
25 | return boto_session().region_name
26 |
27 |
28 | def check_aws_credentials():
29 | session = botocore.session.get_session()
30 |     session.get_credentials().access_key   # raises AttributeError if credentials are missing
31 |     session.get_credentials().secret_key
32 |
33 |
34 | iam = partial(aws, 'iam')
35 | aws_lambda = partial(aws, 'lambda')
36 | apigateway = partial(aws, 'apigateway')
37 |
--------------------------------------------------------------------------------
/gimel/cli.py:
--------------------------------------------------------------------------------
1 | import click
2 | import logging
3 | try:
4 | from gimel import logger
5 | from gimel.deploy import run, js_code_snippet, preflight_checks, dashboard_url
6 | from gimel.config import config, config_filename, generate_config
7 | except ImportError:
8 | import logger
9 | from deploy import run, js_code_snippet, preflight_checks, dashboard_url
10 | from config import config, config_filename, generate_config
11 |
12 | logger = logger.setup()
13 |
14 |
15 | @click.group()
16 | @click.option('--debug', is_flag=True)
17 | def cli(debug):
18 | if debug:
19 | logger.setLevel(logging.DEBUG)
20 |
21 |
22 | @cli.command()
23 | def preflight():
24 | logger.info('running preflight checks')
25 | preflight_checks()
26 |
27 |
28 | @cli.command()
29 | @click.option('--preflight/--no-preflight', default=True)
30 | def deploy(preflight):
31 | if preflight:
32 | logger.info('running preflight checks')
33 | if not preflight_checks():
34 | return
35 | logger.info('deploying')
36 | run()
37 | js_code_snippet()
38 |
39 |
40 | @cli.command()
41 | def configure():
42 | if not config:
43 | logger.info('generating new config {}'.format(config_filename))
44 | generate_config(config_filename)
45 | click.edit(filename=config_filename)
46 |
47 |
48 | @cli.command()
49 | @click.option('--namespace', default='alephbet')
50 | def dashboard(namespace):
51 | click.launch(dashboard_url(namespace))
52 |
53 |
54 | if __name__ == '__main__':
55 | cli()
56 |
--------------------------------------------------------------------------------
/gimel/config.json.template:
--------------------------------------------------------------------------------
1 | {
2 | "redis": {
3 | "host": "ENTER YOUR REDIS HOST",
4 | "port": 6379,
5 | "password": "..."
6 | }
7 | }
8 |
--------------------------------------------------------------------------------
/gimel/config.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | from os.path import expanduser, realpath
3 | import os
4 | import json
5 | try:
6 | from gimel import logger
7 | except ImportError:
8 | import logger
9 |
10 | logger = logger.setup()
11 |
12 |
13 | # NOTE: Copy config.json.template to config.json and edit with your settings
14 |
15 | def _load_config(config_filename):
16 | try:
17 | with open(config_filename) as config_file:
18 | logger.info('Using config {}'.format(config_filename))
19 | return config_file.name, json.load(config_file)
20 | except IOError:
21 | logger.debug('trying to load {} (not found)'.format(config_filename))
22 | return config_filename, {}
23 |
24 |
25 | def load_config():
26 | config_filenames = (realpath('config.json'),
27 | expanduser('~/.gimel/config.json'))
28 | for config_filename in config_filenames:
29 | name, content = _load_config(config_filename)
30 | if content:
31 | break
32 | return name, content
33 |
34 |
35 | def _create_file(config_filename):
36 | dirname = os.path.split(config_filename)[0]
37 | if not os.path.isdir(dirname):
38 | os.makedirs(dirname)
39 | with os.fdopen(os.open(config_filename,
40 | os.O_WRONLY | os.O_CREAT, 0o600), 'w'):
41 | pass
42 |
43 |
44 | def _config_template():
45 | from pkg_resources import resource_filename as resource
46 | return open(resource('gimel', 'config.json.template'), 'r').read()
47 |
48 |
49 | def generate_config(config_filename=None):
50 | if config_filename is None:
51 | config_filename = expanduser('~/.gimel/config.json')
52 | _create_file(config_filename)
53 |
54 | with open(config_filename, 'w') as config_file:
55 | config_file.write(_config_template())
56 | return config_filename
57 |
58 |
59 | config_filename, config = load_config()
60 |
--------------------------------------------------------------------------------
/gimel/deploy.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | from botocore.client import ClientError
3 | import os
4 | import redis
5 | from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED
6 | try:
7 | from gimel import logger
8 | from gimel.gimel import _redis
9 | from gimel.config import config
10 | from gimel.aws_api import iam, apigateway, aws_lambda, region, check_aws_credentials
11 | except ImportError:
12 | import logger
13 | from gimel import _redis
14 | from config import config
15 | from aws_api import iam, apigateway, aws_lambda, region, check_aws_credentials
16 |
17 |
18 | logger = logger.setup()
19 | LIVE = 'live'
20 | REVISIONS = 5
21 | TRACK_ENDPOINT = 'track'
22 | EXPERIMENTS_ENDPOINT = 'experiments'
23 | POLICY = """{
24 | "Version": "2012-10-17",
25 | "Statement": [
26 | {
27 | "Effect": "Allow",
28 | "Action": [
29 | "lambda:InvokeFunction"
30 | ],
31 | "Resource": [
32 | "*"
33 | ]
34 | },
35 | {
36 | "Effect": "Allow",
37 | "Action": [
38 | "kinesis:GetRecords",
39 | "kinesis:GetShardIterator",
40 | "kinesis:DescribeStream",
41 | "kinesis:ListStreams",
42 | "kinesis:PutRecord",
43 | "logs:CreateLogGroup",
44 | "logs:CreateLogStream",
45 | "logs:PutLogEvents"
46 | ],
47 | "Resource": "*"
48 | }
49 | ]
50 | }"""
51 | ASSUMED_ROLE_POLICY = """{
52 | "Version": "2012-10-17",
53 | "Statement": [
54 | {
55 | "Action": "sts:AssumeRole",
56 | "Effect": "Allow",
57 | "Principal": {
58 | "Service": "lambda.amazonaws.com"
59 | }
60 | },
61 | {
62 | "Action": "sts:AssumeRole",
63 | "Effect": "Allow",
64 | "Principal": {
65 | "Service": "apigateway.amazonaws.com"
66 | }
67 | }
68 | ]
69 | }"""
70 | # source: https://aws.amazon.com/blogs/compute/using-api-gateway-mapping-templates-to-handle-changes-in-your-back-end-apis/ # noqa
71 | REQUEST_TEMPLATE = {'application/json':
72 | """{
73 | #set($queryMap = $input.params().querystring)
74 | #foreach($key in $queryMap.keySet())
75 | "$key" : "$queryMap.get($key)"
76 | #if($foreach.hasNext),#end
77 | #end
78 | }
79 | """}
80 |
81 | WIRING = [
82 | {
83 | "lambda": {
84 | "FunctionName": "gimel-track",
85 | "Handler": "gimel.track",
86 | "MemorySize": 128,
87 | "Timeout": 3
88 | },
89 | "api_gateway": {
90 | "pathPart": TRACK_ENDPOINT,
91 | "method": {
92 | "httpMethod": "GET",
93 | "apiKeyRequired": False,
94 | "requestParameters": {
95 | "method.request.querystring.namespace": False,
96 | "method.request.querystring.experiment": False,
97 | "method.request.querystring.variant": False,
98 | "method.request.querystring.event": False,
99 | "method.request.querystring.uuid": False
100 | }
101 | }
102 | }
103 | },
104 | {
105 | "lambda": {
106 | "FunctionName": "gimel-all-experiments",
107 | "Handler": "gimel.all",
108 | "MemorySize": 128,
109 | "Timeout": 60
110 | },
111 | "api_gateway": {
112 | "pathPart": EXPERIMENTS_ENDPOINT,
113 | "method": {
114 | "httpMethod": "GET",
115 | "apiKeyRequired": True,
116 | "requestParameters": {
117 | "method.request.querystring.namespace": False,
118 | "method.request.querystring.scope": False
119 | }
120 | }
121 | }
122 | },
123 | {
124 | "lambda": {
125 | "FunctionName": "gimel-delete-experiment",
126 | "Handler": "gimel.delete",
127 | "MemorySize": 128,
128 | "Timeout": 30
129 | },
130 | "api_gateway": {
131 | "pathPart": "delete",
132 | "method": {
133 | "httpMethod": "DELETE",
134 | "apiKeyRequired": True,
135 | "requestParameters": {
136 | "method.request.querystring.namespace": False,
137 | "method.request.querystring.experiment": False,
138 | }
139 | }
140 | }
141 | }
142 | ]
143 |
144 |
145 | def prepare_zip():
146 | from pkg_resources import resource_filename as resource
147 | from json import dumps
148 | logger.info('creating/updating gimel.zip')
149 | with ZipFile('gimel.zip', 'w', ZIP_DEFLATED) as zipf:
150 | info = ZipInfo('config.json')
151 | info.external_attr = 0o664 << 16
152 | zipf.writestr(info, dumps(config))
153 | zipf.write(resource('gimel', 'config.py'), 'config.py')
154 | zipf.write(resource('gimel', 'gimel.py'), 'gimel.py')
155 | zipf.write(resource('gimel', 'logger.py'), 'logger.py')
156 | for root, dirs, files in os.walk(resource('gimel', 'vendor')):
157 | for file in files:
158 | real_file = os.path.join(root, file)
159 | relative_file = os.path.relpath(real_file,
160 | resource('gimel', ''))
161 | zipf.write(real_file, relative_file)
162 |
163 |
164 | def role():
165 | new_role = False
166 | try:
167 | logger.info('finding role')
168 | iam('get_role', RoleName='gimel')
169 | except ClientError:
170 | logger.info('role not found. creating')
171 | iam('create_role', RoleName='gimel',
172 | AssumeRolePolicyDocument=ASSUMED_ROLE_POLICY)
173 | new_role = True
174 |
175 | role_arn = iam('get_role', RoleName='gimel', query='Role.Arn')
176 | logger.debug('role_arn={}'.format(role_arn))
177 |
178 | logger.info('updating role policy')
179 |
180 | iam('put_role_policy', RoleName='gimel', PolicyName='gimel',
181 | PolicyDocument=POLICY)
182 |
183 | if new_role:
184 | from time import sleep
185 | logger.info('waiting for role policy propagation')
186 | sleep(5)
187 |
188 | return role_arn
189 |
190 |
191 | def _cleanup_old_versions(name):
192 | logger.info('cleaning up old versions of {0}. Keeping {1}'.format(
193 | name, REVISIONS))
194 | versions = _versions(name)
195 | for version in versions[0:(len(versions) - REVISIONS)]:
196 | logger.debug('deleting {} version {}'.format(name, version))
197 | aws_lambda('delete_function',
198 | FunctionName=name,
199 | Qualifier=version)
200 |
201 |
202 | def _function_alias(name, version, alias=LIVE):
203 | try:
204 | logger.info('creating function alias {0} for {1}:{2}'.format(
205 | alias, name, version))
206 | arn = aws_lambda('create_alias',
207 | FunctionName=name,
208 | FunctionVersion=version,
209 | Name=alias,
210 | query='AliasArn')
211 | except ClientError:
212 | logger.info('alias {0} exists. updating {0} -> {1}:{2}'.format(
213 | alias, name, version))
214 | arn = aws_lambda('update_alias',
215 | FunctionName=name,
216 | FunctionVersion=version,
217 | Name=alias,
218 | query='AliasArn')
219 | return arn
220 |
221 |
222 | def _versions(name):
223 | versions = aws_lambda('list_versions_by_function',
224 | FunctionName=name,
225 | query='Versions[].Version')
226 | return versions[1:]
227 |
228 |
229 | def _get_version(name, alias=LIVE):
230 | return aws_lambda('get_alias',
231 | FunctionName=name,
232 | Name=alias,
233 | query='FunctionVersion')
234 |
235 |
236 | def rollback_lambda(name, alias=LIVE):
237 | all_versions = _versions(name)
238 | live_version = _get_version(name, alias)
239 | try:
240 | live_index = all_versions.index(live_version)
241 | if live_index < 1:
242 | raise RuntimeError('Cannot find previous version')
243 | prev_version = all_versions[live_index - 1]
244 | logger.info('rolling back to version {}'.format(prev_version))
245 | _function_alias(name, prev_version)
246 | except RuntimeError as error:
247 | logger.error('Unable to rollback. {}'.format(repr(error)))
248 |
249 |
250 | def rollback(alias=LIVE):
251 | for lambda_function in ('gimel-track', 'gimel-all-experiments'):
252 | rollback_lambda(lambda_function, alias)
253 |
254 |
255 | def get_create_api():
256 | api_id = apigateway('get_rest_apis',
257 | query='items[?name==`gimel`] | [0].id')
258 | if not api_id:
259 | api_id = apigateway('create_rest_api', name='gimel',
260 | description='Gimel API', query='id')
261 | logger.debug("api_id={}".format(api_id))
262 | return api_id
263 |
264 |
265 | def get_api_key():
266 | return apigateway('get_api_keys',
267 | query='items[?name==`gimel`] | [0].id')
268 |
269 |
270 | def api_key(api_id):
271 | key = get_api_key()
272 | if key:
273 | apigateway('update_api_key', apiKey=key,
274 | patchOperations=[{'op': 'add', 'path': '/stages',
275 | 'value': '{}/prod'.format(api_id)}])
276 | else:
277 | key = apigateway('create_api_key', name='gimel', enabled=True,
278 | stageKeys=[{'restApiId': api_id, 'stageName': 'prod'}])
279 | return key
280 |
281 |
282 | def resource(api_id, path):
283 | resource_id = apigateway('get_resources', restApiId=api_id,
284 | query='items[?path==`/{}`].id | [0]'.format(path))
285 | if resource_id:
286 | return resource_id
287 | root_resource_id = apigateway('get_resources', restApiId=api_id,
288 | query='items[?path==`/`].id | [0]')
289 | resource_id = apigateway('create_resource', restApiId=api_id,
290 | parentId=root_resource_id,
291 | pathPart=path, query='id')
292 | return resource_id
293 |
294 |
295 | def function_uri(function_arn, region):
296 | uri = ('arn:aws:apigateway:{0}:lambda:path/2015-03-31/functions'
297 | '/{1}/invocations').format(region, function_arn)
298 | logger.debug("uri={0}".format(uri))
299 | return uri
300 |
301 |
302 | def _clear_method(api_id, resource_id, http_method):
303 | try:
304 | method = apigateway('get_method', restApiId=api_id,
305 | resourceId=resource_id,
306 | httpMethod=http_method)
307 | except ClientError:
308 | method = None
309 | if method:
310 | apigateway('delete_method', restApiId=api_id, resourceId=resource_id,
311 | httpMethod=http_method)
312 |
313 |
314 | def cors(api_id, resource_id):
315 | _clear_method(api_id, resource_id, 'OPTIONS')
316 | apigateway('put_method', restApiId=api_id, resourceId=resource_id,
317 | httpMethod='OPTIONS', authorizationType='NONE',
318 | apiKeyRequired=False)
319 | apigateway('put_integration', restApiId=api_id, resourceId=resource_id,
320 | httpMethod='OPTIONS', type='MOCK', integrationHttpMethod='POST',
321 | requestTemplates={'application/json': '{"statusCode": 200}'})
322 | apigateway('put_method_response', restApiId=api_id, resourceId=resource_id,
323 | httpMethod='OPTIONS', statusCode='200',
324 | responseParameters={
325 | "method.response.header.Access-Control-Allow-Origin": False,
326 | "method.response.header.Access-Control-Allow-Methods": False,
327 | "method.response.header.Access-Control-Allow-Headers": False},
328 | responseModels={'application/json': 'Empty'})
329 | apigateway('put_integration_response', restApiId=api_id,
330 | resourceId=resource_id, httpMethod='OPTIONS', statusCode='200',
331 | responseParameters={
332 | "method.response.header.Access-Control-Allow-Origin": "'*'",
333 | "method.response.header.Access-Control-Allow-Methods": "'GET,OPTIONS'",
334 | "method.response.header.Access-Control-Allow-Headers": "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"}, # noqa
335 | responseTemplates={'application/json': ''})
336 |
337 |
338 | def deploy_api(api_id):
339 | logger.info('deploying API')
340 | return apigateway('create_deployment', restApiId=api_id,
341 | description='gimel deployment',
342 | stageName='prod',
343 | stageDescription='gimel production',
344 | cacheClusterEnabled=False,
345 | query='id')
346 |
347 |
348 | def api_method(api_id, resource_id, role_arn, function_uri, wiring):
349 | http_method = wiring['method']['httpMethod']
350 | _clear_method(api_id, resource_id, http_method)
351 | apigateway('put_method', restApiId=api_id, resourceId=resource_id,
352 | authorizationType='NONE',
353 | **wiring['method'])
354 | apigateway('put_integration', restApiId=api_id, resourceId=resource_id,
355 | httpMethod=http_method, type='AWS', integrationHttpMethod='POST',
356 | credentials=role_arn,
357 | uri=function_uri,
358 | requestTemplates=REQUEST_TEMPLATE)
359 | apigateway('put_method_response', restApiId=api_id, resourceId=resource_id,
360 | httpMethod=http_method, statusCode='200',
361 | responseParameters={
362 | "method.response.header.Access-Control-Allow-Origin": False,
363 | "method.response.header.Pragma": False,
364 | "method.response.header.Cache-Control": False},
365 | responseModels={'application/json': 'Empty'})
366 | apigateway('put_integration_response', restApiId=api_id,
367 | resourceId=resource_id, httpMethod=http_method, statusCode='200',
368 | responseParameters={
369 | "method.response.header.Access-Control-Allow-Origin": "'*'",
370 | "method.response.header.Pragma": "'no-cache'",
371 | "method.response.header.Cache-Control": "'no-cache, no-store, must-revalidate'"},
372 | responseTemplates={'application/json': ''})
373 |
374 |
375 | def create_update_lambda(role_arn, wiring):
376 | name, handler, memory, timeout = (wiring[k] for k in ('FunctionName',
377 | 'Handler',
378 | 'MemorySize',
379 | 'Timeout'))
380 | try:
381 | logger.info('finding lambda function')
382 | function_arn = aws_lambda('get_function',
383 | FunctionName=name,
384 | query='Configuration.FunctionArn')
385 | except ClientError:
386 | function_arn = None
387 | if not function_arn:
388 | logger.info('creating new lambda function {}'.format(name))
389 | with open('gimel.zip', 'rb') as zf:
390 | function_arn, version = aws_lambda('create_function',
391 | FunctionName=name,
392 | Runtime='python3.8',
393 | Role=role_arn,
394 | Handler=handler,
395 | MemorySize=memory,
396 | Timeout=timeout,
397 | Publish=True,
398 | Code={'ZipFile': zf.read()},
399 | query='[FunctionArn, Version]')
400 | else:
401 | logger.info('updating lambda function {}'.format(name))
402 | aws_lambda('update_function_configuration',
403 | FunctionName=name,
404 | Runtime='python3.8',
405 | Role=role_arn,
406 | Handler=handler,
407 | MemorySize=memory,
408 | Timeout=timeout)
409 | with open('gimel.zip', 'rb') as zf:
410 | function_arn, version = aws_lambda('update_function_code',
411 | FunctionName=name,
412 | Publish=True,
413 | ZipFile=zf.read(),
414 | query='[FunctionArn, Version]')
415 | function_arn = _function_alias(name, version)
416 | _cleanup_old_versions(name)
417 | logger.debug('function_arn={} ; version={}'.format(function_arn, version))
418 | return function_arn
419 |
420 |
421 | def create_update_api(role_arn, function_arn, wiring):
422 | logger.info('creating or updating api /{}'.format(wiring['pathPart']))
423 | api_id = get_create_api()
424 | resource_id = resource(api_id, wiring['pathPart'])
425 | uri = function_uri(function_arn, region())
426 | api_method(api_id, resource_id, role_arn, uri, wiring)
427 | cors(api_id, resource_id)
428 |
429 |
430 | def js_code_snippet():
431 | api_id = get_create_api()
432 | api_region = region()
433 | endpoint = TRACK_ENDPOINT
434 | logger.info('AlephBet JS code snippet:')
435 | logger.info(
436 | """
437 |
438 |
439 |
440 |
441 |
467 | """ % locals()
468 | )
469 |
470 |
471 | def dashboard_url(namespace='alephbet'):
472 | api_id = get_create_api()
473 | api_region = region()
474 | endpoint = EXPERIMENTS_ENDPOINT
475 | experiments_url = 'https://{}.execute-api.{}.amazonaws.com/prod/{}'.format(
476 | api_id, api_region, endpoint)
477 | return ('https://codepen.io/anon/pen/LOGGZj/?experiment_url={}'
478 | '&api_key={}&namespace={}').format(experiments_url,
479 | get_api_key(),
480 | namespace)
481 |
482 |
483 | def preflight_checks():
484 | logger.info('checking aws credentials and region')
485 | if region() is None:
486 | logger.error('Region is not set up. please run aws configure')
487 | return False
488 | try:
489 | check_aws_credentials()
490 | except AttributeError:
491 | logger.error('AWS credentials not found. please run aws configure')
492 | return False
493 | logger.info('testing redis')
494 | try:
495 | _redis().ping()
496 | except redis.exceptions.ConnectionError:
497 | logger.error('Redis ping failed. Please run gimel configure')
498 | return False
499 | return True
500 |
501 |
502 | def run():
503 | prepare_zip()
504 | api_id = get_create_api()
505 | role_arn = role()
506 | for component in WIRING + config.get("extra_wiring", []):
507 | function_arn = create_update_lambda(role_arn, component['lambda'])
508 | create_update_api(role_arn, function_arn, component['api_gateway'])
509 | deploy_api(api_id)
510 | api_key(api_id)
511 |
512 |
513 | if __name__ == '__main__':
514 | try:
515 | preflight_checks()
516 | run()
517 | js_code_snippet()
518 | except Exception:
519 | logger.error('preflight checks failed')
520 |
--------------------------------------------------------------------------------
/gimel/gimel.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import sys
3 | sys.path.insert(0, './vendor')  # make the vendored redis library importable on AWS Lambda
4 | import redis
5 | try:
6 | from gimel.config import config
7 | except ImportError:
8 | from config import config
9 |
10 |
11 | def _redis():
12 | redis_config = config['redis']
13 | redis_config["charset"] = "utf-8"
14 | redis_config["decode_responses"] = True
15 | return redis.Redis(**redis_config)
16 |
17 |
18 | def _counter_key(namespace, experiment, goal, variant):
19 | return '{0}:counters:{1}:{2}:{3}'.format(
20 | namespace,
21 | experiment,
22 | goal,
23 | variant)
24 |
25 |
26 | def _results_dict(namespace, experiment):
27 | """ returns a dict in the following format:
28 | {namespace.counters.experiment.goal.variant: count}
29 | """
30 | r = _redis()
31 | keys = r.smembers("{0}:{1}:counter_keys".format(namespace, experiment))
32 | pipe = r.pipeline()
33 | for key in keys:
34 | pipe.pfcount(key)
35 | values = pipe.execute()
36 | return dict(zip(keys, values))
37 |
38 |
39 | def _experiment_goals(namespace, experiment):
40 | raw_results = _results_dict(namespace, experiment)
41 | variants = set([x.split(':')[-1] for x in raw_results.keys()])
42 | goals = set([x.split(':')[-2] for x in raw_results.keys()])
43 | goals.discard('participate')
44 | goal_results = []
45 | for goal in goals:
46 | goal_data = {'goal': goal, 'results': []}
47 | for variant in variants:
48 | trials = raw_results.get(
49 | _counter_key(namespace, experiment, 'participate', variant), 0)
50 | successes = raw_results.get(
51 | _counter_key(namespace, experiment, goal, variant), 0)
52 | goal_data['results'].append(
53 | {'label': variant,
54 | 'successes': successes,
55 | 'trials': trials})
56 | goal_results.append(goal_data)
57 | return goal_results
58 |
59 |
60 | def experiment(event, context):
61 | """ retrieves a single experiment results from redis
62 | params:
63 | - experiment - name of the experiment
64 | - namespace (optional)
65 | """
66 | experiment = event['experiment']
67 | namespace = event.get('namespace', 'alephbet')
68 | return _experiment_goals(namespace, experiment)
69 |
70 |
71 | def all(event, context):
72 | """ retrieves all experiment results from redis
73 | params:
74 | - namespace (optional)
75 | - scope (optional, comma-separated list of experiments)
76 | """
77 | r = _redis()
78 | namespace = event.get('namespace', 'alephbet')
79 | scope = event.get('scope')
80 | if scope:
81 | experiments = scope.split(',')
82 | else:
83 | experiments = r.smembers("{0}:experiments".format(namespace))
84 | results = []
85 | results.append({'meta': {'scope': scope}})
86 | for ex in experiments:
87 | goals = experiment({'experiment': ex, 'namespace': namespace}, context)
88 | results.append({'experiment': ex, 'goals': goals})
89 | return results
90 |
91 |
92 | def track(event, context):
93 | """ tracks an alephbet event (participate, goal etc)
94 | params:
95 | - experiment - name of the experiment
96 | - uuid - a unique id for the event
97 | - variant - the name of the variant
98 | - event - either the goal name or 'participate'
99 | - namespace (optional)
100 | """
101 | experiment = event['experiment']
102 | namespace = event.get('namespace', 'alephbet')
103 | uuid = event['uuid']
104 | variant = event['variant']
105 | tracking_event = event['event']
106 |
107 | r = _redis()
108 | pipe = r.pipeline()
109 | key = '{0}:counters:{1}:{2}:{3}'.format(
110 | namespace, experiment, tracking_event, variant)
111 | pipe.sadd('{0}:experiments'.format(namespace), experiment)
112 | pipe.sadd('{0}:counter_keys'.format(namespace), key)
113 | pipe.sadd('{0}:{1}:counter_keys'.format(namespace, experiment), key)
114 | pipe.pfadd(key, uuid)
115 | pipe.execute()
116 |
117 |
118 | def delete(event, context):
119 | """ delete an experiment
120 | params:
121 | - experiment - name of the experiment
122 | - namespace
123 | """
124 |
125 | r = _redis()
126 | namespace = event.get('namespace', 'alephbet')
127 | experiment = event['experiment']
128 | experiments_set_key = '{0}:experiments'.format(namespace)
129 | experiment_counters_set_key = '{0}:{1}:counter_keys'.format(namespace, experiment)
130 | all_counters_set_key = '{0}:counter_keys'.format(namespace)
131 |
132 | if r.sismember(experiments_set_key, experiment):
133 | counter_keys = r.smembers(
134 | experiment_counters_set_key
135 | )
136 | pipe = r.pipeline()
137 | for key in counter_keys:
138 | pipe.srem(all_counters_set_key, key)
139 | pipe.delete(key)
140 | pipe.delete(
141 | experiment_counters_set_key
142 | )
143 | pipe.srem(
144 | experiments_set_key,
145 | experiment
146 | )
147 | pipe.execute()
148 |
--------------------------------------------------------------------------------
/gimel/logger.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 |
4 | class ColorFormatter(logging.Formatter):
5 | colors = {
6 | 'error': dict(fg='red'),
7 | 'exception': dict(fg='red'),
8 | 'critical': dict(fg='red'),
9 | 'debug': dict(fg='blue'),
10 | 'warning': dict(fg='yellow'),
11 | 'info': dict(fg='green')
12 | }
13 |
14 | def format(self, record):
15 | import click
16 | s = super(ColorFormatter, self).format(record)
17 | if not record.exc_info:
18 | level = record.levelname.lower()
19 | if level in self.colors:
20 | s = click.style(s, **self.colors[level])
21 | return s
22 |
23 |
24 | class CustomFormatter(logging.Formatter):
25 | def format(self, record):
26 | s = super(CustomFormatter, self).format(record)
27 | if record.levelno == logging.ERROR:
28 | s = s.replace('[.]', '[x]')
29 | return s
30 |
31 |
32 | def setup(name=__name__, level=logging.INFO):
33 | logger = logging.getLogger(name)
34 | if logger.handlers:
35 | return logger
36 | logger.setLevel(level)
37 | try:
38 | # check if click exists to swap the logger
39 | import click # noqa
40 | formatter = ColorFormatter('[.] %(message)s')
41 | except ImportError:
42 | formatter = CustomFormatter('[.] %(message)s')
43 | handler = logging.StreamHandler()
44 | handler.setFormatter(formatter)
45 | logger.addHandler(handler)
46 | 
47 | return logger
48 |
--------------------------------------------------------------------------------
/gimel/vendor/redis-2.10.5.dist-info/DESCRIPTION.rst:
--------------------------------------------------------------------------------
1 | redis-py
2 | ========
3 |
4 | The Python interface to the Redis key-value store.
5 |
6 | .. image:: https://secure.travis-ci.org/andymccurdy/redis-py.png?branch=master
7 | :target: http://travis-ci.org/andymccurdy/redis-py
8 |
9 | Installation
10 | ------------
11 |
12 | redis-py requires a running Redis server. See `Redis's quickstart
13 | <http://redis.io/topics/quickstart>`_ for installation instructions.
14 |
15 | To install redis-py, simply:
16 |
17 | .. code-block:: bash
18 |
19 | $ sudo pip install redis
20 |
21 | or alternatively (you really should be using pip though):
22 |
23 | .. code-block:: bash
24 |
25 | $ sudo easy_install redis
26 |
27 | or from source:
28 |
29 | .. code-block:: bash
30 |
31 | $ sudo python setup.py install
32 |
33 |
34 | Getting Started
35 | ---------------
36 |
37 | .. code-block:: pycon
38 |
39 | >>> import redis
40 | >>> r = redis.StrictRedis(host='localhost', port=6379, db=0)
41 | >>> r.set('foo', 'bar')
42 | True
43 | >>> r.get('foo')
44 | 'bar'
45 |
46 | API Reference
47 | -------------
48 |
49 | The `official Redis command documentation <http://redis.io/commands>`_ does a
50 | great job of explaining each command in detail. redis-py exposes two client
51 | classes that implement these commands. The StrictRedis class attempts to adhere
52 | to the official command syntax. There are a few exceptions:
53 |
54 | * **SELECT**: Not implemented. See the explanation in the Thread Safety section
55 | below.
56 | * **DEL**: 'del' is a reserved keyword in the Python syntax. Therefore redis-py
57 | uses 'delete' instead.
58 | * **CONFIG GET|SET**: These are implemented separately as config_get or config_set.
59 | * **MULTI/EXEC**: These are implemented as part of the Pipeline class. The
60 | pipeline is wrapped with the MULTI and EXEC statements by default when it
61 | is executed, which can be disabled by specifying transaction=False.
62 | See more about Pipelines below.
63 | * **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as a separate
64 | class as it places the underlying connection in a state where it can't
65 | execute non-pubsub commands. Calling the pubsub method from the Redis client
66 | will return a PubSub instance where you can subscribe to channels and listen
67 | for messages. You can only call PUBLISH from the Redis client (see
68 | `this comment on issue #151
69 | `_
70 | for details).
71 | * **SCAN/SSCAN/HSCAN/ZSCAN**: The \*SCAN commands are implemented as they
72 | exist in the Redis documentation. In addition, each command has an equivalent
73 | iterator method. These are purely for convenience so the user doesn't have
74 | to keep track of the cursor while iterating. Use the
75 | scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this behavior.
76 |
77 | In addition to the changes above, the Redis class, a subclass of StrictRedis,
78 | overrides several other commands to provide backwards compatibility with older
79 | versions of redis-py:
80 |
81 | * **LREM**: Order of 'num' and 'value' arguments reversed such that 'num' can
82 | provide a default value of zero.
83 | * **ZADD**: Redis specifies the 'score' argument before 'value'. These were swapped
84 | accidentally when being implemented and not discovered until after people
85 | were already using it. The Redis class expects \*args in the form of:
86 | `name1, score1, name2, score2, ...`
87 | * **SETEX**: Order of 'time' and 'value' arguments reversed.
88 |
89 |
90 | More Detail
91 | -----------
92 |
93 | Connection Pools
94 | ^^^^^^^^^^^^^^^^
95 |
96 | Behind the scenes, redis-py uses a connection pool to manage connections to
97 | a Redis server. By default, each Redis instance you create will in turn create
98 | its own connection pool. You can override this behavior and use an existing
99 | connection pool by passing an already created connection pool instance to the
100 | connection_pool argument of the Redis class. You may choose to do this in order
101 | to implement client side sharding or have finer grain control of how
102 | connections are managed.
103 |
104 | .. code-block:: pycon
105 |
106 | >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
107 | >>> r = redis.Redis(connection_pool=pool)
108 |
109 | Connections
110 | ^^^^^^^^^^^
111 |
112 | ConnectionPools manage a set of Connection instances. redis-py ships with two
113 | types of Connections. The default, Connection, is a normal TCP socket based
114 | connection. The UnixDomainSocketConnection allows for clients running on the
115 | same device as the server to connect via a unix domain socket. To use a
116 | UnixDomainSocketConnection connection, simply pass the unix_socket_path
117 | argument, which is a string to the unix domain socket file. Additionally, make
118 | sure the unixsocket parameter is defined in your redis.conf file. It's
119 | commented out by default.
120 |
121 | .. code-block:: pycon
122 |
123 | >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock')
124 |
125 | You can create your own Connection subclasses as well. This may be useful if
126 | you want to control the socket behavior within an async framework. To
127 | instantiate a client class using your own connection, you need to create
128 | a connection pool, passing your class to the connection_class argument.
129 | Other keyword parameters you pass to the pool will be passed to the class
130 | specified during initialization.
131 |
132 | .. code-block:: pycon
133 |
134 | >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass,
135 | your_arg='...', ...)
136 |
137 | Parsers
138 | ^^^^^^^
139 |
140 | Parser classes provide a way to control how responses from the Redis server
141 | are parsed. redis-py ships with two parser classes, the PythonParser and the
142 | HiredisParser. By default, redis-py will attempt to use the HiredisParser if
143 | you have the hiredis module installed and will fallback to the PythonParser
144 | otherwise.
145 |
146 | Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was
147 | kind enough to create Python bindings. Using Hiredis can provide up to a
148 | 10x speed improvement in parsing responses from the Redis server. The
149 | performance increase is most noticeable when retrieving many pieces of data,
150 | such as from LRANGE or SMEMBERS operations.
151 |
152 | Hiredis is available on PyPI, and can be installed via pip or easy_install
153 | just like redis-py.
154 |
155 | .. code-block:: bash
156 |
157 | $ pip install hiredis
158 |
159 | or
160 |
161 | .. code-block:: bash
162 |
163 | $ easy_install hiredis
164 |
165 | Response Callbacks
166 | ^^^^^^^^^^^^^^^^^^
167 |
168 | The client class uses a set of callbacks to cast Redis responses to the
169 | appropriate Python type. There are a number of these callbacks defined on
170 | the Redis client class in a dictionary called RESPONSE_CALLBACKS.
171 |
172 | Custom callbacks can be added on a per-instance basis using the
173 | set_response_callback method. This method accepts two arguments: a command
174 | name and the callback. Callbacks added in this manner are only valid on the
175 | instance the callback is added to. If you want to define or override a callback
176 | globally, you should make a subclass of the Redis client and add your callback
177 | to its RESPONSE_CALLBACKS class dictionary.
178 |
179 | Response callbacks take at least one parameter: the response from the Redis
180 | server. Keyword arguments may also be accepted in order to further control
181 | how to interpret the response. These keyword arguments are specified during the
182 | command's call to execute_command. The ZRANGE implementation demonstrates the
183 | use of response callback keyword arguments with its "withscores" argument.
184 |
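For instance, a callback can be registered on a single instance like so (a toy
sketch -- ``int`` simply casts the raw reply):

.. code-block:: pycon

    >>> r = redis.StrictRedis()
    >>> r.set_response_callback('GET', int)
    >>> r.set('counter', '1')
    True
    >>> r.get('counter')
    1
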
185 | Thread Safety
186 | ^^^^^^^^^^^^^
187 |
188 | Redis client instances can safely be shared between threads. Internally,
189 | connection instances are only retrieved from the connection pool during
190 | command execution, and returned to the pool directly after. Command execution
191 | never modifies state on the client instance.
192 |
193 | However, there is one caveat: the Redis SELECT command. The SELECT command
194 | allows you to switch the database currently in use by the connection. That
195 | database remains selected until another is selected or until the connection is
196 | closed. This creates an issue in that connections could be returned to the pool
197 | that are connected to a different database.
198 |
199 | As a result, redis-py does not implement the SELECT command on client
200 | instances. If you use multiple Redis databases within the same application, you
201 | should create a separate client instance (and possibly a separate connection
202 | pool) for each database.
203 |
204 | It is not safe to pass PubSub or Pipeline objects between threads.
205 |
206 | Pipelines
207 | ^^^^^^^^^
208 |
209 | Pipelines are a subclass of the base Redis class that provide support for
210 | buffering multiple commands to the server in a single request. They can be used
211 | to dramatically increase the performance of groups of commands by reducing the
212 | number of back-and-forth TCP packets between the client and server.
213 |
214 | Pipelines are quite simple to use:
215 |
216 | .. code-block:: pycon
217 |
218 | >>> r = redis.Redis(...)
219 | >>> r.set('bing', 'baz')
220 | >>> # Use the pipeline() method to create a pipeline instance
221 | >>> pipe = r.pipeline()
222 | >>> # The following SET commands are buffered
223 | >>> pipe.set('foo', 'bar')
224 | >>> pipe.get('bing')
225 | >>> # the EXECUTE call sends all buffered commands to the server, returning
226 | >>> # a list of responses, one for each command.
227 | >>> pipe.execute()
228 | [True, 'baz']
229 |
230 | For ease of use, all commands being buffered into the pipeline return the
231 | pipeline object itself. Therefore calls can be chained like:
232 |
233 | .. code-block:: pycon
234 |
235 | >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute()
236 | [True, True, 6]
237 |
238 | In addition, pipelines can also ensure the buffered commands are executed
239 | atomically as a group. This happens by default. If you want to disable the
240 | atomic nature of a pipeline but still want to buffer commands, you can turn
241 | off transactions.
242 |
243 | .. code-block:: pycon
244 |
245 | >>> pipe = r.pipeline(transaction=False)
246 |
247 | A common issue occurs when requiring atomic transactions but needing to
248 | retrieve values in Redis prior for use within the transaction. For instance,
249 | let's assume that the INCR command didn't exist and we need to build an atomic
250 | version of INCR in Python.
251 |
252 | The completely naive implementation could GET the value, increment it in
253 | Python, and SET the new value back. However, this is not atomic because
254 | multiple clients could be doing this at the same time, each getting the same
255 | value from GET.
256 |
257 | Enter the WATCH command. WATCH provides the ability to monitor one or more keys
258 | prior to starting a transaction. If any of those keys change prior the
259 | execution of that transaction, the entire transaction will be canceled and a
260 | WatchError will be raised. To implement our own client-side INCR command, we
261 | could do something like this:
262 |
263 | .. code-block:: pycon
264 |
265 | >>> with r.pipeline() as pipe:
266 | ... while 1:
267 | ... try:
268 | ... # put a WATCH on the key that holds our sequence value
269 | ... pipe.watch('OUR-SEQUENCE-KEY')
270 | ... # after WATCHing, the pipeline is put into immediate execution
271 | ... # mode until we tell it to start buffering commands again.
272 | ... # this allows us to get the current value of our sequence
273 | ... current_value = pipe.get('OUR-SEQUENCE-KEY')
274 | ... next_value = int(current_value) + 1
275 | ... # now we can put the pipeline back into buffered mode with MULTI
276 | ... pipe.multi()
277 | ... pipe.set('OUR-SEQUENCE-KEY', next_value)
278 | ... # and finally, execute the pipeline (the set command)
279 | ... pipe.execute()
280 | ... # if a WatchError wasn't raised during execution, everything
281 | ... # we just did happened atomically.
282 | ... break
283 | ... except WatchError:
284 | ... # another client must have changed 'OUR-SEQUENCE-KEY' between
285 | ... # the time we started WATCHing it and the pipeline's execution.
286 | ... # our best bet is to just retry.
287 | ... continue
288 |
289 | Note that, because the Pipeline must bind to a single connection for the
290 | duration of a WATCH, care must be taken to ensure that the connection is
291 | returned to the connection pool by calling the reset() method. If the
292 | Pipeline is used as a context manager (as in the example above) reset()
293 | will be called automatically. Of course you can do this the manual way by
294 | explicitly calling reset():
295 |
296 | .. code-block:: pycon
297 |
298 | >>> pipe = r.pipeline()
299 | >>> while 1:
300 | ... try:
301 | ... pipe.watch('OUR-SEQUENCE-KEY')
302 | ... ...
303 | ... pipe.execute()
304 | ... break
305 | ... except WatchError:
306 | ... continue
307 | ... finally:
308 | ... pipe.reset()
309 |
310 | A convenience method named "transaction" exists for handling all the
311 | boilerplate of handling and retrying watch errors. It takes a callable that
312 | should expect a single parameter, a pipeline object, and any number of keys to
313 | be WATCHed. Our client-side INCR command above can be written like this,
314 | which is much easier to read:
315 |
316 | .. code-block:: pycon
317 |
318 | >>> def client_side_incr(pipe):
319 | ... current_value = pipe.get('OUR-SEQUENCE-KEY')
320 | ... next_value = int(current_value) + 1
321 | ... pipe.multi()
322 | ... pipe.set('OUR-SEQUENCE-KEY', next_value)
323 | >>>
324 | >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
325 | [True]
326 |
327 | Publish / Subscribe
328 | ^^^^^^^^^^^^^^^^^^^
329 |
330 | redis-py includes a `PubSub` object that subscribes to channels and listens
331 | for new messages. Creating a `PubSub` object is easy.
332 |
333 | .. code-block:: pycon
334 |
335 | >>> r = redis.StrictRedis(...)
336 | >>> p = r.pubsub()
337 |
338 | Once a `PubSub` instance is created, channels and patterns can be subscribed
339 | to.
340 |
341 | .. code-block:: pycon
342 |
343 | >>> p.subscribe('my-first-channel', 'my-second-channel', ...)
344 | >>> p.psubscribe('my-*', ...)
345 |
346 | The `PubSub` instance is now subscribed to those channels/patterns. The
347 | subscription confirmations can be seen by reading messages from the `PubSub`
348 | instance.
349 |
350 | .. code-block:: pycon
351 |
352 | >>> p.get_message()
353 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-second-channel', 'data': 1L}
354 | >>> p.get_message()
355 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-first-channel', 'data': 2L}
356 | >>> p.get_message()
357 | {'pattern': None, 'type': 'psubscribe', 'channel': 'my-*', 'data': 3L}
358 |
359 | Every message read from a `PubSub` instance will be a dictionary with the
360 | following keys.
361 |
362 | * **type**: One of the following: 'subscribe', 'unsubscribe', 'psubscribe',
363 | 'punsubscribe', 'message', 'pmessage'
364 | * **channel**: The channel [un]subscribed to or the channel a message was
365 | published to
366 | * **pattern**: The pattern that matched a published message's channel. Will be
367 | `None` in all cases except for 'pmessage' types.
368 | * **data**: The message data. With [un]subscribe messages, this value will be
369 | the number of channels and patterns the connection is currently subscribed
370 | to. With [p]message messages, this value will be the actual published
371 | message.
372 |
373 | Let's send a message now.
374 |
375 | .. code-block:: pycon
376 |
377 | # the publish method returns the number of matching channel and pattern
378 | # subscriptions. 'my-first-channel' matches both the 'my-first-channel'
379 | # subscription and the 'my-*' pattern subscription, so this message will
380 | # be delivered to 2 channels/patterns
381 | >>> r.publish('my-first-channel', 'some data')
382 | 2
383 | >>> p.get_message()
384 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'}
385 | >>> p.get_message()
386 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': 'my-*', 'type': 'pmessage'}
387 |
388 | Unsubscribing works just like subscribing. If no arguments are passed to
389 | [p]unsubscribe, all channels or patterns will be unsubscribed from.
390 |
391 | .. code-block:: pycon
392 |
393 | >>> p.unsubscribe()
394 | >>> p.punsubscribe('my-*')
395 | >>> p.get_message()
396 | {'channel': 'my-second-channel', 'data': 2L, 'pattern': None, 'type': 'unsubscribe'}
397 | >>> p.get_message()
398 | {'channel': 'my-first-channel', 'data': 1L, 'pattern': None, 'type': 'unsubscribe'}
399 | >>> p.get_message()
400 | {'channel': 'my-*', 'data': 0L, 'pattern': None, 'type': 'punsubscribe'}
401 |
402 | redis-py also allows you to register callback functions to handle published
403 | messages. Message handlers take a single argument, the message, which is a
404 | dictionary just like the examples above. To subscribe to a channel or pattern
405 | with a message handler, pass the channel or pattern name as a keyword argument
406 | with its value being the callback function.
407 |
408 | When a message is read on a channel or pattern with a message handler, the
409 | message dictionary is created and passed to the message handler. In this case,
410 | a `None` value is returned from get_message() since the message was already
411 | handled.
412 |
413 | .. code-block:: pycon
414 |
415 | >>> def my_handler(message):
416 | ... print 'MY HANDLER: ', message['data']
417 | >>> p.subscribe(**{'my-channel': my_handler})
418 | # read the subscribe confirmation message
419 | >>> p.get_message()
420 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-channel', 'data': 1L}
421 | >>> r.publish('my-channel', 'awesome data')
422 | 1
423 |     # for the message handler to work, we need to tell the instance to read data.
424 | # this can be done in several ways (read more below). we'll just use
425 | # the familiar get_message() function for now
426 | >>> message = p.get_message()
427 | MY HANDLER: awesome data
428 | # note here that the my_handler callback printed the string above.
429 | # `message` is None because the message was handled by our handler.
430 | >>> print message
431 | None
432 |
433 | If your application is not interested in the (sometimes noisy)
434 | subscribe/unsubscribe confirmation messages, you can ignore them by passing
435 | `ignore_subscribe_messages=True` to `r.pubsub()`. This will cause all
436 | subscribe/unsubscribe messages to be read, but they won't bubble up to your
437 | application.
438 |
439 | .. code-block:: pycon
440 |
441 | >>> p = r.pubsub(ignore_subscribe_messages=True)
442 | >>> p.subscribe('my-channel')
443 | >>> p.get_message() # hides the subscribe message and returns None
444 |     >>> r.publish('my-channel', 'my data')
445 | 1
446 | >>> p.get_message()
447 |     {'channel': 'my-channel', 'data': 'my data', 'pattern': None, 'type': 'message'}
448 |
449 | There are three different strategies for reading messages.
450 |
451 | The examples above have been using `pubsub.get_message()`. Behind the scenes,
452 | `get_message()` uses the system's 'select' module to quickly poll the
453 | connection's socket. If there's data available to be read, `get_message()` will
454 | read it, format the message and return it or pass it to a message handler. If
455 | there's no data to be read, `get_message()` will immediately return None. This
456 | makes it trivial to integrate into an existing event loop inside your
457 | application.
458 |
459 | .. code-block:: pycon
460 |
461 |     >>> while True:
462 |     ...     message = p.get_message()
463 |     ...     if message:
464 |     ...         print 'received:', message  # do something with the message
465 |     ...     time.sleep(0.001)  # be nice to the system :)
466 |
467 | Older versions of redis-py only read messages with `pubsub.listen()`. listen()
468 | is a generator that blocks until a message is available. If your application
469 | doesn't need to do anything else but receive and act on messages from
470 | redis, listen() is an easy way to get up and running.
471 |
472 | .. code-block:: pycon
473 |
474 | >>> for message in p.listen():
475 |     ...     print message  # do something with the message
476 |
477 | The third option runs an event loop in a separate thread.
478 | `pubsub.run_in_thread()` creates a new thread and starts the event loop. The
479 | thread object is returned to the caller of `run_in_thread()`. The caller can
480 | use the `thread.stop()` method to shut down the event loop and thread. Behind
481 | the scenes, this is simply a wrapper around `get_message()` that runs in a
482 | separate thread, essentially creating a tiny non-blocking event loop for you.
483 | `run_in_thread()` takes an optional `sleep_time` argument. If specified, the
484 | event loop will call `time.sleep()` with the value in each iteration of the
485 | loop.
486 |
487 | Note: Since we're running in a separate thread, there's no way to handle
488 | messages that aren't automatically handled with registered message handlers.
489 | Therefore, redis-py prevents you from calling `run_in_thread()` if you're
490 | subscribed to patterns or channels that don't have message handlers attached.
491 |
492 | .. code-block:: pycon
493 |
494 | >>> p.subscribe(**{'my-channel': my_handler})
495 | >>> thread = p.run_in_thread(sleep_time=0.001)
496 | # the event loop is now running in the background processing messages
497 | # when it's time to shut it down...
498 | >>> thread.stop()
499 |
500 | A PubSub object adheres to the same encoding semantics as the client instance
501 | it was created from. Any channel or pattern that's unicode will be encoded
502 | using the `charset` specified on the client before being sent to Redis. If the
503 | client's `decode_responses` flag is set to False (the default), the
504 | 'channel', 'pattern' and 'data' values in message dictionaries will be byte
505 | strings (str on Python 2, bytes on Python 3). If the client's
506 | `decode_responses` is True, then the 'channel', 'pattern' and 'data' values
507 | will be automatically decoded to unicode strings using the client's `charset`.
508 |
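For illustration, here is a minimal sketch (assuming a Redis server running on
localhost and a Python 2 session) of the effect of `decode_responses=True` on
message dictionaries:

.. code-block:: pycon

    # sketch: assumes a locally running Redis server
    >>> r = redis.StrictRedis(decode_responses=True)
    >>> p = r.pubsub(ignore_subscribe_messages=True)
    >>> p.subscribe('my-channel')
    >>> r.publish('my-channel', 'my data')
    1
    >>> p.get_message(timeout=1)['data']
    u'my data'
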
509 | PubSub objects remember what channels and patterns they are subscribed to. In
510 | the event of a disconnection such as a network error or timeout, the
511 | PubSub object will re-subscribe to all prior channels and patterns when
512 | reconnecting. Messages that were published while the client was disconnected
513 | cannot be delivered. When you're finished with a PubSub object, call its
514 | `.close()` method to shut down the connection.
515 |
516 | .. code-block:: pycon
517 |
518 | >>> p = r.pubsub()
519 | >>> ...
520 | >>> p.close()
521 |
522 | LUA Scripting
523 | ^^^^^^^^^^^^^
524 |
525 | redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are
526 | a number of edge cases that make these commands tedious to use in real world
527 | scenarios. Therefore, redis-py exposes a Script object that makes scripting
528 | much easier to use.
529 |
530 | To create a Script instance, use the `register_script` function on a client
531 | instance passing the LUA code as the first argument. `register_script` returns
532 | a Script instance that you can use throughout your code.
533 |
534 | The following trivial LUA script accepts two parameters: the name of a key and
535 | a multiplier value. The script fetches the value stored in the key, multiplies
536 | it with the multiplier value and returns the result.
537 |
538 | .. code-block:: pycon
539 |
540 | >>> r = redis.StrictRedis()
541 | >>> lua = """
542 | ... local value = redis.call('GET', KEYS[1])
543 | ... value = tonumber(value)
544 | ... return value * ARGV[1]"""
545 | >>> multiply = r.register_script(lua)
546 |
547 | `multiply` is now a Script instance that is invoked by calling it like a
548 | function. Script instances accept the following optional arguments:
549 |
550 | * **keys**: A list of key names that the script will access. This becomes the
551 | KEYS list in LUA.
552 | * **args**: A list of argument values. This becomes the ARGV list in LUA.
553 | * **client**: A redis-py Client or Pipeline instance that will invoke the
554 |                script. If client isn't specified, the client that initially
555 | created the Script instance (the one that `register_script` was
556 | invoked from) will be used.
557 |
558 | Continuing the example from above:
559 |
560 | .. code-block:: pycon
561 |
562 | >>> r.set('foo', 2)
563 | >>> multiply(keys=['foo'], args=[5])
564 | 10
565 |
566 | The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is
567 | passed to the script along with the multiplier value of 5. LUA executes the
568 | script and returns the result, 10.
569 |
570 | Script instances can be executed using a different client instance, even one
571 | that points to a completely different Redis server.
572 |
573 | .. code-block:: pycon
574 |
575 | >>> r2 = redis.StrictRedis('redis2.example.com')
576 | >>> r2.set('foo', 3)
577 | >>> multiply(keys=['foo'], args=[5], client=r2)
578 | 15
579 |
580 | The Script object ensures that the LUA script is loaded into Redis's script
581 | cache. In the event of a NOSCRIPT error, it will load the script and retry
582 | executing it.
583 |
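For example, flushing the script cache forces exactly this reload-and-retry
path (a sketch reusing `r` and the `multiply` script from above):

.. code-block:: pycon

    # sketch: empty the server's script cache so the next call hits NOSCRIPT
    >>> r.script_flush()
    >>> multiply(keys=['foo'], args=[5])
    10
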
584 | Script objects can also be used in pipelines. The pipeline instance should be
585 | passed as the client argument when calling the script. Care is taken to ensure
586 | that the script is registered in Redis's script cache just prior to pipeline
587 | execution.
588 |
589 | .. code-block:: pycon
590 |
591 | >>> pipe = r.pipeline()
592 | >>> pipe.set('foo', 5)
593 | >>> multiply(keys=['foo'], args=[5], client=pipe)
594 | >>> pipe.execute()
595 | [True, 25]
596 |
597 | Sentinel support
598 | ^^^^^^^^^^^^^^^^
599 |
600 | redis-py can be used together with `Redis Sentinel <http://redis.io/topics/sentinel>`_
601 | to discover Redis nodes. You need to have at least one Sentinel daemon running
602 | in order to use redis-py's Sentinel support.
603 |
604 | Connecting redis-py to the Sentinel instance(s) is easy. You can use a
605 | Sentinel connection to discover the network addresses of the master and the slaves:
606 |
607 | .. code-block:: pycon
608 |
609 | >>> from redis.sentinel import Sentinel
610 | >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
611 | >>> sentinel.discover_master('mymaster')
612 | ('127.0.0.1', 6379)
613 | >>> sentinel.discover_slaves('mymaster')
614 | [('127.0.0.1', 6380)]
615 |
616 | You can also create Redis client connections from a Sentinel instance. You can
617 | connect to either the master (for write operations) or a slave (for read-only
618 | operations).
619 |
620 | .. code-block:: pycon
621 |
622 | >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
623 | >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
624 | >>> master.set('foo', 'bar')
625 | >>> slave.get('foo')
626 | 'bar'
627 |
628 | The master and slave objects are normal StrictRedis instances with their
629 | connection pool bound to the Sentinel instance. When a Sentinel backed client
630 | attempts to establish a connection, it first queries the Sentinel servers to
631 | determine an appropriate host to connect to. If no server is found,
632 | a MasterNotFoundError or SlaveNotFoundError is raised. Both exceptions are
633 | subclasses of ConnectionError.
634 |
635 | When trying to connect to a slave client, the Sentinel connection pool will
636 | iterate over the list of slaves until it finds one that can be connected to.
637 | If no slaves can be connected to, a connection will be established with the
638 | master.
639 |
640 | See `Guidelines for Redis clients with support for Redis Sentinel
641 | <http://redis.io/topics/sentinel-clients>`_ to learn more about Redis Sentinel.
642 |
643 | Scan Iterators
644 | ^^^^^^^^^^^^^^
645 |
646 | The \*SCAN commands introduced in Redis 2.8 can be cumbersome to use. While
647 | these commands are fully supported, redis-py also exposes the following methods
648 | that return Python iterators for convenience: `scan_iter`, `hscan_iter`,
649 | `sscan_iter` and `zscan_iter`.
650 |
651 | .. code-block:: pycon
652 |
653 | >>> for key, value in (('A', '1'), ('B', '2'), ('C', '3')):
654 | ... r.set(key, value)
655 | >>> for key in r.scan_iter():
656 | ... print key, r.get(key)
657 | A 1
658 | B 2
659 | C 3
660 |
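The other iterators work the same way. A sketch with `hscan_iter`, which
yields (field, value) pairs from a hash (output order may vary):

.. code-block:: pycon

    >>> r.hmset('my-hash', {'a': '1', 'b': '2'})
    True
    >>> for field, value in r.hscan_iter('my-hash'):
    ...     print field, value
    a 1
    b 2
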
661 | Author
662 | ^^^^^^
663 |
664 | redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com).
665 | It can be found here: http://github.com/andymccurdy/redis-py
666 |
667 | Special thanks to:
668 |
669 | * Ludovico Magnocavallo, author of the original Python Redis client, from
670 | which some of the socket code is still used.
671 | * Alexander Solovyov for ideas on the generic response callback system.
672 | * Paul Hubbard for initial packaging support.
673 |
674 |
675 |
676 |
--------------------------------------------------------------------------------
/gimel/vendor/redis-2.10.5.dist-info/METADATA:
--------------------------------------------------------------------------------
1 | Metadata-Version: 2.0
2 | Name: redis
3 | Version: 2.10.5
4 | Summary: Python client for Redis key-value store
5 | Home-page: http://github.com/andymccurdy/redis-py
6 | Author: Andy McCurdy
7 | Author-email: sedrik@gmail.com
8 | License: MIT
9 | Keywords: Redis,key-value store
10 | Platform: UNKNOWN
11 | Classifier: Development Status :: 5 - Production/Stable
12 | Classifier: Environment :: Console
13 | Classifier: Intended Audience :: Developers
14 | Classifier: License :: OSI Approved :: MIT License
15 | Classifier: Operating System :: OS Independent
16 | Classifier: Programming Language :: Python
17 | Classifier: Programming Language :: Python :: 2.6
18 | Classifier: Programming Language :: Python :: 2.7
19 | Classifier: Programming Language :: Python :: 3
20 | Classifier: Programming Language :: Python :: 3.2
21 | Classifier: Programming Language :: Python :: 3.3
22 | Classifier: Programming Language :: Python :: 3.4
23 |
24 | redis-py
25 | ========
26 |
27 | The Python interface to the Redis key-value store.
28 |
29 | .. image:: https://secure.travis-ci.org/andymccurdy/redis-py.png?branch=master
30 | :target: http://travis-ci.org/andymccurdy/redis-py
31 |
32 | Installation
33 | ------------
34 |
35 | redis-py requires a running Redis server. See `Redis's quickstart
36 | <http://redis.io/topics/quickstart>`_ for installation instructions.
37 |
38 | To install redis-py, simply:
39 |
40 | .. code-block:: bash
41 |
42 | $ sudo pip install redis
43 |
44 | or alternatively (you really should be using pip though):
45 |
46 | .. code-block:: bash
47 |
48 | $ sudo easy_install redis
49 |
50 | or from source:
51 |
52 | .. code-block:: bash
53 |
54 | $ sudo python setup.py install
55 |
56 |
57 | Getting Started
58 | ---------------
59 |
60 | .. code-block:: pycon
61 |
62 | >>> import redis
63 | >>> r = redis.StrictRedis(host='localhost', port=6379, db=0)
64 | >>> r.set('foo', 'bar')
65 | True
66 | >>> r.get('foo')
67 | 'bar'
68 |
69 | API Reference
70 | -------------
71 |
72 | The `official Redis command documentation <http://redis.io/commands>`_ does a
73 | great job of explaining each command in detail. redis-py exposes two client
74 | classes that implement these commands. The StrictRedis class attempts to adhere
75 | to the official command syntax. There are a few exceptions:
76 |
77 | * **SELECT**: Not implemented. See the explanation in the Thread Safety section
78 | below.
79 | * **DEL**: 'del' is a reserved keyword in the Python syntax. Therefore redis-py
80 | uses 'delete' instead.
81 | * **CONFIG GET|SET**: These are implemented separately as config_get or config_set.
82 | * **MULTI/EXEC**: These are implemented as part of the Pipeline class. The
83 | pipeline is wrapped with the MULTI and EXEC statements by default when it
84 | is executed, which can be disabled by specifying transaction=False.
85 | See more about Pipelines below.
86 | * **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as a separate
87 | class as it places the underlying connection in a state where it can't
88 | execute non-pubsub commands. Calling the pubsub method from the Redis client
89 | will return a PubSub instance where you can subscribe to channels and listen
90 | for messages. You can only call PUBLISH from the Redis client (see
91 | `this comment on issue #151
92 | `_
93 | for details).
94 | * **SCAN/SSCAN/HSCAN/ZSCAN**: The \*SCAN commands are implemented as they
95 |    exist in the Redis documentation. In addition, each command has an equivalent
96 | iterator method. These are purely for convenience so the user doesn't have
97 | to keep track of the cursor while iterating. Use the
98 | scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this behavior.
99 |
100 | In addition to the changes above, the Redis class, a subclass of StrictRedis,
101 | overrides several other commands to provide backwards compatibility with older
102 | versions of redis-py:
103 |
104 | * **LREM**: Order of 'num' and 'value' arguments reversed such that 'num' can
105 | provide a default value of zero.
106 | * **ZADD**: Redis specifies the 'score' argument before 'value'. These were swapped
107 | accidentally when being implemented and not discovered until after people
108 | were already using it. The Redis class expects \*args in the form of:
109 |    `name1, score1, name2, score2, ...` (see the sketch after this list).
110 | * **SETEX**: Order of 'time' and 'value' arguments reversed.
111 |
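A sketch of the two ZADD calling conventions (with a hypothetical key and
member names):

.. code-block:: pycon

    # StrictRedis follows the official syntax: score before member
    >>> redis.StrictRedis().zadd('my-zset', 1.0, 'member1')
    1
    # the legacy Redis class reverses each pair: member before score
    >>> redis.Redis().zadd('my-zset', 'member2', 2.0)
    1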
112 |
113 | More Detail
114 | -----------
115 |
116 | Connection Pools
117 | ^^^^^^^^^^^^^^^^
118 |
119 | Behind the scenes, redis-py uses a connection pool to manage connections to
120 | a Redis server. By default, each Redis instance you create will in turn create
121 | its own connection pool. You can override this behavior and use an existing
122 | connection pool by passing an already created connection pool instance to the
123 | connection_pool argument of the Redis class. You may choose to do this in order
124 | to implement client side sharding or have finer grain control of how
125 | connections are managed.
126 |
127 | .. code-block:: pycon
128 |
129 | >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
130 | >>> r = redis.Redis(connection_pool=pool)
131 |
132 | Connections
133 | ^^^^^^^^^^^
134 |
135 | ConnectionPools manage a set of Connection instances. redis-py ships with two
136 | types of Connections. The default, Connection, is a normal TCP socket based
137 | connection. The UnixDomainSocketConnection allows for clients running on the
138 | same device as the server to connect via a unix domain socket. To use a
139 | UnixDomainSocketConnection connection, simply pass the unix_socket_path
140 | argument, which is the string path of the unix domain socket file. Additionally, make
141 | sure the unixsocket parameter is defined in your redis.conf file. It's
142 | commented out by default.
143 |
144 | .. code-block:: pycon
145 |
146 | >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock')
147 |
148 | You can create your own Connection subclasses as well. This may be useful if
149 | you want to control the socket behavior within an async framework. To
150 | instantiate a client class using your own connection, you need to create
151 | a connection pool, passing your class to the connection_class argument.
152 | Other keyword parameters you pass to the pool will be passed to the class
153 | specified during initialization.
154 |
155 | .. code-block:: pycon
156 |
157 | >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass,
158 | your_arg='...', ...)
159 |
160 | Parsers
161 | ^^^^^^^
162 |
163 | Parser classes provide a way to control how responses from the Redis server
164 | are parsed. redis-py ships with two parser classes, the PythonParser and the
165 | HiredisParser. By default, redis-py will attempt to use the HiredisParser if
166 | you have the hiredis module installed and will fallback to the PythonParser
167 | otherwise.
168 |
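You can check which parser will be picked up (a sketch; `HIREDIS_AVAILABLE` is
an internal flag in `redis.utils`, and the output depends on your
environment):

.. code-block:: pycon

    >>> from redis.utils import HIREDIS_AVAILABLE
    >>> HIREDIS_AVAILABLE  # True if the hiredis module could be imported
    True
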
169 | Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was
170 | kind enough to create Python bindings. Using Hiredis can provide up to a
171 | 10x speed improvement in parsing responses from the Redis server. The
172 | performance increase is most noticeable when retrieving many pieces of data,
173 | such as from LRANGE or SMEMBERS operations.
174 |
175 | Hiredis is available on PyPI, and can be installed via pip or easy_install
176 | just like redis-py.
177 |
178 | .. code-block:: bash
179 |
180 | $ pip install hiredis
181 |
182 | or
183 |
184 | .. code-block:: bash
185 |
186 | $ easy_install hiredis
187 |
188 | Response Callbacks
189 | ^^^^^^^^^^^^^^^^^^
190 |
191 | The client class uses a set of callbacks to cast Redis responses to the
192 | appropriate Python type. There are a number of these callbacks defined on
193 | the Redis client class in a dictionary called RESPONSE_CALLBACKS.
194 |
195 | Custom callbacks can be added on a per-instance basis using the
196 | set_response_callback method. This method accepts two arguments: a command
197 | name and the callback. Callbacks added in this manner are only valid on the
198 | instance the callback is added to. If you want to define or override a callback
199 | globally, you should make a subclass of the Redis client and add your callback
200 | to its RESPONSE_CALLBACKS class dictionary.
201 |
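As a minimal sketch, a per-instance callback that casts GET replies to int
(`'counter'` is a hypothetical key):

.. code-block:: pycon

    >>> r = redis.StrictRedis()
    >>> r.set_response_callback('GET', lambda response: int(response))
    >>> r.set('counter', '10')
    True
    >>> r.get('counter')  # the callback now casts the reply
    10
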
202 | Response callbacks take at least one parameter: the response from the Redis
203 | server. Keyword arguments may also be accepted in order to further control
204 | how to interpret the response. These keyword arguments are specified during the
205 | command's call to execute_command. The ZRANGE implementation demonstrates the
206 | use of response callback keyword arguments with its "withscores" argument.
207 |
208 | Thread Safety
209 | ^^^^^^^^^^^^^
210 |
211 | Redis client instances can safely be shared between threads. Internally,
212 | connection instances are only retrieved from the connection pool during
213 | command execution, and returned to the pool directly after. Command execution
214 | never modifies state on the client instance.
215 |
216 | However, there is one caveat: the Redis SELECT command. The SELECT command
217 | allows you to switch the database currently in use by the connection. That
218 | database remains selected until another is selected or until the connection is
219 | closed. This creates an issue in that connections could be returned to the pool
220 | that are connected to a different database.
221 |
222 | As a result, redis-py does not implement the SELECT command on client
223 | instances. If you use multiple Redis databases within the same application, you
224 | should create a separate client instance (and possibly a separate connection
225 | pool) for each database.
226 |
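A sketch of the recommended pattern when working with multiple databases:

.. code-block:: pycon

    # one client (and connection pool) per logical database
    >>> r0 = redis.StrictRedis(host='localhost', port=6379, db=0)
    >>> r1 = redis.StrictRedis(host='localhost', port=6379, db=1)
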
227 | It is not safe to pass PubSub or Pipeline objects between threads.
228 |
229 | Pipelines
230 | ^^^^^^^^^
231 |
232 | Pipelines are a subclass of the base Redis class that provide support for
233 | buffering multiple commands to the server in a single request. They can be used
234 | to dramatically increase the performance of groups of commands by reducing the
235 | number of back-and-forth TCP packets between the client and server.
236 |
237 | Pipelines are quite simple to use:
238 |
239 | .. code-block:: pycon
240 |
241 | >>> r = redis.Redis(...)
242 | >>> r.set('bing', 'baz')
243 | >>> # Use the pipeline() method to create a pipeline instance
244 | >>> pipe = r.pipeline()
245 | >>> # The following SET commands are buffered
246 | >>> pipe.set('foo', 'bar')
247 | >>> pipe.get('bing')
248 | >>> # the EXECUTE call sends all buffered commands to the server, returning
249 | >>> # a list of responses, one for each command.
250 | >>> pipe.execute()
251 | [True, 'baz']
252 |
253 | For ease of use, all commands being buffered into the pipeline return the
254 | pipeline object itself. Therefore calls can be chained like:
255 |
256 | .. code-block:: pycon
257 |
258 | >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute()
259 | [True, True, 6]
260 |
261 | In addition, pipelines can also ensure the buffered commands are executed
262 | atomically as a group. This happens by default. If you want to disable the
263 | atomic nature of a pipeline but still want to buffer commands, you can turn
264 | off transactions.
265 |
266 | .. code-block:: pycon
267 |
268 | >>> pipe = r.pipeline(transaction=False)
269 |
270 | A common issue occurs when requiring atomic transactions but needing to
271 | retrieve values from Redis beforehand for use within the transaction. For instance,
272 | let's assume that the INCR command didn't exist and we need to build an atomic
273 | version of INCR in Python.
274 |
275 | The completely naive implementation could GET the value, increment it in
276 | Python, and SET the new value back. However, this is not atomic because
277 | multiple clients could be doing this at the same time, each getting the same
278 | value from GET.
279 |
280 | Enter the WATCH command. WATCH provides the ability to monitor one or more keys
281 | prior to starting a transaction. If any of those keys change prior to the
282 | execution of that transaction, the entire transaction will be canceled and a
283 | WatchError will be raised. To implement our own client-side INCR command, we
284 | could do something like this:
285 |
286 | .. code-block:: pycon
287 |
288 | >>> with r.pipeline() as pipe:
289 | ... while 1:
290 | ... try:
291 | ... # put a WATCH on the key that holds our sequence value
292 | ... pipe.watch('OUR-SEQUENCE-KEY')
293 | ... # after WATCHing, the pipeline is put into immediate execution
294 | ... # mode until we tell it to start buffering commands again.
295 | ... # this allows us to get the current value of our sequence
296 | ... current_value = pipe.get('OUR-SEQUENCE-KEY')
297 | ... next_value = int(current_value) + 1
298 | ... # now we can put the pipeline back into buffered mode with MULTI
299 | ... pipe.multi()
300 | ... pipe.set('OUR-SEQUENCE-KEY', next_value)
301 | ... # and finally, execute the pipeline (the set command)
302 | ... pipe.execute()
303 | ... # if a WatchError wasn't raised during execution, everything
304 | ... # we just did happened atomically.
305 | ... break
306 | ... except WatchError:
307 | ... # another client must have changed 'OUR-SEQUENCE-KEY' between
308 | ... # the time we started WATCHing it and the pipeline's execution.
309 | ... # our best bet is to just retry.
310 | ... continue
311 |
312 | Note that, because the Pipeline must bind to a single connection for the
313 | duration of a WATCH, care must be taken to ensure that the connection is
314 | returned to the connection pool by calling the reset() method. If the
315 | Pipeline is used as a context manager (as in the example above) reset()
316 | will be called automatically. Of course you can do this the manual way by
317 | explicitly calling reset():
318 |
319 | .. code-block:: pycon
320 |
321 | >>> pipe = r.pipeline()
322 | >>> while 1:
323 | ... try:
324 | ... pipe.watch('OUR-SEQUENCE-KEY')
325 | ... ...
326 | ... pipe.execute()
327 | ... break
328 | ... except WatchError:
329 | ... continue
330 | ... finally:
331 | ... pipe.reset()
332 |
333 | A convenience method named "transaction" exists for handling all the
334 | boilerplate of handling and retrying watch errors. It takes a callable that
335 | should expect a single parameter, a pipeline object, and any number of keys to
336 | be WATCHed. Our client-side INCR command above can be written like this,
337 | which is much easier to read:
338 |
339 | .. code-block:: pycon
340 |
341 | >>> def client_side_incr(pipe):
342 | ... current_value = pipe.get('OUR-SEQUENCE-KEY')
343 | ... next_value = int(current_value) + 1
344 | ... pipe.multi()
345 | ... pipe.set('OUR-SEQUENCE-KEY', next_value)
346 | >>>
347 | >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
348 | [True]
349 |
350 | Publish / Subscribe
351 | ^^^^^^^^^^^^^^^^^^^
352 |
353 | redis-py includes a `PubSub` object that subscribes to channels and listens
354 | for new messages. Creating a `PubSub` object is easy.
355 |
356 | .. code-block:: pycon
357 |
358 | >>> r = redis.StrictRedis(...)
359 | >>> p = r.pubsub()
360 |
361 | Once a `PubSub` instance is created, channels and patterns can be subscribed
362 | to.
363 |
364 | .. code-block:: pycon
365 |
366 | >>> p.subscribe('my-first-channel', 'my-second-channel', ...)
367 | >>> p.psubscribe('my-*', ...)
368 |
369 | The `PubSub` instance is now subscribed to those channels/patterns. The
370 | subscription confirmations can be seen by reading messages from the `PubSub`
371 | instance.
372 |
373 | .. code-block:: pycon
374 |
375 | >>> p.get_message()
376 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-second-channel', 'data': 1L}
377 | >>> p.get_message()
378 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-first-channel', 'data': 2L}
379 | >>> p.get_message()
380 | {'pattern': None, 'type': 'psubscribe', 'channel': 'my-*', 'data': 3L}
381 |
382 | Every message read from a `PubSub` instance will be a dictionary with the
383 | following keys.
384 |
385 | * **type**: One of the following: 'subscribe', 'unsubscribe', 'psubscribe',
386 | 'punsubscribe', 'message', 'pmessage'
387 | * **channel**: The channel [un]subscribed to or the channel a message was
388 | published to
389 | * **pattern**: The pattern that matched a published message's channel. Will be
390 | `None` in all cases except for 'pmessage' types.
391 | * **data**: The message data. With [un]subscribe messages, this value will be
392 | the number of channels and patterns the connection is currently subscribed
393 | to. With [p]message messages, this value will be the actual published
394 | message.
395 |
396 | Let's send a message now.
397 |
398 | .. code-block:: pycon
399 |
400 |     # the publish method returns the number of matching channel and pattern
401 | # subscriptions. 'my-first-channel' matches both the 'my-first-channel'
402 | # subscription and the 'my-*' pattern subscription, so this message will
403 | # be delivered to 2 channels/patterns
404 | >>> r.publish('my-first-channel', 'some data')
405 | 2
406 | >>> p.get_message()
407 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'}
408 | >>> p.get_message()
409 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': 'my-*', 'type': 'pmessage'}
410 |
411 | Unsubscribing works just like subscribing. If no arguments are passed to
412 | [p]unsubscribe, all channels or patterns will be unsubscribed from.
413 |
414 | .. code-block:: pycon
415 |
416 | >>> p.unsubscribe()
417 | >>> p.punsubscribe('my-*')
418 | >>> p.get_message()
419 | {'channel': 'my-second-channel', 'data': 2L, 'pattern': None, 'type': 'unsubscribe'}
420 | >>> p.get_message()
421 | {'channel': 'my-first-channel', 'data': 1L, 'pattern': None, 'type': 'unsubscribe'}
422 | >>> p.get_message()
423 | {'channel': 'my-*', 'data': 0L, 'pattern': None, 'type': 'punsubscribe'}
424 |
425 | redis-py also allows you to register callback functions to handle published
426 | messages. Message handlers take a single argument, the message, which is a
427 | dictionary just like the examples above. To subscribe to a channel or pattern
428 | with a message handler, pass the channel or pattern name as a keyword argument
429 | with its value being the callback function.
430 |
431 | When a message is read on a channel or pattern with a message handler, the
432 | message dictionary is created and passed to the message handler. In this case,
433 | a `None` value is returned from get_message() since the message was already
434 | handled.
435 |
436 | .. code-block:: pycon
437 |
438 | >>> def my_handler(message):
439 | ... print 'MY HANDLER: ', message['data']
440 | >>> p.subscribe(**{'my-channel': my_handler})
441 | # read the subscribe confirmation message
442 | >>> p.get_message()
443 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-channel', 'data': 1L}
444 | >>> r.publish('my-channel', 'awesome data')
445 | 1
446 |     # for the message handler to work, we need to tell the instance to read data.
447 | # this can be done in several ways (read more below). we'll just use
448 | # the familiar get_message() function for now
449 | >>> message = p.get_message()
450 | MY HANDLER: awesome data
451 | # note here that the my_handler callback printed the string above.
452 | # `message` is None because the message was handled by our handler.
453 | >>> print message
454 | None
455 |
456 | If your application is not interested in the (sometimes noisy)
457 | subscribe/unsubscribe confirmation messages, you can ignore them by passing
458 | `ignore_subscribe_messages=True` to `r.pubsub()`. This will cause all
459 | subscribe/unsubscribe messages to be read, but they won't bubble up to your
460 | application.
461 |
462 | .. code-block:: pycon
463 |
464 | >>> p = r.pubsub(ignore_subscribe_messages=True)
465 | >>> p.subscribe('my-channel')
466 | >>> p.get_message() # hides the subscribe message and returns None
467 |     >>> r.publish('my-channel', 'my data')
468 | 1
469 | >>> p.get_message()
470 |     {'channel': 'my-channel', 'data': 'my data', 'pattern': None, 'type': 'message'}
471 |
472 | There are three different strategies for reading messages.
473 |
474 | The examples above have been using `pubsub.get_message()`. Behind the scenes,
475 | `get_message()` uses the system's 'select' module to quickly poll the
476 | connection's socket. If there's data available to be read, `get_message()` will
477 | read it, format the message and return it or pass it to a message handler. If
478 | there's no data to be read, `get_message()` will immediately return None. This
479 | makes it trivial to integrate into an existing event loop inside your
480 | application.
481 |
482 | .. code-block:: pycon
483 |
484 |     >>> while True:
485 |     ...     message = p.get_message()
486 |     ...     if message:
487 |     ...         print 'received:', message  # do something with the message
488 |     ...     time.sleep(0.001)  # be nice to the system :)
489 |
490 | Older versions of redis-py only read messages with `pubsub.listen()`. listen()
491 | is a generator that blocks until a message is available. If your application
492 | doesn't need to do anything else but receive and act on messages from
493 | redis, listen() is an easy way to get up and running.
494 |
495 | .. code-block:: pycon
496 |
497 | >>> for message in p.listen():
498 |     ...     print message  # do something with the message
499 |
500 | The third option runs an event loop in a separate thread.
501 | `pubsub.run_in_thread()` creates a new thread and starts the event loop. The
502 | thread object is returned to the caller of `run_in_thread()`. The caller can
503 | use the `thread.stop()` method to shut down the event loop and thread. Behind
504 | the scenes, this is simply a wrapper around `get_message()` that runs in a
505 | separate thread, essentially creating a tiny non-blocking event loop for you.
506 | `run_in_thread()` takes an optional `sleep_time` argument. If specified, the
507 | event loop will call `time.sleep()` with the value in each iteration of the
508 | loop.
509 |
510 | Note: Since we're running in a separate thread, there's no way to handle
511 | messages that aren't automatically handled with registered message handlers.
512 | Therefore, redis-py prevents you from calling `run_in_thread()` if you're
513 | subscribed to patterns or channels that don't have message handlers attached.
514 |
515 | .. code-block:: pycon
516 |
517 | >>> p.subscribe(**{'my-channel': my_handler})
518 | >>> thread = p.run_in_thread(sleep_time=0.001)
519 | # the event loop is now running in the background processing messages
520 | # when it's time to shut it down...
521 | >>> thread.stop()
522 |
523 | A PubSub object adheres to the same encoding semantics as the client instance
524 | it was created from. Any channel or pattern that's unicode will be encoded
525 | using the `charset` specified on the client before being sent to Redis. If the
526 | client's `decode_responses` flag is set to False (the default), the
527 | 'channel', 'pattern' and 'data' values in message dictionaries will be byte
528 | strings (str on Python 2, bytes on Python 3). If the client's
529 | `decode_responses` is True, then the 'channel', 'pattern' and 'data' values
530 | will be automatically decoded to unicode strings using the client's `charset`.
531 |
532 | PubSub objects remember what channels and patterns they are subscribed to. In
533 | the event of a disconnection such as a network error or timeout, the
534 | PubSub object will re-subscribe to all prior channels and patterns when
535 | reconnecting. Messages that were published while the client was disconnected
536 | cannot be delivered. When you're finished with a PubSub object, call its
537 | `.close()` method to shut down the connection.
538 |
539 | .. code-block:: pycon
540 |
541 | >>> p = r.pubsub()
542 | >>> ...
543 | >>> p.close()
544 |
545 | LUA Scripting
546 | ^^^^^^^^^^^^^
547 |
548 | redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are
549 | a number of edge cases that make these commands tedious to use in real world
550 | scenarios. Therefore, redis-py exposes a Script object that makes scripting
551 | much easier to use.
552 |
553 | To create a Script instance, use the `register_script` function on a client
554 | instance passing the LUA code as the first argument. `register_script` returns
555 | a Script instance that you can use throughout your code.
556 |
557 | The following trivial LUA script accepts two parameters: the name of a key and
558 | a multiplier value. The script fetches the value stored in the key, multiplies
559 | it with the multiplier value and returns the result.
560 |
561 | .. code-block:: pycon
562 |
563 | >>> r = redis.StrictRedis()
564 | >>> lua = """
565 | ... local value = redis.call('GET', KEYS[1])
566 | ... value = tonumber(value)
567 | ... return value * ARGV[1]"""
568 | >>> multiply = r.register_script(lua)
569 |
570 | `multiply` is now a Script instance that is invoked by calling it like a
571 | function. Script instances accept the following optional arguments:
572 |
573 | * **keys**: A list of key names that the script will access. This becomes the
574 | KEYS list in LUA.
575 | * **args**: A list of argument values. This becomes the ARGV list in LUA.
576 | * **client**: A redis-py Client or Pipeline instance that will invoke the
577 |                script. If client isn't specified, the client that initially
578 | created the Script instance (the one that `register_script` was
579 | invoked from) will be used.
580 |
581 | Continuing the example from above:
582 |
583 | .. code-block:: pycon
584 |
585 | >>> r.set('foo', 2)
586 | >>> multiply(keys=['foo'], args=[5])
587 | 10
588 |
589 | The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is
590 | passed to the script along with the multiplier value of 5. LUA executes the
591 | script and returns the result, 10.
592 |
593 | Script instances can be executed using a different client instance, even one
594 | that points to a completely different Redis server.
595 |
596 | .. code-block:: pycon
597 |
598 | >>> r2 = redis.StrictRedis('redis2.example.com')
599 | >>> r2.set('foo', 3)
600 | >>> multiply(keys=['foo'], args=[5], client=r2)
601 | 15
602 |
603 | The Script object ensures that the LUA script is loaded into Redis's script
604 | cache. In the event of a NOSCRIPT error, it will load the script and retry
605 | executing it.
606 |
607 | Script objects can also be used in pipelines. The pipeline instance should be
608 | passed as the client argument when calling the script. Care is taken to ensure
609 | that the script is registered in Redis's script cache just prior to pipeline
610 | execution.
611 |
612 | .. code-block:: pycon
613 |
614 | >>> pipe = r.pipeline()
615 | >>> pipe.set('foo', 5)
616 | >>> multiply(keys=['foo'], args=[5], client=pipe)
617 | >>> pipe.execute()
618 | [True, 25]
619 |
620 | Sentinel support
621 | ^^^^^^^^^^^^^^^^
622 |
623 | redis-py can be used together with `Redis Sentinel <http://redis.io/topics/sentinel>`_
624 | to discover Redis nodes. You need to have at least one Sentinel daemon running
625 | in order to use redis-py's Sentinel support.
626 |
627 | Connecting redis-py to the Sentinel instance(s) is easy. You can use a
628 | Sentinel connection to discover the network addresses of the master and the slaves:
629 |
630 | .. code-block:: pycon
631 |
632 | >>> from redis.sentinel import Sentinel
633 | >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
634 | >>> sentinel.discover_master('mymaster')
635 | ('127.0.0.1', 6379)
636 | >>> sentinel.discover_slaves('mymaster')
637 | [('127.0.0.1', 6380)]
638 |
639 | You can also create Redis client connections from a Sentinel instance. You can
640 | connect to either the master (for write operations) or a slave (for read-only
641 | operations).
642 |
643 | .. code-block:: pycon
644 |
645 | >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
646 | >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
647 | >>> master.set('foo', 'bar')
648 | >>> slave.get('foo')
649 | 'bar'
650 |
651 | The master and slave objects are normal StrictRedis instances with their
652 | connection pool bound to the Sentinel instance. When a Sentinel backed client
653 | attempts to establish a connection, it first queries the Sentinel servers to
654 | determine an appropriate host to connect to. If no server is found,
655 | a MasterNotFoundError or SlaveNotFoundError is raised. Both exceptions are
656 | subclasses of ConnectionError.
657 |
658 | When trying to connect to a slave client, the Sentinel connection pool will
659 | iterate over the list of slaves until it finds one that can be connected to.
660 | If no slaves can be connected to, a connection will be established with the
661 | master.
662 |
663 | See `Guidelines for Redis clients with support for Redis Sentinel
664 | <http://redis.io/topics/sentinel-clients>`_ to learn more about Redis Sentinel.
665 |
666 | Scan Iterators
667 | ^^^^^^^^^^^^^^
668 |
669 | The \*SCAN commands introduced in Redis 2.8 can be cumbersome to use. While
670 | these commands are fully supported, redis-py also exposes the following methods
671 | that return Python iterators for convenience: `scan_iter`, `hscan_iter`,
672 | `sscan_iter` and `zscan_iter`.
673 |
674 | .. code-block:: pycon
675 |
676 | >>> for key, value in (('A', '1'), ('B', '2'), ('C', '3')):
677 | ... r.set(key, value)
678 | >>> for key in r.scan_iter():
679 | ... print key, r.get(key)
680 | A 1
681 | B 2
682 | C 3
683 |
684 | Author
685 | ^^^^^^
686 |
687 | redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com).
688 | It can be found here: http://github.com/andymccurdy/redis-py
689 |
690 | Special thanks to:
691 |
692 | * Ludovico Magnocavallo, author of the original Python Redis client, from
693 | which some of the socket code is still used.
694 | * Alexander Solovyov for ideas on the generic response callback system.
695 | * Paul Hubbard for initial packaging support.
696 |
697 |
698 |
699 |
--------------------------------------------------------------------------------
/gimel/vendor/redis-2.10.5.dist-info/RECORD:
--------------------------------------------------------------------------------
1 | redis/__init__.py,sha256=-Nvf8O6TwqjFNk0qDaylcrbxidNO_d2mHL9TWRhOG70,902
2 | redis/_compat.py,sha256=cz_GftqT341sbH_FpgJ27gV7GEfR59taDFssA6eFDoI,2927
3 | redis/client.py,sha256=nbzzvCLMrQnmQiVj-3Y5ZAPp6vA8s86QVQAZbtjZ5p4,101774
4 | redis/connection.py,sha256=fyyZQHZGlu-GE-GpX4iF7GRoZC0mBAwwGfgg-gPofs4,37280
5 | redis/exceptions.py,sha256=cNISHVuXY5HtIaMGdrIhAj40MK-T04lRUopJ_77AI0Y,1224
6 | redis/lock.py,sha256=ndqMMNbtlW_ZO6nmIAKPDN8PrQNUleCxWqXruL-Ma3s,10563
7 | redis/sentinel.py,sha256=B5LEmzfpRqJ1WFWSYIPanBn7n77l_jpqxBZTlIkdu7c,11868
8 | redis/utils.py,sha256=yTyLWUi60KTfw4U4nWhbGvQdNpQb9Wpdf_Qcx_lUJJU,666
9 | redis-2.10.5.dist-info/DESCRIPTION.rst,sha256=6wB4f2V0SDmzRV31UNFgEKLtuq-kC8RlIHrOixuDfXE,26551
10 | redis-2.10.5.dist-info/METADATA,sha256=F6RoaOc3Ef5Wd2CFikR19gbR1mHDuD53WRlLdkvYGaY,27390
11 | redis-2.10.5.dist-info/RECORD,,
12 | redis-2.10.5.dist-info/WHEEL,sha256=GrqQvamwgBV4nLoJe0vhYRSWzWsx7xjlt74FT0SWYfE,110
13 | redis-2.10.5.dist-info/metadata.json,sha256=125ixkvSdaQR6-SOEu4--VbkuOaIWbBGheWcZISdSz4,1002
14 | redis-2.10.5.dist-info/top_level.txt,sha256=OMAefszlde6ZoOtlM35AWzpRIrwtcqAMHGlRit-w2-4,6
15 | redis/lock.pyc,,
16 | redis/__init__.pyc,,
17 | redis/connection.pyc,,
18 | redis/exceptions.pyc,,
19 | redis/client.pyc,,
20 | redis/_compat.pyc,,
21 | redis/sentinel.pyc,,
22 | redis/utils.pyc,,
23 |
--------------------------------------------------------------------------------
/gimel/vendor/redis-2.10.5.dist-info/WHEEL:
--------------------------------------------------------------------------------
1 | Wheel-Version: 1.0
2 | Generator: bdist_wheel (0.26.0)
3 | Root-Is-Purelib: true
4 | Tag: py2-none-any
5 | Tag: py3-none-any
6 |
7 |
--------------------------------------------------------------------------------
/gimel/vendor/redis-2.10.5.dist-info/metadata.json:
--------------------------------------------------------------------------------
1 | {"generator": "bdist_wheel (0.26.0)", "summary": "Python client for Redis key-value store", "classifiers": ["Development Status :: 5 - Production/Stable", "Environment :: Console", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.2", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4"], "extensions": {"python.details": {"project_urls": {"Home": "http://github.com/andymccurdy/redis-py"}, "contacts": [{"email": "sedrik@gmail.com", "name": "Andy McCurdy", "role": "author"}], "document_names": {"description": "DESCRIPTION.rst"}}}, "keywords": ["Redis", "key-value", "store"], "license": "MIT", "metadata_version": "2.0", "name": "redis", "version": "2.10.5", "test_requires": [{"requires": ["pytest (>=2.5.0)"]}]}
--------------------------------------------------------------------------------
/gimel/vendor/redis-2.10.5.dist-info/top_level.txt:
--------------------------------------------------------------------------------
1 | redis
2 |
--------------------------------------------------------------------------------
/gimel/vendor/redis/__init__.py:
--------------------------------------------------------------------------------
1 | from redis.client import Redis, StrictRedis
2 | from redis.connection import (
3 | BlockingConnectionPool,
4 | ConnectionPool,
5 | Connection,
6 | SSLConnection,
7 | UnixDomainSocketConnection
8 | )
9 | from redis.utils import from_url
10 | from redis.exceptions import (
11 | AuthenticationError,
12 | BusyLoadingError,
13 | ConnectionError,
14 | DataError,
15 | InvalidResponse,
16 | PubSubError,
17 | ReadOnlyError,
18 | RedisError,
19 | ResponseError,
20 | TimeoutError,
21 | WatchError
22 | )
23 |
24 |
25 | __version__ = '2.10.5'
26 | VERSION = tuple(map(int, __version__.split('.')))
27 |
28 | __all__ = [
29 | 'Redis', 'StrictRedis', 'ConnectionPool', 'BlockingConnectionPool',
30 | 'Connection', 'SSLConnection', 'UnixDomainSocketConnection', 'from_url',
31 | 'AuthenticationError', 'BusyLoadingError', 'ConnectionError', 'DataError',
32 | 'InvalidResponse', 'PubSubError', 'ReadOnlyError', 'RedisError',
33 | 'ResponseError', 'TimeoutError', 'WatchError'
34 | ]
35 |
--------------------------------------------------------------------------------
/gimel/vendor/redis/_compat.py:
--------------------------------------------------------------------------------
1 | """Internal module for Python 2 backwards compatibility."""
2 | import sys
3 |
4 |
5 | if sys.version_info[0] < 3:
6 | from urllib import unquote
7 | from urlparse import parse_qs, urlparse
8 | from itertools import imap, izip
9 | from string import letters as ascii_letters
10 | from Queue import Queue
11 | try:
12 | from cStringIO import StringIO as BytesIO
13 | except ImportError:
14 | from StringIO import StringIO as BytesIO
15 |
16 | # special unicode handling for python2 to avoid UnicodeDecodeError
17 | def safe_unicode(obj, *args):
18 | """ return the unicode representation of obj """
19 | try:
20 | return unicode(obj, *args)
21 | except UnicodeDecodeError:
22 | # obj is byte string
23 | ascii_text = str(obj).encode('string_escape')
24 | return unicode(ascii_text)
25 |
26 | def iteritems(x):
27 | return x.iteritems()
28 |
29 | def iterkeys(x):
30 | return x.iterkeys()
31 |
32 | def itervalues(x):
33 | return x.itervalues()
34 |
35 | def nativestr(x):
36 | return x if isinstance(x, str) else x.encode('utf-8', 'replace')
37 |
38 | def u(x):
39 | return x.decode()
40 |
41 | def b(x):
42 | return x
43 |
44 | def next(x):
45 | return x.next()
46 |
47 | def byte_to_chr(x):
48 | return x
49 |
50 | unichr = unichr
51 | xrange = xrange
52 | basestring = basestring
53 | unicode = unicode
54 | bytes = str
55 | long = long
56 | else:
57 | from urllib.parse import parse_qs, unquote, urlparse
58 | from io import BytesIO
59 | from string import ascii_letters
60 | from queue import Queue
61 |
62 | def iteritems(x):
63 | return iter(x.items())
64 |
65 | def iterkeys(x):
66 | return iter(x.keys())
67 |
68 | def itervalues(x):
69 | return iter(x.values())
70 |
71 | def byte_to_chr(x):
72 | return chr(x)
73 |
74 | def nativestr(x):
75 | return x if isinstance(x, str) else x.decode('utf-8', 'replace')
76 |
77 | def u(x):
78 | return x
79 |
80 | def b(x):
81 | return x.encode('latin-1') if not isinstance(x, bytes) else x
82 |
83 | next = next
84 | unichr = chr
85 | imap = map
86 | izip = zip
87 | xrange = range
88 | basestring = str
89 | unicode = str
90 | safe_unicode = str
91 | bytes = bytes
92 | long = int
93 |
94 | try: # Python 3
95 | from queue import LifoQueue, Empty, Full
96 | except ImportError:
97 | from Queue import Empty, Full
98 | try: # Python 2.6 - 2.7
99 | from Queue import LifoQueue
100 | except ImportError: # Python 2.5
101 | from Queue import Queue
102 | # From the Python 2.7 lib. Python 2.5 already extracted the core
103 |             # methods to aid implementing different queue organisations.
104 |
105 | class LifoQueue(Queue):
106 | "Override queue methods to implement a last-in first-out queue."
107 |
108 | def _init(self, maxsize):
109 | self.maxsize = maxsize
110 | self.queue = []
111 |
112 | def _qsize(self, len=len):
113 | return len(self.queue)
114 |
115 | def _put(self, item):
116 | self.queue.append(item)
117 |
118 | def _get(self):
119 | return self.queue.pop()
120 |
--------------------------------------------------------------------------------
/gimel/vendor/redis/connection.py:
--------------------------------------------------------------------------------
1 | from __future__ import with_statement
2 | from distutils.version import StrictVersion
3 | from itertools import chain
4 | from select import select
5 | import os
6 | import socket
7 | import sys
8 | import threading
9 | import warnings
10 |
11 | try:
12 | import ssl
13 | ssl_available = True
14 | except ImportError:
15 | ssl_available = False
16 |
17 | from redis._compat import (b, xrange, imap, byte_to_chr, unicode, bytes, long,
18 | BytesIO, nativestr, basestring, iteritems,
19 | LifoQueue, Empty, Full, urlparse, parse_qs,
20 | unquote)
21 | from redis.exceptions import (
22 | RedisError,
23 | ConnectionError,
24 | TimeoutError,
25 | BusyLoadingError,
26 | ResponseError,
27 | InvalidResponse,
28 | AuthenticationError,
29 | NoScriptError,
30 | ExecAbortError,
31 | ReadOnlyError
32 | )
33 | from redis.utils import HIREDIS_AVAILABLE
34 | if HIREDIS_AVAILABLE:
35 | import hiredis
36 |
37 | hiredis_version = StrictVersion(hiredis.__version__)
38 | HIREDIS_SUPPORTS_CALLABLE_ERRORS = \
39 | hiredis_version >= StrictVersion('0.1.3')
40 | HIREDIS_SUPPORTS_BYTE_BUFFER = \
41 | hiredis_version >= StrictVersion('0.1.4')
42 |
43 | if not HIREDIS_SUPPORTS_BYTE_BUFFER:
44 | msg = ("redis-py works best with hiredis >= 0.1.4. You're running "
45 | "hiredis %s. Please consider upgrading." % hiredis.__version__)
46 | warnings.warn(msg)
47 |
48 | HIREDIS_USE_BYTE_BUFFER = True
49 | # only use byte buffer if hiredis supports it and the Python version
50 | # is >= 2.7
51 | if not HIREDIS_SUPPORTS_BYTE_BUFFER or (
52 | sys.version_info[0] == 2 and sys.version_info[1] < 7):
53 | HIREDIS_USE_BYTE_BUFFER = False
54 |
55 | SYM_STAR = b('*')
56 | SYM_DOLLAR = b('$')
57 | SYM_CRLF = b('\r\n')
58 | SYM_EMPTY = b('')
59 |
60 | SERVER_CLOSED_CONNECTION_ERROR = "Connection closed by server."
61 |
62 |
63 | class Token(object):
64 | """
65 | Literal strings in Redis commands, such as the command names and any
66 | hard-coded arguments are wrapped in this class so we know not to apply
67 |     any encoding rules on them.
68 | """
69 | def __init__(self, value):
70 | if isinstance(value, Token):
71 | value = value.value
72 | self.value = value
73 |
74 | def __repr__(self):
75 | return self.value
76 |
77 | def __str__(self):
78 | return self.value
79 |
80 |
81 | class BaseParser(object):
82 | EXCEPTION_CLASSES = {
83 | 'ERR': {
84 | 'max number of clients reached': ConnectionError
85 | },
86 | 'EXECABORT': ExecAbortError,
87 | 'LOADING': BusyLoadingError,
88 | 'NOSCRIPT': NoScriptError,
89 | 'READONLY': ReadOnlyError,
90 | }
91 |
92 | def parse_error(self, response):
93 | "Parse an error response"
94 | error_code = response.split(' ')[0]
95 | if error_code in self.EXCEPTION_CLASSES:
96 | response = response[len(error_code) + 1:]
97 | exception_class = self.EXCEPTION_CLASSES[error_code]
98 | if isinstance(exception_class, dict):
99 | exception_class = exception_class.get(response, ResponseError)
100 | return exception_class(response)
101 | return ResponseError(response)
102 |
103 |
104 | class SocketBuffer(object):
105 | def __init__(self, socket, socket_read_size):
106 | self._sock = socket
107 | self.socket_read_size = socket_read_size
108 | self._buffer = BytesIO()
109 | # number of bytes written to the buffer from the socket
110 | self.bytes_written = 0
111 | # number of bytes read from the buffer
112 | self.bytes_read = 0
113 |
114 | @property
115 | def length(self):
116 | return self.bytes_written - self.bytes_read
117 |
118 | def _read_from_socket(self, length=None):
119 | socket_read_size = self.socket_read_size
120 | buf = self._buffer
121 | buf.seek(self.bytes_written)
122 | marker = 0
123 |
124 | try:
125 | while True:
126 | data = self._sock.recv(socket_read_size)
127 |                 # an empty string indicates the server shut down the socket
128 | if isinstance(data, bytes) and len(data) == 0:
129 | raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
130 | buf.write(data)
131 | data_length = len(data)
132 | self.bytes_written += data_length
133 | marker += data_length
134 |
135 | if length is not None and length > marker:
136 | continue
137 | break
138 | except socket.timeout:
139 | raise TimeoutError("Timeout reading from socket")
140 | except socket.error:
141 | e = sys.exc_info()[1]
142 | raise ConnectionError("Error while reading from socket: %s" %
143 | (e.args,))
144 |
145 | def read(self, length):
146 | length = length + 2 # make sure to read the \r\n terminator
147 | # make sure we've read enough data from the socket
148 | if length > self.length:
149 | self._read_from_socket(length - self.length)
150 |
151 | self._buffer.seek(self.bytes_read)
152 | data = self._buffer.read(length)
153 | self.bytes_read += len(data)
154 |
155 | # purge the buffer when we've consumed it all so it doesn't
156 | # grow forever
157 | if self.bytes_read == self.bytes_written:
158 | self.purge()
159 |
160 | return data[:-2]
161 |
162 | def readline(self):
163 | buf = self._buffer
164 | buf.seek(self.bytes_read)
165 | data = buf.readline()
166 | while not data.endswith(SYM_CRLF):
167 | # there's more data in the socket that we need
168 | self._read_from_socket()
169 | buf.seek(self.bytes_read)
170 | data = buf.readline()
171 |
172 | self.bytes_read += len(data)
173 |
174 | # purge the buffer when we've consumed it all so it doesn't
175 | # grow forever
176 | if self.bytes_read == self.bytes_written:
177 | self.purge()
178 |
179 | return data[:-2]
180 |
181 | def purge(self):
182 | self._buffer.seek(0)
183 | self._buffer.truncate()
184 | self.bytes_written = 0
185 | self.bytes_read = 0
186 |
187 | def close(self):
188 | try:
189 | self.purge()
190 | self._buffer.close()
191 | except:
192 | # issue #633 suggests the purge/close somehow raised a
193 | # BadFileDescriptor error. Perhaps the client ran out of
194 | # memory or something else? It's probably OK to ignore
195 | # any error being raised from purge/close since we're
196 | # removing the reference to the instance below.
197 | pass
198 | self._buffer = None
199 | self._sock = None
200 |
201 |
202 | class PythonParser(BaseParser):
203 | "Plain Python parsing class"
204 | encoding = None
205 |
206 | def __init__(self, socket_read_size):
207 | self.socket_read_size = socket_read_size
208 | self._sock = None
209 | self._buffer = None
210 |
211 | def __del__(self):
212 | try:
213 | self.on_disconnect()
214 | except Exception:
215 | pass
216 |
217 | def on_connect(self, connection):
218 | "Called when the socket connects"
219 | self._sock = connection._sock
220 | self._buffer = SocketBuffer(self._sock, self.socket_read_size)
221 | if connection.decode_responses:
222 | self.encoding = connection.encoding
223 |
224 | def on_disconnect(self):
225 | "Called when the socket disconnects"
226 | if self._sock is not None:
227 | self._sock.close()
228 | self._sock = None
229 | if self._buffer is not None:
230 | self._buffer.close()
231 | self._buffer = None
232 | self.encoding = None
233 |
234 | def can_read(self):
235 | return self._buffer and bool(self._buffer.length)
236 |
237 | def read_response(self):
238 | response = self._buffer.readline()
239 | if not response:
240 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
241 |
242 | byte, response = byte_to_chr(response[0]), response[1:]
243 |
244 | if byte not in ('-', '+', ':', '$', '*'):
245 | raise InvalidResponse("Protocol Error: %s, %s" %
246 | (str(byte), str(response)))
247 |
248 | # server returned an error
249 | if byte == '-':
250 | response = nativestr(response)
251 | error = self.parse_error(response)
252 | # if the error is a ConnectionError, raise immediately so the user
253 | # is notified
254 | if isinstance(error, ConnectionError):
255 | raise error
256 | # otherwise, we're dealing with a ResponseError that might belong
257 | # inside a pipeline response. the connection's read_response()
258 | # and/or the pipeline's execute() will raise this error if
259 | # necessary, so just return the exception instance here.
260 | return error
261 | # single value
262 | elif byte == '+':
263 | pass
264 | # int value
265 | elif byte == ':':
266 | response = long(response)
267 | # bulk response
268 | elif byte == '$':
269 | length = int(response)
270 | if length == -1:
271 | return None
272 | response = self._buffer.read(length)
273 | # multi-bulk response
274 | elif byte == '*':
275 | length = int(response)
276 | if length == -1:
277 | return None
278 | response = [self.read_response() for i in xrange(length)]
279 | if isinstance(response, bytes) and self.encoding:
280 | response = response.decode(self.encoding)
281 | return response
282 |
283 |
284 | class HiredisParser(BaseParser):
285 | "Parser class for connections using Hiredis"
286 | def __init__(self, socket_read_size):
287 | if not HIREDIS_AVAILABLE:
288 | raise RedisError("Hiredis is not installed")
289 | self.socket_read_size = socket_read_size
290 |
291 | if HIREDIS_USE_BYTE_BUFFER:
292 | self._buffer = bytearray(socket_read_size)
293 |
294 | def __del__(self):
295 | try:
296 | self.on_disconnect()
297 | except Exception:
298 | pass
299 |
300 | def on_connect(self, connection):
301 | self._sock = connection._sock
302 | kwargs = {
303 | 'protocolError': InvalidResponse,
304 | 'replyError': self.parse_error,
305 | }
306 |
307 | # hiredis < 0.1.3 doesn't support functions that create exceptions
308 | if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:
309 | kwargs['replyError'] = ResponseError
310 |
311 | if connection.decode_responses:
312 | kwargs['encoding'] = connection.encoding
313 | self._reader = hiredis.Reader(**kwargs)
314 | self._next_response = False
315 |
316 | def on_disconnect(self):
317 | self._sock = None
318 | self._reader = None
319 | self._next_response = False
320 |
321 | def can_read(self):
322 | if not self._reader:
323 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
324 |
325 | if self._next_response is False:
326 | self._next_response = self._reader.gets()
327 | return self._next_response is not False
328 |
329 | def read_response(self):
330 | if not self._reader:
331 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
332 |
333 | # _next_response might be cached from a can_read() call
334 | if self._next_response is not False:
335 | response = self._next_response
336 | self._next_response = False
337 | return response
338 |
339 | response = self._reader.gets()
340 | socket_read_size = self.socket_read_size
341 | while response is False:
342 | try:
343 | if HIREDIS_USE_BYTE_BUFFER:
344 | bufflen = self._sock.recv_into(self._buffer)
345 | if bufflen == 0:
346 | raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
347 | else:
348 | buffer = self._sock.recv(socket_read_size)
349 |                     # an empty string indicates the server shut down the socket
350 | if not isinstance(buffer, bytes) or len(buffer) == 0:
351 | raise socket.error(SERVER_CLOSED_CONNECTION_ERROR)
352 | except socket.timeout:
353 | raise TimeoutError("Timeout reading from socket")
354 | except socket.error:
355 | e = sys.exc_info()[1]
356 | raise ConnectionError("Error while reading from socket: %s" %
357 | (e.args,))
358 | if HIREDIS_USE_BYTE_BUFFER:
359 | self._reader.feed(self._buffer, 0, bufflen)
360 | else:
361 | self._reader.feed(buffer)
362 | response = self._reader.gets()
363 | # if an older version of hiredis is installed, we need to attempt
364 | # to convert ResponseErrors to their appropriate types.
365 | if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:
366 | if isinstance(response, ResponseError):
367 | response = self.parse_error(response.args[0])
368 | elif isinstance(response, list) and response and \
369 | isinstance(response[0], ResponseError):
370 | response[0] = self.parse_error(response[0].args[0])
371 | # if the response is a ConnectionError or the response is a list and
372 | # the first item is a ConnectionError, raise it as something bad
373 | # happened
374 | if isinstance(response, ConnectionError):
375 | raise response
376 | elif isinstance(response, list) and response and \
377 | isinstance(response[0], ConnectionError):
378 | raise response[0]
379 | return response
380 |
381 | if HIREDIS_AVAILABLE:
382 | DefaultParser = HiredisParser
383 | else:
384 | DefaultParser = PythonParser
385 |
386 |
387 | class Connection(object):
388 | "Manages TCP communication to and from a Redis server"
389 |     description_format = "Connection<host=%(host)s,port=%(port)s,db=%(db)s>"
390 |
391 | def __init__(self, host='localhost', port=6379, db=0, password=None,
392 | socket_timeout=None, socket_connect_timeout=None,
393 | socket_keepalive=False, socket_keepalive_options=None,
394 | retry_on_timeout=False, encoding='utf-8',
395 | encoding_errors='strict', decode_responses=False,
396 | parser_class=DefaultParser, socket_read_size=65536):
397 | self.pid = os.getpid()
398 | self.host = host
399 | self.port = int(port)
400 | self.db = db
401 | self.password = password
402 | self.socket_timeout = socket_timeout
403 | self.socket_connect_timeout = socket_connect_timeout or socket_timeout
404 | self.socket_keepalive = socket_keepalive
405 | self.socket_keepalive_options = socket_keepalive_options or {}
406 | self.retry_on_timeout = retry_on_timeout
407 | self.encoding = encoding
408 | self.encoding_errors = encoding_errors
409 | self.decode_responses = decode_responses
410 | self._sock = None
411 | self._parser = parser_class(socket_read_size=socket_read_size)
412 | self._description_args = {
413 | 'host': self.host,
414 | 'port': self.port,
415 | 'db': self.db,
416 | }
417 | self._connect_callbacks = []
418 |
419 | def __repr__(self):
420 | return self.description_format % self._description_args
421 |
422 | def __del__(self):
423 | try:
424 | self.disconnect()
425 | except Exception:
426 | pass
427 |
428 | def register_connect_callback(self, callback):
429 | self._connect_callbacks.append(callback)
430 |
431 | def clear_connect_callbacks(self):
432 | self._connect_callbacks = []
433 |
434 | def connect(self):
435 | "Connects to the Redis server if not already connected"
436 | if self._sock:
437 | return
438 | try:
439 | sock = self._connect()
440 | except socket.error:
441 | e = sys.exc_info()[1]
442 | raise ConnectionError(self._error_message(e))
443 |
444 | self._sock = sock
445 | try:
446 | self.on_connect()
447 | except RedisError:
448 | # clean up after any error in on_connect
449 | self.disconnect()
450 | raise
451 |
452 | # run any user callbacks. right now the only internal callback
453 | # is for pubsub channel/pattern resubscription
454 | for callback in self._connect_callbacks:
455 | callback(self)
456 |
457 | def _connect(self):
458 | "Create a TCP socket connection"
459 | # we want to mimic what socket.create_connection does to support
460 | # ipv4/ipv6, but we want to set options prior to calling
461 | # socket.connect()
462 | err = None
463 | for res in socket.getaddrinfo(self.host, self.port, 0,
464 | socket.SOCK_STREAM):
465 | family, socktype, proto, canonname, socket_address = res
466 | sock = None
467 | try:
468 | sock = socket.socket(family, socktype, proto)
469 | # TCP_NODELAY
470 | sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
471 |
472 | # TCP_KEEPALIVE
473 | if self.socket_keepalive:
474 | sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
475 | for k, v in iteritems(self.socket_keepalive_options):
476 | sock.setsockopt(socket.SOL_TCP, k, v)
477 |
478 | # set the socket_connect_timeout before we connect
479 | sock.settimeout(self.socket_connect_timeout)
480 |
481 | # connect
482 | sock.connect(socket_address)
483 |
484 | # set the socket_timeout now that we're connected
485 | sock.settimeout(self.socket_timeout)
486 | return sock
487 |
488 | except socket.error as _:
489 | err = _
490 | if sock is not None:
491 | sock.close()
492 |
493 | if err is not None:
494 | raise err
495 | raise socket.error("socket.getaddrinfo returned an empty list")
496 |
497 | def _error_message(self, exception):
498 | # args for socket.error can either be (errno, "message")
499 | # or just "message"
500 | if len(exception.args) == 1:
501 | return "Error connecting to %s:%s. %s." % \
502 | (self.host, self.port, exception.args[0])
503 | else:
504 | return "Error %s connecting to %s:%s. %s." % \
505 | (exception.args[0], self.host, self.port, exception.args[1])
506 |
507 | def on_connect(self):
508 | "Initialize the connection, authenticate and select a database"
509 | self._parser.on_connect(self)
510 |
511 | # if a password is specified, authenticate
512 | if self.password:
513 | self.send_command('AUTH', self.password)
514 | if nativestr(self.read_response()) != 'OK':
515 | raise AuthenticationError('Invalid Password')
516 |
517 | # if a database is specified, switch to it
518 | if self.db:
519 | self.send_command('SELECT', self.db)
520 | if nativestr(self.read_response()) != 'OK':
521 | raise ConnectionError('Invalid Database')
522 |
523 | def disconnect(self):
524 | "Disconnects from the Redis server"
525 | self._parser.on_disconnect()
526 | if self._sock is None:
527 | return
528 | try:
529 | self._sock.shutdown(socket.SHUT_RDWR)
530 | self._sock.close()
531 | except socket.error:
532 | pass
533 | self._sock = None
534 |
535 | def send_packed_command(self, command):
536 | "Send an already packed command to the Redis server"
537 | if not self._sock:
538 | self.connect()
539 | try:
540 | if isinstance(command, str):
541 | command = [command]
542 | for item in command:
543 | self._sock.sendall(item)
544 | except socket.timeout:
545 | self.disconnect()
546 | raise TimeoutError("Timeout writing to socket")
547 | except socket.error:
548 | e = sys.exc_info()[1]
549 | self.disconnect()
550 | if len(e.args) == 1:
551 | errno, errmsg = 'UNKNOWN', e.args[0]
552 | else:
553 | errno = e.args[0]
554 | errmsg = e.args[1]
555 | raise ConnectionError("Error %s while writing to socket. %s." %
556 | (errno, errmsg))
557 | except:
558 | self.disconnect()
559 | raise
560 |
561 | def send_command(self, *args):
562 | "Pack and send a command to the Redis server"
563 | self.send_packed_command(self.pack_command(*args))
564 |
565 | def can_read(self, timeout=0):
566 | "Poll the socket to see if there's data that can be read."
567 | sock = self._sock
568 | if not sock:
569 | self.connect()
570 | sock = self._sock
571 | return self._parser.can_read() or \
572 | bool(select([sock], [], [], timeout)[0])
573 |
574 | def read_response(self):
575 | "Read the response from a previously sent command"
576 | try:
577 | response = self._parser.read_response()
578 | except:
579 | self.disconnect()
580 | raise
581 | if isinstance(response, ResponseError):
582 | raise response
583 | return response
584 |
585 | def encode(self, value):
586 | "Return a bytestring representation of the value"
587 | if isinstance(value, Token):
588 | return b(value.value)
589 | elif isinstance(value, bytes):
590 | return value
591 | elif isinstance(value, (int, long)):
592 | value = b(str(value))
593 | elif isinstance(value, float):
594 | value = b(repr(value))
595 | elif not isinstance(value, basestring):
596 | value = unicode(value)
597 | if isinstance(value, unicode):
598 | value = value.encode(self.encoding, self.encoding_errors)
599 | return value
600 |
601 | def pack_command(self, *args):
602 | "Pack a series of arguments into the Redis protocol"
603 | output = []
604 | # the client might have included 1 or more literal arguments in
605 | # the command name, e.g., 'CONFIG GET'. The Redis server expects these
606 | # arguments to be sent separately, so split the first argument
607 |         # manually. All of these arguments get wrapped in the Token class
608 | # to prevent them from being encoded.
609 | command = args[0]
610 | if ' ' in command:
611 | args = tuple([Token(s) for s in command.split(' ')]) + args[1:]
612 | else:
613 | args = (Token(command),) + args[1:]
614 |
615 | buff = SYM_EMPTY.join(
616 | (SYM_STAR, b(str(len(args))), SYM_CRLF))
617 |
618 | for arg in imap(self.encode, args):
619 | # to avoid large string mallocs, chunk the command into the
620 | # output list if we're sending large values
621 | if len(buff) > 6000 or len(arg) > 6000:
622 | buff = SYM_EMPTY.join(
623 | (buff, SYM_DOLLAR, b(str(len(arg))), SYM_CRLF))
624 | output.append(buff)
625 | output.append(arg)
626 | buff = SYM_CRLF
627 | else:
628 | buff = SYM_EMPTY.join((buff, SYM_DOLLAR, b(str(len(arg))),
629 | SYM_CRLF, arg, SYM_CRLF))
630 | output.append(buff)
631 | return output
632 |
633 | def pack_commands(self, commands):
634 | "Pack multiple commands into the Redis protocol"
635 | output = []
636 | pieces = []
637 | buffer_length = 0
638 |
639 | for cmd in commands:
640 | for chunk in self.pack_command(*cmd):
641 | pieces.append(chunk)
642 | buffer_length += len(chunk)
643 |
644 | if buffer_length > 6000:
645 | output.append(SYM_EMPTY.join(pieces))
646 | buffer_length = 0
647 | pieces = []
648 |
649 | if pieces:
650 | output.append(SYM_EMPTY.join(pieces))
651 | return output
652 |
653 |
654 | class SSLConnection(Connection):
655 |     description_format = "SSLConnection<host=%(host)s,port=%(port)s,db=%(db)s>"
656 |
657 | def __init__(self, ssl_keyfile=None, ssl_certfile=None, ssl_cert_reqs=None,
658 | ssl_ca_certs=None, **kwargs):
659 | if not ssl_available:
660 | raise RedisError("Python wasn't built with SSL support")
661 |
662 | super(SSLConnection, self).__init__(**kwargs)
663 |
664 | self.keyfile = ssl_keyfile
665 | self.certfile = ssl_certfile
666 | if ssl_cert_reqs is None:
667 | ssl_cert_reqs = ssl.CERT_NONE
668 | elif isinstance(ssl_cert_reqs, basestring):
669 | CERT_REQS = {
670 | 'none': ssl.CERT_NONE,
671 | 'optional': ssl.CERT_OPTIONAL,
672 | 'required': ssl.CERT_REQUIRED
673 | }
674 | if ssl_cert_reqs not in CERT_REQS:
675 | raise RedisError(
676 | "Invalid SSL Certificate Requirements Flag: %s" %
677 | ssl_cert_reqs)
678 | ssl_cert_reqs = CERT_REQS[ssl_cert_reqs]
679 | self.cert_reqs = ssl_cert_reqs
680 | self.ca_certs = ssl_ca_certs
681 |
682 | def _connect(self):
683 | "Wrap the socket with SSL support"
684 | sock = super(SSLConnection, self)._connect()
685 | sock = ssl.wrap_socket(sock,
686 | cert_reqs=self.cert_reqs,
687 | keyfile=self.keyfile,
688 | certfile=self.certfile,
689 | ca_certs=self.ca_certs)
690 | return sock
691 |
692 |
693 | class UnixDomainSocketConnection(Connection):
694 |     description_format = "UnixDomainSocketConnection<path=%(path)s,db=%(db)s>"
695 |
696 | def __init__(self, path='', db=0, password=None,
697 | socket_timeout=None, encoding='utf-8',
698 | encoding_errors='strict', decode_responses=False,
699 | retry_on_timeout=False,
700 | parser_class=DefaultParser, socket_read_size=65536):
701 | self.pid = os.getpid()
702 | self.path = path
703 | self.db = db
704 | self.password = password
705 | self.socket_timeout = socket_timeout
706 | self.retry_on_timeout = retry_on_timeout
707 | self.encoding = encoding
708 | self.encoding_errors = encoding_errors
709 | self.decode_responses = decode_responses
710 | self._sock = None
711 | self._parser = parser_class(socket_read_size=socket_read_size)
712 | self._description_args = {
713 | 'path': self.path,
714 | 'db': self.db,
715 | }
716 | self._connect_callbacks = []
717 |
718 | def _connect(self):
719 | "Create a Unix domain socket connection"
720 | sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
721 | sock.settimeout(self.socket_timeout)
722 | sock.connect(self.path)
723 | return sock
724 |
725 | def _error_message(self, exception):
726 | # args for socket.error can either be (errno, "message")
727 | # or just "message"
728 | if len(exception.args) == 1:
729 | return "Error connecting to unix socket: %s. %s." % \
730 | (self.path, exception.args[0])
731 | else:
732 | return "Error %s connecting to unix socket: %s. %s." % \
733 | (exception.args[0], self.path, exception.args[1])
734 |
735 |
736 | class ConnectionPool(object):
737 | "Generic connection pool"
738 | @classmethod
739 | def from_url(cls, url, db=None, decode_components=False, **kwargs):
740 | """
741 | Return a connection pool configured from the given URL.
742 |
743 | For example::
744 |
745 | redis://[:password]@localhost:6379/0
746 | rediss://[:password]@localhost:6379/0
747 | unix://[:password]@/path/to/socket.sock?db=0
748 |
749 | Three URL schemes are supported:
750 | redis:// creates a normal TCP socket connection
751 | rediss:// creates a SSL wrapped TCP socket connection
752 | unix:// creates a Unix Domain Socket connection
753 |
754 | There are several ways to specify a database number. The parse function
755 | will return the first specified option:
756 | 1. A ``db`` querystring option, e.g. redis://localhost?db=0
757 | 2. If using the redis:// scheme, the path argument of the url, e.g.
758 | redis://localhost/0
759 | 3. The ``db`` argument to this function.
760 |
761 | If none of these options are specified, db=0 is used.
762 |
763 | The ``decode_components`` argument allows this function to work with
764 | percent-encoded URLs. If this argument is set to ``True`` all ``%xx``
765 | escapes will be replaced by their single-character equivalents after
766 | the URL has been parsed. This only applies to the ``hostname``,
767 | ``path``, and ``password`` components.
768 |
769 | Any additional querystring arguments and keyword arguments will be
770 | passed along to the ConnectionPool class's initializer. In the case
771 | of conflicting arguments, querystring arguments always win.
772 | """
773 | url_string = url
774 | url = urlparse(url)
775 | qs = ''
776 |
777 | # in python2.6, custom URL schemes don't recognize querystring values
778 | # they're left as part of the url.path.
779 | if '?' in url.path and not url.query:
780 | # chop the querystring including the ? off the end of the url
781 | # and reparse it.
782 | qs = url.path.split('?', 1)[1]
783 | url = urlparse(url_string[:-(len(qs) + 1)])
784 | else:
785 | qs = url.query
786 |
787 | url_options = {}
788 |
789 | for name, value in iteritems(parse_qs(qs)):
790 | if value and len(value) > 0:
791 | url_options[name] = value[0]
792 |
793 | if decode_components:
794 | password = unquote(url.password) if url.password else None
795 | path = unquote(url.path) if url.path else None
796 | hostname = unquote(url.hostname) if url.hostname else None
797 | else:
798 | password = url.password
799 | path = url.path
800 | hostname = url.hostname
801 |
802 | # We only support redis:// and unix:// schemes.
803 | if url.scheme == 'unix':
804 | url_options.update({
805 | 'password': password,
806 | 'path': path,
807 | 'connection_class': UnixDomainSocketConnection,
808 | })
809 |
810 | else:
811 | url_options.update({
812 | 'host': hostname,
813 | 'port': int(url.port or 6379),
814 | 'password': password,
815 | })
816 |
817 | # If there's a path argument, use it as the db argument if a
818 | # querystring value wasn't specified
819 | if 'db' not in url_options and path:
820 | try:
821 | url_options['db'] = int(path.replace('/', ''))
822 | except (AttributeError, ValueError):
823 | pass
824 |
825 | if url.scheme == 'rediss':
826 | url_options['connection_class'] = SSLConnection
827 |
828 | # last shot at the db value
829 | url_options['db'] = int(url_options.get('db', db or 0))
830 |
831 | # update the arguments from the URL values
832 | kwargs.update(url_options)
833 |
834 |         # backwards compatibility
835 | if 'charset' in kwargs:
836 | warnings.warn(DeprecationWarning(
837 | '"charset" is deprecated. Use "encoding" instead'))
838 | kwargs['encoding'] = kwargs.pop('charset')
839 | if 'errors' in kwargs:
840 | warnings.warn(DeprecationWarning(
841 | '"errors" is deprecated. Use "encoding_errors" instead'))
842 | kwargs['encoding_errors'] = kwargs.pop('errors')
843 |
844 | return cls(**kwargs)
845 |
846 | def __init__(self, connection_class=Connection, max_connections=None,
847 | **connection_kwargs):
848 | """
849 | Create a connection pool. If max_connections is set, then this
850 | object raises redis.ConnectionError when the pool's limit is reached.
851 |
852 |         By default, TCP connections are created unless connection_class is specified.
853 | Use redis.UnixDomainSocketConnection for unix sockets.
854 |
855 | Any additional keyword arguments are passed to the constructor of
856 | connection_class.
857 | """
858 | max_connections = max_connections or 2 ** 31
859 | if not isinstance(max_connections, (int, long)) or max_connections < 0:
860 | raise ValueError('"max_connections" must be a positive integer')
861 |
862 | self.connection_class = connection_class
863 | self.connection_kwargs = connection_kwargs
864 | self.max_connections = max_connections
865 |
866 | self.reset()
867 |
868 | def __repr__(self):
869 | return "%s<%s>" % (
870 | type(self).__name__,
871 | self.connection_class.description_format % self.connection_kwargs,
872 | )
873 |
874 | def reset(self):
875 | self.pid = os.getpid()
876 | self._created_connections = 0
877 | self._available_connections = []
878 | self._in_use_connections = set()
879 | self._check_lock = threading.Lock()
880 |
881 | def _checkpid(self):
882 | if self.pid != os.getpid():
883 | with self._check_lock:
884 | if self.pid == os.getpid():
885 | # another thread already did the work while we waited
886 | # on the lock.
887 | return
888 | self.disconnect()
889 | self.reset()
890 |
891 | def get_connection(self, command_name, *keys, **options):
892 | "Get a connection from the pool"
893 | self._checkpid()
894 | try:
895 | connection = self._available_connections.pop()
896 | except IndexError:
897 | connection = self.make_connection()
898 | self._in_use_connections.add(connection)
899 | return connection
900 |
901 | def make_connection(self):
902 | "Create a new connection"
903 | if self._created_connections >= self.max_connections:
904 | raise ConnectionError("Too many connections")
905 | self._created_connections += 1
906 | return self.connection_class(**self.connection_kwargs)
907 |
908 | def release(self, connection):
909 | "Releases the connection back to the pool"
910 | self._checkpid()
911 | if connection.pid != self.pid:
912 | return
913 | self._in_use_connections.remove(connection)
914 | self._available_connections.append(connection)
915 |
916 | def disconnect(self):
917 | "Disconnects all connections in the pool"
918 | all_conns = chain(self._available_connections,
919 | self._in_use_connections)
920 | for connection in all_conns:
921 | connection.disconnect()
922 |
923 |
924 | class BlockingConnectionPool(ConnectionPool):
925 | """
926 | Thread-safe blocking connection pool::
927 |
928 | >>> from redis.client import Redis
929 | >>> client = Redis(connection_pool=BlockingConnectionPool())
930 |
931 | It performs the same function as the default
932 |     ``:py:class: ~redis.connection.ConnectionPool`` implementation, in that
933 | it maintains a pool of reusable connections that can be shared by
934 | multiple redis clients (safely across threads if required).
935 |
936 | The difference is that, in the event that a client tries to get a
937 |     connection from the pool when all of the connections are in use, rather than
938 | raising a ``:py:class: ~redis.exceptions.ConnectionError`` (as the default
939 | ``:py:class: ~redis.connection.ConnectionPool`` implementation does), it
940 | makes the client wait ("blocks") for a specified number of seconds until
941 | a connection becomes available.
942 |
943 | Use ``max_connections`` to increase / decrease the pool size::
944 |
945 | >>> pool = BlockingConnectionPool(max_connections=10)
946 |
947 | Use ``timeout`` to tell it either how many seconds to wait for a connection
948 | to become available, or to block forever:
949 |
950 | # Block forever.
951 | >>> pool = BlockingConnectionPool(timeout=None)
952 |
953 | # Raise a ``ConnectionError`` after five seconds if a connection is
954 | # not available.
955 | >>> pool = BlockingConnectionPool(timeout=5)
956 | """
957 | def __init__(self, max_connections=50, timeout=20,
958 | connection_class=Connection, queue_class=LifoQueue,
959 | **connection_kwargs):
960 |
961 | self.queue_class = queue_class
962 | self.timeout = timeout
963 | super(BlockingConnectionPool, self).__init__(
964 | connection_class=connection_class,
965 | max_connections=max_connections,
966 | **connection_kwargs)
967 |
968 | def reset(self):
969 | self.pid = os.getpid()
970 | self._check_lock = threading.Lock()
971 |
972 | # Create and fill up a thread safe queue with ``None`` values.
973 | self.pool = self.queue_class(self.max_connections)
974 | while True:
975 | try:
976 | self.pool.put_nowait(None)
977 | except Full:
978 | break
979 |
980 | # Keep a list of actual connection instances so that we can
981 | # disconnect them later.
982 | self._connections = []
983 |
984 | def make_connection(self):
985 | "Make a fresh connection."
986 | connection = self.connection_class(**self.connection_kwargs)
987 | self._connections.append(connection)
988 | return connection
989 |
990 | def get_connection(self, command_name, *keys, **options):
991 | """
992 | Get a connection, blocking for ``self.timeout`` until a connection
993 | is available from the pool.
994 |
995 | If the connection returned is ``None`` then creates a new connection.
996 | Because we use a last-in first-out queue, the existing connections
997 | (having been returned to the pool after the initial ``None`` values
998 | were added) will be returned before ``None`` values. This means we only
999 | create new connections when we need to, i.e.: the actual number of
1000 | connections will only increase in response to demand.
1001 | """
1002 | # Make sure we haven't changed process.
1003 | self._checkpid()
1004 |
1005 | # Try and get a connection from the pool. If one isn't available within
1006 | # self.timeout then raise a ``ConnectionError``.
1007 | connection = None
1008 | try:
1009 | connection = self.pool.get(block=True, timeout=self.timeout)
1010 | except Empty:
1011 | # Note that this is not caught by the redis client and will be
1012 |             # raised unless handled by application code. Use timeout=None to block forever.
1013 | raise ConnectionError("No connection available.")
1014 |
1015 | # If the ``connection`` is actually ``None`` then that's a cue to make
1016 | # a new connection to add to the pool.
1017 | if connection is None:
1018 | connection = self.make_connection()
1019 |
1020 | return connection
1021 |
1022 | def release(self, connection):
1023 | "Releases the connection back to the pool."
1024 | # Make sure we haven't changed process.
1025 | self._checkpid()
1026 | if connection.pid != self.pid:
1027 | return
1028 |
1029 | # Put the connection back into the pool.
1030 | try:
1031 | self.pool.put_nowait(connection)
1032 | except Full:
1033 | # perhaps the pool has been reset() after a fork? regardless,
1034 | # we don't want this connection
1035 | pass
1036 |
1037 | def disconnect(self):
1038 | "Disconnects all connections in the pool."
1039 | for connection in self._connections:
1040 | connection.disconnect()
1041 |
--------------------------------------------------------------------------------
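
The ``from_url`` and ``BlockingConnectionPool`` docstrings above describe the supported URL schemes, the db-resolution order, and the pool's blocking behaviour. A minimal usage sketch of both, assuming the vendored package is importable as ``redis`` and a server is running locally; the password, db number, pool size and timeout are illustrative values:

    import redis
    from redis.connection import ConnectionPool, BlockingConnectionPool

    # db resolution order: a ?db= querystring first, then the URL path,
    # then the db argument -- here the path '/2' selects database 2
    pool = ConnectionPool.from_url('redis://:secret@localhost:6379/2')

    # a blocking pool holding at most 10 connections; get_connection()
    # waits up to 5 seconds for a free one before raising ConnectionError
    # (timeout=None would block forever instead)
    blocking_pool = BlockingConnectionPool(max_connections=10, timeout=5,
                                           host='localhost', port=6379)
    client = redis.StrictRedis(connection_pool=blocking_pool)
    client.set('foo', 'bar')
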
/gimel/vendor/redis/exceptions.py:
--------------------------------------------------------------------------------
1 | "Core exceptions raised by the Redis client"
2 | from redis._compat import unicode
3 |
4 |
5 | class RedisError(Exception):
6 | pass
7 |
8 |
9 | # python 2.5 doesn't implement Exception.__unicode__. Add it here to all
10 | # our exception types
11 | if not hasattr(RedisError, '__unicode__'):
12 | def __unicode__(self):
13 | if isinstance(self.args[0], unicode):
14 | return self.args[0]
15 | return unicode(self.args[0])
16 | RedisError.__unicode__ = __unicode__
17 |
18 |
19 | class AuthenticationError(RedisError):
20 | pass
21 |
22 |
23 | class ConnectionError(RedisError):
24 | pass
25 |
26 |
27 | class TimeoutError(RedisError):
28 | pass
29 |
30 |
31 | class BusyLoadingError(ConnectionError):
32 | pass
33 |
34 |
35 | class InvalidResponse(RedisError):
36 | pass
37 |
38 |
39 | class ResponseError(RedisError):
40 | pass
41 |
42 |
43 | class DataError(RedisError):
44 | pass
45 |
46 |
47 | class PubSubError(RedisError):
48 | pass
49 |
50 |
51 | class WatchError(RedisError):
52 | pass
53 |
54 |
55 | class NoScriptError(ResponseError):
56 | pass
57 |
58 |
59 | class ExecAbortError(ResponseError):
60 | pass
61 |
62 |
63 | class ReadOnlyError(ResponseError):
64 | pass
65 |
66 |
67 | class LockError(RedisError, ValueError):
68 | "Errors acquiring or releasing a lock"
69 |     # NOTE: For backwards compatibility, this class derives from ValueError.
70 | # This was originally chosen to behave like threading.Lock.
71 | pass
72 |
--------------------------------------------------------------------------------
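
Because every class above ultimately derives from ``RedisError``, callers can catch the whole family with a single handler or pick off specific subclasses first. A small sketch; the ``safe_get`` helper is hypothetical:

    from redis.exceptions import RedisError, ConnectionError, TimeoutError

    def safe_get(client, key):
        # hypothetical helper: treat transient failures as cache misses
        try:
            return client.get(key)
        except TimeoutError:
            return None   # socket timed out while reading
        except ConnectionError:
            return None   # also covers BusyLoadingError, a subclass
        except RedisError:
            raise         # any other redis-level error: surface it
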
/gimel/vendor/redis/lock.py:
--------------------------------------------------------------------------------
1 | import threading
2 | import time as mod_time
3 | import uuid
4 | from redis.exceptions import LockError, WatchError
5 | from redis.utils import dummy
6 | from redis._compat import b
7 |
8 |
9 | class Lock(object):
10 | """
11 | A shared, distributed Lock. Using Redis for locking allows the Lock
12 | to be shared across processes and/or machines.
13 |
14 | It's left to the user to resolve deadlock issues and make sure
15 | multiple clients play nicely together.
16 | """
17 | def __init__(self, redis, name, timeout=None, sleep=0.1,
18 | blocking=True, blocking_timeout=None, thread_local=True):
19 | """
20 | Create a new Lock instance named ``name`` using the Redis client
21 | supplied by ``redis``.
22 |
23 | ``timeout`` indicates a maximum life for the lock.
24 | By default, it will remain locked until release() is called.
25 | ``timeout`` can be specified as a float or integer, both representing
26 | the number of seconds to wait.
27 |
28 | ``sleep`` indicates the amount of time to sleep per loop iteration
29 | when the lock is in blocking mode and another client is currently
30 | holding the lock.
31 |
32 | ``blocking`` indicates whether calling ``acquire`` should block until
33 | the lock has been acquired or to fail immediately, causing ``acquire``
34 | to return False and the lock not being acquired. Defaults to True.
35 | Note this value can be overridden by passing a ``blocking``
36 | argument to ``acquire``.
37 |
38 | ``blocking_timeout`` indicates the maximum amount of time in seconds to
39 | spend trying to acquire the lock. A value of ``None`` indicates
40 | continue trying forever. ``blocking_timeout`` can be specified as a
41 | float or integer, both representing the number of seconds to wait.
42 |
43 | ``thread_local`` indicates whether the lock token is placed in
44 | thread-local storage. By default, the token is placed in thread local
45 | storage so that a thread only sees its token, not a token set by
46 | another thread. Consider the following timeline:
47 |
48 | time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds.
49 | thread-1 sets the token to "abc"
50 | time: 1, thread-2 blocks trying to acquire `my-lock` using the
51 | Lock instance.
52 | time: 5, thread-1 has not yet completed. redis expires the lock
53 | key.
54 | time: 5, thread-2 acquired `my-lock` now that it's available.
55 | thread-2 sets the token to "xyz"
56 | time: 6, thread-1 finishes its work and calls release(). if the
57 | token is *not* stored in thread local storage, then
58 | thread-1 would see the token value as "xyz" and would be
59 |                  able to successfully release thread-2's lock.
60 |
61 | In some use cases it's necessary to disable thread local storage. For
62 |         example, you may have code where one thread acquires a lock and passes
63 | that lock instance to a worker thread to release later. If thread
64 | local storage isn't disabled in this case, the worker thread won't see
65 | the token set by the thread that acquired the lock. Our assumption
66 | is that these cases aren't common and as such default to using
67 | thread local storage.
68 | """
69 | self.redis = redis
70 | self.name = name
71 | self.timeout = timeout
72 | self.sleep = sleep
73 | self.blocking = blocking
74 | self.blocking_timeout = blocking_timeout
75 | self.thread_local = bool(thread_local)
76 | self.local = threading.local() if self.thread_local else dummy()
77 | self.local.token = None
78 | if self.timeout and self.sleep > self.timeout:
79 | raise LockError("'sleep' must be less than 'timeout'")
80 |
81 | def __enter__(self):
82 | # force blocking, as otherwise the user would have to check whether
83 | # the lock was actually acquired or not.
84 | self.acquire(blocking=True)
85 | return self
86 |
87 | def __exit__(self, exc_type, exc_value, traceback):
88 | self.release()
89 |
90 | def acquire(self, blocking=None, blocking_timeout=None):
91 | """
92 | Use Redis to hold a shared, distributed lock named ``name``.
93 | Returns True once the lock is acquired.
94 |
95 | If ``blocking`` is False, always return immediately. If the lock
96 | was acquired, return True, otherwise return False.
97 |
98 | ``blocking_timeout`` specifies the maximum number of seconds to
99 | wait trying to acquire the lock.
100 | """
101 | sleep = self.sleep
102 | token = b(uuid.uuid1().hex)
103 | if blocking is None:
104 | blocking = self.blocking
105 | if blocking_timeout is None:
106 | blocking_timeout = self.blocking_timeout
107 | stop_trying_at = None
108 | if blocking_timeout is not None:
109 | stop_trying_at = mod_time.time() + blocking_timeout
110 | while 1:
111 | if self.do_acquire(token):
112 | self.local.token = token
113 | return True
114 | if not blocking:
115 | return False
116 | if stop_trying_at is not None and mod_time.time() > stop_trying_at:
117 | return False
118 | mod_time.sleep(sleep)
119 |
120 | def do_acquire(self, token):
121 | if self.redis.setnx(self.name, token):
122 | if self.timeout:
123 | # convert to milliseconds
124 | timeout = int(self.timeout * 1000)
125 | self.redis.pexpire(self.name, timeout)
126 | return True
127 | return False
128 |
129 | def release(self):
130 | "Releases the already acquired lock"
131 | expected_token = self.local.token
132 | if expected_token is None:
133 | raise LockError("Cannot release an unlocked lock")
134 | self.local.token = None
135 | self.do_release(expected_token)
136 |
137 | def do_release(self, expected_token):
138 | name = self.name
139 |
140 | def execute_release(pipe):
141 | lock_value = pipe.get(name)
142 | if lock_value != expected_token:
143 | raise LockError("Cannot release a lock that's no longer owned")
144 | pipe.delete(name)
145 |
146 | self.redis.transaction(execute_release, name)
147 |
148 | def extend(self, additional_time):
149 | """
150 | Adds more time to an already acquired lock.
151 |
152 | ``additional_time`` can be specified as an integer or a float, both
153 | representing the number of seconds to add.
154 | """
155 | if self.local.token is None:
156 | raise LockError("Cannot extend an unlocked lock")
157 | if self.timeout is None:
158 | raise LockError("Cannot extend a lock with no timeout")
159 | return self.do_extend(additional_time)
160 |
161 | def do_extend(self, additional_time):
162 | pipe = self.redis.pipeline()
163 | pipe.watch(self.name)
164 | lock_value = pipe.get(self.name)
165 | if lock_value != self.local.token:
166 | raise LockError("Cannot extend a lock that's no longer owned")
167 | expiration = pipe.pttl(self.name)
168 | if expiration is None or expiration < 0:
169 | # Redis evicted the lock key between the previous get() and now
170 | # we'll handle this when we call pexpire()
171 | expiration = 0
172 | pipe.multi()
173 | pipe.pexpire(self.name, expiration + int(additional_time * 1000))
174 |
175 | try:
176 | response = pipe.execute()
177 | except WatchError:
178 | # someone else acquired the lock
179 | raise LockError("Cannot extend a lock that's no longer owned")
180 | if not response[0]:
181 | # pexpire returns False if the key doesn't exist
182 | raise LockError("Cannot extend a lock that's no longer owned")
183 | return True
184 |
185 |
186 | class LuaLock(Lock):
187 | """
188 | A lock implementation that uses Lua scripts rather than pipelines
189 | and watches.
190 | """
191 | lua_acquire = None
192 | lua_release = None
193 | lua_extend = None
194 |
195 | # KEYS[1] - lock name
196 | # ARGV[1] - token
197 | # ARGV[2] - timeout in milliseconds
198 | # return 1 if lock was acquired, otherwise 0
199 | LUA_ACQUIRE_SCRIPT = """
200 | if redis.call('setnx', KEYS[1], ARGV[1]) == 1 then
201 | if ARGV[2] ~= '' then
202 | redis.call('pexpire', KEYS[1], ARGV[2])
203 | end
204 | return 1
205 | end
206 | return 0
207 | """
208 |
209 | # KEYS[1] - lock name
210 |     # ARGV[1] - token
211 | # return 1 if the lock was released, otherwise 0
212 | LUA_RELEASE_SCRIPT = """
213 | local token = redis.call('get', KEYS[1])
214 | if not token or token ~= ARGV[1] then
215 | return 0
216 | end
217 | redis.call('del', KEYS[1])
218 | return 1
219 | """
220 |
221 | # KEYS[1] - lock name
222 |     # ARGV[1] - token
223 |     # ARGV[2] - additional milliseconds
224 |     # return 1 if the lock's time was extended, otherwise 0
225 | LUA_EXTEND_SCRIPT = """
226 | local token = redis.call('get', KEYS[1])
227 | if not token or token ~= ARGV[1] then
228 | return 0
229 | end
230 | local expiration = redis.call('pttl', KEYS[1])
231 | if not expiration then
232 | expiration = 0
233 | end
234 | if expiration < 0 then
235 | return 0
236 | end
237 | redis.call('pexpire', KEYS[1], expiration + ARGV[2])
238 | return 1
239 | """
240 |
241 | def __init__(self, *args, **kwargs):
242 | super(LuaLock, self).__init__(*args, **kwargs)
243 | LuaLock.register_scripts(self.redis)
244 |
245 | @classmethod
246 | def register_scripts(cls, redis):
247 | if cls.lua_acquire is None:
248 | cls.lua_acquire = redis.register_script(cls.LUA_ACQUIRE_SCRIPT)
249 | if cls.lua_release is None:
250 | cls.lua_release = redis.register_script(cls.LUA_RELEASE_SCRIPT)
251 | if cls.lua_extend is None:
252 | cls.lua_extend = redis.register_script(cls.LUA_EXTEND_SCRIPT)
253 |
254 | def do_acquire(self, token):
255 | timeout = self.timeout and int(self.timeout * 1000) or ''
256 | return bool(self.lua_acquire(keys=[self.name],
257 | args=[token, timeout],
258 | client=self.redis))
259 |
260 | def do_release(self, expected_token):
261 | if not bool(self.lua_release(keys=[self.name],
262 | args=[expected_token],
263 | client=self.redis)):
264 | raise LockError("Cannot release a lock that's no longer owned")
265 |
266 | def do_extend(self, additional_time):
267 | additional_time = int(additional_time * 1000)
268 | if not bool(self.lua_extend(keys=[self.name],
269 | args=[self.local.token, additional_time],
270 | client=self.redis)):
271 | raise LockError("Cannot extend a lock that's no longer owned")
272 | return True
273 |
--------------------------------------------------------------------------------
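
A short usage sketch for the ``Lock`` class as documented above; the key name and timings are illustrative, and ``client`` is assumed to point at a reachable server:

    import redis
    from redis.lock import Lock

    client = redis.StrictRedis()

    # context-manager usage blocks until acquired; the key auto-expires
    # after 10 seconds even if release() is never reached
    with Lock(client, 'my-lock', timeout=10, sleep=0.1):
        pass  # do work while holding the lock

    # non-blocking variant: give up after 2 seconds instead of waiting
    lock = Lock(client, 'my-lock', timeout=10)
    if lock.acquire(blocking_timeout=2):
        try:
            lock.extend(5)  # add 5 seconds to the remaining timeout
        finally:
            lock.release()
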
/gimel/vendor/redis/sentinel.py:
--------------------------------------------------------------------------------
1 | import os
2 | import random
3 | import weakref
4 |
5 | from redis.client import StrictRedis
6 | from redis.connection import ConnectionPool, Connection
7 | from redis.exceptions import ConnectionError, ResponseError, ReadOnlyError
8 | from redis._compat import iteritems, nativestr, xrange
9 |
10 |
11 | class MasterNotFoundError(ConnectionError):
12 | pass
13 |
14 |
15 | class SlaveNotFoundError(ConnectionError):
16 | pass
17 |
18 |
19 | class SentinelManagedConnection(Connection):
20 | def __init__(self, **kwargs):
21 | self.connection_pool = kwargs.pop('connection_pool')
22 | super(SentinelManagedConnection, self).__init__(**kwargs)
23 |
24 | def __repr__(self):
25 | pool = self.connection_pool
26 |         s = '%s<service=%s%%s>' % (type(self).__name__, pool.service_name)
27 | if self.host:
28 | host_info = ',host=%s,port=%s' % (self.host, self.port)
29 | s = s % host_info
30 | return s
31 |
32 | def connect_to(self, address):
33 | self.host, self.port = address
34 | super(SentinelManagedConnection, self).connect()
35 | if self.connection_pool.check_connection:
36 | self.send_command('PING')
37 | if nativestr(self.read_response()) != 'PONG':
38 | raise ConnectionError('PING failed')
39 |
40 | def connect(self):
41 | if self._sock:
42 | return # already connected
43 | if self.connection_pool.is_master:
44 | self.connect_to(self.connection_pool.get_master_address())
45 | else:
46 | for slave in self.connection_pool.rotate_slaves():
47 | try:
48 | return self.connect_to(slave)
49 | except ConnectionError:
50 | continue
51 |             raise SlaveNotFoundError  # never reached
52 |
53 | def read_response(self):
54 | try:
55 | return super(SentinelManagedConnection, self).read_response()
56 | except ReadOnlyError:
57 | if self.connection_pool.is_master:
58 |                 # When talking to a master, a ReadOnlyError most likely
59 | # indicates that the previous master that we're still connected
60 | # to has been demoted to a slave and there's a new master.
61 | # calling disconnect will force the connection to re-query
62 | # sentinel during the next connect() attempt.
63 | self.disconnect()
64 | raise ConnectionError('The previous master is now a slave')
65 | raise
66 |
67 |
68 | class SentinelConnectionPool(ConnectionPool):
69 | """
70 | Sentinel backed connection pool.
71 |
72 | If ``check_connection`` flag is set to True, SentinelManagedConnection
73 | sends a PING command right after establishing the connection.
74 | """
75 |
76 | def __init__(self, service_name, sentinel_manager, **kwargs):
77 | kwargs['connection_class'] = kwargs.get(
78 | 'connection_class', SentinelManagedConnection)
79 | self.is_master = kwargs.pop('is_master', True)
80 | self.check_connection = kwargs.pop('check_connection', False)
81 | super(SentinelConnectionPool, self).__init__(**kwargs)
82 | self.connection_kwargs['connection_pool'] = weakref.proxy(self)
83 | self.service_name = service_name
84 | self.sentinel_manager = sentinel_manager
85 |
86 | def __repr__(self):
87 |         return "%s<service=%s(%s)" % (
88 |             type(self).__name__,
89 |             self.service_name,
90 |             self.is_master and 'master' or 'slave',
91 |         )
92 | 
93 |     def reset(self):
94 |         super(SentinelConnectionPool, self).reset()
95 |         self.master_address = None
96 |         self.slave_rr_counter = None
97 | 
98 |     def get_master_address(self):
99 |         master_address = self.sentinel_manager.discover_master(
100 |             self.service_name)
101 |         if self.is_master:
102 |             if self.master_address is None:
103 |                 self.master_address = master_address
104 |             elif master_address != self.master_address:
105 |                 # Master address changed, disconnect all clients in this pool
106 |                 self.disconnect()
107 |         return master_address
108 | 
109 |     def rotate_slaves(self):
110 |         "Round-robin slave balancer"
111 |         slaves = self.sentinel_manager.discover_slaves(self.service_name)
112 |         if slaves:
113 |             if self.slave_rr_counter is None:
114 |                 self.slave_rr_counter = random.randint(0, len(slaves) - 1)
115 |             for _ in xrange(len(slaves)):
116 |                 self.slave_rr_counter = (
117 |                     self.slave_rr_counter + 1) % len(slaves)
118 |                 slave = slaves[self.slave_rr_counter]
119 |                 yield slave
120 |         # Fallback to the master connection
121 |         try:
122 |             yield self.get_master_address()
123 |         except MasterNotFoundError:
124 |             pass
125 |         raise SlaveNotFoundError('No slave found for %r' % (self.service_name))
126 | 
127 |     def _checkpid(self):
128 |         if self.pid != os.getpid():
129 |             self.disconnect()
130 |             self.reset()
131 |             self.__init__(self.service_name, self.sentinel_manager,
132 |                           is_master=self.is_master,
133 |                           check_connection=self.check_connection,
134 |                           connection_class=self.connection_class,
135 |                           max_connections=self.max_connections,
136 |                           **self.connection_kwargs)
137 | 
138 | 
139 | class Sentinel(object):
140 |     """
141 |     Redis Sentinel cluster client
142 | 
143 |     >>> from redis.sentinel import Sentinel
144 |     >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
145 |     >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
146 |     >>> master.set('foo', 'bar')
147 |     >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
148 |     >>> slave.get('foo')
149 |     'bar'
150 | 
151 |     ``sentinels`` is a list of sentinel nodes. Each node is represented by
152 |     a pair (hostname, port).
153 | 
154 |     ``min_other_sentinels`` defines a minimum number of peers for a sentinel.
155 |     When querying a sentinel, if it doesn't meet this threshold, responses
156 |     from that sentinel won't be considered valid.
157 | 
158 |     ``sentinel_kwargs`` is a dictionary of connection arguments used when
159 |     connecting to sentinel instances. Any argument that can be passed to
160 |     a normal Redis connection can be specified here. If ``sentinel_kwargs`` is
161 |     not specified, any socket_timeout and socket_keepalive options specified
162 |     in ``connection_kwargs`` will be used.
163 | 
164 |     ``connection_kwargs`` are keyword arguments that will be used when
165 |     establishing a connection to a Redis server.
166 |     """
167 |
168 | def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None,
169 | **connection_kwargs):
170 | # if sentinel_kwargs isn't defined, use the socket_* options from
171 | # connection_kwargs
172 | if sentinel_kwargs is None:
173 | sentinel_kwargs = dict([(k, v)
174 | for k, v in iteritems(connection_kwargs)
175 | if k.startswith('socket_')
176 | ])
177 | self.sentinel_kwargs = sentinel_kwargs
178 |
179 | self.sentinels = [StrictRedis(hostname, port, **self.sentinel_kwargs)
180 | for hostname, port in sentinels]
181 | self.min_other_sentinels = min_other_sentinels
182 | self.connection_kwargs = connection_kwargs
183 |
184 | def __repr__(self):
185 | sentinel_addresses = []
186 | for sentinel in self.sentinels:
187 | sentinel_addresses.append('%s:%s' % (
188 | sentinel.connection_pool.connection_kwargs['host'],
189 | sentinel.connection_pool.connection_kwargs['port'],
190 | ))
191 |         return '%s<sentinels=[%s]>' % (
192 | type(self).__name__,
193 | ','.join(sentinel_addresses))
194 |
195 | def check_master_state(self, state, service_name):
196 | if not state['is_master'] or state['is_sdown'] or state['is_odown']:
197 | return False
198 | # Check if our sentinel doesn't see other nodes
199 | if state['num-other-sentinels'] < self.min_other_sentinels:
200 | return False
201 | return True
202 |
203 | def discover_master(self, service_name):
204 | """
205 | Asks sentinel servers for the Redis master's address corresponding
206 | to the service labeled ``service_name``.
207 |
208 | Returns a pair (address, port) or raises MasterNotFoundError if no
209 | master is found.
210 | """
211 | for sentinel_no, sentinel in enumerate(self.sentinels):
212 | try:
213 | masters = sentinel.sentinel_masters()
214 | except ConnectionError:
215 | continue
216 | state = masters.get(service_name)
217 | if state and self.check_master_state(state, service_name):
218 | # Put this sentinel at the top of the list
219 | self.sentinels[0], self.sentinels[sentinel_no] = (
220 | sentinel, self.sentinels[0])
221 | return state['ip'], state['port']
222 | raise MasterNotFoundError("No master found for %r" % (service_name,))
223 |
224 | def filter_slaves(self, slaves):
225 | "Remove slaves that are in an ODOWN or SDOWN state"
226 | slaves_alive = []
227 | for slave in slaves:
228 | if slave['is_odown'] or slave['is_sdown']:
229 | continue
230 | slaves_alive.append((slave['ip'], slave['port']))
231 | return slaves_alive
232 |
233 | def discover_slaves(self, service_name):
234 | "Returns a list of alive slaves for service ``service_name``"
235 | for sentinel in self.sentinels:
236 | try:
237 | slaves = sentinel.sentinel_slaves(service_name)
238 | except (ConnectionError, ResponseError):
239 | continue
240 | slaves = self.filter_slaves(slaves)
241 | if slaves:
242 | return slaves
243 | return []
244 |
245 | def master_for(self, service_name, redis_class=StrictRedis,
246 | connection_pool_class=SentinelConnectionPool, **kwargs):
247 | """
248 | Returns a redis client instance for the ``service_name`` master.
249 |
250 |         A SentinelConnectionPool class is used to retrieve the master's
251 | address before establishing a new connection.
252 |
253 | NOTE: If the master's address has changed, any cached connections to
254 | the old master are closed.
255 |
256 | By default clients will be a redis.StrictRedis instance. Specify a
257 | different class to the ``redis_class`` argument if you desire
258 | something different.
259 |
260 | The ``connection_pool_class`` specifies the connection pool to use.
261 | The SentinelConnectionPool will be used by default.
262 |
263 | All other keyword arguments are merged with any connection_kwargs
264 | passed to this class and passed to the connection pool as keyword
265 | arguments to be used to initialize Redis connections.
266 | """
267 | kwargs['is_master'] = True
268 | connection_kwargs = dict(self.connection_kwargs)
269 | connection_kwargs.update(kwargs)
270 | return redis_class(connection_pool=connection_pool_class(
271 | service_name, self, **connection_kwargs))
272 |
273 | def slave_for(self, service_name, redis_class=StrictRedis,
274 | connection_pool_class=SentinelConnectionPool, **kwargs):
275 | """
276 | Returns redis client instance for the ``service_name`` slave(s).
277 |
278 |         A SentinelConnectionPool class is used to retrieve the slave's
279 | address before establishing a new connection.
280 |
281 | By default clients will be a redis.StrictRedis instance. Specify a
282 | different class to the ``redis_class`` argument if you desire
283 | something different.
284 |
285 | The ``connection_pool_class`` specifies the connection pool to use.
286 | The SentinelConnectionPool will be used by default.
287 |
288 | All other keyword arguments are merged with any connection_kwargs
289 | passed to this class and passed to the connection pool as keyword
290 | arguments to be used to initialize Redis connections.
291 | """
292 | kwargs['is_master'] = False
293 | connection_kwargs = dict(self.connection_kwargs)
294 | connection_kwargs.update(kwargs)
295 | return redis_class(connection_pool=connection_pool_class(
296 | service_name, self, **connection_kwargs))
297 |
--------------------------------------------------------------------------------
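
Beyond the doctest in the ``Sentinel`` docstring, the discovery methods above can be called directly; a sketch with illustrative sentinel addresses and service name:

    from redis.sentinel import Sentinel

    # require each sentinel to report at least one peer before its
    # answers are trusted (min_other_sentinels)
    sentinel = Sentinel([('localhost', 26379), ('localhost', 26380)],
                        min_other_sentinels=1, socket_timeout=0.5)

    print(sentinel.discover_master('mymaster'))  # (ip, port) of the master
    print(sentinel.discover_slaves('mymaster'))  # alive (ip, port) pairs

    # master_for/slave_for return clients backed by SentinelConnectionPool,
    # which re-queries the sentinels and reconnects after a failover
    master = sentinel.master_for('mymaster', socket_timeout=0.5)
    replica = sentinel.slave_for('mymaster', socket_timeout=0.5)
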
/gimel/vendor/redis/utils.py:
--------------------------------------------------------------------------------
1 | from contextlib import contextmanager
2 |
3 |
4 | try:
5 | import hiredis
6 | HIREDIS_AVAILABLE = True
7 | except ImportError:
8 | HIREDIS_AVAILABLE = False
9 |
10 |
11 | def from_url(url, db=None, **kwargs):
12 | """
13 | Returns an active Redis client generated from the given database URL.
14 |
15 |     Will attempt to extract the database id from the url's path component,
16 |     if none is provided.
17 | """
18 | from redis.client import Redis
19 | return Redis.from_url(url, db, **kwargs)
20 |
21 |
22 | @contextmanager
23 | def pipeline(redis_obj):
24 | p = redis_obj.pipeline()
25 | yield p
26 | p.execute()
27 |
28 |
29 | class dummy(object):
30 | """
31 | Instances of this class can be used as an attribute container.
32 | """
33 | pass
34 |
--------------------------------------------------------------------------------
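
The ``pipeline`` context manager above defers ``execute()`` to the end of the ``with`` block; a minimal sketch of how it might be used (client setup and key names are illustrative):

    import redis
    from redis.utils import pipeline

    client = redis.StrictRedis()

    # both commands are buffered on `p` and sent in a single round trip
    # when the block exits (the context manager calls p.execute())
    with pipeline(client) as p:
        p.incr('hits')
        p.expire('hits', 60)
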
/requirements.in:
--------------------------------------------------------------------------------
1 | awscli
2 | jmespath
3 | boto3
4 | click
5 | redis
6 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | #
2 | # This file is autogenerated by pip-compile
3 | # To update, run:
4 | #
5 | # pip-compile --output-file requirements.txt requirements.in
6 | #
7 |
8 | awscli==1.10.21
9 | boto3==1.3.0
10 | botocore==1.4.12 # via awscli, boto3, s3transfer
11 | click==6.6
12 | colorama==0.3.3 # via awscli
13 | docutils==0.12 # via awscli, botocore
14 | futures==3.0.5 # via boto3, s3transfer
15 | jmespath==0.9.0
16 | pyasn1==0.1.9 # via rsa
17 | python-dateutil==2.5.3 # via botocore
18 | redis==2.10.5
19 | rsa==3.3 # via awscli
20 | s3transfer==0.0.1 # via awscli
21 | six==1.10.0 # via python-dateutil
22 |
--------------------------------------------------------------------------------
/requirements_dev.txt:
--------------------------------------------------------------------------------
1 | pip-tools
2 | tox
3 | nose
4 | mock
5 | placebo
6 | check-manifest
7 | readme_renderer
8 |
9 | # md to rst for pypi
10 | pypandoc
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | # E128 continuation line under-indented for visual indent
2 | # E402 module level import not at top of file
3 | [flake8]
4 | ignore = E128,E402
5 | max-line-length = 100
6 | exclude = gimel/vendor/*
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from setuptools import setup, find_packages
4 | import os
5 |
6 | with open('README.md') as f:
7 | long_description = f.read()
8 |
9 | requires = [
10 | 'awscli>=1.10.21',
11 | 'jmespath>=0.9.0',
12 | 'boto3>=1.3.0',
13 | 'click>=6.6',
14 | 'redis>=2.10.5'
15 | ]
16 |
17 | setup(
18 | name='gimel',
19 | version=open(os.path.join('gimel', 'VERSION')).read().strip(),
20 | description='Run your own A/B testing backend on AWS Lambda',
21 | long_description=long_description,
22 | long_description_content_type='text/markdown',
23 | author='Yoav Aner',
24 | author_email='yoav@gingerlime.com',
25 | url='https://github.com/Alephbet/gimel',
26 | packages=find_packages(exclude=['tests*']),
27 | include_package_data=True,
28 | zip_safe=False,
29 | entry_points="""
30 | [console_scripts]
31 | gimel=gimel.cli:cli
32 | """,
33 | install_requires=requires,
34 | classifiers=(
35 | 'Development Status :: 5 - Production/Stable',
36 | 'Intended Audience :: Developers',
37 | 'Intended Audience :: System Administrators',
38 | 'Natural Language :: English',
39 | 'License :: OSI Approved :: MIT License',
40 | 'Programming Language :: Python',
41 | 'Programming Language :: Python :: 2.7',
42 | 'Programming Language :: Python :: 3',
43 | 'Programming Language :: Python :: 3.3',
44 | 'Programming Language :: Python :: 3.4',
45 | 'Programming Language :: Python :: 3.5',
46 | 'Programming Language :: Python :: 3.6',
47 | 'Programming Language :: Python :: 3.7'
48 | ),
49 | )
50 |
--------------------------------------------------------------------------------
/tox.ini:
--------------------------------------------------------------------------------
1 | [tox]
2 | skip_missing_interpreters = True
3 | skipsdist=True
4 | minversion = 1.8
5 | envlist =
6 | py2-pep8,
7 | py3-pep8,
8 | packaging,
9 | readme
10 |
11 | [testenv:packaging]
12 | deps =
13 | check-manifest
14 | commands =
15 | check-manifest
16 |
17 | [testenv:readme]
18 | deps =
19 | pypandoc
20 | readme_renderer
21 | commands =
22 | python setup.py check -m -r -s
23 |
24 | [testenv:py2-pep8]
25 | basepython = python2
26 | deps = flake8
27 | commands = flake8 {toxinidir}/gimel
28 |
29 | [testenv:py3-pep8]
30 | basepython = python3
31 | deps = flake8
32 | commands = flake8 {toxinidir}/gimel
--------------------------------------------------------------------------------