├── .gitignore
├── .travis.yml
├── LICENSE
├── MANIFEST.in
├── README.md
├── deploy.asciinema
├── lamed
│   ├── VERSION
│   ├── __init__.py
│   ├── aws_api.py
│   ├── cli.py
│   ├── config.json.template
│   ├── config.py
│   ├── deploy.py
│   ├── lamed.py
│   ├── logger.py
│   └── vendor
│       ├── redis-3.4.1.dist-info
│       │   ├── INSTALLER
│       │   ├── LICENSE
│       │   ├── METADATA
│       │   ├── RECORD
│       │   ├── WHEEL
│       │   └── top_level.txt
│       └── redis
│           ├── __init__.py
│           ├── _compat.py
│           ├── client.py
│           ├── connection.py
│           ├── exceptions.py
│           ├── lock.py
│           ├── sentinel.py
│           └── utils.py
├── requirements.txt
├── requirements_dev.txt
├── setup.cfg
├── setup.py
└── tox.ini
/.gitignore:
--------------------------------------------------------------------------------
1 | config.json
2 | lamed.zip
3 | *.pyc
4 | dist/**
5 | build/**
6 | lamed.egg-info/**
7 |
8 | .idea/
9 | *.iml
10 | .tox/
11 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | sudo: false
2 | language: python
3 |
4 | matrix:
5 | include:
6 | - python: 3.6
7 | env: TOXENV=packaging
8 | - python: 3.6
9 | env: TOXENV=py3-pep8
10 | - python: 2.7
11 | env: TOXENV=py2-pep8
12 |
13 | cache:
14 | directories:
15 | - $HOME/.cache/pip
16 |
17 | install:
18 | - pip install tox
19 |
20 | script:
21 | - tox
22 |
23 | notifications:
24 | email: false
25 |
26 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (C) 2016 Yoav Aner
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
4 | documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
5 | rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
6 | persons to whom the Software is furnished to do so, subject to the following conditions:
7 |
8 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
9 | Software.
10 |
11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
12 | WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
13 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
14 | OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
15 |
--------------------------------------------------------------------------------
/MANIFEST.in:
--------------------------------------------------------------------------------
1 | include README.md
2 | include LICENSE
3 | include lamed/VERSION
4 | include lamed/config.json.template
5 | recursive-include lamed/vendor *
6 |
7 | exclude tox.ini
8 | exclude requirements.in
9 | exclude requirements.txt
10 | exclude requirements_dev.txt
11 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Lámed (לָמֶד)
2 |
3 | [![PyPI](https://img.shields.io/pypi/v/lamed.svg)](https://pypi.python.org/pypi/lamed)
4 |
5 | ## What is it?
6 |
7 | An A/B testing backend using AWS Lambda/API Gateway + Redis.
8 |
9 | Lamed is a fork of [Gimel](https://github.com/alephbet/gimel) using different trade-offs. It offers higher accuracy but
10 | requires more memory / storage.
11 |
12 | Key Features:
13 |
14 | * Highly scalable due to the nature of AWS Lambda
15 | * High performance but with higher memory footprint than Gimel
16 | * Cost Effective
17 | * Easy deployment using `lamed deploy`. No need to twiddle with AWS.
18 |
19 | ## What does Lamed mean?
20 |
21 | Lamed (לָמֶד) is the 12th letter of the Hebrew alphabet. It sounds similar to the Greek Lambda
22 | (λ) and is pronounced La'med /ˈlamɛd/, rather than lame-d. And no, it's not lame :)
23 |
24 | Lamed (לָמֶד) is also the root of the Hebrew verb "to learn", and was born out of a learning experience using Gimel.
25 |
26 | ## Installation / Quick Start
27 |
28 | You will need a live instance of Redis accessible online from AWS. Then run:
29 |
30 | ```bash
31 | $ pip install lamed
32 | $ lamed configure
33 | $ lamed deploy
34 | ```
35 |
36 | [![asciicast](https://asciinema.org/a/316783.svg)](https://asciinema.org/a/316783?speed=2)
37 |
38 | It will automatically configure your AWS Lambda functions and API Gateway, and produce a JS snippet ready to use
39 | for tracking your experiments.
40 |
41 | ## Architecture
42 |
43 | 
44 |
45 | ### Client
46 |
47 | I suggest looking at [Alephbet](https://github.com/Alephbet/alephbet) for more details, but at a high level, the client runs in the end-user's browser. It randomly picks a variant and executes a JavaScript function to 'activate' it. When a goal is reached -- the user performs a certain action, which also includes the pseudo-goal of *participating* in the experiment -- an event is sent to the backend. An event typically looks something like "experiment ABC, variant red, user participated" or "experiment XYZ, variant blue, checkout goal reached".
48 |
49 | Alephbet might send duplicate events, but each event includes a `uuid` so the backend can de-duplicate it. More on that below.
50 |
51 | ### Data Store - No longer using Redis HyperLogLog
52 |
53 | The data store keeps a tally of each event that comes into the system. Being able to count unique events (de-duplication) is important for keeping an accurate count. [Gimel](https://github.com/alephbet/gimel) uses [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) to count events. The Redis HLL implementation is great, but as the number of events goes up (roughly 40,000+), your A/B tests lose accuracy and become much less reliable.
54 |
55 | Lamed uses a different approach, with different trade-offs:
56 |
57 | * Each event uuid creates a temporary flag in Redis, with an expiry of X seconds (the `uuid_expiry_seconds` config setting; 24 hours by default).
58 | * When a new event comes in, it is checked against the flag, and if the flag is already set, the event is ignored as a duplicate.
59 | * Non-duplicate events are then counted using the atomic Redis `INCR` command.
60 | * `uuid` flags are protected with the [optimistic locking transactions](https://redis.io/topics/transactions) that Redis provides, as sketched below.
61 |
62 | This mechanism is similar to how [idempotency keys are used at Stripe](https://stripe.com/docs/api/idempotent_requests), for example.
63 |
64 | > An idempotency key is a unique value generated by the client which the server uses to recognize subsequent retries of the same request. How you create unique keys is up to you, but we suggest using V4 UUIDs, or another random string with enough entropy to avoid collisions.
65 | >
66 | > Keys are eligible to be removed from the system after they're at least 24 hours old, and a new request is generated if a key is reused after the original has been pruned. The idempotency layer compares incoming parameters to those of the original request and errors unless they're the same to prevent accidental misuse.
67 |
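In redis-py terms, the whole mechanism boils down to a small optimistic-locking loop. Below is a simplified sketch of what `lamed/lamed.py` does (key and variable names abbreviated; the real code also hashes the key and uuid together first):

```python
import redis

r = redis.Redis()

def count_once(key, uuid, expiry=86400):
    """Increment `key` at most once per `uuid`, retrying on concurrent writes."""
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(uuid)               # optimistic lock on the uuid flag
                if pipe.get(uuid) is not None:
                    return                     # duplicate event -- ignore it
                pipe.multi()
                pipe.setex(uuid, expiry, "1")  # set the uuid flag, with expiry
                pipe.incr(key)                 # count the unique event
                pipe.execute()
                return
            except redis.WatchError:
                continue                       # flag changed under us -- retry
```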
68 |
69 | ### Backend - AWS Lambda / API Gateway
70 |
71 | The backend takes care of a few simple types of requests:
72 | 
73 | * track an event - receives an HTTP request with some JSON data -- experiment name, variant, goal and uuid -- and pushes it to Redis (see the example payload below).
74 | * extract the counters for a specific experiment, or all experiments, as JSON that can be presented on the dashboard.
75 |
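After API Gateway applies the querystring mapping template (`REQUEST_TEMPLATE` in `lamed/deploy.py`), the `lamed-track` lambda receives the parameters as a flat dict. A hypothetical example of the event it sees (all values illustrative):

```python
event = {
    "experiment": "checkout-button",  # experiment name
    "variant": "red",
    "event": "participate",           # a goal name, or the 'participate' pseudo-goal
    "uuid": "1b9d6bcd-...",           # unique event id used for de-duplication
    "namespace": "alephbet",
}
```
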
76 | ### Dashboard
77 |
78 | Access your dashboard with `lamed dashboard`.
79 |
80 |
81 | ## How does tracking work?
82 |
83 | Check out [Alephbet](https://github.com/Alephbet/alephbet).
84 |
85 | ## Command Reference
86 |
87 | * `lamed --help` - prints a help screen.
88 | * `lamed configure` - opens your editor so you can edit the config.json file. Use it to update your Redis settings.
89 | * `lamed preflight` - runs preflight checks to make sure you have access to AWS, Redis etc.
90 | * `lamed deploy` - deploys the code and configs to AWS automatically.
91 |
92 | ## Advanced
93 |
94 | ### custom API endpoints
95 |
96 | If you want to use different API endpoints, you can add your own `extra_wiring` into the `config.json` file (e.g. using
97 | `lamed configure`).
98 |
99 | For example, this will add a `.../prod/my_tracking_endpoint` URL pointing to the `lamed-track` lambda:
100 |
101 | ```json
102 | {
103 | "redis": {
104 | ...
105 | },
106 | "extra_wiring": [
107 | {
108 | "lambda": {
109 | "FunctionName": "lamed-track",
110 | "Handler": "lamed.track",
111 | "MemorySize": 128,
112 | "Timeout": 3
113 | },
114 | "api_gateway": {
115 | "pathPart": "my_tracking_endpoint",
116 | "method": {
117 | "httpMethod": "GET",
118 | "apiKeyRequired": false,
119 | "requestParameters": {
120 | "method.request.querystring.namespace": false,
121 | "method.request.querystring.experiment": false,
122 | "method.request.querystring.variant": false,
123 | "method.request.querystring.event": false,
124 | "method.request.querystring.uuid": false
125 | }
126 | }
127 | }
128 | }
129 | ]
130 | }
131 | ```
132 |
133 | See [WIRING](https://github.com/Alephbet/gimel/blob/52830737835119692f3a3c157fe090adabf58150/gimel/deploy.py#L81) for more details.
134 |
135 | ## Privacy, Ad-blockers (GDPR etc)
136 |
137 | Lamed provides a backend for A/B test experiment data. This data is aggregated and does *not* contain any personal information at all. It merely stores the total number of actions for one variant versus another.
138 |
139 | As such, Lamed should meet privacy requirements of GDPR and similar privacy regulations.
140 |
141 | Nevertheless, important disclaimers:
142 |
143 | * I am not a lawyer, and it's entirely up to you if and how you decide to use Lamed. Please check with your local regulations and get legal advice to decide on your own.
144 | * Some ad-blockers are extra vigilant and will block requests with the `track` keyword in the URL. Therefore, track requests to Lamed might be blocked by default. As the library author, I make no attempt to conceal the fact that a form of tracking is necessary to run A/B tests, even though I believe it respects privacy.
145 | * Users who decide to use Lamed can, if they wish, assign a different endpoint that might get past ad-blockers, but that's entirely up to them. See [custom API endpoints](#custom-api-endpoints) for how this can be achieved.
146 | * As with almost any tool, it can be used for good or evil. Some A/B tests can be seen as manipulative, unfair or otherwise illegitimate. Again, use your own moral compass to decide whether or not it's OK to use A/B testing, or specific A/B tests.
147 |
148 | ## License
149 |
150 | Lamed is distributed under the MIT license. All 3rd party libraries and components are distributed under their
151 | respective license terms.
152 |
153 | ```
154 | Copyright (C) 2020 Yoav Aner
155 |
156 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
157 | documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
158 | rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit
159 | persons to whom the Software is furnished to do so, subject to the following conditions:
160 |
161 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the
162 | Software.
163 |
164 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
165 | WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
166 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
167 | OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
168 | ```
169 |
170 |
--------------------------------------------------------------------------------
/deploy.asciinema:
--------------------------------------------------------------------------------
1 | {"version":2,"width":128,"height":46,"timestamp":1578058121,"theme":{},"env":{"SHELL":"/bin/bash","TERM":"screen-256color"}}
2 | [0.5,"o","$ "]
3 | [1.516,"o","l"]
4 | [1.724,"o","a"]
5 | [1.963,"o","m"]
6 | [2.174,"o","e"]
7 | [2.406,"o","d"]
8 | [2.533,"o"," "]
9 | [2.772,"o","d"]
10 | [2.972,"o","e"]
11 | [3.083,"o","p"]
12 | [3.348,"o","l"]
13 | [3.619,"o","o"]
14 | [3.781,"o","y"]
15 | [4.635,"o","\r\n"]
16 | [4.1,"o","\u001b[32m[.] Using config /root/.lamed/config.json\u001b[0m\r\n"]
17 | [4.532,"o","\u001b[32m[.] running preflight checks\u001b[0m\r\n"]
18 | [4.534,"o","\u001b[32m[.] checking aws credentials and region\u001b[0m\r\n"]
19 | [4.67,"o","\u001b[32m[.] testing redis\u001b[0m\r\n"]
20 | [5.616,"o","\u001b[32m[.] deploying\u001b[0m\r\n"]
21 | [5.809,"o","\u001b[32m[.] creating/updating lamed.zip\u001b[0m\r\n"]
22 | [6.945,"o","\u001b[32m[.] finding role\u001b[0m\r\n"]
23 | [8.443,"o","\u001b[32m[.] updating role policy\u001b[0m\r\n"]
24 | [9.199,"o","\u001b[32m[.] finding lambda function\u001b[0m\r\n"]
25 | [9.844,"o","\u001b[32m[.] creating new lambda function lamed-track\u001b[0m\r\n"]
26 | [11.325,"o","\u001b[32m[.] creating function alias live for lamed-track:1\u001b[0m\r\n"]
27 | [11.913,"o","\u001b[32m[.] cleaning up old versions of lamed-track. Keeping 5\u001b[0m\r\n"]
28 | [12.486,"o","\u001b[32m[.] creating or updating api /track\u001b[0m\r\n"]
29 | [20.931,"o","\u001b[32m[.] finding lambda function\u001b[0m\r\n"]
30 | [21.646,"o","\u001b[32m[.] creating new lambda function lamed-all-experiments\u001b[0m\r\n"]
31 | [23.247,"o","\u001b[32m[.] creating function alias live for lamed-all-experiments:1\u001b[0m\r\n"]
32 | [23.833,"o","\u001b[32m[.] cleaning up old versions of lamed-all-experiments. Keeping 5\u001b[0m\r\n"]
33 | [24.398,"o","\u001b[32m[.] creating or updating api /experiments\u001b[0m\r\n"]
34 | [32.85,"o","\u001b[32m[.] finding lambda function\u001b[0m\r\n"]
35 | [33.441,"o","\u001b[32m[.] creating new lambda function lamed-delete-experiment\u001b[0m\r\n"]
36 | [34.955,"o","\u001b[32m[.] creating function alias live for lamed-delete-experiment:1\u001b[0m\r\n"]
37 | [35.542,"o","\u001b[32m[.] cleaning up old versions of lamed-delete-experiment. Keeping 5\u001b[0m\r\n"]
38 | [36.137,"o","\u001b[32m[.] creating or updating api /delete\u001b[0m\r\n"]
39 | [44.616,"o","\u001b[32m[.] deploying API\u001b[0m\r\n"]
40 | [47.681,"o","\u001b[32m[.] AlephBet JS code snippet:\u001b[0m"]
41 | [47.681,"o","\r\n"]
42 | [47.687,"o","\u001b[32m[.] "]
43 | [47.69,"o","\r\n\r\n"]
44 | [47.694,"o"," \u003c!-- Copy and paste this snippet to start tracking with lamed --\u003e"]
45 | [47.697,"o","\r\n\r\n"]
46 | [47.702,"o"," \u003cscript src=\"https://unpkg.com/alephbet/dist/alephbet.min.js\"\u003e\u003c/script\u003e\r\n \u003cscript\u003e\r\n\r\n // * javascript code snippet to track experiments with AlephBet *\r\n // * For more information: https://github.com/Alephbet/alephbet *\r\n\r\n track_url = 'https://qhzlop9nrl.execute-api.us-east-1.amazonaws.com/prod/track';\r\n namespace = 'alephbet';\r\n\r\n experiment = new AlephBet.Experiment({\r\n name: \"my a/b test\",\r\n tracking_adapter: new AlephBet.GimelAdapter(track_url, namespace),\r\n // trigger: function() { ... }, // optional trigger\r\n variants: {\r\n red: {\r\n activate: function() {\r\n // add your code here\r\n }\r\n },\r\n blue: {\r\n activate: function() {\r\n // add your code here\r\n }\r\n }\r\n }\r\n });\r\n \u003c/script\u003e\r\n \u001b[0m\r\n"]
47 | [47.819,"o","$ "]
48 | [47.9,"o","\r\n"]
49 |
--------------------------------------------------------------------------------
/lamed/VERSION:
--------------------------------------------------------------------------------
1 | 0.4.3
2 |
--------------------------------------------------------------------------------
/lamed/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 |
3 | __version__ = open(
4 | os.path.join(os.path.dirname(__file__), 'VERSION')).read().strip()
5 |
--------------------------------------------------------------------------------
/lamed/aws_api.py:
--------------------------------------------------------------------------------
1 | import boto3
2 | import botocore.session
3 | import jmespath
4 | from functools import partial
5 |
6 |
7 | def boto_session():
8 | return boto3.session.Session()
9 |
10 |
11 | def aws(service, action, **kwargs):
12 | client = boto_session().client(service)
13 | query = kwargs.pop('query', None)
14 | if client.can_paginate(action):
15 | paginator = client.get_paginator(action)
16 | result = paginator.paginate(**kwargs).build_full_result()
17 | else:
18 | result = getattr(client, action)(**kwargs)
19 | if query:
20 | result = jmespath.compile(query).search(result)
21 | return result
22 |
23 |
24 | def region():
25 | return boto_session().region_name
26 |
27 |
28 | def check_aws_credentials():
29 | session = botocore.session.get_session()
30 | session.get_credentials().access_key
31 | session.get_credentials().secret_key
32 |
33 |
34 | iam = partial(aws, 'iam')
35 | aws_lambda = partial(aws, 'lambda')
36 | apigateway = partial(aws, 'apigateway')
37 |
--------------------------------------------------------------------------------
/lamed/cli.py:
--------------------------------------------------------------------------------
1 | import click
2 | import logging
3 | try:
4 | from lamed import logger
5 | from lamed.deploy import run, js_code_snippet, preflight_checks, dashboard_url
6 | from lamed.config import config, config_filename, generate_config
7 | except ImportError:
8 | import logger
9 | from deploy import run, js_code_snippet, preflight_checks, dashboard_url
10 | from config import config, config_filename, generate_config
11 |
12 | logger = logger.setup()
13 |
14 |
15 | @click.group()
16 | @click.option('--debug', is_flag=True)
17 | def cli(debug):
18 | if debug:
19 | logger.setLevel(logging.DEBUG)
20 |
21 |
22 | @cli.command()
23 | def preflight():
24 | logger.info('running preflight checks')
25 | preflight_checks()
26 |
27 |
28 | @cli.command()
29 | @click.option('--preflight/--no-preflight', default=True)
30 | def deploy(preflight):
31 | if preflight:
32 | logger.info('running preflight checks')
33 | if not preflight_checks():
34 | return
35 | logger.info('deploying')
36 | run()
37 | js_code_snippet()
38 |
39 |
40 | @cli.command()
41 | def configure():
42 | if not config:
43 | logger.info('generating new config {}'.format(config_filename))
44 | generate_config(config_filename)
45 | click.edit(filename=config_filename)
46 |
47 |
48 | @cli.command()
49 | @click.option('--namespace', default='alephbet')
50 | def dashboard(namespace):
51 | click.launch(dashboard_url(namespace))
52 |
53 |
54 | if __name__ == '__main__':
55 | cli()
56 |
--------------------------------------------------------------------------------
/lamed/config.json.template:
--------------------------------------------------------------------------------
1 | {
2 | "redis": {
3 | "host": "ENTER YOUR REDIS HOST",
4 | "port": 6379,
5 | "password": "..."
6 | },
7 | "uuid_expiry_seconds": 86400
8 | }
9 |
--------------------------------------------------------------------------------
/lamed/config.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | from os.path import expanduser, realpath
3 | import os
4 | import json
5 | try:
6 | from lamed import logger
7 | except ImportError:
8 | import logger
9 |
10 | logger = logger.setup()
11 |
12 |
13 | # NOTE: Copy config.json.template to config.json and edit with your settings
14 |
15 | def _load_config(config_filename):
16 | try:
17 | with open(config_filename) as config_file:
18 | logger.info('Using config {}'.format(config_filename))
19 | return config_file.name, json.load(config_file)
20 | except IOError:
21 | logger.debug('trying to load {} (not found)'.format(config_filename))
22 | return config_filename, {}
23 |
24 |
25 | def load_config():
26 | config_filenames = (realpath('config.json'),
27 | expanduser('~/.lamed/config.json'))
28 | for config_filename in config_filenames:
29 | name, content = _load_config(config_filename)
30 | if content:
31 | break
32 | return name, content
33 |
34 |
35 | def _create_file(config_filename):
36 | dirname = os.path.split(config_filename)[0]
37 | if not os.path.isdir(dirname):
38 | os.makedirs(dirname)
39 | with os.fdopen(os.open(config_filename,
40 | os.O_WRONLY | os.O_CREAT, 0o600), 'w'):
41 | pass
42 |
43 |
44 | def _config_template():
45 | from pkg_resources import resource_filename as resource
46 | return open(resource('lamed', 'config.json.template'), 'r').read()
47 |
48 |
49 | def generate_config(config_filename=None):
50 | if config_filename is None:
51 | config_filename = expanduser('~/.lamed/config.json')
52 | _create_file(config_filename)
53 |
54 | with open(config_filename, 'w') as config_file:
55 | config_file.write(_config_template())
56 | return config_filename
57 |
58 |
59 | config_filename, config = load_config()
60 |
--------------------------------------------------------------------------------
/lamed/deploy.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | from botocore.client import ClientError
3 | import os
4 | import redis
5 | from zipfile import ZipFile, ZipInfo, ZIP_DEFLATED
6 | try:
7 | from lamed import logger
8 | from lamed.lamed import _redis
9 | from lamed.config import config
10 | from lamed.aws_api import iam, apigateway, aws_lambda, region, check_aws_credentials
11 | except ImportError:
12 | import logger
13 | from lamed import _redis
14 | from config import config
15 | from aws_api import iam, apigateway, aws_lambda, region, check_aws_credentials
16 |
17 |
18 | logger = logger.setup()
19 | LIVE = 'live'
20 | REVISIONS = 5
21 | TRACK_ENDPOINT = 'track'
22 | EXPERIMENTS_ENDPOINT = 'experiments'
23 | POLICY = """{
24 | "Version": "2012-10-17",
25 | "Statement": [
26 | {
27 | "Effect": "Allow",
28 | "Action": [
29 | "lambda:InvokeFunction"
30 | ],
31 | "Resource": [
32 | "*"
33 | ]
34 | },
35 | {
36 | "Effect": "Allow",
37 | "Action": [
38 | "kinesis:GetRecords",
39 | "kinesis:GetShardIterator",
40 | "kinesis:DescribeStream",
41 | "kinesis:ListStreams",
42 | "kinesis:PutRecord",
43 | "logs:CreateLogGroup",
44 | "logs:CreateLogStream",
45 | "logs:PutLogEvents"
46 | ],
47 | "Resource": "*"
48 | }
49 | ]
50 | }"""
51 | ASSUMED_ROLE_POLICY = """{
52 | "Version": "2012-10-17",
53 | "Statement": [
54 | {
55 | "Action": "sts:AssumeRole",
56 | "Effect": "Allow",
57 | "Principal": {
58 | "Service": "lambda.amazonaws.com"
59 | }
60 | },
61 | {
62 | "Action": "sts:AssumeRole",
63 | "Effect": "Allow",
64 | "Principal": {
65 | "Service": "apigateway.amazonaws.com"
66 | }
67 | }
68 | ]
69 | }"""
70 | # source: https://aws.amazon.com/blogs/compute/using-api-gateway-mapping-templates-to-handle-changes-in-your-back-end-apis/ # noqa
71 | REQUEST_TEMPLATE = {'application/json':
72 | """{
73 | #set($queryMap = $input.params().querystring)
74 | #foreach($key in $queryMap.keySet())
75 | "$key" : "$queryMap.get($key)"
76 | #if($foreach.hasNext),#end
77 | #end
78 | }
79 | """}
80 |
81 | WIRING = [
82 | {
83 | "lambda": {
84 | "FunctionName": "lamed-track",
85 | "Handler": "lamed.track",
86 | "MemorySize": 128,
87 | "Timeout": 10
88 | },
89 | "api_gateway": {
90 | "pathPart": TRACK_ENDPOINT,
91 | "method": {
92 | "httpMethod": "GET",
93 | "apiKeyRequired": False,
94 | "requestParameters": {
95 | "method.request.querystring.namespace": False,
96 | "method.request.querystring.experiment": False,
97 | "method.request.querystring.variant": False,
98 | "method.request.querystring.event": False,
99 | "method.request.querystring.uuid": False
100 | }
101 | }
102 | }
103 | },
104 | {
105 | "lambda": {
106 | "FunctionName": "lamed-all-experiments",
107 | "Handler": "lamed.all",
108 | "MemorySize": 128,
109 | "Timeout": 60
110 | },
111 | "api_gateway": {
112 | "pathPart": EXPERIMENTS_ENDPOINT,
113 | "method": {
114 | "httpMethod": "GET",
115 | "apiKeyRequired": True,
116 | "requestParameters": {
117 | "method.request.querystring.namespace": False,
118 | "method.request.querystring.scope": False
119 | }
120 | }
121 | }
122 | },
123 | {
124 | "lambda": {
125 | "FunctionName": "lamed-delete-experiment",
126 | "Handler": "lamed.delete",
127 | "MemorySize": 128,
128 | "Timeout": 30
129 | },
130 | "api_gateway": {
131 | "pathPart": "delete",
132 | "method": {
133 | "httpMethod": "DELETE",
134 | "apiKeyRequired": True,
135 | "requestParameters": {
136 | "method.request.querystring.namespace": False,
137 | "method.request.querystring.experiment": False,
138 | }
139 | }
140 | }
141 | }
142 | ]
143 |
144 |
145 | def prepare_zip():
146 | from pkg_resources import resource_filename as resource
147 | from json import dumps
148 | logger.info('creating/updating lamed.zip')
149 | with ZipFile('lamed.zip', 'w', ZIP_DEFLATED) as zipf:
150 | info = ZipInfo('config.json')
151 | info.external_attr = 0o664 << 16
152 | zipf.writestr(info, dumps(config))
153 | zipf.write(resource('lamed', 'config.py'), 'config.py')
154 | zipf.write(resource('lamed', 'lamed.py'), 'lamed.py')
155 | zipf.write(resource('lamed', 'logger.py'), 'logger.py')
156 | for root, dirs, files in os.walk(resource('lamed', 'vendor')):
157 | for file in files:
158 | real_file = os.path.join(root, file)
159 | relative_file = os.path.relpath(real_file,
160 | resource('lamed', ''))
161 | zipf.write(real_file, relative_file)
162 |
163 |
164 | def role():
165 | new_role = False
166 | try:
167 | logger.info('finding role')
168 | iam('get_role', RoleName='lamed')
169 | except ClientError:
170 | logger.info('role not found. creating')
171 | iam('create_role', RoleName='lamed',
172 | AssumeRolePolicyDocument=ASSUMED_ROLE_POLICY)
173 | new_role = True
174 |
175 | role_arn = iam('get_role', RoleName='lamed', query='Role.Arn')
176 | logger.debug('role_arn={}'.format(role_arn))
177 |
178 | logger.info('updating role policy')
179 |
180 | iam('put_role_policy', RoleName='lamed', PolicyName='lamed',
181 | PolicyDocument=POLICY)
182 |
183 | if new_role:
184 | from time import sleep
185 | logger.info('waiting for role policy propagation')
186 | sleep(5)
187 |
188 | return role_arn
189 |
190 |
191 | def _cleanup_old_versions(name):
192 | logger.info('cleaning up old versions of {0}. Keeping {1}'.format(
193 | name, REVISIONS))
194 | versions = _versions(name)
195 | for version in versions[0:(len(versions) - REVISIONS)]:
196 | logger.debug('deleting {} version {}'.format(name, version))
197 | aws_lambda('delete_function',
198 | FunctionName=name,
199 | Qualifier=version)
200 |
201 |
202 | def _function_alias(name, version, alias=LIVE):
203 | try:
204 | logger.info('creating function alias {0} for {1}:{2}'.format(
205 | alias, name, version))
206 | arn = aws_lambda('create_alias',
207 | FunctionName=name,
208 | FunctionVersion=version,
209 | Name=alias,
210 | query='AliasArn')
211 | except ClientError:
212 | logger.info('alias {0} exists. updating {0} -> {1}:{2}'.format(
213 | alias, name, version))
214 | arn = aws_lambda('update_alias',
215 | FunctionName=name,
216 | FunctionVersion=version,
217 | Name=alias,
218 | query='AliasArn')
219 | return arn
220 |
221 |
222 | def _versions(name):
223 | versions = aws_lambda('list_versions_by_function',
224 | FunctionName=name,
225 | query='Versions[].Version')
226 | return versions[1:]
227 |
228 |
229 | def _get_version(name, alias=LIVE):
230 | return aws_lambda('get_alias',
231 | FunctionName=name,
232 | Name=alias,
233 | query='FunctionVersion')
234 |
235 |
236 | def rollback_lambda(name, alias=LIVE):
237 | all_versions = _versions(name)
238 | live_version = _get_version(name, alias)
239 | try:
240 | live_index = all_versions.index(live_version)
241 | if live_index < 1:
242 | raise RuntimeError('Cannot find previous version')
243 | prev_version = all_versions[live_index - 1]
244 | logger.info('rolling back to version {}'.format(prev_version))
245 | _function_alias(name, prev_version)
246 | except RuntimeError as error:
247 | logger.error('Unable to rollback. {}'.format(repr(error)))
248 |
249 |
250 | def rollback(alias=LIVE):
251 | for lambda_function in ('lamed-track', 'lamed-all-experiments'):
252 | rollback_lambda(lambda_function, alias)
253 |
254 |
255 | def get_create_api():
256 | api_id = apigateway('get_rest_apis',
257 | query='items[?name==`lamed`] | [0].id')
258 | if not api_id:
259 | api_id = apigateway('create_rest_api', name='lamed',
260 | description='lamed API', query='id')
261 | logger.debug("api_id={}".format(api_id))
262 | return api_id
263 |
264 |
265 | def get_api_key():
266 | return apigateway('get_api_keys',
267 | query='items[?name==`lamed`] | [0].id')
268 |
269 |
270 | def api_key(api_id):
271 | key = get_api_key()
272 | if key:
273 | apigateway('update_api_key', apiKey=key,
274 | patchOperations=[{'op': 'add', 'path': '/stages',
275 | 'value': '{}/prod'.format(api_id)}])
276 | else:
277 | key = apigateway('create_api_key', name='lamed', enabled=True,
278 | stageKeys=[{'restApiId': api_id, 'stageName': 'prod'}])
279 | return key
280 |
281 |
282 | def resource(api_id, path):
283 | resource_id = apigateway('get_resources', restApiId=api_id,
284 | query='items[?path==`/{}`].id | [0]'.format(path))
285 | if resource_id:
286 | return resource_id
287 | root_resource_id = apigateway('get_resources', restApiId=api_id,
288 | query='items[?path==`/`].id | [0]')
289 | resource_id = apigateway('create_resource', restApiId=api_id,
290 | parentId=root_resource_id,
291 | pathPart=path, query='id')
292 | return resource_id
293 |
294 |
295 | def function_uri(function_arn, region):
296 | uri = ('arn:aws:apigateway:{0}:lambda:path/2015-03-31/functions'
297 | '/{1}/invocations').format(region, function_arn)
298 | logger.debug("uri={0}".format(uri))
299 | return uri
300 |
301 |
302 | def _clear_method(api_id, resource_id, http_method):
303 | try:
304 | method = apigateway('get_method', restApiId=api_id,
305 | resourceId=resource_id,
306 | httpMethod=http_method)
307 | except ClientError:
308 | method = None
309 | if method:
310 | apigateway('delete_method', restApiId=api_id, resourceId=resource_id,
311 | httpMethod=http_method)
312 |
313 |
314 | def cors(api_id, resource_id):
315 | _clear_method(api_id, resource_id, 'OPTIONS')
316 | apigateway('put_method', restApiId=api_id, resourceId=resource_id,
317 | httpMethod='OPTIONS', authorizationType='NONE',
318 | apiKeyRequired=False)
319 | apigateway('put_integration', restApiId=api_id, resourceId=resource_id,
320 | httpMethod='OPTIONS', type='MOCK', integrationHttpMethod='POST',
321 | requestTemplates={'application/json': '{"statusCode": 200}'})
322 | apigateway('put_method_response', restApiId=api_id, resourceId=resource_id,
323 | httpMethod='OPTIONS', statusCode='200',
324 | responseParameters={
325 | "method.response.header.Access-Control-Allow-Origin": False,
326 | "method.response.header.Access-Control-Allow-Methods": False,
327 | "method.response.header.Access-Control-Allow-Headers": False},
328 | responseModels={'application/json': 'Empty'})
329 | apigateway('put_integration_response', restApiId=api_id,
330 | resourceId=resource_id, httpMethod='OPTIONS', statusCode='200',
331 | responseParameters={
332 | "method.response.header.Access-Control-Allow-Origin": "'*'",
333 | "method.response.header.Access-Control-Allow-Methods": "'GET,OPTIONS'",
334 | "method.response.header.Access-Control-Allow-Headers": "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"}, # noqa
335 | responseTemplates={'application/json': ''})
336 |
337 |
338 | def deploy_api(api_id):
339 | logger.info('deploying API')
340 | return apigateway('create_deployment', restApiId=api_id,
341 | description='lamed deployment',
342 | stageName='prod',
343 | stageDescription='lamed production',
344 | cacheClusterEnabled=False,
345 | query='id')
346 |
347 |
348 | def api_method(api_id, resource_id, role_arn, function_uri, wiring):
349 | http_method = wiring['method']['httpMethod']
350 | _clear_method(api_id, resource_id, http_method)
351 | apigateway('put_method', restApiId=api_id, resourceId=resource_id,
352 | authorizationType='NONE',
353 | **wiring['method'])
354 | apigateway('put_integration', restApiId=api_id, resourceId=resource_id,
355 | httpMethod=http_method, type='AWS', integrationHttpMethod='POST',
356 | credentials=role_arn,
357 | uri=function_uri,
358 | requestTemplates=REQUEST_TEMPLATE)
359 | apigateway('put_method_response', restApiId=api_id, resourceId=resource_id,
360 | httpMethod=http_method, statusCode='200',
361 | responseParameters={
362 | "method.response.header.Access-Control-Allow-Origin": False,
363 | "method.response.header.Pragma": False,
364 | "method.response.header.Cache-Control": False},
365 | responseModels={'application/json': 'Empty'})
366 | apigateway('put_integration_response', restApiId=api_id,
367 | resourceId=resource_id, httpMethod=http_method, statusCode='200',
368 | responseParameters={
369 | "method.response.header.Access-Control-Allow-Origin": "'*'",
370 | "method.response.header.Pragma": "'no-cache'",
371 | "method.response.header.Cache-Control": "'no-cache, no-store, must-revalidate'"},
372 | responseTemplates={'application/json': ''})
373 |
374 |
375 | def create_update_lambda(role_arn, wiring):
376 | name, handler, memory, timeout = (wiring[k] for k in ('FunctionName',
377 | 'Handler',
378 | 'MemorySize',
379 | 'Timeout'))
380 | try:
381 | logger.info('finding lambda function')
382 | function_arn = aws_lambda('get_function',
383 | FunctionName=name,
384 | query='Configuration.FunctionArn')
385 | except ClientError:
386 | function_arn = None
387 | if not function_arn:
388 | logger.info('creating new lambda function {}'.format(name))
389 | with open('lamed.zip', 'rb') as zf:
390 | function_arn, version = aws_lambda('create_function',
391 | FunctionName=name,
392 | Runtime='python3.8',
393 | Role=role_arn,
394 | Handler=handler,
395 | MemorySize=memory,
396 | Timeout=timeout,
397 | Publish=True,
398 | Code={'ZipFile': zf.read()},
399 | query='[FunctionArn, Version]')
400 | else:
401 | logger.info('updating lambda function {}'.format(name))
402 | aws_lambda('update_function_configuration',
403 | FunctionName=name,
404 | Runtime='python3.8',
405 | Role=role_arn,
406 | Handler=handler,
407 | MemorySize=memory,
408 | Timeout=timeout)
409 | with open('lamed.zip', 'rb') as zf:
410 | function_arn, version = aws_lambda('update_function_code',
411 | FunctionName=name,
412 | Publish=True,
413 | ZipFile=zf.read(),
414 | query='[FunctionArn, Version]')
415 | function_arn = _function_alias(name, version)
416 | _cleanup_old_versions(name)
417 | logger.debug('function_arn={} ; version={}'.format(function_arn, version))
418 | return function_arn
419 |
420 |
421 | def create_update_api(role_arn, function_arn, wiring):
422 | logger.info('creating or updating api /{}'.format(wiring['pathPart']))
423 | api_id = get_create_api()
424 | resource_id = resource(api_id, wiring['pathPart'])
425 | uri = function_uri(function_arn, region())
426 | api_method(api_id, resource_id, role_arn, uri, wiring)
427 | cors(api_id, resource_id)
428 |
429 |
430 | def js_code_snippet():
431 | api_id = get_create_api()
432 | api_region = region()
433 | endpoint = TRACK_ENDPOINT
434 | logger.info('AlephBet JS code snippet:')
435 | logger.info(
436 |     """
437 | 
438 |     <!-- Copy and paste this snippet to start tracking with lamed -->
439 | 
440 |     <script src="https://unpkg.com/alephbet/dist/alephbet.min.js"></script>
441 |     <script>
442 | 
443 |     // * javascript code snippet to track experiments with AlephBet *
444 |     // * For more information: https://github.com/Alephbet/alephbet *
445 | 
446 |     track_url = 'https://%(api_id)s.execute-api.%(api_region)s.amazonaws.com/prod/%(endpoint)s';
447 |     namespace = 'alephbet';
448 | 
449 |     experiment = new AlephBet.Experiment({
450 |         name: "my a/b test",
451 |         tracking_adapter: new AlephBet.GimelAdapter(track_url, namespace),
452 |         // trigger: function() { ... }, // optional trigger
453 |         variants: {
454 |             red: {
455 |                 activate: function() {
456 |                     // add your code here
457 |                 }
458 |             },
459 |             blue: {
460 |                 activate: function() {
461 |                     // add your code here
462 |                 }
463 |             }
464 |         }
465 |     });
466 |     </script>
467 |     """ % locals()
468 | )
469 |
470 |
471 | def dashboard_url(namespace='alephbet'):
472 | api_id = get_create_api()
473 | api_region = region()
474 | endpoint = EXPERIMENTS_ENDPOINT
475 | experiments_url = 'https://{}.execute-api.{}.amazonaws.com/prod/{}'.format(
476 | api_id, api_region, endpoint)
477 | return ('https://codepen.io/anon/pen/LOGGZj/?experiment_url={}'
478 | '&api_key={}&namespace={}').format(experiments_url,
479 | get_api_key(),
480 | namespace)
481 |
482 |
483 | def preflight_checks():
484 | logger.info('checking aws credentials and region')
485 | if region() is None:
486 | logger.error('Region is not set up. please run aws configure')
487 | return False
488 | try:
489 | check_aws_credentials()
490 | except AttributeError:
491 | logger.error('AWS credentials not found. please run aws configure')
492 | return False
493 | logger.info('testing redis')
494 | try:
495 | _redis().ping()
496 | except redis.exceptions.ConnectionError:
497 | logger.error('Redis ping failed. Please run lamed configure')
498 | return False
499 | return True
500 |
501 |
502 | def run():
503 | prepare_zip()
504 | api_id = get_create_api()
505 | role_arn = role()
506 | for component in WIRING + config.get("extra_wiring", []):
507 | function_arn = create_update_lambda(role_arn, component['lambda'])
508 | create_update_api(role_arn, function_arn, component['api_gateway'])
509 | deploy_api(api_id)
510 | api_key(api_id)
511 |
512 |
513 | if __name__ == '__main__':
514 |     # preflight_checks() logs problems and returns False rather than
515 |     # raising, so check its result explicitly
516 |     if not preflight_checks():
517 |         raise SystemExit(1)
518 |     run()
519 |     js_code_snippet()
520 |
--------------------------------------------------------------------------------
/lamed/lamed.py:
--------------------------------------------------------------------------------
1 | from __future__ import print_function
2 | import sys
3 | import hashlib
4 | sys.path.insert(0, './vendor')
5 | import redis
6 | try:
7 | from lamed.config import config
8 | from lamed import logger
9 | except ImportError:
10 | from config import config
11 | import logger
12 |
13 | logger = logger.setup()
14 |
15 | UUID_EXPIRY = config.get('uuid_expiry_seconds', 24 * 60 * 60)
16 |
17 | def _redis():
18 | redis_config = config['redis']
19 | redis_config["charset"] = "utf-8"
20 | redis_config["decode_responses"] = True
21 | return redis.Redis(**redis_config)
22 |
23 |
24 | def _counter_key(namespace, experiment, goal, variant):
25 | return '{0}:counters:{1}:{2}:{3}'.format(
26 | namespace,
27 | experiment,
28 | goal,
29 | variant)
30 |
31 |
32 | def _results_dict(namespace, experiment):
33 | """ returns a dict in the following format:
34 | {namespace.counters.experiment.goal.variant: count}
35 | """
36 | r = _redis()
37 | keys = r.smembers("{0}:{1}:counter_keys".format(namespace, experiment))
38 | pipe = r.pipeline()
39 | for key in keys:
40 | pipe.get(key)
41 | values = pipe.execute()
42 | return dict(zip(keys, [int(value or 0) for value in values]))
43 |
44 |
45 | def _experiment_goals(namespace, experiment):
46 | raw_results = _results_dict(namespace, experiment)
47 | variants = set([x.split(':')[-1] for x in raw_results.keys()])
48 | goals = set([x.split(':')[-2] for x in raw_results.keys()])
49 | goals.discard('participate')
50 | goal_results = []
51 | for goal in goals:
52 | goal_data = {'goal': goal, 'results': []}
53 | for variant in variants:
54 | trials = raw_results.get(
55 | _counter_key(namespace, experiment, 'participate', variant), 0)
56 | successes = raw_results.get(
57 | _counter_key(namespace, experiment, goal, variant), 0)
58 | goal_data['results'].append(
59 | {'label': variant,
60 | 'successes': successes,
61 | 'trials': trials})
62 | goal_results.append(goal_data)
63 | return goal_results
64 |
65 |
66 | def _add_unique(pipe, key, uuid):
67 | logger.info("adding {} to {}".format(uuid, key))
68 | uuid = hashlib.sha1("{} {}".format(key, uuid).encode('utf-8')).hexdigest()
69 | logger.info("sha1 uuid = {}".format(uuid))
70 | while True:
71 | try:
72 | pipe.watch(uuid)
73 | uuid_exists = pipe.get(uuid)
74 | if uuid_exists is not None:
75 | logger.debug("{} exists".format(uuid))
76 | break
77 | pipe.multi()
78 | # setting a flag for the uuid with expiry time of UUID_EXPIRY
79 | pipe.setex(uuid, UUID_EXPIRY, "1")
80 | # incrementing counter for key
81 | pipe.incr(key)
82 | pipe.execute()
83 | logger.info("added {} to {}".format(uuid, key))
84 | break
85 | except redis.WatchError:
86 | logger.debug("watch error with {} {}".format(uuid, key))
87 | continue
88 |
89 |
90 | def experiment(event, context):
91 | """ retrieves a single experiment results from redis
92 | params:
93 | - experiment - name of the experiment
94 | - namespace (optional)
95 | """
96 | experiment = event['experiment']
97 | namespace = event.get('namespace', 'alephbet')
98 | return _experiment_goals(namespace, experiment)
99 |
100 |
101 | def all(event, context):
102 | """ retrieves all experiment results from redis
103 | params:
104 | - namespace (optional)
105 | - scope (optional, comma-separated list of experiments)
106 | """
107 | r = _redis()
108 | namespace = event.get('namespace', 'alephbet')
109 | scope = event.get('scope')
110 | if scope:
111 | experiments = scope.split(',')
112 | else:
113 | experiments = r.smembers("{0}:experiments".format(namespace))
114 | results = []
115 | results.append({'meta': {'scope': scope}})
116 | for ex in experiments:
117 | goals = experiment({'experiment': ex, 'namespace': namespace}, context)
118 | results.append({'experiment': ex, 'goals': goals})
119 | return results
120 |
121 |
122 | def track(event, context):
123 | """ tracks an alephbet event (participate, goal etc)
124 | params:
125 | - experiment - name of the experiment
126 | - uuid - a unique id for the event
127 | - variant - the name of the variant
128 | - event - either the goal name or 'participate'
129 | - namespace (optional)
130 | """
131 | experiment = event['experiment']
132 | namespace = event.get('namespace', 'alephbet')
133 | uuid = event['uuid']
134 | variant = event['variant']
135 | tracking_event = event['event']
136 |
137 | r = _redis()
138 | key = '{0}:counters:{1}:{2}:{3}'.format(
139 | namespace, experiment, tracking_event, variant)
140 | with r.pipeline() as pipe:
141 | pipe.sadd('{0}:experiments'.format(namespace), experiment)
142 | pipe.sadd('{0}:counter_keys'.format(namespace), key)
143 | pipe.sadd('{0}:{1}:counter_keys'.format(namespace, experiment), key)
144 | pipe.execute()
145 | _add_unique(pipe, key, uuid)
146 |
147 |
148 | def delete(event, context):
149 | """ delete an experiment
150 | params:
151 | - experiment - name of the experiment
152 | - namespace
153 | """
154 |
155 | r = _redis()
156 | namespace = event.get('namespace', 'alephbet')
157 | experiment = event['experiment']
158 | experiments_set_key = '{0}:experiments'.format(namespace)
159 | experiment_counters_set_key = '{0}:{1}:counter_keys'.format(namespace, experiment)
160 | all_counters_set_key = '{0}:counter_keys'.format(namespace)
161 |
162 | if r.sismember(experiments_set_key, experiment):
163 | counter_keys = r.smembers(
164 | experiment_counters_set_key
165 | )
166 | pipe = r.pipeline()
167 | for key in counter_keys:
168 | pipe.srem(all_counters_set_key, key)
169 | pipe.delete(key)
170 | pipe.delete(
171 | experiment_counters_set_key
172 | )
173 | pipe.srem(
174 | experiments_set_key,
175 | experiment
176 | )
177 | pipe.execute()
178 |
--------------------------------------------------------------------------------
/lamed/logger.py:
--------------------------------------------------------------------------------
1 | import logging
2 |
3 |
4 | class ColorFormatter(logging.Formatter):
5 | colors = {
6 | 'error': dict(fg='red'),
7 | 'exception': dict(fg='red'),
8 | 'critical': dict(fg='red'),
9 | 'debug': dict(fg='blue'),
10 | 'warning': dict(fg='yellow'),
11 | 'info': dict(fg='green')
12 | }
13 |
14 | def format(self, record):
15 | import click
16 | s = super(ColorFormatter, self).format(record)
17 | if not record.exc_info:
18 | level = record.levelname.lower()
19 | if level in self.colors:
20 | s = click.style(s, **self.colors[level])
21 | return s
22 |
23 |
24 | class CustomFormatter(logging.Formatter):
25 | def format(self, record):
26 | s = super(CustomFormatter, self).format(record)
27 | if record.levelno == logging.ERROR:
28 | s = s.replace('[.]', '[x]')
29 | return s
30 |
31 |
32 | def setup(name=__name__, level=logging.INFO):
33 | logger = logging.getLogger(name)
34 | if logger.handlers:
35 | return logger
36 | logger.setLevel(level)
37 | try:
38 | # check if click exists to swap the logger
39 | import click # noqa
40 | formatter = ColorFormatter('[.] %(message)s')
41 | except ImportError:
42 | formatter = CustomFormatter('[.] %(message)s')
43 |     handler = logging.StreamHandler()
44 |     handler.setFormatter(formatter)
45 |     logger.addHandler(handler)
46 |     # level is already set above from the `level` argument
47 | return logger
48 |
--------------------------------------------------------------------------------
/lamed/vendor/redis-3.4.1.dist-info/INSTALLER:
--------------------------------------------------------------------------------
1 | pip
2 |
--------------------------------------------------------------------------------
/lamed/vendor/redis-3.4.1.dist-info/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2012 Andy McCurdy
2 |
3 | Permission is hereby granted, free of charge, to any person
4 | obtaining a copy of this software and associated documentation
5 | files (the "Software"), to deal in the Software without
6 | restriction, including without limitation the rights to use,
7 | copy, modify, merge, publish, distribute, sublicense, and/or sell
8 | copies of the Software, and to permit persons to whom the
9 | Software is furnished to do so, subject to the following
10 | conditions:
11 |
12 | The above copyright notice and this permission notice shall be
13 | included in all copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
16 | EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
17 | OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
18 | NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
19 | HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
20 | WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
21 | FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
22 | OTHER DEALINGS IN THE SOFTWARE.
23 |
--------------------------------------------------------------------------------
/lamed/vendor/redis-3.4.1.dist-info/METADATA:
--------------------------------------------------------------------------------
1 | Metadata-Version: 2.1
2 | Name: redis
3 | Version: 3.4.1
4 | Summary: Python client for Redis key-value store
5 | Home-page: https://github.com/andymccurdy/redis-py
6 | Author: Andy McCurdy
7 | Author-email: sedrik@gmail.com
8 | Maintainer: Andy McCurdy
9 | Maintainer-email: sedrik@gmail.com
10 | License: MIT
11 | Keywords: Redis,key-value store
12 | Platform: UNKNOWN
13 | Classifier: Development Status :: 5 - Production/Stable
14 | Classifier: Environment :: Console
15 | Classifier: Intended Audience :: Developers
16 | Classifier: License :: OSI Approved :: MIT License
17 | Classifier: Operating System :: OS Independent
18 | Classifier: Programming Language :: Python
19 | Classifier: Programming Language :: Python :: 2
20 | Classifier: Programming Language :: Python :: 2.7
21 | Classifier: Programming Language :: Python :: 3
22 | Classifier: Programming Language :: Python :: 3.5
23 | Classifier: Programming Language :: Python :: 3.6
24 | Classifier: Programming Language :: Python :: 3.7
25 | Classifier: Programming Language :: Python :: 3.8
26 | Requires-Python: >=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*
27 | Description-Content-Type: text/x-rst
28 | Provides-Extra: hiredis
29 | Requires-Dist: hiredis (>=0.1.3); extra == 'hiredis'
30 |
31 | redis-py
32 | ========
33 |
34 | The Python interface to the Redis key-value store.
35 |
36 | .. image:: https://secure.travis-ci.org/andymccurdy/redis-py.svg?branch=master
37 | :target: https://travis-ci.org/andymccurdy/redis-py
38 | .. image:: https://readthedocs.org/projects/redis-py/badge/?version=latest&style=flat
39 | :target: https://redis-py.readthedocs.io/en/latest/
40 | .. image:: https://badge.fury.io/py/redis.svg
41 | :target: https://pypi.org/project/redis/
42 | .. image:: https://codecov.io/gh/andymccurdy/redis-py/branch/master/graph/badge.svg
43 | :target: https://codecov.io/gh/andymccurdy/redis-py
44 |
45 | Installation
46 | ------------
47 |
48 | redis-py requires a running Redis server. See `Redis's quickstart
49 | <https://redis.io/topics/quickstart>`_ for installation instructions.
50 |
51 | redis-py can be installed using `pip` similar to other Python packages. Do not use `sudo`
52 | with `pip`. It is usually good to work in a
53 | `virtualenv <https://virtualenv.pypa.io>`_ or
54 | `venv <https://docs.python.org/3/library/venv.html>`_ to avoid conflicts with other package
55 | managers and Python projects. For a quick introduction see
56 | `Python Virtual Environments in Five Minutes `_.
57 |
58 | To install redis-py, simply:
59 |
60 | .. code-block:: bash
61 |
62 | $ pip install redis
63 |
64 | or from source:
65 |
66 | .. code-block:: bash
67 |
68 | $ python setup.py install
69 |
70 |
71 | Getting Started
72 | ---------------
73 |
74 | .. code-block:: python
75 |
76 | >>> import redis
77 | >>> r = redis.Redis(host='localhost', port=6379, db=0)
78 | >>> r.set('foo', 'bar')
79 | True
80 | >>> r.get('foo')
81 | 'bar'
82 |
83 | By default, all responses are returned as `bytes` in Python 3 and `str` in
84 | Python 2. The user is responsible for decoding to Python 3 strings or Python 2
85 | unicode objects.
86 |
87 | If **all** string responses from a client should be decoded, the user can
88 | specify `decode_responses=True` to `Redis.__init__`. In this case, any
89 | Redis command that returns a string type will be decoded with the `encoding`
90 | specified.
91 |
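For example, the same round-trip with `decode_responses=True` returns
strings instead of bytes (a minimal illustration):

.. code-block:: python

    >>> r = redis.Redis(host='localhost', port=6379, db=0, decode_responses=True)
    >>> r.set('foo', 'bar')
    True
    >>> r.get('foo')
    'bar'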
92 |
93 | Upgrading from redis-py 2.X to 3.0
94 | ----------------------------------
95 |
96 | redis-py 3.0 introduces many new features but required a number of backwards
97 | incompatible changes to be made in the process. This section attempts to
98 | provide an upgrade path for users migrating from 2.X to 3.0.
99 |
100 |
101 | Python Version Support
102 | ^^^^^^^^^^^^^^^^^^^^^^
103 |
104 | redis-py 3.0 supports Python 2.7 and Python 3.5+.
105 |
106 |
107 | Client Classes: Redis and StrictRedis
108 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
109 |
110 | redis-py 3.0 drops support for the legacy "Redis" client class. "StrictRedis"
111 | has been renamed to "Redis" and an alias named "StrictRedis" is provided so
112 | that users previously using "StrictRedis" can continue to run unchanged.
113 |
114 | The 2.X "Redis" class provided alternative implementations of a few commands.
115 | This confused users (rightfully so) and caused a number of support issues. To
116 | make things easier going forward, it was decided to drop support for these
117 | alternate implementations and instead focus on a single client class.
118 |
119 | 2.X users that are already using StrictRedis don't have to change the class
120 | name. StrictRedis will continue to work for the foreseeable future.
121 |
122 | 2.X users that are using the Redis class will have to make changes if they
123 | use any of the following commands:
124 |
125 | * SETEX: The argument order has changed. The new order is (name, time, value).
126 | * LREM: The argument order has changed. The new order is (name, num, value).
127 | * TTL and PTTL: The return value is now always an int and matches the
128 | official Redis command (>0 indicates the timeout, -1 indicates that the key
129 | exists but that it has no expire time set, -2 indicates that the key does
130 | not exist)
131 |
132 |
133 | SSL Connections
134 | ^^^^^^^^^^^^^^^
135 |
136 | redis-py 3.0 changes the default value of the `ssl_cert_reqs` option from
137 | `None` to `'required'`. See
138 | `Issue 1016 <https://github.com/andymccurdy/redis-py/issues/1016>`_. This
139 | change enforces hostname validation when accepting a cert from a remote SSL
140 | terminator. If the terminator doesn't properly set the hostname on the cert
141 | this will cause redis-py 3.0 to raise a ConnectionError.
142 |
143 | This check can be disabled by setting `ssl_cert_reqs` to `None`. Note that
144 | doing so removes the security check. Do so at your own risk.
145 |
146 | It has been reported that SSL certs received from AWS ElastiCache do not have
147 | proper hostnames and turning off hostname verification is currently required.
148 |
149 |
150 | MSET, MSETNX and ZADD
151 | ^^^^^^^^^^^^^^^^^^^^^
152 |
153 | These commands all accept a mapping of key/value pairs. In redis-py 2.X
154 | this mapping could be specified as ``*args`` or as ``**kwargs``. Both of these
155 | styles caused issues when Redis introduced optional flags to ZADD. Relying on
156 | ``*args`` caused issues with the optional argument order, especially in Python
157 | 2.7. Relying on ``**kwargs`` caused potential collision issues of user keys with
158 | the argument names in the method signature.
159 |
160 | To resolve this, redis-py 3.0 has changed these three commands to all accept
161 | a single positional argument named mapping that is expected to be a dict. For
162 | MSET and MSETNX, the dict is a mapping of key-names -> values. For ZADD, the
163 | dict is a mapping of element-names -> score.
164 |
165 | MSET, MSETNX and ZADD now look like:
166 |
167 | .. code-block:: python
168 |
169 | def mset(self, mapping):
170 | def msetnx(self, mapping):
171 | def zadd(self, name, mapping, nx=False, xx=False, ch=False, incr=False):
172 |
173 | All 2.X users that use these commands must modify their code to supply
174 | keys and values as a dict to these commands.
175 |
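For example (key and member names illustrative):

.. code-block:: python

    >>> r.mset({'one': '1', 'two': '2'})
    True
    >>> r.zadd('scores', {'alice': 10, 'bob': 7})
    2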
176 |
177 | ZINCRBY
178 | ^^^^^^^
179 |
180 | redis-py 2.X accidentally modified the argument order of ZINCRBY, swapping the
181 | order of value and amount. ZINCRBY now looks like:
182 |
183 | .. code-block:: python
184 |
185 | def zincrby(self, name, amount, value):
186 |
187 | All 2.X users that rely on ZINCRBY must swap the order of amount and value
188 | for the command to continue to work as intended.
189 |
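For example, on a fresh key (names illustrative):

.. code-block:: python

    >>> r.zincrby('leaderboard', 1.0, 'alice')  # amount first, then value
    1.0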
190 |
191 | Encoding of User Input
192 | ^^^^^^^^^^^^^^^^^^^^^^
193 |
194 | redis-py 3.0 only accepts user data as bytes, strings or numbers (ints, longs
195 | and floats). Attempting to specify a key or a value as any other type will
196 | raise a DataError exception.
197 |
198 | redis-py 2.X attempted to coerce any type of input into a string. While
199 | occasionally convenient, this caused all sorts of hidden errors when users
200 | passed boolean values (which were coerced to 'True' or 'False'), a None
201 | value (which was coerced to 'None') or other values, such as user defined
202 | types.
203 |
204 | All 2.X users should make sure that the keys and values they pass into
205 | redis-py are either bytes, strings or numbers.
206 |
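A minimal illustration (exception message abridged):

.. code-block:: python

    >>> r.set('key', None)
    Traceback (most recent call last):
        ...
    redis.exceptions.DataError: Invalid input of type: 'NoneType'. ...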
207 |
208 | Locks
209 | ^^^^^
210 |
211 | redis-py 3.0 drops support for the pipeline-based Lock and now only supports
212 | the Lua-based lock. In doing so, LuaLock has been renamed to Lock. This also
213 | means that redis-py Lock objects require Redis server 2.6 or greater.
214 |
215 | 2.X users that were explicitly referring to "LuaLock" will have to now refer
216 | to "Lock" instead.
217 |
218 |
219 | Locks as Context Managers
220 | ^^^^^^^^^^^^^^^^^^^^^^^^^
221 |
222 | redis-py 3.0 now raises a LockError when using a lock as a context manager and
223 | the lock cannot be acquired within the specified timeout. This is more of a
224 | bug fix than a backwards incompatible change. However, given an error is now
225 | raised where none was before, this might alarm some users.
226 |
227 | 2.X users should make sure they're wrapping their lock code in a try/catch
228 | like this:
229 |
230 | .. code-block:: python
231 |
232 | try:
233 | with r.lock('my-lock-key', blocking_timeout=5) as lock:
234 | # code you want executed only after the lock has been acquired
235 | except LockError:
236 | # the lock wasn't acquired
237 |
238 |
239 | API Reference
240 | -------------
241 |
242 | The `official Redis command documentation <https://redis.io/commands>`_ does a
243 | great job of explaining each command in detail. redis-py attempts to adhere
244 | to the official command syntax. There are a few exceptions:
245 |
246 | * **SELECT**: Not implemented. See the explanation in the Thread Safety section
247 | below.
248 | * **DEL**: 'del' is a reserved keyword in the Python syntax. Therefore redis-py
249 | uses 'delete' instead.
250 | * **MULTI/EXEC**: These are implemented as part of the Pipeline class. The
251 | pipeline is wrapped with the MULTI and EXEC statements by default when it
252 | is executed, which can be disabled by specifying transaction=False.
253 | See more about Pipelines below.
254 | * **SUBSCRIBE/LISTEN**: Similar to pipelines, PubSub is implemented as a separate
255 | class as it places the underlying connection in a state where it can't
256 | execute non-pubsub commands. Calling the pubsub method from the Redis client
257 | will return a PubSub instance where you can subscribe to channels and listen
258 | for messages. You can only call PUBLISH from the Redis client (see
259 |   `this comment on issue #151
260 |   <https://github.com/andymccurdy/redis-py/issues/151>`_
261 | for details).
262 | * **SCAN/SSCAN/HSCAN/ZSCAN**: The \*SCAN commands are implemented as they
263 | exist in the Redis documentation. In addition, each command has an equivalent
264 | iterator method. These are purely for convenience so the user doesn't have
265 | to keep track of the cursor while iterating. Use the
266 | scan_iter/sscan_iter/hscan_iter/zscan_iter methods for this behavior.
267 |
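As a quick illustration of the DEL naming exception above (key name
hypothetical):

.. code-block:: python

    >>> r.set('foo', 'bar')
    True
    >>> r.delete('foo')  # DEL is exposed as delete()
    1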
268 |
269 | More Detail
270 | -----------
271 |
272 | Connection Pools
273 | ^^^^^^^^^^^^^^^^
274 |
275 | Behind the scenes, redis-py uses a connection pool to manage connections to
276 | a Redis server. By default, each Redis instance you create will in turn create
277 | its own connection pool. You can override this behavior and use an existing
278 | connection pool by passing an already created connection pool instance to the
279 | connection_pool argument of the Redis class. You may choose to do this in order
280 | to implement client-side sharding or have fine-grained control of how
281 | connections are managed.
282 |
283 | .. code-block:: python
284 |
285 | >>> pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
286 | >>> r = redis.Redis(connection_pool=pool)
287 |
288 | Connections
289 | ^^^^^^^^^^^
290 |
291 | ConnectionPools manage a set of Connection instances. redis-py ships with two
292 | types of Connections. The default, Connection, is a normal TCP socket based
293 | connection. The UnixDomainSocketConnection allows for clients running on the
294 | same device as the server to connect via a unix domain socket. To use a
295 | UnixDomainSocketConnection connection, simply pass the unix_socket_path
296 | argument, which is the path to the unix domain socket file. Additionally, make
297 | sure the unixsocket parameter is defined in your redis.conf file. It's
298 | commented out by default.
299 |
300 | .. code-block:: python
301 |
302 | >>> r = redis.Redis(unix_socket_path='/tmp/redis.sock')
303 |
304 | You can create your own Connection subclasses as well. This may be useful if
305 | you want to control the socket behavior within an async framework. To
306 | instantiate a client class using your own connection, you need to create
307 | a connection pool, passing your class to the connection_class argument.
308 | Other keyword parameters you pass to the pool will be passed to the class
309 | specified during initialization.
310 |
311 | .. code-block:: python
312 |
313 | >>> pool = redis.ConnectionPool(connection_class=YourConnectionClass,
314 | your_arg='...', ...)
315 |
316 | Connections maintain an open socket to the Redis server. Sometimes these
317 | sockets are interrupted or disconnected for a variety of reasons. For example,
318 | network appliances, load balancers and other services that sit between clients
319 | and servers are often configured to kill connections that remain idle for a
320 | given threshold.
321 |
322 | When a connection becomes disconnected, the next command issued on that
323 | connection will fail and redis-py will raise a ConnectionError to the caller.
324 | This allows each application that uses redis-py to handle errors in a way
325 | that's fitting for that specific application. However, constant error
326 | handling can be verbose and cumbersome, especially when socket disconnections
327 | happen frequently in many production environments.
328 |
329 | To combat this, redis-py can issue regular health checks to assess the
330 | liveness of a connection just before issuing a command. Users can pass
331 | ``health_check_interval=N`` to the Redis or ConnectionPool classes or
332 | as a query argument within a Redis URL. The value of ``health_check_interval``
333 | must be an integer. A value of ``0``, the default, disables health checks.
334 | Any positive integer will enable health checks. Health checks are performed
335 | just before a command is executed if the underlying connection has been idle
336 | for more than ``health_check_interval`` seconds. For example,
337 | ``health_check_interval=30`` will ensure that a health check is run on any
338 | connection that has been idle for 30 or more seconds just before a command
339 | is executed on that connection.
340 |
341 | If your application is running in an environment that disconnects idle
342 | connections after 30 seconds you should set the ``health_check_interval``
343 | option to a value less than 30.
344 |
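A minimal sketch for the environment described above, where idle connections
are dropped after 30 seconds:

.. code-block:: python

    >>> r = redis.Redis(host='localhost', port=6379, health_check_interval=25)
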
345 | This option also works on any PubSub connection that is created from a
346 | client with ``health_check_interval`` enabled. PubSub users need to ensure
347 | that ``get_message()`` or ``listen()`` are called more frequently than
348 | ``health_check_interval`` seconds. It is assumed that most workloads already
349 | do this.
350 |
351 | If your PubSub use case doesn't call ``get_message()`` or ``listen()``
352 | frequently, you should call ``pubsub.check_health()`` explicitly on a
353 | regular basis.
354 |
355 | Parsers
356 | ^^^^^^^
357 |
358 | Parser classes provide a way to control how responses from the Redis server
359 | are parsed. redis-py ships with two parser classes, the PythonParser and the
360 | HiredisParser. By default, redis-py will attempt to use the HiredisParser if
361 | you have the hiredis module installed and will fall back to the PythonParser
362 | otherwise.
363 |
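If you want to pin a specific parser rather than rely on auto-detection, one
way to do it (a sketch; it relies on connection pool keyword arguments being
forwarded to each Connection) is:

.. code-block:: python

    >>> from redis.connection import PythonParser
    >>> pool = redis.ConnectionPool(parser_class=PythonParser)
    >>> r = redis.Redis(connection_pool=pool)
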
364 | Hiredis is a C library maintained by the core Redis team. Pieter Noordhuis was
365 | kind enough to create Python bindings. Using Hiredis can provide up to a
366 | 10x speed improvement in parsing responses from the Redis server. The
367 | performance increase is most noticeable when retrieving many pieces of data,
368 | such as from LRANGE or SMEMBERS operations.
369 |
370 | Hiredis is available on PyPI, and can be installed via pip just like redis-py.
371 |
372 | .. code-block:: bash
373 |
374 | $ pip install hiredis
375 |
376 | Response Callbacks
377 | ^^^^^^^^^^^^^^^^^^
378 |
379 | The client class uses a set of callbacks to cast Redis responses to the
380 | appropriate Python type. There are a number of these callbacks defined on
381 | the Redis client class in a dictionary called RESPONSE_CALLBACKS.
382 |
383 | Custom callbacks can be added on a per-instance basis using the
384 | set_response_callback method. This method accepts two arguments: a command
385 | name and the callback. Callbacks added in this manner are only valid on the
386 | instance the callback is added to. If you want to define or override a callback
387 | globally, you should make a subclass of the Redis client and add your callback
388 | to its RESPONSE_CALLBACKS class dictionary.
389 |
390 | Response callbacks take at least one parameter: the response from the Redis
391 | server. Keyword arguments may also be accepted in order to further control
392 | how to interpret the response. These keyword arguments are specified during the
393 | command's call to execute_command. The ZRANGE implementation demonstrates the
394 | use of response callback keyword arguments with its "withscores" argument.
395 |
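A minimal per-instance sketch (key name hypothetical; ``decode_responses=True``
is used so the raw reply arrives as a string):

.. code-block:: python

    >>> r = redis.Redis(decode_responses=True)
    >>> # cast GET replies to int on this instance only
    >>> r.set_response_callback('GET', int)
    >>> r.set('counter', 5)
    True
    >>> r.get('counter')
    5
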
396 | Thread Safety
397 | ^^^^^^^^^^^^^
398 |
399 | Redis client instances can safely be shared between threads. Internally,
400 | connection instances are only retrieved from the connection pool during
401 | command execution, and returned to the pool directly after. Command execution
402 | never modifies state on the client instance.
403 |
404 | However, there is one caveat: the Redis SELECT command. The SELECT command
405 | allows you to switch the database currently in use by the connection. That
406 | database remains selected until another is selected or until the connection is
407 | closed. This creates an issue in that connections could be returned to the pool
408 | that are connected to a different database.
409 |
410 | As a result, redis-py does not implement the SELECT command on client
411 | instances. If you use multiple Redis databases within the same application, you
412 | should create a separate client instance (and possibly a separate connection
413 | pool) for each database.
414 |
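A sketch of that pattern (variable names hypothetical):

.. code-block:: python

    >>> cache = redis.Redis(host='localhost', db=0)
    >>> sessions = redis.Redis(host='localhost', db=1)
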
415 | It is not safe to pass PubSub or Pipeline objects between threads.
416 |
417 | Pipelines
418 | ^^^^^^^^^
419 |
420 | Pipelines are a subclass of the base Redis class that provide support for
421 | buffering multiple commands to the server in a single request. They can be used
422 | to dramatically increase the performance of groups of commands by reducing the
423 | number of back-and-forth TCP packets between the client and server.
424 |
425 | Pipelines are quite simple to use:
426 |
427 | .. code-block:: python
428 |
429 | >>> r = redis.Redis(...)
430 | >>> r.set('bing', 'baz')
431 | >>> # Use the pipeline() method to create a pipeline instance
432 | >>> pipe = r.pipeline()
433 | >>> # The following SET commands are buffered
434 | >>> pipe.set('foo', 'bar')
435 | >>> pipe.get('bing')
436 | >>> # the EXECUTE call sends all buffered commands to the server, returning
437 | >>> # a list of responses, one for each command.
438 | >>> pipe.execute()
439 | [True, 'baz']
440 |
441 | For ease of use, all commands being buffered into the pipeline return the
442 | pipeline object itself. Therefore calls can be chained like:
443 |
444 | .. code-block:: python
445 |
446 | >>> pipe.set('foo', 'bar').sadd('faz', 'baz').incr('auto_number').execute()
447 | [True, True, 6]
448 |
449 | In addition, pipelines can also ensure the buffered commands are executed
450 | atomically as a group. This happens by default. If you want to disable the
451 | atomic nature of a pipeline but still want to buffer commands, you can turn
452 | off transactions.
453 |
454 | .. code-block:: python
455 |
456 | >>> pipe = r.pipeline(transaction=False)
457 |
458 | A common issue occurs when requiring atomic transactions but needing to
459 | retrieve values from Redis beforehand for use within the transaction. For instance,
460 | let's assume that the INCR command didn't exist and we need to build an atomic
461 | version of INCR in Python.
462 |
463 | The completely naive implementation could GET the value, increment it in
464 | Python, and SET the new value back. However, this is not atomic because
465 | multiple clients could be doing this at the same time, each getting the same
466 | value from GET.
467 |
468 | Enter the WATCH command. WATCH provides the ability to monitor one or more keys
469 | prior to starting a transaction. If any of those keys change prior to the
470 | execution of that transaction, the entire transaction will be canceled and a
471 | WatchError will be raised. To implement our own client-side INCR command, we
472 | could do something like this:
473 |
474 | .. code-block:: python
475 |
476 | >>> with r.pipeline() as pipe:
477 | ... while True:
478 | ... try:
479 | ... # put a WATCH on the key that holds our sequence value
480 | ... pipe.watch('OUR-SEQUENCE-KEY')
481 | ... # after WATCHing, the pipeline is put into immediate execution
482 | ... # mode until we tell it to start buffering commands again.
483 | ... # this allows us to get the current value of our sequence
484 | ... current_value = pipe.get('OUR-SEQUENCE-KEY')
485 | ... next_value = int(current_value) + 1
486 | ... # now we can put the pipeline back into buffered mode with MULTI
487 | ... pipe.multi()
488 | ... pipe.set('OUR-SEQUENCE-KEY', next_value)
489 | ... # and finally, execute the pipeline (the set command)
490 | ... pipe.execute()
491 | ... # if a WatchError wasn't raised during execution, everything
492 | ... # we just did happened atomically.
493 | ... break
494 | ... except WatchError:
495 | ... # another client must have changed 'OUR-SEQUENCE-KEY' between
496 | ... # the time we started WATCHing it and the pipeline's execution.
497 | ... # our best bet is to just retry.
498 | ... continue
499 |
500 | Note that, because the Pipeline must bind to a single connection for the
501 | duration of a WATCH, care must be taken to ensure that the connection is
502 | returned to the connection pool by calling the reset() method. If the
503 | Pipeline is used as a context manager (as in the example above) reset()
504 | will be called automatically. Of course you can do this the manual way by
505 | explicitly calling reset():
506 |
507 | .. code-block:: python
508 |
509 | >>> pipe = r.pipeline()
510 | >>> while True:
511 | ... try:
512 | ... pipe.watch('OUR-SEQUENCE-KEY')
513 | ... ...
514 | ... pipe.execute()
515 | ... break
516 | ... except WatchError:
517 | ... continue
518 | ... finally:
519 | ... pipe.reset()
520 |
521 | A convenience method named "transaction" exists to handle all the
522 | boilerplate of watching keys and retrying on watch errors. It takes a callable that
523 | should expect a single parameter, a pipeline object, and any number of keys to
524 | be WATCHed. Our client-side INCR command above can be written like this,
525 | which is much easier to read:
526 |
527 | .. code-block:: python
528 |
529 | >>> def client_side_incr(pipe):
530 | ... current_value = pipe.get('OUR-SEQUENCE-KEY')
531 | ... next_value = int(current_value) + 1
532 | ... pipe.multi()
533 | ... pipe.set('OUR-SEQUENCE-KEY', next_value)
534 | >>>
535 | >>> r.transaction(client_side_incr, 'OUR-SEQUENCE-KEY')
536 | [True]
537 |
538 | Publish / Subscribe
539 | ^^^^^^^^^^^^^^^^^^^
540 |
541 | redis-py includes a `PubSub` object that subscribes to channels and listens
542 | for new messages. Creating a `PubSub` object is easy.
543 |
544 | .. code-block:: python
545 |
546 | >>> r = redis.Redis(...)
547 | >>> p = r.pubsub()
548 |
549 | Once a `PubSub` instance is created, channels and patterns can be subscribed
550 | to.
551 |
552 | .. code-block:: python
553 |
554 | >>> p.subscribe('my-first-channel', 'my-second-channel', ...)
555 | >>> p.psubscribe('my-*', ...)
556 |
557 | The `PubSub` instance is now subscribed to those channels/patterns. The
558 | subscription confirmations can be seen by reading messages from the `PubSub`
559 | instance.
560 |
561 | .. code-block:: python
562 |
563 | >>> p.get_message()
564 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-second-channel', 'data': 1L}
565 | >>> p.get_message()
566 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-first-channel', 'data': 2L}
567 | >>> p.get_message()
568 | {'pattern': None, 'type': 'psubscribe', 'channel': 'my-*', 'data': 3L}
569 |
570 | Every message read from a `PubSub` instance will be a dictionary with the
571 | following keys.
572 |
573 | * **type**: One of the following: 'subscribe', 'unsubscribe', 'psubscribe',
574 | 'punsubscribe', 'message', 'pmessage'
575 | * **channel**: The channel [un]subscribed to or the channel a message was
576 | published to
577 | * **pattern**: The pattern that matched a published message's channel. Will be
578 | `None` in all cases except for 'pmessage' types.
579 | * **data**: The message data. With [un]subscribe messages, this value will be
580 | the number of channels and patterns the connection is currently subscribed
581 | to. With [p]message messages, this value will be the actual published
582 | message.
583 |
584 | Let's send a message now.
585 |
586 | .. code-block:: python
587 |
588 |     # the publish method returns the number of matching channel and pattern
589 | # subscriptions. 'my-first-channel' matches both the 'my-first-channel'
590 | # subscription and the 'my-*' pattern subscription, so this message will
591 | # be delivered to 2 channels/patterns
592 | >>> r.publish('my-first-channel', 'some data')
593 | 2
594 | >>> p.get_message()
595 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': None, 'type': 'message'}
596 | >>> p.get_message()
597 | {'channel': 'my-first-channel', 'data': 'some data', 'pattern': 'my-*', 'type': 'pmessage'}
598 |
599 | Unsubscribing works just like subscribing. If no arguments are passed to
600 | [p]unsubscribe, all channels or patterns will be unsubscribed from.
601 |
602 | .. code-block:: python
603 |
604 | >>> p.unsubscribe()
605 | >>> p.punsubscribe('my-*')
606 | >>> p.get_message()
607 | {'channel': 'my-second-channel', 'data': 2L, 'pattern': None, 'type': 'unsubscribe'}
608 | >>> p.get_message()
609 | {'channel': 'my-first-channel', 'data': 1L, 'pattern': None, 'type': 'unsubscribe'}
610 | >>> p.get_message()
611 | {'channel': 'my-*', 'data': 0L, 'pattern': None, 'type': 'punsubscribe'}
612 |
613 | redis-py also allows you to register callback functions to handle published
614 | messages. Message handlers take a single argument, the message, which is a
615 | dictionary just like the examples above. To subscribe to a channel or pattern
616 | with a message handler, pass the channel or pattern name as a keyword argument
617 | with its value being the callback function.
618 |
619 | When a message is read on a channel or pattern with a message handler, the
620 | message dictionary is created and passed to the message handler. In this case,
621 | a `None` value is returned from get_message() since the message was already
622 | handled.
623 |
624 | .. code-block:: python
625 |
626 | >>> def my_handler(message):
627 | ... print 'MY HANDLER: ', message['data']
628 | >>> p.subscribe(**{'my-channel': my_handler})
629 | # read the subscribe confirmation message
630 | >>> p.get_message()
631 | {'pattern': None, 'type': 'subscribe', 'channel': 'my-channel', 'data': 1L}
632 | >>> r.publish('my-channel', 'awesome data')
633 | 1
634 |     # for the message handler to work, we need to tell the instance to read data.
635 | # this can be done in several ways (read more below). we'll just use
636 | # the familiar get_message() function for now
637 | >>> message = p.get_message()
638 | MY HANDLER: awesome data
639 | # note here that the my_handler callback printed the string above.
640 | # `message` is None because the message was handled by our handler.
641 | >>> print message
642 | None
643 |
644 | If your application is not interested in the (sometimes noisy)
645 | subscribe/unsubscribe confirmation messages, you can ignore them by passing
646 | `ignore_subscribe_messages=True` to `r.pubsub()`. This will cause all
647 | subscribe/unsubscribe messages to be read, but they won't bubble up to your
648 | application.
649 |
650 | .. code-block:: python
651 |
652 | >>> p = r.pubsub(ignore_subscribe_messages=True)
653 | >>> p.subscribe('my-channel')
654 | >>> p.get_message() # hides the subscribe message and returns None
655 | >>> r.publish('my-channel', 'my data')
656 | 1
657 | >>> p.get_message()
658 | {'channel': 'my-channel', 'data': 'my data', 'pattern': None, 'type': 'message'}
659 |
660 | There are three different strategies for reading messages.
661 |
662 | The examples above have been using `pubsub.get_message()`. Behind the scenes,
663 | `get_message()` uses the system's 'select' module to quickly poll the
664 | connection's socket. If there's data available to be read, `get_message()` will
665 | read it, format the message and return it or pass it to a message handler. If
666 | there's no data to be read, `get_message()` will immediately return None. This
667 | makes it trivial to integrate into an existing event loop inside your
668 | application.
669 |
670 | .. code-block:: python
671 |
672 |     >>> while True:
673 |     ...     message = p.get_message()
674 |     ...     if message:
675 |     ...         print(message)  # do something with the message
676 |     ...     time.sleep(0.001)  # be nice to the system :)
677 |
678 | Older versions of redis-py only read messages with `pubsub.listen()`. listen()
679 | is a generator that blocks until a message is available. If your application
680 | doesn't need to do anything else but receive and act on messages received from
681 | redis, listen() is an easy way to get up and running.
682 |
683 | .. code-block:: python
684 |
685 |     >>> for message in p.listen():
686 |     ...     print(message)  # do something with the message
687 |
688 | The third option runs an event loop in a separate thread.
689 | `pubsub.run_in_thread()` creates a new thread and starts the event loop. The
690 | thread object is returned to the caller of `run_in_thread()`. The caller can
691 | use the `thread.stop()` method to shut down the event loop and thread. Behind
692 | the scenes, this is simply a wrapper around `get_message()` that runs in a
693 | separate thread, essentially creating a tiny non-blocking event loop for you.
694 | `run_in_thread()` takes an optional `sleep_time` argument. If specified, the
695 | event loop will call `time.sleep()` with the value in each iteration of the
696 | loop.
697 |
698 | Note: Since we're running in a separate thread, there's no way to handle
699 | messages that aren't automatically handled with registered message handlers.
700 | Therefore, redis-py prevents you from calling `run_in_thread()` if you're
701 | subscribed to patterns or channels that don't have message handlers attached.
702 |
703 | .. code-block:: python
704 |
705 | >>> p.subscribe(**{'my-channel': my_handler})
706 | >>> thread = p.run_in_thread(sleep_time=0.001)
707 | # the event loop is now running in the background processing messages
708 | # when it's time to shut it down...
709 | >>> thread.stop()
710 |
711 | A PubSub object adheres to the same encoding semantics as the client instance
712 | it was created from. Any channel or pattern that's unicode will be encoded
713 | using the `charset` specified on the client before being sent to Redis. If the
714 | client's `decode_responses` flag is set to False (the default), the
715 | 'channel', 'pattern' and 'data' values in message dictionaries will be byte
716 | strings (str on Python 2, bytes on Python 3). If the client's
717 | `decode_responses` is True, then the 'channel', 'pattern' and 'data' values
718 | will be automatically decoded to unicode strings using the client's `charset`.
719 |
720 | PubSub objects remember what channels and patterns they are subscribed to. In
721 | the event of a disconnection such as a network error or timeout, the
722 | PubSub object will re-subscribe to all prior channels and patterns when
723 | reconnecting. Messages that were published while the client was disconnected
724 | cannot be delivered. When you're finished with a PubSub object, call its
725 | `.close()` method to shut down the connection.
726 |
727 | .. code-block:: python
728 |
729 | >>> p = r.pubsub()
730 | >>> ...
731 | >>> p.close()
732 |
733 |
734 | The PUBSUB subcommands CHANNELS, NUMSUB and NUMPAT are also
735 | supported:
736 |
737 | .. code-block:: python
738 |
739 | >>> r.pubsub_channels()
740 | ['foo', 'bar']
741 | >>> r.pubsub_numsub('foo', 'bar')
742 | [('foo', 9001), ('bar', 42)]
743 | >>> r.pubsub_numsub('baz')
744 | [('baz', 0)]
745 | >>> r.pubsub_numpat()
746 | 1204
747 |
748 | Monitor
749 | ^^^^^^^
750 | redis-py includes a `Monitor` object that streams every command processed
751 | by the Redis server. Use `listen()` on the `Monitor` object to block
752 | until a command is received.
753 |
754 | .. code-block:: python
755 |
756 | >>> r = redis.Redis(...)
757 |     >>> with r.monitor() as m:
758 |     ...     for command in m.listen():
759 |     ...         print(command)
760 |
761 | Lua Scripting
762 | ^^^^^^^^^^^^^
763 |
764 | redis-py supports the EVAL, EVALSHA, and SCRIPT commands. However, there are
765 | a number of edge cases that make these commands tedious to use in real world
766 | scenarios. Therefore, redis-py exposes a Script object that makes scripting
767 | much easier to use.
768 |
769 | To create a Script instance, use the `register_script` function on a client
770 | instance passing the Lua code as the first argument. `register_script` returns
771 | a Script instance that you can use throughout your code.
772 |
773 | The following trivial Lua script accepts two parameters: the name of a key and
774 | a multiplier value. The script fetches the value stored in the key, multiplies
775 | it with the multiplier value and returns the result.
776 |
777 | .. code-block:: python
778 |
779 | >>> r = redis.Redis()
780 | >>> lua = """
781 | ... local value = redis.call('GET', KEYS[1])
782 | ... value = tonumber(value)
783 | ... return value * ARGV[1]"""
784 | >>> multiply = r.register_script(lua)
785 |
786 | `multiply` is now a Script instance that is invoked by calling it like a
787 | function. Script instances accept the following optional arguments:
788 |
789 | * **keys**: A list of key names that the script will access. This becomes the
790 | KEYS list in Lua.
791 | * **args**: A list of argument values. This becomes the ARGV list in Lua.
792 | * **client**: A redis-py Client or Pipeline instance that will invoke the
793 | script. If client isn't specified, the client that initially
794 | created the Script instance (the one that `register_script` was
795 | invoked from) will be used.
796 |
797 | Continuing the example from above:
798 |
799 | .. code-block:: python
800 |
801 | >>> r.set('foo', 2)
802 | >>> multiply(keys=['foo'], args=[5])
803 | 10
804 |
805 | The value of key 'foo' is set to 2. When multiply is invoked, the 'foo' key is
806 | passed to the script along with the multiplier value of 5. Lua executes the
807 | script and returns the result, 10.
808 |
809 | Script instances can be executed using a different client instance, even one
810 | that points to a completely different Redis server.
811 |
812 | .. code-block:: python
813 |
814 | >>> r2 = redis.Redis('redis2.example.com')
815 | >>> r2.set('foo', 3)
816 | >>> multiply(keys=['foo'], args=[5], client=r2)
817 | 15
818 |
819 | The Script object ensures that the Lua script is loaded into Redis's script
820 | cache. In the event of a NOSCRIPT error, it will load the script and retry
821 | executing it.
822 |
823 | Script objects can also be used in pipelines. The pipeline instance should be
824 | passed as the client argument when calling the script. Care is taken to ensure
825 | that the script is registered in Redis's script cache just prior to pipeline
826 | execution.
827 |
828 | .. code-block:: python
829 |
830 | >>> pipe = r.pipeline()
831 | >>> pipe.set('foo', 5)
832 | >>> multiply(keys=['foo'], args=[5], client=pipe)
833 | >>> pipe.execute()
834 | [True, 25]
835 |
836 | Sentinel support
837 | ^^^^^^^^^^^^^^^^
838 |
839 | redis-py can be used together with `Redis Sentinel <https://redis.io/topics/sentinel>`_
840 | to discover Redis nodes. You need to have at least one Sentinel daemon running
841 | in order to use redis-py's Sentinel support.
842 |
843 | Connecting redis-py to the Sentinel instance(s) is easy. You can use a
844 | Sentinel connection to discover the network addresses of the master and slaves:
845 |
846 | .. code-block:: python
847 |
848 | >>> from redis.sentinel import Sentinel
849 | >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
850 | >>> sentinel.discover_master('mymaster')
851 | ('127.0.0.1', 6379)
852 | >>> sentinel.discover_slaves('mymaster')
853 | [('127.0.0.1', 6380)]
854 |
855 | You can also create Redis client connections from a Sentinel instance. You can
856 | connect to either the master (for write operations) or a slave (for read-only
857 | operations).
858 |
859 | .. code-block:: python
860 |
861 | >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
862 | >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
863 | >>> master.set('foo', 'bar')
864 | >>> slave.get('foo')
865 | 'bar'
866 |
867 | The master and slave objects are normal Redis instances with their
868 | connection pool bound to the Sentinel instance. When a Sentinel backed client
869 | attempts to establish a connection, it first queries the Sentinel servers to
870 | determine an appropriate host to connect to. If no server is found,
871 | a MasterNotFoundError or SlaveNotFoundError is raised. Both exceptions are
872 | subclasses of ConnectionError.
873 |
874 | When trying to connect to a slave client, the Sentinel connection pool will
875 | iterate over the list of slaves until it finds one that can be connected to.
876 | If no slaves can be connected to, a connection will be established with the
877 | master.
878 |
879 | See `Guidelines for Redis clients with support for Redis Sentinel
880 | <https://redis.io/topics/sentinel-clients>`_ to learn more about Redis Sentinel.
881 |
882 | Scan Iterators
883 | ^^^^^^^^^^^^^^
884 |
885 | The \*SCAN commands introduced in Redis 2.8 can be cumbersome to use. While
886 | these commands are fully supported, redis-py also exposes the following methods
887 | that return Python iterators for convenience: `scan_iter`, `hscan_iter`,
888 | `sscan_iter` and `zscan_iter`.
889 |
890 | .. code-block:: python
891 |
892 | >>> for key, value in (('A', '1'), ('B', '2'), ('C', '3')):
893 | ... r.set(key, value)
894 | >>> for key in r.scan_iter():
895 | ... print key, r.get(key)
896 | A 1
897 | B 2
898 | C 3
899 |
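A similar sketch for hashes with `hscan_iter`, which yields (field, value)
pairs (hash name hypothetical):

.. code-block:: python

    >>> r.hset('my-hash', 'field-a', '1')
    1
    >>> for field, value in r.hscan_iter('my-hash'):
    ...     print(field, value)
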
900 | Author
901 | ^^^^^^
902 |
903 | redis-py is developed and maintained by Andy McCurdy (sedrik@gmail.com).
904 | It can be found here: https://github.com/andymccurdy/redis-py
905 |
906 | Special thanks to:
907 |
908 | * Ludovico Magnocavallo, author of the original Python Redis client, from
909 | which some of the socket code is still used.
910 | * Alexander Solovyov for ideas on the generic response callback system.
911 | * Paul Hubbard for initial packaging support.
912 |
913 |
914 |
--------------------------------------------------------------------------------
/lamed/vendor/redis-3.4.1.dist-info/RECORD:
--------------------------------------------------------------------------------
1 | redis-3.4.1.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
2 | redis-3.4.1.dist-info/LICENSE,sha256=eQFI2MEvijiycHp0viNDMWutEmmV_1SAGhgbiyMboSQ,1074
3 | redis-3.4.1.dist-info/METADATA,sha256=7Cd8PT-3ZnK8bZTMMJl7Xd6lxTJ9EgfHMASHA2AqAYs,36153
4 | redis-3.4.1.dist-info/RECORD,,
5 | redis-3.4.1.dist-info/WHEEL,sha256=CihQvCnsGZQBGAHLEUMf0IdA4fRduS_NBUTMgCTtvPM,110
6 | redis-3.4.1.dist-info/top_level.txt,sha256=OMAefszlde6ZoOtlM35AWzpRIrwtcqAMHGlRit-w2-4,6
7 | redis/__init__.py,sha256=GUw2b8D4ZKIAyD32TaVqRcjgkRky8dyDoHItudPeTpo,1209
8 | redis/__pycache__/__init__.cpython-38.pyc,,
9 | redis/__pycache__/_compat.cpython-38.pyc,,
10 | redis/__pycache__/client.cpython-38.pyc,,
11 | redis/__pycache__/connection.cpython-38.pyc,,
12 | redis/__pycache__/exceptions.cpython-38.pyc,,
13 | redis/__pycache__/lock.cpython-38.pyc,,
14 | redis/__pycache__/sentinel.cpython-38.pyc,,
15 | redis/__pycache__/utils.cpython-38.pyc,,
16 | redis/_compat.py,sha256=BOO1ikpjMJbWOfqxDengdU5C5SnsZDeUHvKg-FjXmN0,5649
17 | redis/client.py,sha256=3iS05aTqBE85aP971lWnG172nsBCvQ_3cBF2ZIdM3UY,157915
18 | redis/connection.py,sha256=_qnUdVuBToxi_avPjA-u3Eh7OPnkboH5f5l7JTBot9k,54334
19 | redis/exceptions.py,sha256=phjjyJjnebrM82XDzfjtreGnkWIoSNfDZiyoWs3_zQE,1341
20 | redis/lock.py,sha256=i4H8Lqb95NkRsAoKAYn71V164dnMAlStKUGja-zZVag,10845
21 | redis/sentinel.py,sha256=ql0-nMsqh_ZCidpWJv6blXSzbq9BPK3rp4LHoVJMZ64,11358
22 | redis/utils.py,sha256=yTyLWUi60KTfw4U4nWhbGvQdNpQb9Wpdf_Qcx_lUJJU,666
23 |
--------------------------------------------------------------------------------
/lamed/vendor/redis-3.4.1.dist-info/WHEEL:
--------------------------------------------------------------------------------
1 | Wheel-Version: 1.0
2 | Generator: bdist_wheel (0.32.2)
3 | Root-Is-Purelib: true
4 | Tag: py2-none-any
5 | Tag: py3-none-any
6 |
7 |
--------------------------------------------------------------------------------
/lamed/vendor/redis-3.4.1.dist-info/top_level.txt:
--------------------------------------------------------------------------------
1 | redis
2 |
--------------------------------------------------------------------------------
/lamed/vendor/redis/__init__.py:
--------------------------------------------------------------------------------
1 | from redis.client import Redis, StrictRedis
2 | from redis.connection import (
3 | BlockingConnectionPool,
4 | ConnectionPool,
5 | Connection,
6 | SSLConnection,
7 | UnixDomainSocketConnection
8 | )
9 | from redis.utils import from_url
10 | from redis.exceptions import (
11 | AuthenticationError,
12 | AuthenticationWrongNumberOfArgsError,
13 | BusyLoadingError,
14 | ChildDeadlockedError,
15 | ConnectionError,
16 | DataError,
17 | InvalidResponse,
18 | PubSubError,
19 | ReadOnlyError,
20 | RedisError,
21 | ResponseError,
22 | TimeoutError,
23 | WatchError
24 | )
25 |
26 |
27 | def int_or_str(value):
28 | try:
29 | return int(value)
30 | except ValueError:
31 | return value
32 |
33 |
34 | __version__ = '3.4.1'
35 | VERSION = tuple(map(int_or_str, __version__.split('.')))
36 |
37 | __all__ = [
38 | 'AuthenticationError',
39 | 'AuthenticationWrongNumberOfArgsError',
40 | 'BlockingConnectionPool',
41 | 'BusyLoadingError',
42 | 'ChildDeadlockedError',
43 | 'Connection',
44 | 'ConnectionError',
45 | 'ConnectionPool',
46 | 'DataError',
47 | 'from_url',
48 | 'InvalidResponse',
49 | 'PubSubError',
50 | 'ReadOnlyError',
51 | 'Redis',
52 | 'RedisError',
53 | 'ResponseError',
54 | 'SSLConnection',
55 | 'StrictRedis',
56 | 'TimeoutError',
57 | 'UnixDomainSocketConnection',
58 | 'WatchError',
59 | ]
60 |
--------------------------------------------------------------------------------
/lamed/vendor/redis/_compat.py:
--------------------------------------------------------------------------------
1 | """Internal module for Python 2 backwards compatibility."""
2 | import errno
3 | import socket
4 | import sys
5 |
6 |
7 | def sendall(sock, *args, **kwargs):
8 | return sock.sendall(*args, **kwargs)
9 |
10 |
11 | def shutdown(sock, *args, **kwargs):
12 | return sock.shutdown(*args, **kwargs)
13 |
14 |
15 | def ssl_wrap_socket(context, sock, *args, **kwargs):
16 | return context.wrap_socket(sock, *args, **kwargs)
17 |
18 |
19 | # For Python older than 3.5, retry EINTR.
20 | if sys.version_info[0] < 3 or (sys.version_info[0] == 3 and
21 | sys.version_info[1] < 5):
22 | # Adapted from https://bugs.python.org/review/23863/patch/14532/54418
23 | import time
24 |
25 |     # Wrapper for handling interruptible system calls.
26 | def _retryable_call(s, func, *args, **kwargs):
27 | # Some modules (SSL) use the _fileobject wrapper directly and
28 | # implement a smaller portion of the socket interface, thus we
29 | # need to let them continue to do so.
30 | timeout, deadline = None, 0.0
31 | attempted = False
32 | try:
33 | timeout = s.gettimeout()
34 | except AttributeError:
35 | pass
36 |
37 | if timeout:
38 | deadline = time.time() + timeout
39 |
40 | try:
41 | while True:
42 | if attempted and timeout:
43 | now = time.time()
44 | if now >= deadline:
45 | raise socket.error(errno.EWOULDBLOCK, "timed out")
46 | else:
47 | # Overwrite the timeout on the socket object
48 | # to take into account elapsed time.
49 | s.settimeout(deadline - now)
50 | try:
51 | attempted = True
52 | return func(*args, **kwargs)
53 | except socket.error as e:
54 | if e.args[0] == errno.EINTR:
55 | continue
56 | raise
57 | finally:
58 | # Set the existing timeout back for future
59 | # calls.
60 | if timeout:
61 | s.settimeout(timeout)
62 |
63 | def recv(sock, *args, **kwargs):
64 | return _retryable_call(sock, sock.recv, *args, **kwargs)
65 |
66 | def recv_into(sock, *args, **kwargs):
67 | return _retryable_call(sock, sock.recv_into, *args, **kwargs)
68 |
69 | else: # Python 3.5 and above automatically retry EINTR
70 | def recv(sock, *args, **kwargs):
71 | return sock.recv(*args, **kwargs)
72 |
73 | def recv_into(sock, *args, **kwargs):
74 | return sock.recv_into(*args, **kwargs)
75 |
76 | if sys.version_info[0] < 3:
77 | # In Python 3, the ssl module raises socket.timeout whereas it raises
78 | # SSLError in Python 2. For compatibility between versions, ensure
79 | # socket.timeout is raised for both.
80 | import functools
81 |
82 | try:
83 | from ssl import SSLError as _SSLError
84 | except ImportError:
85 | class _SSLError(Exception):
86 | """A replacement in case ssl.SSLError is not available."""
87 | pass
88 |
89 | _EXPECTED_SSL_TIMEOUT_MESSAGES = (
90 | "The handshake operation timed out",
91 | "The read operation timed out",
92 | "The write operation timed out",
93 | )
94 |
95 | def _handle_ssl_timeout(func):
96 | @functools.wraps(func)
97 | def wrapper(*args, **kwargs):
98 | try:
99 | return func(*args, **kwargs)
100 | except _SSLError as e:
101 | message = len(e.args) == 1 and unicode(e.args[0]) or ''
102 | if any(x in message for x in _EXPECTED_SSL_TIMEOUT_MESSAGES):
103 | # Raise socket.timeout for compatibility with Python 3.
104 | raise socket.timeout(*e.args)
105 | raise
106 | return wrapper
107 |
108 | recv = _handle_ssl_timeout(recv)
109 | recv_into = _handle_ssl_timeout(recv_into)
110 | sendall = _handle_ssl_timeout(sendall)
111 | shutdown = _handle_ssl_timeout(shutdown)
112 | ssl_wrap_socket = _handle_ssl_timeout(ssl_wrap_socket)
113 |
114 | if sys.version_info[0] < 3:
115 | from urllib import unquote
116 | from urlparse import parse_qs, urlparse
117 | from itertools import imap, izip
118 | from string import letters as ascii_letters
119 | from Queue import Queue
120 |
121 | # special unicode handling for python2 to avoid UnicodeDecodeError
122 | def safe_unicode(obj, *args):
123 | """ return the unicode representation of obj """
124 | try:
125 | return unicode(obj, *args)
126 | except UnicodeDecodeError:
127 | # obj is byte string
128 | ascii_text = str(obj).encode('string_escape')
129 | return unicode(ascii_text)
130 |
131 | def iteritems(x):
132 | return x.iteritems()
133 |
134 | def iterkeys(x):
135 | return x.iterkeys()
136 |
137 | def itervalues(x):
138 | return x.itervalues()
139 |
140 | def nativestr(x):
141 | return x if isinstance(x, str) else x.encode('utf-8', 'replace')
142 |
143 | def next(x):
144 | return x.next()
145 |
146 | def byte_to_chr(x):
147 | return x
148 |
149 | unichr = unichr
150 | xrange = xrange
151 | basestring = basestring
152 | unicode = unicode
153 | long = long
154 | BlockingIOError = socket.error
155 | else:
156 | from urllib.parse import parse_qs, unquote, urlparse
157 | from string import ascii_letters
158 | from queue import Queue
159 |
160 | def iteritems(x):
161 | return iter(x.items())
162 |
163 | def iterkeys(x):
164 | return iter(x.keys())
165 |
166 | def itervalues(x):
167 | return iter(x.values())
168 |
169 | def byte_to_chr(x):
170 | return chr(x)
171 |
172 | def nativestr(x):
173 | return x if isinstance(x, str) else x.decode('utf-8', 'replace')
174 |
175 | next = next
176 | unichr = chr
177 | imap = map
178 | izip = zip
179 | xrange = range
180 | basestring = str
181 | unicode = str
182 | safe_unicode = str
183 | long = int
184 | BlockingIOError = BlockingIOError
185 |
186 | try: # Python 3
187 | from queue import LifoQueue, Empty, Full
188 | except ImportError: # Python 2
189 | from Queue import LifoQueue, Empty, Full
190 |
--------------------------------------------------------------------------------
/lamed/vendor/redis/connection.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 | from distutils.version import StrictVersion
3 | from itertools import chain
4 | from time import time
5 | import errno
6 | import io
7 | import os
8 | import socket
9 | import sys
10 | import threading
11 | import warnings
12 |
13 | from redis._compat import (xrange, imap, byte_to_chr, unicode, long,
14 | nativestr, basestring, iteritems,
15 | LifoQueue, Empty, Full, urlparse, parse_qs,
16 | recv, recv_into, unquote, BlockingIOError,
17 | sendall, shutdown, ssl_wrap_socket)
18 | from redis.exceptions import (
19 | AuthenticationError,
20 | AuthenticationWrongNumberOfArgsError,
21 | BusyLoadingError,
22 | ChildDeadlockedError,
23 | ConnectionError,
24 | DataError,
25 | ExecAbortError,
26 | InvalidResponse,
27 | NoPermissionError,
28 | NoScriptError,
29 | ReadOnlyError,
30 | RedisError,
31 | ResponseError,
32 | TimeoutError,
33 | )
34 | from redis.utils import HIREDIS_AVAILABLE
35 |
36 | try:
37 | import ssl
38 | ssl_available = True
39 | except ImportError:
40 | ssl_available = False
41 |
42 | NONBLOCKING_EXCEPTION_ERROR_NUMBERS = {
43 | BlockingIOError: errno.EWOULDBLOCK,
44 | }
45 |
46 | if ssl_available:
47 | if hasattr(ssl, 'SSLWantReadError'):
48 | NONBLOCKING_EXCEPTION_ERROR_NUMBERS[ssl.SSLWantReadError] = 2
49 | NONBLOCKING_EXCEPTION_ERROR_NUMBERS[ssl.SSLWantWriteError] = 2
50 | else:
51 | NONBLOCKING_EXCEPTION_ERROR_NUMBERS[ssl.SSLError] = 2
52 |
53 | # In Python 2.7 a socket.error is raised for a nonblocking read.
54 | # The _compat module aliases BlockingIOError to socket.error to be
55 | # Python 2/3 compatible.
56 | # However this means that all socket.error exceptions need to be handled
57 | # properly within these exception handlers.
58 | # We need to make sure socket.error is included in these handlers and
59 | # provide a dummy error number that will never match a real exception.
60 | if socket.error not in NONBLOCKING_EXCEPTION_ERROR_NUMBERS:
61 | NONBLOCKING_EXCEPTION_ERROR_NUMBERS[socket.error] = -999999
62 |
63 | NONBLOCKING_EXCEPTIONS = tuple(NONBLOCKING_EXCEPTION_ERROR_NUMBERS.keys())
64 |
65 | if HIREDIS_AVAILABLE:
66 | import hiredis
67 |
68 | hiredis_version = StrictVersion(hiredis.__version__)
69 | HIREDIS_SUPPORTS_CALLABLE_ERRORS = \
70 | hiredis_version >= StrictVersion('0.1.3')
71 | HIREDIS_SUPPORTS_BYTE_BUFFER = \
72 | hiredis_version >= StrictVersion('0.1.4')
73 | HIREDIS_SUPPORTS_ENCODING_ERRORS = \
74 | hiredis_version >= StrictVersion('1.0.0')
75 |
76 | if not HIREDIS_SUPPORTS_BYTE_BUFFER:
77 | msg = ("redis-py works best with hiredis >= 0.1.4. You're running "
78 | "hiredis %s. Please consider upgrading." % hiredis.__version__)
79 | warnings.warn(msg)
80 |
81 | HIREDIS_USE_BYTE_BUFFER = True
82 | # only use byte buffer if hiredis supports it
83 | if not HIREDIS_SUPPORTS_BYTE_BUFFER:
84 | HIREDIS_USE_BYTE_BUFFER = False
85 |
86 | SYM_STAR = b'*'
87 | SYM_DOLLAR = b'$'
88 | SYM_CRLF = b'\r\n'
89 | SYM_EMPTY = b''
90 |
91 | SERVER_CLOSED_CONNECTION_ERROR = "Connection closed by server."
92 |
93 | SENTINEL = object()
94 |
95 |
96 | class Encoder(object):
97 | "Encode strings to bytes and decode bytes to strings"
98 |
99 | def __init__(self, encoding, encoding_errors, decode_responses):
100 | self.encoding = encoding
101 | self.encoding_errors = encoding_errors
102 | self.decode_responses = decode_responses
103 |
104 | def encode(self, value):
105 | "Return a bytestring representation of the value"
106 | if isinstance(value, bytes):
107 | return value
108 | elif isinstance(value, bool):
109 | # special case bool since it is a subclass of int
110 | raise DataError("Invalid input of type: 'bool'. Convert to a "
111 | "bytes, string, int or float first.")
112 | elif isinstance(value, float):
113 | value = repr(value).encode()
114 | elif isinstance(value, (int, long)):
115 | # python 2 repr() on longs is '123L', so use str() instead
116 | value = str(value).encode()
117 | elif not isinstance(value, basestring):
118 | # a value we don't know how to deal with. throw an error
119 | typename = type(value).__name__
120 | raise DataError("Invalid input of type: '%s'. Convert to a "
121 | "bytes, string, int or float first." % typename)
122 | if isinstance(value, unicode):
123 | value = value.encode(self.encoding, self.encoding_errors)
124 | return value
125 |
126 | def decode(self, value, force=False):
127 | "Return a unicode string from the byte representation"
128 | if (self.decode_responses or force) and isinstance(value, bytes):
129 | value = value.decode(self.encoding, self.encoding_errors)
130 | return value
131 |
132 |
133 | class BaseParser(object):
134 | EXCEPTION_CLASSES = {
135 | 'ERR': {
136 | 'max number of clients reached': ConnectionError,
137 | 'Client sent AUTH, but no password is set': AuthenticationError,
138 | 'invalid password': AuthenticationError,
139 | 'wrong number of arguments for \'auth\' command':
140 | AuthenticationWrongNumberOfArgsError,
141 | },
142 | 'EXECABORT': ExecAbortError,
143 | 'LOADING': BusyLoadingError,
144 | 'NOSCRIPT': NoScriptError,
145 | 'READONLY': ReadOnlyError,
146 | 'NOAUTH': AuthenticationError,
147 | 'NOPERM': NoPermissionError,
148 | }
149 |
150 | def parse_error(self, response):
151 | "Parse an error response"
152 | error_code = response.split(' ')[0]
153 | if error_code in self.EXCEPTION_CLASSES:
154 | response = response[len(error_code) + 1:]
155 | exception_class = self.EXCEPTION_CLASSES[error_code]
156 | if isinstance(exception_class, dict):
157 | exception_class = exception_class.get(response, ResponseError)
158 | return exception_class(response)
159 | return ResponseError(response)
160 |
161 |
162 | class SocketBuffer(object):
163 | def __init__(self, socket, socket_read_size, socket_timeout):
164 | self._sock = socket
165 | self.socket_read_size = socket_read_size
166 | self.socket_timeout = socket_timeout
167 | self._buffer = io.BytesIO()
168 | # number of bytes written to the buffer from the socket
169 | self.bytes_written = 0
170 | # number of bytes read from the buffer
171 | self.bytes_read = 0
172 |
173 | @property
174 | def length(self):
175 | return self.bytes_written - self.bytes_read
176 |
177 | def _read_from_socket(self, length=None, timeout=SENTINEL,
178 | raise_on_timeout=True):
179 | sock = self._sock
180 | socket_read_size = self.socket_read_size
181 | buf = self._buffer
182 | buf.seek(self.bytes_written)
183 | marker = 0
184 | custom_timeout = timeout is not SENTINEL
185 |
186 | try:
187 | if custom_timeout:
188 | sock.settimeout(timeout)
189 | while True:
190 | data = recv(self._sock, socket_read_size)
191 |                 # an empty string indicates the server shut down the socket
192 | if isinstance(data, bytes) and len(data) == 0:
193 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
194 | buf.write(data)
195 | data_length = len(data)
196 | self.bytes_written += data_length
197 | marker += data_length
198 |
199 | if length is not None and length > marker:
200 | continue
201 | return True
202 | except socket.timeout:
203 | if raise_on_timeout:
204 | raise TimeoutError("Timeout reading from socket")
205 | return False
206 | except NONBLOCKING_EXCEPTIONS as ex:
207 | # if we're in nonblocking mode and the recv raises a
208 | # blocking error, simply return False indicating that
209 | # there's no data to be read. otherwise raise the
210 | # original exception.
211 | allowed = NONBLOCKING_EXCEPTION_ERROR_NUMBERS.get(ex.__class__, -1)
212 | if not raise_on_timeout and ex.errno == allowed:
213 | return False
214 | raise ConnectionError("Error while reading from socket: %s" %
215 | (ex.args,))
216 | finally:
217 | if custom_timeout:
218 | sock.settimeout(self.socket_timeout)
219 |
220 | def can_read(self, timeout):
221 | return bool(self.length) or \
222 | self._read_from_socket(timeout=timeout,
223 | raise_on_timeout=False)
224 |
225 | def read(self, length):
226 | length = length + 2 # make sure to read the \r\n terminator
227 | # make sure we've read enough data from the socket
228 | if length > self.length:
229 | self._read_from_socket(length - self.length)
230 |
231 | self._buffer.seek(self.bytes_read)
232 | data = self._buffer.read(length)
233 | self.bytes_read += len(data)
234 |
235 | # purge the buffer when we've consumed it all so it doesn't
236 | # grow forever
237 | if self.bytes_read == self.bytes_written:
238 | self.purge()
239 |
240 | return data[:-2]
241 |
242 | def readline(self):
243 | buf = self._buffer
244 | buf.seek(self.bytes_read)
245 | data = buf.readline()
246 | while not data.endswith(SYM_CRLF):
247 | # there's more data in the socket that we need
248 | self._read_from_socket()
249 | buf.seek(self.bytes_read)
250 | data = buf.readline()
251 |
252 | self.bytes_read += len(data)
253 |
254 | # purge the buffer when we've consumed it all so it doesn't
255 | # grow forever
256 | if self.bytes_read == self.bytes_written:
257 | self.purge()
258 |
259 | return data[:-2]
260 |
261 | def purge(self):
262 | self._buffer.seek(0)
263 | self._buffer.truncate()
264 | self.bytes_written = 0
265 | self.bytes_read = 0
266 |
267 | def close(self):
268 | try:
269 | self.purge()
270 | self._buffer.close()
271 | except Exception:
272 | # issue #633 suggests the purge/close somehow raised a
273 | # BadFileDescriptor error. Perhaps the client ran out of
274 | # memory or something else? It's probably OK to ignore
275 | # any error being raised from purge/close since we're
276 | # removing the reference to the instance below.
277 | pass
278 | self._buffer = None
279 | self._sock = None
280 |
281 |
282 | class PythonParser(BaseParser):
283 | "Plain Python parsing class"
284 | def __init__(self, socket_read_size):
285 | self.socket_read_size = socket_read_size
286 | self.encoder = None
287 | self._sock = None
288 | self._buffer = None
289 |
290 | def __del__(self):
291 | try:
292 | self.on_disconnect()
293 | except Exception:
294 | pass
295 |
296 | def on_connect(self, connection):
297 | "Called when the socket connects"
298 | self._sock = connection._sock
299 | self._buffer = SocketBuffer(self._sock,
300 | self.socket_read_size,
301 | connection.socket_timeout)
302 | self.encoder = connection.encoder
303 |
304 | def on_disconnect(self):
305 | "Called when the socket disconnects"
306 | self._sock = None
307 | if self._buffer is not None:
308 | self._buffer.close()
309 | self._buffer = None
310 | self.encoder = None
311 |
312 | def can_read(self, timeout):
313 | return self._buffer and self._buffer.can_read(timeout)
314 |
315 | def read_response(self):
316 | response = self._buffer.readline()
317 | if not response:
318 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
319 |
320 | byte, response = byte_to_chr(response[0]), response[1:]
321 |
322 | if byte not in ('-', '+', ':', '$', '*'):
323 | raise InvalidResponse("Protocol Error: %s, %s" %
324 | (str(byte), str(response)))
325 |
326 | # server returned an error
327 | if byte == '-':
328 | response = nativestr(response)
329 | error = self.parse_error(response)
330 | # if the error is a ConnectionError, raise immediately so the user
331 | # is notified
332 | if isinstance(error, ConnectionError):
333 | raise error
334 | # otherwise, we're dealing with a ResponseError that might belong
335 | # inside a pipeline response. the connection's read_response()
336 | # and/or the pipeline's execute() will raise this error if
337 | # necessary, so just return the exception instance here.
338 | return error
339 | # single value
340 | elif byte == '+':
341 | pass
342 | # int value
343 | elif byte == ':':
344 | response = long(response)
345 | # bulk response
346 | elif byte == '$':
347 | length = int(response)
348 | if length == -1:
349 | return None
350 | response = self._buffer.read(length)
351 | # multi-bulk response
352 | elif byte == '*':
353 | length = int(response)
354 | if length == -1:
355 | return None
356 | response = [self.read_response() for i in xrange(length)]
357 | if isinstance(response, bytes):
358 | response = self.encoder.decode(response)
359 | return response
360 |
361 |
362 | class HiredisParser(BaseParser):
363 | "Parser class for connections using Hiredis"
364 | def __init__(self, socket_read_size):
365 | if not HIREDIS_AVAILABLE:
366 | raise RedisError("Hiredis is not installed")
367 | self.socket_read_size = socket_read_size
368 |
369 | if HIREDIS_USE_BYTE_BUFFER:
370 | self._buffer = bytearray(socket_read_size)
371 |
372 | def __del__(self):
373 | try:
374 | self.on_disconnect()
375 | except Exception:
376 | pass
377 |
378 | def on_connect(self, connection):
379 | self._sock = connection._sock
380 | self._socket_timeout = connection.socket_timeout
381 | kwargs = {
382 | 'protocolError': InvalidResponse,
383 | 'replyError': self.parse_error,
384 | }
385 |
386 | # hiredis < 0.1.3 doesn't support functions that create exceptions
387 | if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:
388 | kwargs['replyError'] = ResponseError
389 |
390 | if connection.encoder.decode_responses:
391 | kwargs['encoding'] = connection.encoder.encoding
392 | if HIREDIS_SUPPORTS_ENCODING_ERRORS:
393 | kwargs['errors'] = connection.encoder.encoding_errors
394 | self._reader = hiredis.Reader(**kwargs)
395 | self._next_response = False
396 |
397 | def on_disconnect(self):
398 | self._sock = None
399 | self._reader = None
400 | self._next_response = False
401 |
402 | def can_read(self, timeout):
403 | if not self._reader:
404 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
405 |
406 | if self._next_response is False:
407 | self._next_response = self._reader.gets()
408 | if self._next_response is False:
409 | return self.read_from_socket(timeout=timeout,
410 | raise_on_timeout=False)
411 | return True
412 |
413 | def read_from_socket(self, timeout=SENTINEL, raise_on_timeout=True):
414 | sock = self._sock
415 | custom_timeout = timeout is not SENTINEL
416 | try:
417 | if custom_timeout:
418 | sock.settimeout(timeout)
419 | if HIREDIS_USE_BYTE_BUFFER:
420 | bufflen = recv_into(self._sock, self._buffer)
421 | if bufflen == 0:
422 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
423 | self._reader.feed(self._buffer, 0, bufflen)
424 | else:
425 | buffer = recv(self._sock, self.socket_read_size)
426 |                 # an empty string indicates the server shut down the socket
427 | if not isinstance(buffer, bytes) or len(buffer) == 0:
428 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
429 | self._reader.feed(buffer)
430 | # data was read from the socket and added to the buffer.
431 | # return True to indicate that data was read.
432 | return True
433 | except socket.timeout:
434 | if raise_on_timeout:
435 | raise TimeoutError("Timeout reading from socket")
436 | return False
437 | except NONBLOCKING_EXCEPTIONS as ex:
438 | # if we're in nonblocking mode and the recv raises a
439 | # blocking error, simply return False indicating that
440 | # there's no data to be read. otherwise raise the
441 | # original exception.
442 | allowed = NONBLOCKING_EXCEPTION_ERROR_NUMBERS.get(ex.__class__, -1)
443 | if not raise_on_timeout and ex.errno == allowed:
444 | return False
445 | raise ConnectionError("Error while reading from socket: %s" %
446 | (ex.args,))
447 | finally:
448 | if custom_timeout:
449 | sock.settimeout(self._socket_timeout)
450 |
451 | def read_response(self):
452 | if not self._reader:
453 | raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
454 |
455 | # _next_response might be cached from a can_read() call
456 | if self._next_response is not False:
457 | response = self._next_response
458 | self._next_response = False
459 | return response
460 |
461 | response = self._reader.gets()
462 | while response is False:
463 | self.read_from_socket()
464 | response = self._reader.gets()
465 | # if an older version of hiredis is installed, we need to attempt
466 | # to convert ResponseErrors to their appropriate types.
467 | if not HIREDIS_SUPPORTS_CALLABLE_ERRORS:
468 | if isinstance(response, ResponseError):
469 | response = self.parse_error(response.args[0])
470 | elif isinstance(response, list) and response and \
471 | isinstance(response[0], ResponseError):
472 | response[0] = self.parse_error(response[0].args[0])
473 | # if the response is a ConnectionError or the response is a list and
474 | # the first item is a ConnectionError, raise it as something bad
475 | # happened
476 | if isinstance(response, ConnectionError):
477 | raise response
478 | elif isinstance(response, list) and response and \
479 | isinstance(response[0], ConnectionError):
480 | raise response[0]
481 | return response
482 |
483 |
484 | if HIREDIS_AVAILABLE:
485 | DefaultParser = HiredisParser
486 | else:
487 | DefaultParser = PythonParser
488 |
489 |
490 | class Connection(object):
491 | "Manages TCP communication to and from a Redis server"
492 |
493 | def __init__(self, host='localhost', port=6379, db=0, password=None,
494 | socket_timeout=None, socket_connect_timeout=None,
495 | socket_keepalive=False, socket_keepalive_options=None,
496 | socket_type=0, retry_on_timeout=False, encoding='utf-8',
497 | encoding_errors='strict', decode_responses=False,
498 | parser_class=DefaultParser, socket_read_size=65536,
499 | health_check_interval=0, client_name=None, username=None):
500 | self.pid = os.getpid()
501 | self.host = host
502 | self.port = int(port)
503 | self.db = db
504 | self.username = username
505 | self.client_name = client_name
506 | self.password = password
507 | self.socket_timeout = socket_timeout
508 | self.socket_connect_timeout = socket_connect_timeout or socket_timeout
509 | self.socket_keepalive = socket_keepalive
510 | self.socket_keepalive_options = socket_keepalive_options or {}
511 | self.socket_type = socket_type
512 | self.retry_on_timeout = retry_on_timeout
513 | self.health_check_interval = health_check_interval
514 | self.next_health_check = 0
515 | self.encoder = Encoder(encoding, encoding_errors, decode_responses)
516 | self._sock = None
517 | self._parser = parser_class(socket_read_size=socket_read_size)
518 | self._connect_callbacks = []
519 | self._buffer_cutoff = 6000
520 |
521 | def __repr__(self):
522 | repr_args = ','.join(['%s=%s' % (k, v) for k, v in self.repr_pieces()])
523 | return '%s<%s>' % (self.__class__.__name__, repr_args)
524 |
525 | def repr_pieces(self):
526 | pieces = [
527 | ('host', self.host),
528 | ('port', self.port),
529 | ('db', self.db)
530 | ]
531 | if self.client_name:
532 | pieces.append(('client_name', self.client_name))
533 | return pieces
534 |
535 | def __del__(self):
536 | try:
537 | self.disconnect()
538 | except Exception:
539 | pass
540 |
541 | def register_connect_callback(self, callback):
542 | self._connect_callbacks.append(callback)
543 |
544 | def clear_connect_callbacks(self):
545 | self._connect_callbacks = []
546 |
547 | def connect(self):
548 | "Connects to the Redis server if not already connected"
549 | if self._sock:
550 | return
551 | try:
552 | sock = self._connect()
553 | except socket.timeout:
554 | raise TimeoutError("Timeout connecting to server")
555 | except socket.error:
556 | e = sys.exc_info()[1]
557 | raise ConnectionError(self._error_message(e))
558 |
559 | self._sock = sock
560 | try:
561 | self.on_connect()
562 | except RedisError:
563 | # clean up after any error in on_connect
564 | self.disconnect()
565 | raise
566 |
567 | # run any user callbacks. right now the only internal callback
568 | # is for pubsub channel/pattern resubscription
569 | for callback in self._connect_callbacks:
570 | callback(self)
571 |
572 | def _connect(self):
573 | "Create a TCP socket connection"
574 | # we want to mimic what socket.create_connection does to support
575 | # ipv4/ipv6, but we want to set options prior to calling
576 | # socket.connect()
577 | err = None
578 | for res in socket.getaddrinfo(self.host, self.port, self.socket_type,
579 | socket.SOCK_STREAM):
580 | family, socktype, proto, canonname, socket_address = res
581 | sock = None
582 | try:
583 | sock = socket.socket(family, socktype, proto)
584 | # TCP_NODELAY
585 | sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
586 |
587 | # TCP_KEEPALIVE
588 | if self.socket_keepalive:
589 | sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
590 | for k, v in iteritems(self.socket_keepalive_options):
591 | sock.setsockopt(socket.IPPROTO_TCP, k, v)
592 |
593 | # set the socket_connect_timeout before we connect
594 | sock.settimeout(self.socket_connect_timeout)
595 |
596 | # connect
597 | sock.connect(socket_address)
598 |
599 | # set the socket_timeout now that we're connected
600 | sock.settimeout(self.socket_timeout)
601 | return sock
602 |
603 | except socket.error as _:
604 | err = _
605 | if sock is not None:
606 | sock.close()
607 |
608 | if err is not None:
609 | raise err
610 | raise socket.error("socket.getaddrinfo returned an empty list")
611 |
612 | def _error_message(self, exception):
613 | # args for socket.error can either be (errno, "message")
614 | # or just "message"
615 | if len(exception.args) == 1:
616 | return "Error connecting to %s:%s. %s." % \
617 | (self.host, self.port, exception.args[0])
618 | else:
619 | return "Error %s connecting to %s:%s. %s." % \
620 | (exception.args[0], self.host, self.port, exception.args[1])
621 |
622 | def on_connect(self):
623 | "Initialize the connection, authenticate and select a database"
624 | self._parser.on_connect(self)
625 |
626 | # if username and/or password are set, authenticate
627 | if self.username or self.password:
628 | if self.username:
629 | auth_args = (self.username, self.password or '')
630 | else:
631 | auth_args = (self.password,)
632 | # avoid checking health here -- PING will fail if we try
633 | # to check the health prior to the AUTH
634 | self.send_command('AUTH', *auth_args, check_health=False)
635 |
636 | try:
637 | auth_response = self.read_response()
638 | except AuthenticationWrongNumberOfArgsError:
639 | # a username and password were specified but the Redis
640 | # server seems to be < 6.0.0 which expects a single password
641 | # arg. retry auth with just the password.
642 | # https://github.com/andymccurdy/redis-py/issues/1274
643 | self.send_command('AUTH', self.password, check_health=False)
644 | auth_response = self.read_response()
645 |
646 | if nativestr(auth_response) != 'OK':
647 | raise AuthenticationError('Invalid Username or Password')
648 |
649 | # if a client_name is given, set it
650 | if self.client_name:
651 | self.send_command('CLIENT', 'SETNAME', self.client_name)
652 | if nativestr(self.read_response()) != 'OK':
653 | raise ConnectionError('Error setting client name')
654 |
655 | # if a database is specified, switch to it
656 | if self.db:
657 | self.send_command('SELECT', self.db)
658 | if nativestr(self.read_response()) != 'OK':
659 | raise ConnectionError('Invalid Database')
660 |
661 | def disconnect(self):
662 | "Disconnects from the Redis server"
663 | self._parser.on_disconnect()
664 | if self._sock is None:
665 | return
666 | try:
667 | if os.getpid() == self.pid:
668 | shutdown(self._sock, socket.SHUT_RDWR)
669 | self._sock.close()
670 | except socket.error:
671 | pass
672 | self._sock = None
673 |
674 | def check_health(self):
675 | "Check the health of the connection with a PING/PONG"
676 | if self.health_check_interval and time() > self.next_health_check:
677 | try:
678 | self.send_command('PING', check_health=False)
679 | if nativestr(self.read_response()) != 'PONG':
680 | raise ConnectionError(
681 | 'Bad response from PING health check')
682 |             except (ConnectionError, TimeoutError):
683 | self.disconnect()
684 | self.send_command('PING', check_health=False)
685 | if nativestr(self.read_response()) != 'PONG':
686 | raise ConnectionError(
687 | 'Bad response from PING health check')
688 |
689 | def send_packed_command(self, command, check_health=True):
690 | "Send an already packed command to the Redis server"
691 | if not self._sock:
692 | self.connect()
693 | # guard against health check recursion
694 | if check_health:
695 | self.check_health()
696 | try:
697 | if isinstance(command, str):
698 | command = [command]
699 | for item in command:
700 | sendall(self._sock, item)
701 | except socket.timeout:
702 | self.disconnect()
703 | raise TimeoutError("Timeout writing to socket")
704 | except socket.error:
705 | e = sys.exc_info()[1]
706 | self.disconnect()
707 | if len(e.args) == 1:
708 | errno, errmsg = 'UNKNOWN', e.args[0]
709 | else:
710 | errno = e.args[0]
711 | errmsg = e.args[1]
712 | raise ConnectionError("Error %s while writing to socket. %s." %
713 | (errno, errmsg))
714 | except: # noqa: E722
715 | self.disconnect()
716 | raise
717 |
718 | def send_command(self, *args, **kwargs):
719 | "Pack and send a command to the Redis server"
720 | self.send_packed_command(self.pack_command(*args),
721 | check_health=kwargs.get('check_health', True))
722 |
723 | def can_read(self, timeout=0):
724 | "Poll the socket to see if there's data that can be read."
725 | sock = self._sock
726 | if not sock:
727 | self.connect()
728 | sock = self._sock
729 | return self._parser.can_read(timeout)
730 |
731 | def read_response(self):
732 | "Read the response from a previously sent command"
733 | try:
734 | response = self._parser.read_response()
735 | except socket.timeout:
736 | self.disconnect()
737 | raise TimeoutError("Timeout reading from %s:%s" %
738 | (self.host, self.port))
739 | except socket.error:
740 | self.disconnect()
741 | e = sys.exc_info()[1]
742 | raise ConnectionError("Error while reading from %s:%s : %s" %
743 | (self.host, self.port, e.args))
744 | except: # noqa: E722
745 | self.disconnect()
746 | raise
747 |
748 | if self.health_check_interval:
749 | self.next_health_check = time() + self.health_check_interval
750 |
751 | if isinstance(response, ResponseError):
752 | raise response
753 | return response
754 |
755 | def pack_command(self, *args):
756 | "Pack a series of arguments into the Redis protocol"
757 | output = []
758 | # the client might have included 1 or more literal arguments in
759 | # the command name, e.g., 'CONFIG GET'. The Redis server expects these
760 | # arguments to be sent separately, so split the first argument
761 | # manually. These arguments should be bytestrings so that they are
762 | # not encoded.
763 | if isinstance(args[0], unicode):
764 | args = tuple(args[0].encode().split()) + args[1:]
765 | elif b' ' in args[0]:
766 | args = tuple(args[0].split()) + args[1:]
767 |
768 | buff = SYM_EMPTY.join((SYM_STAR, str(len(args)).encode(), SYM_CRLF))
769 |
770 | buffer_cutoff = self._buffer_cutoff
771 | for arg in imap(self.encoder.encode, args):
772 | # to avoid large string mallocs, chunk the command into the
773 | # output list if we're sending large values
774 | arg_length = len(arg)
775 | if len(buff) > buffer_cutoff or arg_length > buffer_cutoff:
776 | buff = SYM_EMPTY.join(
777 | (buff, SYM_DOLLAR, str(arg_length).encode(), SYM_CRLF))
778 | output.append(buff)
779 | output.append(arg)
780 | buff = SYM_CRLF
781 | else:
782 | buff = SYM_EMPTY.join(
783 | (buff, SYM_DOLLAR, str(arg_length).encode(),
784 | SYM_CRLF, arg, SYM_CRLF))
785 | output.append(buff)
786 | return output
787 |
788 | def pack_commands(self, commands):
789 | "Pack multiple commands into the Redis protocol"
790 | output = []
791 | pieces = []
792 | buffer_length = 0
793 | buffer_cutoff = self._buffer_cutoff
794 |
795 | for cmd in commands:
796 | for chunk in self.pack_command(*cmd):
797 | chunklen = len(chunk)
798 | if buffer_length > buffer_cutoff or chunklen > buffer_cutoff:
799 | output.append(SYM_EMPTY.join(pieces))
800 | buffer_length = 0
801 | pieces = []
802 |
803 | if chunklen > self._buffer_cutoff:
804 | output.append(chunk)
805 | else:
806 | pieces.append(chunk)
807 | buffer_length += chunklen
808 |
809 | if pieces:
810 | output.append(SYM_EMPTY.join(pieces))
811 | return output
812 |
813 |
814 | class SSLConnection(Connection):
815 |
816 | def __init__(self, ssl_keyfile=None, ssl_certfile=None,
817 | ssl_cert_reqs='required', ssl_ca_certs=None,
818 | ssl_check_hostname=False, **kwargs):
819 | if not ssl_available:
820 | raise RedisError("Python wasn't built with SSL support")
821 |
822 | super(SSLConnection, self).__init__(**kwargs)
823 |
824 | self.keyfile = ssl_keyfile
825 | self.certfile = ssl_certfile
826 | if ssl_cert_reqs is None:
827 | ssl_cert_reqs = ssl.CERT_NONE
828 | elif isinstance(ssl_cert_reqs, basestring):
829 | CERT_REQS = {
830 | 'none': ssl.CERT_NONE,
831 | 'optional': ssl.CERT_OPTIONAL,
832 | 'required': ssl.CERT_REQUIRED
833 | }
834 | if ssl_cert_reqs not in CERT_REQS:
835 | raise RedisError(
836 | "Invalid SSL Certificate Requirements Flag: %s" %
837 | ssl_cert_reqs)
838 | ssl_cert_reqs = CERT_REQS[ssl_cert_reqs]
839 | self.cert_reqs = ssl_cert_reqs
840 | self.ca_certs = ssl_ca_certs
841 | self.check_hostname = ssl_check_hostname
842 |
843 | def _connect(self):
844 | "Wrap the socket with SSL support"
845 | sock = super(SSLConnection, self)._connect()
846 | if hasattr(ssl, "create_default_context"):
847 | context = ssl.create_default_context()
848 | context.check_hostname = self.check_hostname
849 | context.verify_mode = self.cert_reqs
850 | if self.certfile and self.keyfile:
851 | context.load_cert_chain(certfile=self.certfile,
852 | keyfile=self.keyfile)
853 | if self.ca_certs:
854 | context.load_verify_locations(self.ca_certs)
855 | sock = ssl_wrap_socket(context, sock, server_hostname=self.host)
856 | else:
857 |             # In case this code runs on a Python older than 2.7.9, which
858 |             # lacks ssl.create_default_context, fall back to the older API
859 | sock = ssl_wrap_socket(ssl,
860 | sock,
861 | cert_reqs=self.cert_reqs,
862 | keyfile=self.keyfile,
863 | certfile=self.certfile,
864 | ca_certs=self.ca_certs)
865 | return sock
866 |
867 |
868 | class UnixDomainSocketConnection(Connection):
869 |
870 | def __init__(self, path='', db=0, username=None, password=None,
871 | socket_timeout=None, encoding='utf-8',
872 | encoding_errors='strict', decode_responses=False,
873 | retry_on_timeout=False,
874 | parser_class=DefaultParser, socket_read_size=65536,
875 | health_check_interval=0, client_name=None):
876 | self.pid = os.getpid()
877 | self.path = path
878 | self.db = db
879 | self.username = username
880 | self.client_name = client_name
881 | self.password = password
882 | self.socket_timeout = socket_timeout
883 | self.retry_on_timeout = retry_on_timeout
884 | self.health_check_interval = health_check_interval
885 | self.next_health_check = 0
886 | self.encoder = Encoder(encoding, encoding_errors, decode_responses)
887 | self._sock = None
888 | self._parser = parser_class(socket_read_size=socket_read_size)
889 | self._connect_callbacks = []
890 | self._buffer_cutoff = 6000
891 |
892 | def repr_pieces(self):
893 | pieces = [
894 | ('path', self.path),
895 | ('db', self.db),
896 | ]
897 | if self.client_name:
898 | pieces.append(('client_name', self.client_name))
899 | return pieces
900 |
901 | def _connect(self):
902 | "Create a Unix domain socket connection"
903 | sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
904 | sock.settimeout(self.socket_timeout)
905 | sock.connect(self.path)
906 | return sock
907 |
908 | def _error_message(self, exception):
909 | # args for socket.error can either be (errno, "message")
910 | # or just "message"
911 | if len(exception.args) == 1:
912 | return "Error connecting to unix socket: %s. %s." % \
913 | (self.path, exception.args[0])
914 | else:
915 | return "Error %s connecting to unix socket: %s. %s." % \
916 | (exception.args[0], self.path, exception.args[1])
917 |
918 |
919 | FALSE_STRINGS = ('0', 'F', 'FALSE', 'N', 'NO')
920 |
921 |
922 | def to_bool(value):
923 | if value is None or value == '':
924 | return None
925 | if isinstance(value, basestring) and value.upper() in FALSE_STRINGS:
926 | return False
927 | return bool(value)
928 |
929 |
930 | URL_QUERY_ARGUMENT_PARSERS = {
931 | 'socket_timeout': float,
932 | 'socket_connect_timeout': float,
933 | 'socket_keepalive': to_bool,
934 | 'retry_on_timeout': to_bool,
935 | 'max_connections': int,
936 | 'health_check_interval': int,
937 | 'ssl_check_hostname': to_bool,
938 | }
939 |
940 |
941 | class ConnectionPool(object):
942 | "Generic connection pool"
943 | @classmethod
944 | def from_url(cls, url, db=None, decode_components=False, **kwargs):
945 | """
946 | Return a connection pool configured from the given URL.
947 |
948 | For example::
949 |
950 | redis://[[username]:[password]]@localhost:6379/0
951 | rediss://[[username]:[password]]@localhost:6379/0
952 | unix://[[username]:[password]]@/path/to/socket.sock?db=0
953 |
954 | Three URL schemes are supported:
955 |
956 |         - ```redis://``
957 |           <https://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml#redis>`_ creates a
958 |           normal TCP socket connection
959 |         - ```rediss://``
960 |           <https://www.iana.org/assignments/uri-schemes/uri-schemes.xhtml#rediss>`_ creates
961 |           an SSL wrapped TCP socket connection
962 | - ``unix://`` creates a Unix Domain Socket connection
963 |
964 | There are several ways to specify a database number. The parse function
965 | will return the first specified option:
966 | 1. A ``db`` querystring option, e.g. redis://localhost?db=0
967 | 2. If using the redis:// scheme, the path argument of the url, e.g.
968 | redis://localhost/0
969 | 3. The ``db`` argument to this function.
970 |
971 | If none of these options are specified, db=0 is used.
972 |
973 | The ``decode_components`` argument allows this function to work with
974 | percent-encoded URLs. If this argument is set to ``True`` all ``%xx``
975 | escapes will be replaced by their single-character equivalents after
976 | the URL has been parsed. This only applies to the ``hostname``,
977 | ``path``, ``username`` and ``password`` components.
978 |
979 | Any additional querystring arguments and keyword arguments will be
980 | passed along to the ConnectionPool class's initializer. The querystring
981 | arguments ``socket_connect_timeout`` and ``socket_timeout`` if supplied
982 | are parsed as float values. The arguments ``socket_keepalive`` and
983 | ``retry_on_timeout`` are parsed to boolean values that accept
984 | True/False, Yes/No values to indicate state. Invalid types cause a
985 | ``UserWarning`` to be raised. In the case of conflicting arguments,
986 | querystring arguments always win.
987 |
988 | """
989 | url = urlparse(url)
990 | url_options = {}
991 |
992 | for name, value in iteritems(parse_qs(url.query)):
993 | if value and len(value) > 0:
994 | parser = URL_QUERY_ARGUMENT_PARSERS.get(name)
995 | if parser:
996 | try:
997 | url_options[name] = parser(value[0])
998 | except (TypeError, ValueError):
999 | warnings.warn(UserWarning(
1000 | "Invalid value for `%s` in connection URL." % name
1001 | ))
1002 | else:
1003 | url_options[name] = value[0]
1004 |
1005 | if decode_components:
1006 | username = unquote(url.username) if url.username else None
1007 | password = unquote(url.password) if url.password else None
1008 | path = unquote(url.path) if url.path else None
1009 | hostname = unquote(url.hostname) if url.hostname else None
1010 | else:
1011 | username = url.username or None
1012 | password = url.password or None
1013 | path = url.path
1014 | hostname = url.hostname
1015 |
1016 | # We only support redis://, rediss:// and unix:// schemes.
1017 | if url.scheme == 'unix':
1018 | url_options.update({
1019 | 'username': username,
1020 | 'password': password,
1021 | 'path': path,
1022 | 'connection_class': UnixDomainSocketConnection,
1023 | })
1024 |
1025 | elif url.scheme in ('redis', 'rediss'):
1026 | url_options.update({
1027 | 'host': hostname,
1028 | 'port': int(url.port or 6379),
1029 | 'username': username,
1030 | 'password': password,
1031 | })
1032 |
1033 | # If there's a path argument, use it as the db argument if a
1034 | # querystring value wasn't specified
1035 | if 'db' not in url_options and path:
1036 | try:
1037 | url_options['db'] = int(path.replace('/', ''))
1038 | except (AttributeError, ValueError):
1039 | pass
1040 |
1041 | if url.scheme == 'rediss':
1042 | url_options['connection_class'] = SSLConnection
1043 | else:
1044 | valid_schemes = ', '.join(('redis://', 'rediss://', 'unix://'))
1045 |             raise ValueError('Redis URL must specify one of the following '
1046 |                              'schemes (%s)' % valid_schemes)
1047 |
1048 | # last shot at the db value
1049 | url_options['db'] = int(url_options.get('db', db or 0))
1050 |
1051 | # update the arguments from the URL values
1052 | kwargs.update(url_options)
1053 |
1054 |         # backwards compatibility
1055 | if 'charset' in kwargs:
1056 | warnings.warn(DeprecationWarning(
1057 | '"charset" is deprecated. Use "encoding" instead'))
1058 | kwargs['encoding'] = kwargs.pop('charset')
1059 | if 'errors' in kwargs:
1060 | warnings.warn(DeprecationWarning(
1061 | '"errors" is deprecated. Use "encoding_errors" instead'))
1062 | kwargs['encoding_errors'] = kwargs.pop('errors')
1063 |
1064 | return cls(**kwargs)
1065 |
1066 | def __init__(self, connection_class=Connection, max_connections=None,
1067 | **connection_kwargs):
1068 | """
1069 | Create a connection pool. If max_connections is set, then this
1070 | object raises redis.ConnectionError when the pool's limit is reached.
1071 |
1072 | By default, TCP connections are created unless connection_class is
1073 | specified. Use redis.UnixDomainSocketConnection for unix sockets.
1074 |
1075 | Any additional keyword arguments are passed to the constructor of
1076 | connection_class.
1077 | """
1078 | max_connections = max_connections or 2 ** 31
1079 | if not isinstance(max_connections, (int, long)) or max_connections < 0:
1080 | raise ValueError('"max_connections" must be a positive integer')
1081 |
1082 | self.connection_class = connection_class
1083 | self.connection_kwargs = connection_kwargs
1084 | self.max_connections = max_connections
1085 |
1086 | # a lock to protect the critical section in _checkpid().
1087 | # this lock is acquired when the process id changes, such as
1088 | # after a fork. during this time, multiple threads in the child
1089 | # process could attempt to acquire this lock. the first thread
1090 | # to acquire the lock will reset the data structures and lock
1091 | # object of this pool. subsequent threads acquiring this lock
1092 | # will notice the first thread already did the work and simply
1093 | # release the lock.
1094 | self._fork_lock = threading.Lock()
1095 | self.reset()
1096 |
1097 | def __repr__(self):
1098 | return "%s<%s>" % (
1099 | type(self).__name__,
1100 | repr(self.connection_class(**self.connection_kwargs)),
1101 | )
1102 |
1103 | def reset(self):
1104 | self._lock = threading.RLock()
1105 | self._created_connections = 0
1106 | self._available_connections = []
1107 | self._in_use_connections = set()
1108 |
1109 | # this must be the last operation in this method. while reset() is
1110 | # called when holding _fork_lock, other threads in this process
1111 | # can call _checkpid() which compares self.pid and os.getpid() without
1112 | # holding any lock (for performance reasons). keeping this assignment
1113 | # as the last operation ensures that those other threads will also
1114 | # notice a pid difference and block waiting for the first thread to
1115 | # release _fork_lock. when each of these threads eventually acquire
1116 | # _fork_lock, they will notice that another thread already called
1117 | # reset() and they will immediately release _fork_lock and continue on.
1118 | self.pid = os.getpid()
1119 |
1120 | def _checkpid(self):
1121 | # _checkpid() attempts to keep ConnectionPool fork-safe on modern
1122 | # systems. this is called by all ConnectionPool methods that
1123 | # manipulate the pool's state such as get_connection() and release().
1124 | #
1125 | # _checkpid() determines whether the process has forked by comparing
1126 | # the current process id to the process id saved on the ConnectionPool
1127 | # instance. if these values are the same, _checkpid() simply returns.
1128 | #
1129 | # when the process ids differ, _checkpid() assumes that the process
1130 | # has forked and that we're now running in the child process. the child
1131 | # process cannot use the parent's file descriptors (e.g., sockets).
1132 | # therefore, when _checkpid() sees the process id change, it calls
1133 | # reset() in order to reinitialize the child's ConnectionPool. this
1134 | # will cause the child to make all new connection objects.
1135 | #
1136 | # _checkpid() is protected by self._fork_lock to ensure that multiple
1137 | # threads in the child process do not call reset() multiple times.
1138 | #
1139 | # there is an extremely small chance this could fail in the following
1140 | # scenario:
1141 | # 1. process A calls _checkpid() for the first time and acquires
1142 | # self._fork_lock.
1143 | # 2. while holding self._fork_lock, process A forks (the fork()
1144 | # could happen in a different thread owned by process A)
1145 | # 3. process B (the forked child process) inherits the
1146 | # ConnectionPool's state from the parent. that state includes
1147 | # a locked _fork_lock. process B will not be notified when
1148 | # process A releases the _fork_lock and will thus never be
1149 | # able to acquire the _fork_lock.
1150 | #
1151 | # to mitigate this possible deadlock, _checkpid() will only wait 5
1152 | # seconds to acquire _fork_lock. if _fork_lock cannot be acquired in
1153 | # that time it is assumed that the child is deadlocked and a
1154 | # redis.ChildDeadlockedError error is raised.
1155 | if self.pid != os.getpid():
1156 | # python 2.7 doesn't support a timeout option to lock.acquire()
1157 | # we have to mimic lock timeouts ourselves.
1158 | timeout_at = time() + 5
1159 | acquired = False
1160 | while time() < timeout_at:
1161 | acquired = self._fork_lock.acquire(False)
1162 | if acquired:
1163 | break
1164 | if not acquired:
1165 | raise ChildDeadlockedError
1166 | # reset() the instance for the new process if another thread
1167 | # hasn't already done so
1168 | try:
1169 | if self.pid != os.getpid():
1170 | self.reset()
1171 | finally:
1172 | self._fork_lock.release()
1173 |
1174 | def get_connection(self, command_name, *keys, **options):
1175 | "Get a connection from the pool"
1176 | self._checkpid()
1177 | with self._lock:
1178 | try:
1179 | connection = self._available_connections.pop()
1180 | except IndexError:
1181 | connection = self.make_connection()
1182 | self._in_use_connections.add(connection)
1183 | try:
1184 | # ensure this connection is connected to Redis
1185 | connection.connect()
1186 | # connections that the pool provides should be ready to send
1187 | # a command. if not, the connection was either returned to the
1188 | # pool before all data has been read or the socket has been
1189 | # closed. either way, reconnect and verify everything is good.
1190 | try:
1191 | if connection.can_read():
1192 | raise ConnectionError('Connection has data')
1193 | except ConnectionError:
1194 | connection.disconnect()
1195 | connection.connect()
1196 | if connection.can_read():
1197 | raise ConnectionError('Connection not ready')
1198 | except: # noqa: E722
1199 | # release the connection back to the pool so that we don't
1200 | # leak it
1201 | self.release(connection)
1202 | raise
1203 |
1204 | return connection
1205 |
1206 | def get_encoder(self):
1207 | "Return an encoder based on encoding settings"
1208 | kwargs = self.connection_kwargs
1209 | return Encoder(
1210 | encoding=kwargs.get('encoding', 'utf-8'),
1211 | encoding_errors=kwargs.get('encoding_errors', 'strict'),
1212 | decode_responses=kwargs.get('decode_responses', False)
1213 | )
1214 |
1215 | def make_connection(self):
1216 | "Create a new connection"
1217 | if self._created_connections >= self.max_connections:
1218 | raise ConnectionError("Too many connections")
1219 | self._created_connections += 1
1220 | return self.connection_class(**self.connection_kwargs)
1221 |
1222 | def release(self, connection):
1223 | "Releases the connection back to the pool"
1224 | self._checkpid()
1225 | with self._lock:
1226 | if connection.pid != self.pid:
1227 | return
1228 | self._in_use_connections.remove(connection)
1229 | self._available_connections.append(connection)
1230 |
1231 | def disconnect(self):
1232 | "Disconnects all connections in the pool"
1233 | self._checkpid()
1234 | with self._lock:
1235 | all_conns = chain(self._available_connections,
1236 | self._in_use_connections)
1237 | for connection in all_conns:
1238 | connection.disconnect()
1239 |
1240 |
1241 | class BlockingConnectionPool(ConnectionPool):
1242 | """
1243 | Thread-safe blocking connection pool::
1244 |
1245 | >>> from redis.client import Redis
1246 | >>> client = Redis(connection_pool=BlockingConnectionPool())
1247 |
1248 |     It performs the same function as the default
1249 |     :py:class:`~redis.connection.ConnectionPool` implementation, in that
1250 |     it maintains a pool of reusable connections that can be shared by
1251 |     multiple redis clients (safely across threads if required).
1252 |
1253 |     The difference is that, in the event that a client tries to get a
1254 |     connection from the pool when all of the connections are in use, rather
1255 |     than raising a :py:class:`~redis.exceptions.ConnectionError` (as the
1256 |     default :py:class:`~redis.connection.ConnectionPool` implementation
1257 |     does), it makes the client wait ("blocks") for a specified number of
1258 |     seconds until a connection becomes available.
1259 |
1260 | Use ``max_connections`` to increase / decrease the pool size::
1261 |
1262 | >>> pool = BlockingConnectionPool(max_connections=10)
1263 |
1264 |     Use ``timeout`` to set how many seconds to wait for a connection to
1265 |     become available, or ``None`` to block forever:
1266 |
1267 | # Block forever.
1268 | >>> pool = BlockingConnectionPool(timeout=None)
1269 |
1270 | # Raise a ``ConnectionError`` after five seconds if a connection is
1271 | # not available.
1272 | >>> pool = BlockingConnectionPool(timeout=5)
1273 | """
1274 | def __init__(self, max_connections=50, timeout=20,
1275 | connection_class=Connection, queue_class=LifoQueue,
1276 | **connection_kwargs):
1277 |
1278 | self.queue_class = queue_class
1279 | self.timeout = timeout
1280 | super(BlockingConnectionPool, self).__init__(
1281 | connection_class=connection_class,
1282 | max_connections=max_connections,
1283 | **connection_kwargs)
1284 |
1285 | def reset(self):
1286 | # Create and fill up a thread safe queue with ``None`` values.
1287 | self.pool = self.queue_class(self.max_connections)
1288 | while True:
1289 | try:
1290 | self.pool.put_nowait(None)
1291 | except Full:
1292 | break
1293 |
1294 | # Keep a list of actual connection instances so that we can
1295 | # disconnect them later.
1296 | self._connections = []
1297 |
1298 | # this must be the last operation in this method. while reset() is
1299 | # called when holding _fork_lock, other threads in this process
1300 | # can call _checkpid() which compares self.pid and os.getpid() without
1301 | # holding any lock (for performance reasons). keeping this assignment
1302 | # as the last operation ensures that those other threads will also
1303 | # notice a pid difference and block waiting for the first thread to
1304 | # release _fork_lock. when each of these threads eventually acquire
1305 | # _fork_lock, they will notice that another thread already called
1306 | # reset() and they will immediately release _fork_lock and continue on.
1307 | self.pid = os.getpid()
1308 |
1309 | def make_connection(self):
1310 | "Make a fresh connection."
1311 | connection = self.connection_class(**self.connection_kwargs)
1312 | self._connections.append(connection)
1313 | return connection
1314 |
1315 | def get_connection(self, command_name, *keys, **options):
1316 | """
1317 | Get a connection, blocking for ``self.timeout`` until a connection
1318 | is available from the pool.
1319 |
1320 | If the connection returned is ``None`` then creates a new connection.
1321 | Because we use a last-in first-out queue, the existing connections
1322 | (having been returned to the pool after the initial ``None`` values
1323 | were added) will be returned before ``None`` values. This means we only
1324 | create new connections when we need to, i.e.: the actual number of
1325 | connections will only increase in response to demand.
1326 | """
1327 | # Make sure we haven't changed process.
1328 | self._checkpid()
1329 |
1330 | # Try and get a connection from the pool. If one isn't available within
1331 | # self.timeout then raise a ``ConnectionError``.
1332 | connection = None
1333 | try:
1334 | connection = self.pool.get(block=True, timeout=self.timeout)
1335 | except Empty:
1336 |             # Note that this is not caught by the redis client; it must be
1337 |             # handled by application code (or avoided with timeout=None).
1338 | raise ConnectionError("No connection available.")
1339 |
1340 | # If the ``connection`` is actually ``None`` then that's a cue to make
1341 | # a new connection to add to the pool.
1342 | if connection is None:
1343 | connection = self.make_connection()
1344 |
1345 | try:
1346 | # ensure this connection is connected to Redis
1347 | connection.connect()
1348 | # connections that the pool provides should be ready to send
1349 | # a command. if not, the connection was either returned to the
1350 | # pool before all data has been read or the socket has been
1351 | # closed. either way, reconnect and verify everything is good.
1352 | try:
1353 | if connection.can_read():
1354 | raise ConnectionError('Connection has data')
1355 | except ConnectionError:
1356 | connection.disconnect()
1357 | connection.connect()
1358 | if connection.can_read():
1359 | raise ConnectionError('Connection not ready')
1360 | except: # noqa: E722
1361 | # release the connection back to the pool so that we don't leak it
1362 | self.release(connection)
1363 | raise
1364 |
1365 | return connection
1366 |
1367 | def release(self, connection):
1368 | "Releases the connection back to the pool."
1369 | # Make sure we haven't changed process.
1370 | self._checkpid()
1371 | if connection.pid != self.pid:
1372 | return
1373 |
1374 | # Put the connection back into the pool.
1375 | try:
1376 | self.pool.put_nowait(connection)
1377 | except Full:
1378 | # perhaps the pool has been reset() after a fork? regardless,
1379 | # we don't want this connection
1380 | pass
1381 |
1382 | def disconnect(self):
1383 | "Disconnects all connections in the pool."
1384 | self._checkpid()
1385 | for connection in self._connections:
1386 | connection.disconnect()
1387 |
--------------------------------------------------------------------------------
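
A minimal usage sketch of the connection machinery above (not part of the
vendored file; the URL and timeout are illustrative, and it assumes the
vendored package is importable as `redis` with a server on localhost:6379):

    from redis.connection import ConnectionPool

    pool = ConnectionPool.from_url('redis://localhost:6379/0',
                                   socket_timeout=2.0)
    conn = pool.get_connection('PING')
    try:
        conn.send_command('PING')      # packed by pack_command() and written out
        print(conn.read_response())    # b'PONG'
    finally:
        pool.release(conn)             # return the connection to the pool
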
/lamed/vendor/redis/exceptions.py:
--------------------------------------------------------------------------------
1 | "Core exceptions raised by the Redis client"
2 |
3 |
4 | class RedisError(Exception):
5 | pass
6 |
7 |
8 | class ConnectionError(RedisError):
9 | pass
10 |
11 |
12 | class TimeoutError(RedisError):
13 | pass
14 |
15 |
16 | class AuthenticationError(ConnectionError):
17 | pass
18 |
19 |
20 | class BusyLoadingError(ConnectionError):
21 | pass
22 |
23 |
24 | class InvalidResponse(RedisError):
25 | pass
26 |
27 |
28 | class ResponseError(RedisError):
29 | pass
30 |
31 |
32 | class DataError(RedisError):
33 | pass
34 |
35 |
36 | class PubSubError(RedisError):
37 | pass
38 |
39 |
40 | class WatchError(RedisError):
41 | pass
42 |
43 |
44 | class NoScriptError(ResponseError):
45 | pass
46 |
47 |
48 | class ExecAbortError(ResponseError):
49 | pass
50 |
51 |
52 | class ReadOnlyError(ResponseError):
53 | pass
54 |
55 |
56 | class NoPermissionError(ResponseError):
57 | pass
58 |
59 |
60 | class LockError(RedisError, ValueError):
61 | "Errors acquiring or releasing a lock"
62 |     # NOTE: For backwards compatibility, this class derives from ValueError.
63 | # This was originally chosen to behave like threading.Lock.
64 | pass
65 |
66 |
67 | class LockNotOwnedError(LockError):
68 | "Error trying to extend or release a lock that is (no longer) owned"
69 | pass
70 |
71 |
72 | class ChildDeadlockedError(Exception):
73 | "Error indicating that a child process is deadlocked after a fork()"
74 | pass
75 |
76 |
77 | class AuthenticationWrongNumberOfArgsError(ResponseError):
78 | """
79 | An error to indicate that the wrong number of args
80 | were sent to the AUTH command
81 | """
82 | pass
83 |
--------------------------------------------------------------------------------
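
Because every concrete error above derives from RedisError, callers can pick
their granularity when handling failures. A short sketch (the client `r` and
the key name are illustrative):

    import redis
    from redis.exceptions import ConnectionError, ResponseError, RedisError

    r = redis.Redis()
    try:
        r.incr('page:hits')
    except ConnectionError:
        pass    # network-level failure; retry on a fresh connection
    except ResponseError:
        raise   # the server understood the command but rejected it
    except RedisError:
        raise   # catch-all for anything else the client raises
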
/lamed/vendor/redis/lock.py:
--------------------------------------------------------------------------------
1 | import threading
2 | import time as mod_time
3 | import uuid
4 | from redis.exceptions import LockError, LockNotOwnedError
5 | from redis.utils import dummy
6 |
7 |
8 | class Lock(object):
9 | """
10 | A shared, distributed Lock. Using Redis for locking allows the Lock
11 | to be shared across processes and/or machines.
12 |
13 | It's left to the user to resolve deadlock issues and make sure
14 | multiple clients play nicely together.
15 | """
16 |
17 | lua_release = None
18 | lua_extend = None
19 | lua_reacquire = None
20 |
21 | # KEYS[1] - lock name
22 |     # ARGV[1] - token
23 | # return 1 if the lock was released, otherwise 0
24 | LUA_RELEASE_SCRIPT = """
25 | local token = redis.call('get', KEYS[1])
26 | if not token or token ~= ARGV[1] then
27 | return 0
28 | end
29 | redis.call('del', KEYS[1])
30 | return 1
31 | """
32 |
33 | # KEYS[1] - lock name
34 |     # ARGV[1] - token
35 |     # ARGV[2] - additional milliseconds
36 |     # return 1 if the lock's time was extended, otherwise 0
37 | LUA_EXTEND_SCRIPT = """
38 | local token = redis.call('get', KEYS[1])
39 | if not token or token ~= ARGV[1] then
40 | return 0
41 | end
42 | local expiration = redis.call('pttl', KEYS[1])
43 | if not expiration then
44 | expiration = 0
45 | end
46 | if expiration < 0 then
47 | return 0
48 | end
49 | redis.call('pexpire', KEYS[1], expiration + ARGV[2])
50 | return 1
51 | """
52 |
53 | # KEYS[1] - lock name
54 |     # ARGV[1] - token
55 |     # ARGV[2] - milliseconds
56 |     # return 1 if the lock's time was reacquired, otherwise 0
57 | LUA_REACQUIRE_SCRIPT = """
58 | local token = redis.call('get', KEYS[1])
59 | if not token or token ~= ARGV[1] then
60 | return 0
61 | end
62 | redis.call('pexpire', KEYS[1], ARGV[2])
63 | return 1
64 | """
65 |
66 | def __init__(self, redis, name, timeout=None, sleep=0.1,
67 | blocking=True, blocking_timeout=None, thread_local=True):
68 | """
69 | Create a new Lock instance named ``name`` using the Redis client
70 | supplied by ``redis``.
71 |
72 | ``timeout`` indicates a maximum life for the lock.
73 | By default, it will remain locked until release() is called.
74 | ``timeout`` can be specified as a float or integer, both representing
75 | the number of seconds to wait.
76 |
77 | ``sleep`` indicates the amount of time to sleep per loop iteration
78 | when the lock is in blocking mode and another client is currently
79 | holding the lock.
80 |
81 | ``blocking`` indicates whether calling ``acquire`` should block until
82 | the lock has been acquired or to fail immediately, causing ``acquire``
83 | to return False and the lock not being acquired. Defaults to True.
84 | Note this value can be overridden by passing a ``blocking``
85 | argument to ``acquire``.
86 |
87 | ``blocking_timeout`` indicates the maximum amount of time in seconds to
88 | spend trying to acquire the lock. A value of ``None`` indicates
89 | continue trying forever. ``blocking_timeout`` can be specified as a
90 | float or integer, both representing the number of seconds to wait.
91 |
92 | ``thread_local`` indicates whether the lock token is placed in
93 | thread-local storage. By default, the token is placed in thread local
94 | storage so that a thread only sees its token, not a token set by
95 | another thread. Consider the following timeline:
96 |
97 | time: 0, thread-1 acquires `my-lock`, with a timeout of 5 seconds.
98 | thread-1 sets the token to "abc"
99 | time: 1, thread-2 blocks trying to acquire `my-lock` using the
100 | Lock instance.
101 | time: 5, thread-1 has not yet completed. redis expires the lock
102 | key.
103 | time: 5, thread-2 acquired `my-lock` now that it's available.
104 | thread-2 sets the token to "xyz"
105 | time: 6, thread-1 finishes its work and calls release(). if the
106 | token is *not* stored in thread local storage, then
107 | thread-1 would see the token value as "xyz" and would be
108 |                     able to successfully release thread-2's lock.
109 |
110 | In some use cases it's necessary to disable thread local storage. For
111 |         example, consider code where one thread acquires a lock and passes
112 |         that lock instance to a worker thread to release later. If thread
113 | local storage isn't disabled in this case, the worker thread won't see
114 | the token set by the thread that acquired the lock. Our assumption
115 | is that these cases aren't common and as such default to using
116 | thread local storage.
117 | """
118 | self.redis = redis
119 | self.name = name
120 | self.timeout = timeout
121 | self.sleep = sleep
122 | self.blocking = blocking
123 | self.blocking_timeout = blocking_timeout
124 | self.thread_local = bool(thread_local)
125 | self.local = threading.local() if self.thread_local else dummy()
126 | self.local.token = None
127 | if self.timeout and self.sleep > self.timeout:
128 | raise LockError("'sleep' must be less than 'timeout'")
129 | self.register_scripts()
130 |
131 | def register_scripts(self):
132 | cls = self.__class__
133 | client = self.redis
134 | if cls.lua_release is None:
135 | cls.lua_release = client.register_script(cls.LUA_RELEASE_SCRIPT)
136 | if cls.lua_extend is None:
137 | cls.lua_extend = client.register_script(cls.LUA_EXTEND_SCRIPT)
138 | if cls.lua_reacquire is None:
139 | cls.lua_reacquire = \
140 | client.register_script(cls.LUA_REACQUIRE_SCRIPT)
141 |
142 | def __enter__(self):
143 | # force blocking, as otherwise the user would have to check whether
144 | # the lock was actually acquired or not.
145 | if self.acquire(blocking=True):
146 | return self
147 | raise LockError("Unable to acquire lock within the time specified")
148 |
149 | def __exit__(self, exc_type, exc_value, traceback):
150 | self.release()
151 |
152 | def acquire(self, blocking=None, blocking_timeout=None, token=None):
153 | """
154 | Use Redis to hold a shared, distributed lock named ``name``.
155 | Returns True once the lock is acquired.
156 |
157 | If ``blocking`` is False, always return immediately. If the lock
158 | was acquired, return True, otherwise return False.
159 |
160 | ``blocking_timeout`` specifies the maximum number of seconds to
161 | wait trying to acquire the lock.
162 |
163 | ``token`` specifies the token value to be used. If provided, token
164 | must be a bytes object or a string that can be encoded to a bytes
165 | object with the default encoding. If a token isn't specified, a UUID
166 | will be generated.
167 | """
168 | sleep = self.sleep
169 | if token is None:
170 | token = uuid.uuid1().hex.encode()
171 | else:
172 | encoder = self.redis.connection_pool.get_encoder()
173 | token = encoder.encode(token)
174 | if blocking is None:
175 | blocking = self.blocking
176 | if blocking_timeout is None:
177 | blocking_timeout = self.blocking_timeout
178 | stop_trying_at = None
179 | if blocking_timeout is not None:
180 | stop_trying_at = mod_time.time() + blocking_timeout
181 | while True:
182 | if self.do_acquire(token):
183 | self.local.token = token
184 | return True
185 | if not blocking:
186 | return False
187 | if stop_trying_at is not None and mod_time.time() > stop_trying_at:
188 | return False
189 | mod_time.sleep(sleep)
190 |
191 | def do_acquire(self, token):
192 | if self.timeout:
193 | # convert to milliseconds
194 | timeout = int(self.timeout * 1000)
195 | else:
196 | timeout = None
197 | if self.redis.set(self.name, token, nx=True, px=timeout):
198 | return True
199 | return False
200 |
201 | def locked(self):
202 | """
203 | Returns True if this key is locked by any process, otherwise False.
204 | """
205 | return self.redis.get(self.name) is not None
206 |
207 | def owned(self):
208 | """
209 | Returns True if this key is locked by this lock, otherwise False.
210 | """
211 | stored_token = self.redis.get(self.name)
212 | # need to always compare bytes to bytes
213 | # TODO: this can be simplified when the context manager is finished
214 | if stored_token and not isinstance(stored_token, bytes):
215 | encoder = self.redis.connection_pool.get_encoder()
216 | stored_token = encoder.encode(stored_token)
217 | return self.local.token is not None and \
218 | stored_token == self.local.token
219 |
220 | def release(self):
221 | "Releases the already acquired lock"
222 | expected_token = self.local.token
223 | if expected_token is None:
224 | raise LockError("Cannot release an unlocked lock")
225 | self.local.token = None
226 | self.do_release(expected_token)
227 |
228 | def do_release(self, expected_token):
229 | if not bool(self.lua_release(keys=[self.name],
230 | args=[expected_token],
231 | client=self.redis)):
232 | raise LockNotOwnedError("Cannot release a lock"
233 | " that's no longer owned")
234 |
235 | def extend(self, additional_time):
236 | """
237 | Adds more time to an already acquired lock.
238 |
239 | ``additional_time`` can be specified as an integer or a float, both
240 | representing the number of seconds to add.
241 | """
242 | if self.local.token is None:
243 | raise LockError("Cannot extend an unlocked lock")
244 | if self.timeout is None:
245 | raise LockError("Cannot extend a lock with no timeout")
246 | return self.do_extend(additional_time)
247 |
248 | def do_extend(self, additional_time):
249 | additional_time = int(additional_time * 1000)
250 | if not bool(self.lua_extend(keys=[self.name],
251 | args=[self.local.token, additional_time],
252 | client=self.redis)):
253 | raise LockNotOwnedError("Cannot extend a lock that's"
254 | " no longer owned")
255 | return True
256 |
257 | def reacquire(self):
258 | """
259 |         Resets the TTL of an already acquired lock back to the timeout value.
260 | """
261 | if self.local.token is None:
262 | raise LockError("Cannot reacquire an unlocked lock")
263 | if self.timeout is None:
264 | raise LockError("Cannot reacquire a lock with no timeout")
265 | return self.do_reacquire()
266 |
267 | def do_reacquire(self):
268 | timeout = int(self.timeout * 1000)
269 | if not bool(self.lua_reacquire(keys=[self.name],
270 | args=[self.local.token, timeout],
271 | client=self.redis)):
272 | raise LockNotOwnedError("Cannot reacquire a lock that's"
273 | " no longer owned")
274 | return True
275 |
--------------------------------------------------------------------------------
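
For orientation, a sketch of how the Lock above is usually reached through the
lock() factory on redis.Redis (the client lives in client.py; the server
address, key name and timeouts here are illustrative):

    import redis

    r = redis.Redis()
    # timeout=10: the lock auto-expires after 10s even if never released;
    # blocking_timeout=2: acquire() gives up after 2s of waiting
    lock = r.lock('jobs:refresh', timeout=10, blocking_timeout=2)
    if lock.acquire():
        try:
            lock.extend(5)    # push the TTL out by 5 more seconds
        finally:
            lock.release()    # raises LockNotOwnedError if the TTL expired

    # equivalently, as a context manager (raises LockError if not acquired):
    with r.lock('jobs:refresh', timeout=10):
        pass                  # critical section
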
/lamed/vendor/redis/sentinel.py:
--------------------------------------------------------------------------------
1 | import random
2 | import weakref
3 |
4 | from redis.client import Redis
5 | from redis.connection import ConnectionPool, Connection
6 | from redis.exceptions import (ConnectionError, ResponseError, ReadOnlyError,
7 | TimeoutError)
8 | from redis._compat import iteritems, nativestr, xrange
9 |
10 |
11 | class MasterNotFoundError(ConnectionError):
12 | pass
13 |
14 |
15 | class SlaveNotFoundError(ConnectionError):
16 | pass
17 |
18 |
19 | class SentinelManagedConnection(Connection):
20 | def __init__(self, **kwargs):
21 | self.connection_pool = kwargs.pop('connection_pool')
22 | super(SentinelManagedConnection, self).__init__(**kwargs)
23 |
24 | def __repr__(self):
25 | pool = self.connection_pool
26 |         s = '%s<service=%s%%s>' % (type(self).__name__, pool.service_name)
27 | if self.host:
28 | host_info = ',host=%s,port=%s' % (self.host, self.port)
29 | s = s % host_info
30 | return s
31 |
32 | def connect_to(self, address):
33 | self.host, self.port = address
34 | super(SentinelManagedConnection, self).connect()
35 | if self.connection_pool.check_connection:
36 | self.send_command('PING')
37 | if nativestr(self.read_response()) != 'PONG':
38 | raise ConnectionError('PING failed')
39 |
40 | def connect(self):
41 | if self._sock:
42 | return # already connected
43 | if self.connection_pool.is_master:
44 | self.connect_to(self.connection_pool.get_master_address())
45 | else:
46 | for slave in self.connection_pool.rotate_slaves():
47 | try:
48 | return self.connect_to(slave)
49 | except ConnectionError:
50 | continue
51 |             raise SlaveNotFoundError  # should never be reached
52 |
53 | def read_response(self):
54 | try:
55 | return super(SentinelManagedConnection, self).read_response()
56 | except ReadOnlyError:
57 | if self.connection_pool.is_master:
58 |                 # When talking to a master, a ReadOnlyError likely
59 | # indicates that the previous master that we're still connected
60 | # to has been demoted to a slave and there's a new master.
61 | # calling disconnect will force the connection to re-query
62 | # sentinel during the next connect() attempt.
63 | self.disconnect()
64 | raise ConnectionError('The previous master is now a slave')
65 | raise
66 |
67 |
68 | class SentinelConnectionPool(ConnectionPool):
69 | """
70 | Sentinel backed connection pool.
71 |
72 | If ``check_connection`` flag is set to True, SentinelManagedConnection
73 | sends a PING command right after establishing the connection.
74 | """
75 |
76 | def __init__(self, service_name, sentinel_manager, **kwargs):
77 | kwargs['connection_class'] = kwargs.get(
78 | 'connection_class', SentinelManagedConnection)
79 | self.is_master = kwargs.pop('is_master', True)
80 | self.check_connection = kwargs.pop('check_connection', False)
81 | super(SentinelConnectionPool, self).__init__(**kwargs)
82 | self.connection_kwargs['connection_pool'] = weakref.proxy(self)
83 | self.service_name = service_name
84 | self.sentinel_manager = sentinel_manager
85 |
86 |     def __repr__(self):
87 |         return "%s<service=%s(%s)>" % (
88 |             type(self).__name__,
89 |             self.service_name,
90 |             self.is_master and 'master' or 'slave',
91 |         )
92 |
93 |     def reset(self):
94 |         super(SentinelConnectionPool, self).reset()
95 |         self.master_address = None
96 |         self.slave_rr_counter = None
97 |
98 |     def get_master_address(self):
99 |         master_address = self.sentinel_manager.discover_master(
100 |             self.service_name)
101 |         if self.is_master:
102 |             if self.master_address is None:
103 |                 self.master_address = master_address
104 |             elif master_address != self.master_address:
105 |                 # Master address changed, disconnect all clients in this pool
106 |                 self.disconnect()
107 |         return master_address
108 |
109 |     def rotate_slaves(self):
110 |         "Round-robin slave balancer"
111 |         slaves = self.sentinel_manager.discover_slaves(self.service_name)
112 |         if slaves:
113 |             if self.slave_rr_counter is None:
114 |                 self.slave_rr_counter = random.randint(0, len(slaves) - 1)
115 |             for _ in xrange(len(slaves)):
116 |                 self.slave_rr_counter = (
117 |                     self.slave_rr_counter + 1) % len(slaves)
118 |                 slave = slaves[self.slave_rr_counter]
119 |                 yield slave
120 |         # Fallback to the master connection
121 |         try:
122 |             yield self.get_master_address()
123 |         except MasterNotFoundError:
124 |             pass
125 |         raise SlaveNotFoundError('No slave found for %r' % (self.service_name))
126 |
127 |
128 | class Sentinel(object):
129 |     """
130 |     Redis Sentinel cluster client
131 |
132 |     >>> from redis.sentinel import Sentinel
133 | >>> sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.1)
134 | >>> master = sentinel.master_for('mymaster', socket_timeout=0.1)
135 | >>> master.set('foo', 'bar')
136 | >>> slave = sentinel.slave_for('mymaster', socket_timeout=0.1)
137 | >>> slave.get('foo')
138 | 'bar'
139 |
140 | ``sentinels`` is a list of sentinel nodes. Each node is represented by
141 | a pair (hostname, port).
142 |
143 |     ``min_other_sentinels`` defines a minimum number of peers for a sentinel.
144 | When querying a sentinel, if it doesn't meet this threshold, responses
145 | from that sentinel won't be considered valid.
146 |
147 | ``sentinel_kwargs`` is a dictionary of connection arguments used when
148 | connecting to sentinel instances. Any argument that can be passed to
149 | a normal Redis connection can be specified here. If ``sentinel_kwargs`` is
150 | not specified, any socket_timeout and socket_keepalive options specified
151 | in ``connection_kwargs`` will be used.
152 |
153 | ``connection_kwargs`` are keyword arguments that will be used when
154 | establishing a connection to a Redis server.
155 | """
156 |
157 | def __init__(self, sentinels, min_other_sentinels=0, sentinel_kwargs=None,
158 | **connection_kwargs):
159 | # if sentinel_kwargs isn't defined, use the socket_* options from
160 | # connection_kwargs
161 | if sentinel_kwargs is None:
162 | sentinel_kwargs = {
163 | k: v
164 | for k, v in iteritems(connection_kwargs)
165 | if k.startswith('socket_')
166 | }
167 | self.sentinel_kwargs = sentinel_kwargs
168 |
169 | self.sentinels = [Redis(hostname, port, **self.sentinel_kwargs)
170 | for hostname, port in sentinels]
171 | self.min_other_sentinels = min_other_sentinels
172 | self.connection_kwargs = connection_kwargs
173 |
174 | def __repr__(self):
175 | sentinel_addresses = []
176 | for sentinel in self.sentinels:
177 | sentinel_addresses.append('%s:%s' % (
178 | sentinel.connection_pool.connection_kwargs['host'],
179 | sentinel.connection_pool.connection_kwargs['port'],
180 | ))
181 |         return '%s<sentinels=[%s]>' % (
182 | type(self).__name__,
183 | ','.join(sentinel_addresses))
184 |
185 | def check_master_state(self, state, service_name):
186 | if not state['is_master'] or state['is_sdown'] or state['is_odown']:
187 | return False
188 |         # Check that this sentinel sees enough other sentinels
189 | if state['num-other-sentinels'] < self.min_other_sentinels:
190 | return False
191 | return True
192 |
193 | def discover_master(self, service_name):
194 | """
195 | Asks sentinel servers for the Redis master's address corresponding
196 | to the service labeled ``service_name``.
197 |
198 | Returns a pair (address, port) or raises MasterNotFoundError if no
199 | master is found.
200 | """
201 | for sentinel_no, sentinel in enumerate(self.sentinels):
202 | try:
203 | masters = sentinel.sentinel_masters()
204 | except (ConnectionError, TimeoutError):
205 | continue
206 | state = masters.get(service_name)
207 | if state and self.check_master_state(state, service_name):
208 | # Put this sentinel at the top of the list
209 | self.sentinels[0], self.sentinels[sentinel_no] = (
210 | sentinel, self.sentinels[0])
211 | return state['ip'], state['port']
212 | raise MasterNotFoundError("No master found for %r" % (service_name,))
213 |
214 | def filter_slaves(self, slaves):
215 | "Remove slaves that are in an ODOWN or SDOWN state"
216 | slaves_alive = []
217 | for slave in slaves:
218 | if slave['is_odown'] or slave['is_sdown']:
219 | continue
220 | slaves_alive.append((slave['ip'], slave['port']))
221 | return slaves_alive
222 |
223 | def discover_slaves(self, service_name):
224 | "Returns a list of alive slaves for service ``service_name``"
225 | for sentinel in self.sentinels:
226 | try:
227 | slaves = sentinel.sentinel_slaves(service_name)
228 | except (ConnectionError, ResponseError, TimeoutError):
229 | continue
230 | slaves = self.filter_slaves(slaves)
231 | if slaves:
232 | return slaves
233 | return []
234 |
235 | def master_for(self, service_name, redis_class=Redis,
236 | connection_pool_class=SentinelConnectionPool, **kwargs):
237 | """
238 | Returns a redis client instance for the ``service_name`` master.
239 |
240 |         A SentinelConnectionPool class is used to retrieve the master's
241 | address before establishing a new connection.
242 |
243 | NOTE: If the master's address has changed, any cached connections to
244 | the old master are closed.
245 |
246 | By default clients will be a redis.Redis instance. Specify a
247 | different class to the ``redis_class`` argument if you desire
248 | something different.
249 |
250 | The ``connection_pool_class`` specifies the connection pool to use.
251 | The SentinelConnectionPool will be used by default.
252 |
253 | All other keyword arguments are merged with any connection_kwargs
254 | passed to this class and passed to the connection pool as keyword
255 | arguments to be used to initialize Redis connections.
256 | """
257 | kwargs['is_master'] = True
258 | connection_kwargs = dict(self.connection_kwargs)
259 | connection_kwargs.update(kwargs)
260 | return redis_class(connection_pool=connection_pool_class(
261 | service_name, self, **connection_kwargs))
262 |
263 | def slave_for(self, service_name, redis_class=Redis,
264 | connection_pool_class=SentinelConnectionPool, **kwargs):
265 | """
266 | Returns redis client instance for the ``service_name`` slave(s).
267 |
268 |         A SentinelConnectionPool class is used to retrieve the slave's
269 | address before establishing a new connection.
270 |
271 | By default clients will be a redis.Redis instance. Specify a
272 | different class to the ``redis_class`` argument if you desire
273 | something different.
274 |
275 | The ``connection_pool_class`` specifies the connection pool to use.
276 | The SentinelConnectionPool will be used by default.
277 |
278 | All other keyword arguments are merged with any connection_kwargs
279 | passed to this class and passed to the connection pool as keyword
280 | arguments to be used to initialize Redis connections.
281 | """
282 | kwargs['is_master'] = False
283 | connection_kwargs = dict(self.connection_kwargs)
284 | connection_kwargs.update(kwargs)
285 | return redis_class(connection_pool=connection_pool_class(
286 | service_name, self, **connection_kwargs))
287 |
--------------------------------------------------------------------------------
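
A sketch of the discovery flow the classes above implement (the sentinel
address and the service name 'mymaster' are illustrative):

    from redis.sentinel import Sentinel

    sentinel = Sentinel([('localhost', 26379)], socket_timeout=0.5)
    print(sentinel.discover_master('mymaster'))   # e.g. ('10.0.0.5', 6379)
    print(sentinel.discover_slaves('mymaster'))   # [(ip, port), ...]

    # master_for()/slave_for() wrap the same discovery in a
    # SentinelConnectionPool, so failover is handled transparently:
    master = sentinel.master_for('mymaster', socket_timeout=0.5)
    master.set('foo', 'bar')
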
/lamed/vendor/redis/utils.py:
--------------------------------------------------------------------------------
1 | from contextlib import contextmanager
2 |
3 |
4 | try:
5 | import hiredis
6 | HIREDIS_AVAILABLE = True
7 | except ImportError:
8 | HIREDIS_AVAILABLE = False
9 |
10 |
11 | def from_url(url, db=None, **kwargs):
12 | """
13 | Returns an active Redis client generated from the given database URL.
14 |
15 |     Will attempt to extract the database id from the path component of the URL, if
16 | none is provided.
17 | """
18 | from redis.client import Redis
19 | return Redis.from_url(url, db, **kwargs)
20 |
21 |
22 | @contextmanager
23 | def pipeline(redis_obj):
24 | p = redis_obj.pipeline()
25 | yield p
26 | p.execute()
27 |
28 |
29 | class dummy(object):
30 | """
31 | Instances of this class can be used as an attribute container.
32 | """
33 | pass
34 |
--------------------------------------------------------------------------------
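
A sketch of the pipeline() helper above (client construction and key names are
illustrative); the queued commands are flushed in a single round trip when the
block exits:

    import redis
    from redis.utils import pipeline

    r = redis.Redis()
    with pipeline(r) as p:
        p.incr('hits')
        p.expire('hits', 3600)
    # p.execute() has already run here; both commands went out together
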
/requirements.txt:
--------------------------------------------------------------------------------
1 | #
2 | # This file is autogenerated by pip-compile
3 | # To update, run:
4 | #
5 | # pip-compile --output-file=requirements.txt setup.py
6 | #
7 | awscli==1.19.62 # via lamed (setup.py)
8 | boto3==1.17.62 # via lamed (setup.py)
9 | botocore==1.20.62 # via awscli, boto3, s3transfer
10 | click==7.1.2 # via lamed (setup.py)
11 | colorama==0.4.3 # via awscli
12 | docutils==0.15.2 # via awscli
13 | jmespath==0.10.0 # via boto3, botocore, lamed (setup.py)
14 | pyasn1==0.4.8 # via rsa
15 | python-dateutil==2.8.1 # via botocore
16 | pyyaml==5.4.1 # via awscli
17 | redis==3.5.3 # via lamed (setup.py)
18 | rsa==4.7.2 # via awscli
19 | s3transfer==0.4.2 # via awscli, boto3
20 | six==1.15.0 # via python-dateutil
21 | urllib3==1.26.4 # via botocore
22 |
--------------------------------------------------------------------------------
/requirements_dev.txt:
--------------------------------------------------------------------------------
1 | pip-tools
2 | tox
3 | nose
4 | mock
5 | placebo
6 | check-manifest
7 | readme_renderer
8 |
9 | # md to rst for pypi
10 | pypandoc
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | # E128 continuation line under-indented for visual indent
2 | # E402 module level import not at top of file
3 | [flake8]
4 | ignore = E128,E402
5 | max-line-length = 100
6 | exclude = lamed/vendor/*
7 |
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 |
3 | from setuptools import setup, find_packages
4 | import os
5 |
6 | with open('README.md') as f:
7 | long_description = f.read()
8 |
9 | requires = [
10 | 'awscli>=1.18.36',
11 | 'jmespath>=0.9.5',
12 | 'boto3>=1.12.36',
13 | 'click>=7.1.1',
14 | 'redis>=3.4.1'
15 | ]
16 |
17 | setup(
18 | name='lamed',
19 | version=open(os.path.join('lamed', 'VERSION')).read().strip(),
20 | description='Run your own A/B testing backend on AWS Lambda',
21 | long_description=long_description,
22 | long_description_content_type='text/markdown',
23 | author='Yoav Aner',
24 | author_email='yoav@gingerlime.com',
25 | url='https://github.com/Alephbet/lamed',
26 | packages=find_packages(exclude=['tests*']),
27 | include_package_data=True,
28 | zip_safe=False,
29 | entry_points="""
30 | [console_scripts]
31 | lamed=lamed.cli:cli
32 | """,
33 | install_requires=requires,
34 | classifiers=[
35 | 'Development Status :: 5 - Production/Stable',
36 | 'Intended Audience :: Developers',
37 | 'Intended Audience :: System Administrators',
38 | 'Natural Language :: English',
39 | 'License :: OSI Approved :: MIT License',
40 | 'Programming Language :: Python',
41 | 'Programming Language :: Python :: 2.7',
42 | 'Programming Language :: Python :: 3',
43 | 'Programming Language :: Python :: 3.3',
44 | 'Programming Language :: Python :: 3.4',
45 | 'Programming Language :: Python :: 3.5',
46 | 'Programming Language :: Python :: 3.6',
47 | 'Programming Language :: Python :: 3.7'
48 | ],
49 | )
50 |
--------------------------------------------------------------------------------
/tox.ini:
--------------------------------------------------------------------------------
1 | [tox]
2 | skip_missing_interpreters = True
3 | skipsdist=True
4 | minversion = 1.8
5 | envlist =
6 | py2-pep8,
7 | py3-pep8,
8 | packaging,
9 | readme
10 |
11 | [testenv:packaging]
12 | deps =
13 | check-manifest
14 | commands =
15 | check-manifest
16 |
17 | [testenv:readme]
18 | deps =
19 | pypandoc
20 | readme_renderer
21 | commands =
22 | python setup.py check -m -r -s
23 |
24 | [testenv:py2-pep8]
25 | basepython = python2
26 | deps = flake8
27 | commands = flake8 {toxinidir}/lamed
28 |
29 | [testenv:py3-pep8]
30 | basepython = python3
31 | deps = flake8
32 | commands = flake8 {toxinidir}/lamed
33 |
--------------------------------------------------------------------------------