├── .github ├── ISSUE_TEMPLATE.md └── workflows │ └── tests.yml ├── .gitignore ├── CHANGELOG.txt ├── LICENSE ├── MANIFEST.in ├── README.rst ├── channels_redis ├── __init__.py ├── core.py ├── pubsub.py ├── serializers.py └── utils.py ├── setup.cfg ├── setup.py ├── tests ├── __init__.py ├── test_core.py ├── test_pubsub.py ├── test_pubsub_sentinel.py ├── test_sentinel.py ├── test_serializers.py └── test_utils.py └── tox.ini /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Issues are for **concrete, actionable bugs and feature requests** only - if you're just asking for debugging help or technical support we have to direct you elsewhere. If you just have questions or support requests please use: 2 | 3 | - Stack Overflow 4 | - The Django Users mailing list django-users@googlegroups.com (https://groups.google.com/forum/#!forum/django-users) 5 | 6 | We have to limit this because of limited volunteer time to respond to issues! 7 | 8 | Please also try and include, if you can: 9 | 10 | - Your OS and runtime environment, and browser if applicable 11 | - A `pip freeze` output showing your package versions 12 | - What you expected to happen vs. what actually happened 13 | - How you're running Channels (runserver? daphne/runworker? Nginx/Apache in front?) 14 | - Console logs and full tracebacks of any errors 15 | -------------------------------------------------------------------------------- /.github/workflows/tests.yml: -------------------------------------------------------------------------------- 1 | name: Tests 2 | 3 | on: 4 | push: 5 | branches: 6 | - main 7 | pull_request: 8 | 9 | jobs: 10 | tests: 11 | name: Python ${{ matrix.python-version }} 12 | runs-on: ubuntu-latest 13 | timeout-minutes: 10 14 | strategy: 15 | fail-fast: false 16 | matrix: 17 | python-version: 18 | - "3.8" 19 | - "3.9" 20 | - "3.10" 21 | - "3.11" 22 | - "3.12" 23 | - "3.13" 24 | services: 25 | redis: 26 | image: redis 27 | ports: 28 | - 6379:6379 29 | options: >- 30 | --health-cmd "redis-cli ping" 31 | --health-interval 10s 32 | --health-timeout 5s 33 | --health-retries 5 34 | sentinel: 35 | image: bitnami/redis-sentinel 36 | ports: 37 | - 26379:26379 38 | options: >- 39 | --health-cmd "redis-cli -p 26379 ping" 40 | --health-interval 10s 41 | --health-timeout 5s 42 | --health-retries 5 43 | env: 44 | REDIS_MASTER_HOST: redis 45 | REDIS_MASTER_SET: sentinel 46 | REDIS_SENTINEL_QUORUM: "1" 47 | REDIS_SENTINEL_PASSWORD: channels_redis 48 | 49 | steps: 50 | - uses: actions/checkout@v3 51 | - name: Set up Python ${{ matrix.python-version }} 52 | uses: actions/setup-python@v4 53 | with: 54 | python-version: ${{ matrix.python-version }} 55 | - name: Install dependencies 56 | run: | 57 | python -m pip install --upgrade pip wheel setuptools tox 58 | - name: Run tox targets for ${{ matrix.python-version }} 59 | run: | 60 | ENV_PREFIX=$(tr -C -d "0-9" <<< "${{ matrix.python-version }}") 61 | TOXENV=$(tox --listenvs | grep "^py$ENV_PREFIX" | tr '\n' ',') python -m tox 62 | 63 | lint: 64 | name: Lint 65 | runs-on: ubuntu-latest 66 | steps: 67 | - uses: actions/checkout@v3 68 | - name: Set up Python 69 | uses: actions/setup-python@v4 70 | with: 71 | python-version: "3.11" 72 | - name: Install dependencies 73 | run: | 74 | python -m pip install --upgrade pip tox 75 | - name: Run lint 76 | run: tox -e qa 77 | -------------------------------------------------------------------------------- /.gitignore: 
-------------------------------------------------------------------------------- 1 | *.egg-info 2 | dist/ 3 | build/ 4 | .cache 5 | *.pyc 6 | /.tox 7 | .DS_Store 8 | .pytest_cache 9 | .vscode 10 | .idea 11 | -------------------------------------------------------------------------------- /CHANGELOG.txt: -------------------------------------------------------------------------------- 1 | 4.2.1 (2024-11-15) 2 | ------------------ 3 | 4 | * Added a way to register and use custom serializer formats. 5 | See README.rst. 6 | 7 | 4.2.0 (2024-01-12) 8 | ------------------ 9 | 10 | * Dropped support for end-of-life Python 3.7. 11 | 12 | * Added support for Python 3.11 and 3.12. 13 | 14 | * Upped the minimum version of redis-py to 4.6. 15 | 16 | * Added CI testing against redis-py versions 4.6, 5, and the development branch. 17 | 18 | * Added CI testing against Channels versions 3, 4, and the development branch. 19 | 20 | 4.1.0 (2023-03-28) 21 | ------------------ 22 | 23 | * Adjusted the way Redis connections are handled: 24 | 25 | * Connection handling is now shared between the two, core and pub-sub, layers. 26 | 27 | * Both layers now ensure that connections are closed when an event loop shuts down. 28 | 29 | In particular, redis-py 4.x requires that connections are manually closed. 30 | In 4.0 that wasn't done by the core layer, which led to warnings for people 31 | using `async_to_sync()`, without closing connections when updating from 32 | 3.x. 33 | 34 | * Updated the minimum redis-py version to 4.5.3 because of a security release there. 35 | Note that this is not a security issue in channels-redis: installing an 36 | earlier version will still use the latest redis-py, but by bumping the 37 | dependency we make sure you'll get redis-py too, when you install the update 38 | here. 39 | 40 | 4.0.0 (2022-10-07) 41 | ------------------ 42 | 43 | Version 4.0.0 migrates the underlying Redis library from ``aioredis`` to ``redis-py``. 44 | (``aioredis`` was retired and moved into ``redis-py``, which will host the ongoing development.) 45 | 46 | Version 4.0.0 should be compatible with existing Channels 3 projects, as well as Channels 4 47 | projects. 48 | 49 | * Migrated from ``aioredis`` to ``redis-py``. Specifying hosts as tuples is no longer supported. 50 | If hosts are specified as dicts, only the ``address`` key will be taken into account, i.e. 51 | a ``password`` must be specified inline in the address. 52 | 53 | * Added support for passing kwargs to sentinel connections. 54 | 55 | * Updated dependencies and obsolete code. 56 | 57 | 3.4.1 (2022-07-12) 58 | ------------------ 59 | 60 | * Fixed RuntimeError when checking for stale connections. 61 | 62 | 63 | 3.4.0 (2022-03-10) 64 | ------------------ 65 | 66 | * Dropped support for Python 3.6, which is now end-of-life, and added CI 67 | testing for Python 3.10 (#301). 68 | 69 | * Added serialize and deserialize hooks to RedisPubSubChannelLayer (#281). 70 | 71 | * Fixed iscoroutine check for pubsub proxied methods (#297). 72 | 73 | * Fixed workers support when using the Redis PubSub layer (#298). 74 | 75 | 76 | 3.3.1 (2021-09-30) 77 | ------------------ 78 | 79 | Two bugfixes for the PubSub channel layer: 80 | 81 | * Scoped the channel layer per-event loop, in case multiple loops are in play 82 | (#262). 83 | 84 | * Ensured consistent hashing PubSub was maintained across processes, or process 85 | restarts (#274).
86 | 87 | 88 | 3.3.0 (2021-07-01) 89 | ------------------ 90 | 91 | Two important new features: 92 | 93 | * You can now connect using `Redis Sentinel 94 | <https://redis.io/topics/sentinel/>`_. Thanks to @qeternity. 95 | 96 | * There's a new ``RedisPubSubChannelLayer`` that uses Redis Pub/Sub to 97 | propagate messages, rather than managing channels and groups directly within 98 | the layer. For many use-cases this should be simpler, more robust, and more 99 | performant. 100 | 101 | Note though, the new ``RedisPubSubChannelLayer`` layer does not provide all 102 | the options of the existing layer, including ``expiry``, ``capacity``, and 103 | others. Please assess whether it's appropriate for your needs, particularly 104 | if you have an existing deployment. 105 | 106 | The ``RedisPubSubChannelLayer`` is currently marked as *Beta*. Please report 107 | any issues, and be prepared that there may be breaking changes whilst it 108 | matures. 109 | 110 | The ``RedisPubSubChannelLayer`` accepts ``on_disconnect`` and 111 | ``on_reconnect`` config options, providing callbacks to handle the relevant 112 | connection events to the Redis instance. 113 | 114 | Thanks to Ryan Henning @acu192. 115 | 116 | For both features see the README for more details. 117 | 118 | 119 | 3.2.0 (2020-10-29) 120 | ------------------ 121 | 122 | * Adjusted dependency specifiers to allow updating to the latest versions of 123 | ``asgiref`` and Channels. 124 | 125 | 126 | 3.1.0 (2020-09-06) 127 | ------------------ 128 | 129 | * Ensured per-channel queues are bounded in size to avoid a slow memory leak if 130 | consumers stop reading. 131 | 132 | Queues are bound to the channel layer's configured ``capacity``. You may 133 | adjust this to a suitably high value if you were relying on the previously 134 | unbounded behaviour. 135 | 136 | 137 | 3.0.1 (2020-07-15) 138 | ------------------ 139 | 140 | * Fixed error in Lua script introduced in 3.0.0. 141 | 142 | 143 | 3.0.0 (2020-07-03) 144 | ------------------ 145 | 146 | * Redis >= 5.0 is now required. 147 | 148 | * Updated msgpack requirement to `~=1.0`. 149 | 150 | * Ensured channel names are unique using UUIDs. 151 | 152 | * Ensured messages are expired even when channel is in constant activity. 153 | 154 | * Optimized Redis script caching. 155 | 156 | * Reduced group_send failure logging level to reduce log noise. 157 | 158 | * Removed trailing `:` from default channel layer `prefix` to avoid double 159 | `::` in group keys. (You can restore the old default specifying 160 | `prefix="asgi:"` if necessary.) 161 | 162 | 163 | 2.4.2 (2020-02-19) 164 | ------------------ 165 | 166 | * Fixed a bug where ``ConnectionPool.pop()`` might return an invalid 167 | connection. 168 | 169 | * Added logging for a group_send over capacity failure. 170 | 171 | 172 | 2.4.1 (2019-10-23) 173 | ------------------ 174 | 175 | * Fixed compatibility with Python 3.8. 176 | 177 | 178 | 2.4.0 (2019-04-14) 179 | ------------------ 180 | 181 | * Updated ASGI and Channels dependencies for ASGI v3. 182 | 183 | 184 | 2.3.3 (2019-01-10) 185 | ------------------ 186 | 187 | * Bumped msgpack to 0.6 188 | 189 | * Enforced Python 3.6 and up because 3.5 is too unreliable.
190 | 191 | 192 | 2.3.2 (2018-11-27) 193 | ------------------ 194 | 195 | * Fix memory leaks with receive_buffer 196 | 197 | * Prevent double-locking problems with cancelled tasks 198 | 199 | 200 | 2.3.1 (2018-10-17) 201 | ------------------ 202 | 203 | * Fix issue with leaking of connections and instability introduced in 2.3.0 204 | 205 | 206 | 2.3.0 (2018-08-16) 207 | ------------------ 208 | 209 | * Messages to the same process (with the same prefix) are now bundled together 210 | in a single message for efficiency. 211 | 212 | * Connections to Redis are now kept in a connection pool with significantly 213 | improved performance as a result. This change required lists to be changed 214 | from oldest-first to newest-first, so immediately after any upgrade, 215 | existing messages in Redis will be drained in reverse order until your 216 | expiry time (normally 60 seconds) has passed. After this, behaviour will 217 | be normal again. 218 | 219 | 220 | 2.2.1 (2018-05-17) 221 | ------------------ 222 | 223 | * Fixed a bug in group_send where it would not work if channel_capacity was set 224 | 225 | 226 | 2.2.0 (2018-05-13) 227 | ------------------ 228 | 229 | * The group_send method now uses Lua to massively increase the speed of sending 230 | to large groups. 231 | 232 | 233 | 2.1.1 (2018-03-21) 234 | ------------------ 235 | 236 | * Fixed bug where receiving messages would hang after a while or at high 237 | concurrency levels. 238 | 239 | * Fixed bug where the default host values were invalid. 240 | 241 | 242 | 2.1.0 (2018-02-21) 243 | ------------------ 244 | 245 | * Internals have been reworked to remove connection pooling and sharing. All 246 | operations will now open a fresh Redis connection, but the backend will no 247 | longer leak connections or Futures if used in multiple successive event loops 248 | (e.g. via multiple calls to sync_to_async) 249 | 250 | 251 | 2.0.3 (2018-02-14) 252 | ------------------ 253 | 254 | * Don't allow connection pools from other event loops to be re-used (fixes 255 | various RuntimeErrors seen previously) 256 | 257 | * channel_capacity is compiled in the constructor and now works again 258 | 259 | 260 | 2.0.2 (2018-02-04) 261 | ------------------ 262 | 263 | * Capacity enforcement was off by one; it's now correct 264 | 265 | * group_send no longer errors with the wrong ChannelFull exception 266 | 267 | 268 | 2.0.1 (2018-02-02) 269 | ------------------ 270 | 271 | * Dependency fix in packaging so asgiref is set to ~=2.1, not ~=2.0.0 272 | 273 | 274 | 2.0.0 (2018-02-01) 275 | ------------------ 276 | 277 | * Rewrite and rename to channels_redis to be based on asyncio and the 278 | Channels 2 channel layer specification. 279 | 280 | 281 | 1.4.2 (2017-06-20) 282 | ------------------ 283 | 284 | * receive() no longer blocks indefinitely, just for a while. 285 | 286 | * Built-in lua scripts have their SHA pre-set to avoid a guaranteed cache miss 287 | on their first usage. 288 | 289 | 290 | 1.4.1 (2017-06-15) 291 | ------------------ 292 | 293 | * A keyspace leak has been fixed where message body keys were not deleted after 294 | receive, and instead left to expire. 295 | 296 | 297 | 1.4.0 (2017-05-18) 298 | ------------------ 299 | 300 | * Sharded mode support is now more robust with send/receive deterministically 301 | moving around the shard ring rather than picking random connections. This 302 | means there is no longer a slight chance of messages being missed when there 303 | are not significantly more readers on a channel than shards. 
Tests have 303 | also been updated so they run fully on sharded mode thanks to this. 304 | 305 | 306 | * Sentinel support has been considerably improved, with connection caching 307 | (via sentinel_refresh_interval), and automatic service discovery. 308 | 309 | * The Twisted backend now picks up the Redis password if one is configured. 310 | 311 | 312 | 1.3.0 (2017-04-07) 313 | ------------------ 314 | 315 | * Change format of connection arguments to be a single dict called 316 | ``connection_kwargs`` rather than individual options, as they change by 317 | connection type. You will need to change your settings if you have any of 318 | socket_connect_timeout, socket_timeout, socket_keepalive or 319 | socket_keepalive_options set to move them into a ``connection_kwargs`` dict. 320 | 321 | 1.2.1 (2017-04-02) 322 | ------------------ 323 | 324 | * Error with sending to multi-process channels with the same message fixed 325 | 326 | 1.2.0 (2017-04-01) 327 | ------------------ 328 | 329 | * Process-specific channel behaviour changed to match new spec 330 | * Redis Sentinel channel layer added 331 | 332 | 1.1.0 (2017-03-18) 333 | ------------------ 334 | 335 | * Support for the ASGI statistics extension 336 | * Distribution of items over multiple servers using consistent hashing is improved 337 | * Handles timeout exceptions in newer redis-py library versions correctly 338 | * Support for configuring the socket_connect_timeout, socket_timeout, socket_keepalive and socket_keepalive_options 339 | options that are passed to redis-py. 340 | 341 | 1.0.0 (2016-11-05) 342 | ------------------ 343 | 344 | * Renamed "receive_many" to "receive" 345 | * Improved (more explicit) error handling for Redis errors/old versions 346 | * Bad hosts (string, not list) configuration now errors explicitly 347 | 348 | 0.14.1 (2016-08-24) 349 | ------------------- 350 | 351 | * Removed unused reverse channels-to-groups mapping keys as they were not 352 | cleaned up proactively and quickly filled up databases. 353 | 354 | 0.14.0 (2016-07-16) 355 | ------------------- 356 | 357 | * Implemented group_channels method. 358 | 359 | 0.13.0 (2016-06-09) 360 | ------------------- 361 | 362 | * Added local-and-remote backend option (uses asgi_ipc) 363 | 364 | 0.12.0 (2016-05-25) 365 | ------------------- 366 | 367 | * Added symmetric encryption for messages and at-rest data with key rotation. 368 | 369 | 0.11.0 (2016-05-07) 370 | ------------------- 371 | 372 | * Implement backpressure with per-channel and default capacities. 373 | 374 | 0.10.0 (2016-03-27) 375 | ------------------- 376 | 377 | * Group expiry code re-added and fixed. 378 | 379 | 0.9.1 (2016-03-23) 380 | ------------------ 381 | 382 | * Remove old group expiry code that was killing groups after 60 seconds. 383 | 384 | 0.9.0 (2016-03-21) 385 | ------------------ 386 | 387 | * Connections now pooled per backend shard 388 | * Random portion of channel names now 12 characters 389 | * Implements new ASGI single-response-channel pattern spec 390 | 391 | 0.8.3 (2016-02-28) 392 | ------------------ 393 | 394 | * Nonblocking receive_many now uses Lua script rather than for loop.
395 | 396 | 0.8.2 (2016-02-22) 397 | ------------------ 398 | 399 | * Nonblocking receive_many now works, but is inefficient 400 | * Python 3 fixes 401 | 402 | 0.8.1 (2016-02-22) 403 | ------------------ 404 | 405 | * Fixed packaging issues 406 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) Django Software Foundation and individual contributors. 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | 1. Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 10 | 2. Redistributions in binary form must reproduce the above copyright 11 | notice, this list of conditions and the following disclaimer in the 12 | documentation and/or other materials provided with the distribution. 13 | 14 | 3. Neither the name of Django nor the names of its contributors may be used 15 | to endorse or promote products derived from this software without 16 | specific prior written permission. 17 | 18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 19 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 20 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 21 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR 22 | ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 23 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 24 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 25 | ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 26 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 27 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 28 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include LICENSE 2 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | channels_redis 2 | ============== 3 | 4 | .. image:: https://github.com/django/channels_redis/workflows/Tests/badge.svg 5 | :target: https://github.com/django/channels_redis/actions?query=workflow%3ATests 6 | 7 | .. image:: https://img.shields.io/pypi/v/channels_redis.svg 8 | :target: https://pypi.python.org/pypi/channels_redis 9 | 10 | Provides Django Channels channel layers that use Redis as a backing store. 11 | 12 | There are two available implementations: 13 | 14 | * ``RedisChannelLayer`` is the original layer, and implements channel and group 15 | handling itself. 16 | * ``RedisPubSubChannelLayer`` is newer and leverages Redis Pub/Sub for message 17 | dispatch. This layer is currently at *Beta* status, meaning it may be subject 18 | to breaking changes whilst it matures. 19 | 20 | Both layers support single-server and sharded configurations. 21 | 22 | `channels_redis` is tested against Python 3.8 to 3.13, `redis-py` versions 4.6, 23 | 5.0, and the development branch, and Channels versions 3, 4 and the development 24 | branch there. 25 | 26 | Installation 27 | ------------ 28 | 29 | ..
code-block:: 30 | 31 | pip install channels-redis 32 | 33 | **Note:** Prior versions of this package were called ``asgi_redis`` and are 34 | still available on PyPI under that name if you need them for Channels 1.x projects. 35 | This package is for Channels 2 and later projects only. 36 | 37 | 38 | Usage 39 | ----- 40 | 41 | Set up the channel layer in your Django settings file like so: 42 | 43 | .. code-block:: python 44 | 45 | CHANNEL_LAYERS = { 46 | "default": { 47 | "BACKEND": "channels_redis.core.RedisChannelLayer", 48 | "CONFIG": { 49 | "hosts": [("localhost", 6379)], 50 | }, 51 | }, 52 | } 53 | 54 | Or, you can use the alternate implementation which uses Redis Pub/Sub: 55 | 56 | .. code-block:: python 57 | 58 | CHANNEL_LAYERS = { 59 | "default": { 60 | "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer", 61 | "CONFIG": { 62 | "hosts": [("localhost", 6379)], 63 | }, 64 | }, 65 | } 66 | 67 | Possible options for ``CONFIG`` are listed below. 68 | 69 | ``hosts`` 70 | ~~~~~~~~~ 71 | 72 | The server(s) to connect to, as either URIs, ``(host, port)`` tuples, or dicts conforming to `redis Connection <https://redis-py.readthedocs.io/en/stable/connections.html>`_. 73 | Defaults to ``redis://localhost:6379``. Pass multiple hosts to enable sharding, 74 | but note that changing the host list will lose some sharded data. 75 | 76 | SSL connections with self-signed certificates (e.g. Heroku): 77 | 78 | .. code-block:: python 79 | 80 | "default": { 81 | "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer", 82 | "CONFIG": { 83 | "hosts":[{ 84 | "address": "rediss://user@host:port", # "REDIS_TLS_URL" 85 | "ssl_cert_reqs": None, 86 | }] 87 | } 88 | } 89 | 90 | Sentinel connections require dicts conforming to: 91 | 92 | .. code-block:: 93 | 94 | { 95 | "sentinels": [ 96 | ("localhost", 26379), 97 | ], 98 | "master_name": SENTINEL_MASTER_SET, 99 | **kwargs 100 | } 101 | 102 | Note the additional ``master_name`` key specifying the Sentinel master set; any additional connection kwargs can also be passed. Plain Redis and Sentinel connections can be mixed and matched when 103 | sharding. 104 | 105 | If your server is listening on a UNIX domain socket, you can also use that to connect: ``["unix:///path/to/redis.sock"]``. 106 | This should be slightly faster than a loopback TCP connection. 107 | 108 | ``prefix`` 109 | ~~~~~~~~~~ 110 | 111 | Prefix to add to all Redis keys. Defaults to ``asgi``. If you're running 112 | two or more entirely separate channel layers through the same Redis instance, 113 | make sure they have different prefixes. All servers talking to the same layer 114 | should have the same prefix, though. 115 | 116 | ``expiry`` 117 | ~~~~~~~~~~ 118 | 119 | Message expiry in seconds. Defaults to ``60``. You generally shouldn't need 120 | to change this, but you may want to turn it down if you have peaky traffic you 121 | wish to drop, or up if you have peaky traffic you want to backlog until you 122 | get to it. 123 | 124 | ``group_expiry`` 125 | ~~~~~~~~~~~~~~~~ 126 | 127 | Group expiry in seconds. Defaults to ``86400``. Channels will be removed 128 | from the group after this amount of time; it's recommended you reduce it 129 | for a healthier system that encourages disconnections. This value should 130 | not be lower than the relevant timeouts in the interface server (e.g. 131 | the ``--websocket_timeout`` to `daphne 132 | <https://github.com/django/daphne>`_). 133 | 134 | ``capacity`` 135 | ~~~~~~~~~~~~ 136 | 137 | Default channel capacity. Defaults to ``100``. Once a channel is at capacity, 138 | it will refuse more messages.
How this affects different parts of the system 139 | varies; an HTTP server will refuse connections, for example, while Django 140 | sending a response will just wait until there's space. 141 | 142 | ``channel_capacity`` 143 | ~~~~~~~~~~~~~~~~~~~~ 144 | 145 | Per-channel capacity configuration. This lets you tweak the channel capacity 146 | based on the channel name, and supports both globbing and regular expressions. 147 | 148 | It should be a dict mapping channel name pattern to desired capacity; if the 149 | dict key is a string, it's interpreted as a glob, while if it's a compiled 150 | ``re`` object, it's treated as a regular expression. 151 | 152 | This example sets ``http.request`` to 200, all ``http.response!`` channels 153 | to 10, and all ``websocket.send!`` channels to 20: 154 | 155 | .. code-block:: python 156 | 157 | CHANNEL_LAYERS = { 158 | "default": { 159 | "BACKEND": "channels_redis.core.RedisChannelLayer", 160 | "CONFIG": { 161 | "hosts": [("localhost", 6379)], 162 | "channel_capacity": { 163 | "http.request": 200, 164 | "http.response!*": 10, 165 | re.compile(r"^websocket.send\!.+"): 20, 166 | }, 167 | }, 168 | }, 169 | } 170 | 171 | If you want to enforce a matching order, use an ``OrderedDict`` as the 172 | argument; channels will then be matched in the order the dict provides them. 173 | 174 | .. _encryption: 175 | 176 | ``symmetric_encryption_keys`` 177 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 178 | 179 | Pass this to enable the optional symmetric encryption mode of the backend. To 180 | use it, make sure you have the ``cryptography`` package installed, or specify 181 | the ``cryptography`` extra when you install ``channels-redis``:: 182 | 183 | pip install channels-redis[cryptography] 184 | 185 | ``symmetric_encryption_keys`` should be a list of strings, with each string 186 | being an encryption key. The first key is always used for encryption; all are 187 | considered for decryption, so you can rotate keys without downtime - just add 188 | a new key at the start and move the old one down, then remove the old one 189 | after the message expiry time has passed. 190 | 191 | Data is encrypted both on the wire and at rest in Redis, though we advise 192 | you also route your Redis connections over TLS for higher security; the Redis 193 | protocol is still unencrypted, and the channel and group key names could 194 | potentially contain metadata patterns of use to attackers. 195 | 196 | Keys **should have at least 32 bytes of entropy** - they are passed through 197 | the SHA256 hash function before being used as an encryption key. Any string 198 | will work, but the shorter the string, the easier the encryption is to break. 199 | 200 | If you're using Django, you may also wish to set this to your site's 201 | ``SECRET_KEY`` setting via the ``CHANNEL_LAYERS`` setting: 202 | 203 | .. code-block:: python 204 | 205 | CHANNEL_LAYERS = { 206 | "default": { 207 | "BACKEND": "channels_redis.core.RedisChannelLayer", 208 | "CONFIG": { 209 | "hosts": ["redis://:password@127.0.0.1:6379/0"], 210 | "symmetric_encryption_keys": [SECRET_KEY], 211 | }, 212 | }, 213 | } 214 | 215 | ``on_disconnect`` / ``on_reconnect`` 216 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 217 | 218 | The PubSub layer, which maintains long-running connections to Redis, can drop messages in the event of a network partition. 219 | To handle such situations, the PubSub layer accepts optional arguments which will notify consumers of Redis disconnect/reconnect events.
220 | A common use-case is for consumers to perform a full state re-sync to ensure that no messages have been missed. 221 | 222 | .. code-block:: python 223 | 224 | CHANNEL_LAYERS = { 225 | "default": { 226 | "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer", 227 | "CONFIG": { 228 | "hosts": [...], 229 | "on_disconnect": "redis.disconnect", 230 | }, 231 | }, 232 | } 233 | 234 | 235 | And then in your channels consumer, you can implement the handler: 236 | 237 | .. code-block:: python 238 | 239 | async def redis_disconnect(self, *args): 240 | ... # Handle disconnect 241 | 242 | 243 | 244 | ``serializer_format`` 245 | ~~~~~~~~~~~~~~~~~~~~~ 246 | 247 | By default every message sent to redis is encoded using `msgpack <https://msgpack.org/>`_ (currently ``msgpack`` is a mandatory dependency of this package; it may become optional in a future release). 248 | It is also possible to switch to `JSON <https://www.json.org/>`_: 249 | 250 | .. code-block:: python 251 | 252 | CHANNEL_LAYERS = { 253 | "default": { 254 | "BACKEND": "channels_redis.core.RedisChannelLayer", 255 | "CONFIG": { 256 | "hosts": ["redis://:password@127.0.0.1:6379/0"], 257 | "serializer_format": "json", 258 | }, 259 | }, 260 | } 261 | 262 | 263 | Custom serializers can be defined by: 264 | 265 | - extending ``channels_redis.serializers.BaseMessageSerializer``, implementing ``as_bytes`` and ``from_bytes`` methods 266 | - using any class which accepts generic keyword arguments and provides ``serialize``/``deserialize`` methods 267 | 268 | Then it may be registered (or can be overridden) by using ``channels_redis.serializers.registry``: 269 | 270 | .. code-block:: python 271 | 272 | from channels_redis.serializers import registry 273 | 274 | class MyFormatSerializer: 275 | def serialize(self, message): 276 | ... 277 | def deserialize(self, message): 278 | ... 279 | 280 | registry.register_serializer('myformat', MyFormatSerializer) 281 | 282 | **NOTE**: the registry allows you to override the serializer class used for a specific format without any checks or constraints. It is therefore recommended to pay particular attention to the order of imports when using third-party serializers which may override a built-in format. 283 | 284 | 285 | Serializers are also responsible for encryption using *symmetric_encryption_keys*. When extending ``channels_redis.serializers.BaseMessageSerializer`` encryption is already configured in the base class, unless you override the ``serialize``/``deserialize`` methods: in this case you should call ``self.crypter.encrypt`` during serialization and ``self.crypter.decrypt`` during deserialization. When using a fully custom serializer, expect an optional sequence of keys to be passed via ``symmetric_encryption_keys``. 286 | 287 | 288 | Dependencies 289 | ------------ 290 | 291 | Redis server >= 5.0 is required for `channels-redis`. Python 3.8 or higher is required. 292 | 293 | 294 | Used commands 295 | ~~~~~~~~~~~~~ 296 | 297 | Your Redis server must support the following commands: 298 | 299 | * ``RedisChannelLayer`` uses ``BZPOPMIN``, ``DEL``, ``EVAL``, ``EXPIRE``, 300 | ``KEYS``, ``PIPELINE``, ``ZADD``, ``ZCOUNT``, ``ZPOPMIN``, ``ZRANGE``, 301 | ``ZREM``, ``ZREMRANGEBYSCORE`` 302 | 303 | * ``RedisPubSubChannelLayer`` uses ``PUBLISH``, ``SUBSCRIBE``, ``UNSUBSCRIBE`` 304 | 305 | Local Development 306 | ----------------- 307 | 308 | You can run the necessary Redis instances in Docker with the following commands: 309 | 310 | ..
code-block:: shell 311 | 312 | $ docker network create redis-network 313 | $ docker run --rm \ 314 | --network=redis-network \ 315 | --name=redis-server \ 316 | -p 6379:6379 \ 317 | redis 318 | $ docker run --rm \ 319 | --network redis-network \ 320 | --name redis-sentinel \ 321 | -e REDIS_MASTER_HOST=redis-server \ 322 | -e REDIS_MASTER_SET=sentinel \ 323 | -e REDIS_SENTINEL_QUORUM=1 \ 324 | -p 26379:26379 \ 325 | bitnami/redis-sentinel 326 | 327 | Contributing 328 | ------------ 329 | 330 | Please refer to the 331 | `main Channels contributing docs `_. 332 | That also contains advice on how to set up the development environment and run the tests. 333 | 334 | Maintenance and Security 335 | ------------------------ 336 | 337 | To report security issues, please contact security@djangoproject.com. For GPG 338 | signatures and more security process information, see 339 | https://docs.djangoproject.com/en/dev/internals/security/. 340 | 341 | To report bugs or request new features, please open a new GitHub issue. 342 | 343 | This repository is part of the Channels project. For the shepherd and maintenance team, please see the 344 | `main Channels readme `_. 345 | -------------------------------------------------------------------------------- /channels_redis/__init__.py: -------------------------------------------------------------------------------- 1 | __version__ = "4.2.1" 2 | -------------------------------------------------------------------------------- /channels_redis/core.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import collections 3 | import functools 4 | import itertools 5 | import logging 6 | import time 7 | import uuid 8 | 9 | from redis import asyncio as aioredis 10 | 11 | from channels.exceptions import ChannelFull 12 | from channels.layers import BaseChannelLayer 13 | 14 | from .serializers import registry 15 | from .utils import ( 16 | _close_redis, 17 | _consistent_hash, 18 | _wrap_close, 19 | create_pool, 20 | decode_hosts, 21 | ) 22 | 23 | logger = logging.getLogger(__name__) 24 | 25 | 26 | class ChannelLock: 27 | """ 28 | Helper class for per-channel locking. 29 | 30 | Once a lock is released and has no waiters, it will also be deleted, 31 | to mitigate multi-event loop problems. 32 | """ 33 | 34 | def __init__(self): 35 | self.locks = collections.defaultdict(asyncio.Lock) 36 | self.wait_counts = collections.defaultdict(int) 37 | 38 | async def acquire(self, channel): 39 | """ 40 | Acquire the lock for the given channel. 41 | """ 42 | self.wait_counts[channel] += 1 43 | return await self.locks[channel].acquire() 44 | 45 | def locked(self, channel): 46 | """ 47 | Return ``True`` if the lock for the given channel is acquired. 48 | """ 49 | return self.locks[channel].locked() 50 | 51 | def release(self, channel): 52 | """ 53 | Release the lock for the given channel. 
54 | """ 55 | self.locks[channel].release() 56 | self.wait_counts[channel] -= 1 57 | if self.wait_counts[channel] < 1: 58 | del self.locks[channel] 59 | del self.wait_counts[channel] 60 | 61 | 62 | class BoundedQueue(asyncio.Queue): 63 | def put_nowait(self, item): 64 | if self.full(): 65 | # see: https://github.com/django/channels_redis/issues/212 66 | # if we actually get into this code block, it likely means that 67 | # this specific consumer has stopped reading 68 | # if we get into this code block, it's better to drop messages 69 | # that exceed the channel layer capacity than to continue to 70 | # malloc() forever 71 | self.get_nowait() 72 | return super(BoundedQueue, self).put_nowait(item) 73 | 74 | 75 | class RedisLoopLayer: 76 | def __init__(self, channel_layer): 77 | self._lock = asyncio.Lock() 78 | self.channel_layer = channel_layer 79 | self._connections = {} 80 | 81 | def get_connection(self, index): 82 | if index not in self._connections: 83 | pool = self.channel_layer.create_pool(index) 84 | self._connections[index] = aioredis.Redis(connection_pool=pool) 85 | 86 | return self._connections[index] 87 | 88 | async def flush(self): 89 | async with self._lock: 90 | for index in list(self._connections): 91 | connection = self._connections.pop(index) 92 | await _close_redis(connection) 93 | 94 | 95 | class RedisChannelLayer(BaseChannelLayer): 96 | """ 97 | Redis channel layer. 98 | 99 | It routes all messages into a remote Redis server. Support for 100 | sharding among different Redis installations and for message 101 | encryption is provided. 102 | """ 103 | 104 | brpop_timeout = 5 105 | 106 | def __init__( 107 | self, 108 | hosts=None, 109 | prefix="asgi", 110 | expiry=60, 111 | group_expiry=86400, 112 | capacity=100, 113 | channel_capacity=None, 114 | symmetric_encryption_keys=None, 115 | random_prefix_length=12, 116 | serializer_format="msgpack", 117 | ): 118 | # Store basic information 119 | self.expiry = expiry 120 | self.group_expiry = group_expiry 121 | self.capacity = capacity 122 | self.channel_capacity = self.compile_capacities(channel_capacity or {}) 123 | self.prefix = prefix 124 | assert isinstance(self.prefix, str), "Prefix must be unicode" 125 | # Configure the host objects 126 | self.hosts = decode_hosts(hosts) 127 | self.ring_size = len(self.hosts) 128 | # serialization 129 | self._serializer = registry.get_serializer( 130 | serializer_format, 131 | # As we use a sorted set to expire messages, we need to guarantee uniqueness, with 12 bytes. 132 | random_prefix_length=random_prefix_length, 133 | expiry=self.expiry, 134 | symmetric_encryption_keys=symmetric_encryption_keys, 135 | ) 136 | # Cached redis connection pools and the event loop they are from 137 | self._layers = {} 138 | # Normal channels choose a host index by cycling through the available hosts 139 | self._receive_index_generator = itertools.cycle(range(len(self.hosts))) 140 | self._send_index_generator = itertools.cycle(range(len(self.hosts))) 141 | # Decide on a unique client prefix to use in !
sections 142 | self.client_prefix = uuid.uuid4().hex 143 | # Number of coroutines trying to receive right now 144 | self.receive_count = 0 145 | # The receive lock 146 | self.receive_lock = None 147 | # Event loop they are trying to receive on 148 | self.receive_event_loop = None 149 | # Buffered messages by process-local channel name 150 | self.receive_buffer = collections.defaultdict( 151 | functools.partial(BoundedQueue, self.capacity) 152 | ) 153 | # Detached channel cleanup tasks 154 | self.receive_cleaners = [] 155 | # Per-channel cleanup locks to prevent a receive starting and moving 156 | # a message back into the main queue before its cleanup has completed 157 | self.receive_clean_locks = ChannelLock() 158 | 159 | def create_pool(self, index): 160 | return create_pool(self.hosts[index]) 161 | 162 | ### Channel layer API ### 163 | 164 | extensions = ["groups", "flush"] 165 | 166 | async def send(self, channel, message): 167 | """ 168 | Send a message onto a (general or specific) channel. 169 | """ 170 | # Typecheck 171 | assert isinstance(message, dict), "message is not a dict" 172 | assert self.valid_channel_name(channel), "Channel name not valid" 173 | # Make sure the message does not contain reserved keys 174 | assert "__asgi_channel__" not in message 175 | # If it's a process-local channel, strip off local part and stick full name in message 176 | channel_non_local_name = channel 177 | if "!" in channel: 178 | message = dict(message.items()) 179 | message["__asgi_channel__"] = channel 180 | channel_non_local_name = self.non_local_name(channel) 181 | # Write out message into expiring key (avoids big items in list) 182 | channel_key = self.prefix + channel_non_local_name 183 | # Pick a connection to the right server - consistent for specific 184 | # channels, random for general channels 185 | if "!" in channel: 186 | index = self.consistent_hash(channel) 187 | else: 188 | index = next(self._send_index_generator) 189 | connection = self.connection(index) 190 | # Discard old messages based on expiry 191 | await connection.zremrangebyscore( 192 | channel_key, min=0, max=int(time.time()) - int(self.expiry) 193 | ) 194 | 195 | # Check the length of the sorted set before sending 196 | # This can allow the set to leak slightly over capacity, but that's fine. 197 | if await connection.zcount(channel_key, "-inf", "+inf") >= self.get_capacity( 198 | channel 199 | ): 200 | raise ChannelFull() 201 | 202 | # Add to the sorted set, then set it to expire in case it's not consumed 203 | await connection.zadd(channel_key, {self.serialize(message): time.time()}) 204 | await connection.expire(channel_key, int(self.expiry)) 205 | 206 | def _backup_channel_name(self, channel): 207 | """ 208 | Construct the key used as a backup queue for the given channel. 209 | """ 210 | return channel + "$inflight" 211 | 212 | async def _brpop_with_clean(self, index, channel, timeout): 213 | """ 214 | Perform a Redis BRPOP and manage the backup processing queue. 215 | In case of cancellation, make sure the message is not lost.
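Messages are popped with BZPOPMIN and immediately mirrored into a per-channel backup queue; a subsequent call first restores anything left in the backup, so a cancelled receive cannot lose a message.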
216 | """ 217 | # The script will pop messages from the processing queue and push them in front 218 | # of the main message queue in the proper order; BRPOP must *not* be called 219 | # because that would deadlock the server 220 | cleanup_script = """ 221 | local backed_up = redis.call('ZRANGE', ARGV[2], 0, -1, 'WITHSCORES') 222 | for i = #backed_up, 1, -2 do 223 | redis.call('ZADD', ARGV[1], backed_up[i], backed_up[i - 1]) 224 | end 225 | redis.call('DEL', ARGV[2]) 226 | """ 227 | backup_queue = self._backup_channel_name(channel) 228 | connection = self.connection(index) 229 | # Cancellation here doesn't matter, we're not doing anything destructive 230 | # and the script executes atomically... 231 | await connection.eval(cleanup_script, 0, channel, backup_queue) 232 | # ...and it doesn't matter here either, the message will be safe in the backup. 233 | result = await connection.bzpopmin(channel, timeout=timeout) 234 | 235 | if result is not None: 236 | _, member, timestamp = result 237 | await connection.zadd(backup_queue, {member: float(timestamp)}) 238 | else: 239 | member = None 240 | 241 | return member 242 | 243 | async def _clean_receive_backup(self, index, channel): 244 | """ 245 | Pop the oldest message off the channel backup queue. 246 | The result isn't interesting as it was already processed. 247 | """ 248 | connection = self.connection(index) 249 | await connection.zpopmin(self._backup_channel_name(channel)) 250 | 251 | async def receive(self, channel): 252 | """ 253 | Receive the first message that arrives on the channel. 254 | If more than one coroutine waits on the same channel, the first waiter 255 | will be given the message when it arrives. 256 | """ 257 | # Make sure the channel name is valid then get the non-local part 258 | # and thus its index 259 | assert self.valid_channel_name(channel) 260 | if "!" in channel: 261 | real_channel = self.non_local_name(channel) 262 | assert real_channel.endswith( 263 | self.client_prefix + "!" 264 | ), "Wrong client prefix" 265 | # Enter receiving section 266 | loop = asyncio.get_running_loop() 267 | self.receive_count += 1 268 | try: 269 | if self.receive_count == 1: 270 | # If we're the first coroutine in, create the receive lock! 271 | self.receive_lock = asyncio.Lock() 272 | self.receive_event_loop = loop 273 | else: 274 | # Otherwise, check our event loop matches 275 | if self.receive_event_loop != loop: 276 | raise RuntimeError( 277 | "Two event loops are trying to receive() on one channel layer at once!" 278 | ) 279 | 280 | # Wait for our message to appear 281 | message = None 282 | while self.receive_buffer[channel].empty(): 283 | tasks = [ 284 | self.receive_lock.acquire(), 285 | self.receive_buffer[channel].get(), 286 | ] 287 | tasks = [asyncio.ensure_future(task) for task in tasks] 288 | try: 289 | done, pending = await asyncio.wait( 290 | tasks, return_when=asyncio.FIRST_COMPLETED 291 | ) 292 | for task in pending: 293 | # Cancel all pending tasks. 294 | task.cancel() 295 | except asyncio.CancelledError: 296 | # Ensure all tasks are cancelled if we are cancelled.
# Also see: https://bugs.python.org/issue23859 298 | del self.receive_buffer[channel] 299 | for task in tasks: 300 | if not task.cancel(): 301 | assert task.done() 302 | if task.result() is True: 303 | self.receive_lock.release() 304 | 305 | raise 306 | 307 | message = token = exception = None 308 | for task in done: 309 | try: 310 | result = task.result() 311 | except BaseException as error: # NOQA 312 | # We should not propagate exceptions immediately as otherwise this may cause 313 | # the lock to be held and never be released. 314 | exception = error 315 | continue 316 | 317 | if result is True: 318 | token = result 319 | else: 320 | assert isinstance(result, dict) 321 | message = result 322 | 323 | if message or exception: 324 | if token: 325 | # We will not be receiving as we already have the message. 326 | self.receive_lock.release() 327 | 328 | if exception: 329 | raise exception 330 | else: 331 | break 332 | else: 333 | assert token 334 | 335 | # We hold the receive lock, receive and then release it. 336 | try: 337 | # There is no interruption point from when the message is 338 | # unpacked in receive_single to when we get back here, so 339 | # the following lines are essentially atomic. 340 | message_channel, message = await self.receive_single( 341 | real_channel 342 | ) 343 | if isinstance(message_channel, list): 344 | for chan in message_channel: 345 | self.receive_buffer[chan].put_nowait(message) 346 | else: 347 | self.receive_buffer[message_channel].put_nowait(message) 348 | message = None 349 | except Exception: 350 | del self.receive_buffer[channel] 351 | raise 352 | finally: 353 | self.receive_lock.release() 354 | 355 | # We know there's a message available, because there 356 | # couldn't have been any interruption between empty() and here 357 | if message is None: 358 | message = self.receive_buffer[channel].get_nowait() 359 | 360 | if self.receive_buffer[channel].empty(): 361 | del self.receive_buffer[channel] 362 | return message 363 | 364 | finally: 365 | self.receive_count -= 1 366 | # If we were the last out, drop the receive lock 367 | if self.receive_count == 0: 368 | assert not self.receive_lock.locked() 369 | self.receive_lock = None 370 | self.receive_event_loop = None 371 | else: 372 | # Do a plain direct receive 373 | return (await self.receive_single(channel))[1] 374 | 375 | async def receive_single(self, channel): 376 | """ 377 | Receives a single message off of the channel and returns it. 378 | """ 379 | # Check channel name 380 | assert self.valid_channel_name(channel, receive=True), "Channel name invalid" 381 | # Work out the connection to use 382 | if "!" in channel: 383 | assert channel.endswith("!") 384 | index = self.consistent_hash(channel) 385 | else: 386 | index = next(self._receive_index_generator) 387 | 388 | channel_key = self.prefix + channel 389 | content = None 390 | await self.receive_clean_locks.acquire(channel_key) 391 | try: 392 | while content is None: 393 | # Nothing is lost here by cancellations, messages will still 394 | # be in the backup queue. 395 | content = await self._brpop_with_clean( 396 | index, channel_key, timeout=self.brpop_timeout 397 | ) 398 | 399 | # Fire off a task to clean the message from its backup queue. 400 | # Per-channel locking isn't needed, because the backup is a queue 401 | # and additionally, we don't care about the order; all processed 402 | # messages need to be removed, no matter if the current one is 403 | # removed after the next one.
404 | # NOTE: Duplicate messages will be received eventually if any 405 | # of these cleaners are cancelled. 406 | cleaner = asyncio.ensure_future( 407 | self._clean_receive_backup(index, channel_key) 408 | ) 409 | self.receive_cleaners.append(cleaner) 410 | 411 | def _cleanup_done(cleaner): 412 | self.receive_cleaners.remove(cleaner) 413 | self.receive_clean_locks.release(channel_key) 414 | 415 | cleaner.add_done_callback(_cleanup_done) 416 | 417 | except BaseException: 418 | self.receive_clean_locks.release(channel_key) 419 | raise 420 | 421 | # Message decode 422 | message = self.deserialize(content) 423 | # TODO: message expiry? 424 | # If there is a full channel name stored in the message, unpack it. 425 | if "__asgi_channel__" in message: 426 | channel = message["__asgi_channel__"] 427 | del message["__asgi_channel__"] 428 | return channel, message 429 | 430 | async def new_channel(self, prefix="specific"): 431 | """ 432 | Returns a new channel name that can be used by something in our 433 | process as a specific channel. 434 | """ 435 | return f"{prefix}.{self.client_prefix}!{uuid.uuid4().hex}" 436 | 437 | ### Flush extension ### 438 | 439 | async def flush(self): 440 | """ 441 | Deletes all messages and groups on all shards. 442 | """ 443 | # Make sure all channel cleaners have finished before removing 444 | # keys from under their feet. 445 | await self.wait_received() 446 | 447 | # Lua deletion script 448 | delete_prefix = """ 449 | local keys = redis.call('keys', ARGV[1]) 450 | for i=1,#keys,5000 do 451 | redis.call('del', unpack(keys, i, math.min(i+4999, #keys))) 452 | end 453 | """ 454 | # Go through each connection and remove all with prefix 455 | for i in range(self.ring_size): 456 | connection = self.connection(i) 457 | await connection.eval(delete_prefix, 0, self.prefix + "*") 458 | # Now clear the pools as well 459 | await self.close_pools() 460 | 461 | async def close_pools(self): 462 | """ 463 | Close all connections in the event loop pools. 464 | """ 465 | # Flush all cleaners, in case somebody just wanted to close the 466 | # pools without flushing first. 467 | await self.wait_received() 468 | for layer in self._layers.values(): 469 | await layer.flush() 470 | 471 | async def wait_received(self): 472 | """ 473 | Wait for all channel cleanup functions to finish. 474 | """ 475 | if self.receive_cleaners: 476 | await asyncio.wait(self.receive_cleaners[:]) 477 | 478 | ### Groups extension ### 479 | 480 | async def group_add(self, group, channel): 481 | """ 482 | Adds the channel name to a group. 
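Group membership is kept in a per-group sorted set, scored by the time each channel was added, so stale channels can be pruned when the group is sent to.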
483 | """ 484 | # Check the inputs 485 | assert self.valid_group_name(group), "Group name not valid" 486 | assert self.valid_channel_name(channel), "Channel name not valid" 487 | # Get a connection to the right shard 488 | group_key = self._group_key(group) 489 | connection = self.connection(self.consistent_hash(group)) 490 | # Add to group sorted set with creation time as timestamp 491 | await connection.zadd(group_key, {channel: time.time()}) 492 | # Set expiration to be group_expiry, since everything in 493 | # it at this point is guaranteed to expire before that 494 | await connection.expire(group_key, self.group_expiry) 495 | 496 | async def group_discard(self, group, channel): 497 | """ 498 | Removes the channel from the named group if it is in the group; 499 | does nothing otherwise (does not error) 500 | """ 501 | assert self.valid_group_name(group), "Group name not valid" 502 | assert self.valid_channel_name(channel), "Channel name not valid" 503 | key = self._group_key(group) 504 | connection = self.connection(self.consistent_hash(group)) 505 | await connection.zrem(key, channel) 506 | 507 | async def group_send(self, group, message): 508 | """ 509 | Sends a message to the entire group. 510 | """ 511 | assert self.valid_group_name(group), "Group name not valid" 512 | # Retrieve list of all channel names 513 | key = self._group_key(group) 514 | connection = self.connection(self.consistent_hash(group)) 515 | # Discard old channels based on group_expiry 516 | await connection.zremrangebyscore( 517 | key, min=0, max=int(time.time()) - self.group_expiry 518 | ) 519 | 520 | channel_names = [x.decode("utf8") for x in await connection.zrange(key, 0, -1)] 521 | 522 | ( 523 | connection_to_channel_keys, 524 | channel_keys_to_message, 525 | channel_keys_to_capacity, 526 | ) = self._map_channel_keys_to_connection(channel_names, message) 527 | 528 | for connection_index, channel_redis_keys in connection_to_channel_keys.items(): 529 | # Discard old messages based on expiry, on the shard these keys live on 530 | pipe = self.connection(connection_index).pipeline() 531 | for key in channel_redis_keys: 532 | pipe.zremrangebyscore( 533 | key, min=0, max=int(time.time()) - int(self.expiry) 534 | ) 535 | await pipe.execute() 536 | 537 | # Create a Lua script specific for this connection. 538 | # Make sure to use the message specific to this channel, it is 539 | # stored in the channel_keys_to_message dict and contains the 540 | # __asgi_channel__ key.
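# The script atomically checks each key's current size against its capacity, ZADDs the serialized message with the current time as its score, and refreshes the key's expiry; it returns how many channels were skipped for being over capacity.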
541 | 542 | group_send_lua = """ 543 | local over_capacity = 0 544 | local current_time = ARGV[#ARGV - 1] 545 | local expiry = ARGV[#ARGV] 546 | for i=1,#KEYS do 547 | if redis.call('ZCOUNT', KEYS[i], '-inf', '+inf') < tonumber(ARGV[i + #KEYS]) then 548 | redis.call('ZADD', KEYS[i], current_time, ARGV[i]) 549 | redis.call('EXPIRE', KEYS[i], expiry) 550 | else 551 | over_capacity = over_capacity + 1 552 | end 553 | end 554 | return over_capacity 555 | """ 556 | 557 | # We need to filter the messages to keep those related to the connection 558 | args = [ 559 | channel_keys_to_message[channel_key] 560 | for channel_key in channel_redis_keys 561 | ] 562 | 563 | # We need to send the capacity for each channel 564 | args += [ 565 | channel_keys_to_capacity[channel_key] 566 | for channel_key in channel_redis_keys 567 | ] 568 | 569 | args += [time.time(), self.expiry] 570 | 571 | # channel_keys does not contain a single redis key more than once 572 | connection = self.connection(connection_index) 573 | channels_over_capacity = await connection.eval( 574 | group_send_lua, len(channel_redis_keys), *channel_redis_keys, *args 575 | ) 576 | if channels_over_capacity > 0: 577 | logger.info( 578 | "%s of %s channels over capacity in group %s", 579 | channels_over_capacity, 580 | len(channel_names), 581 | group, 582 | ) 583 | 584 | def _map_channel_keys_to_connection(self, channel_names, message): 585 | """ 586 | For a list of channel names: 587 | 588 | 1. bucket each channel's redis key into a dict keyed by the connection index 589 | 590 | 2. for each unique redis key, create a serialized message specific to that key, by adding 591 | the list of channels mapped to that key to the message under the __asgi_channel__ key 592 | 593 | 3. return a mapping of redis keys to their capacity 594 | """ 595 | 596 | # Connection dict keyed by index to list of redis keys mapped on that index 597 | connection_to_channel_keys = collections.defaultdict(list) 598 | # Message dict maps redis key to the message that needs to be sent on that key 599 | channel_key_to_message = dict() 600 | # Channel key mapped to its capacity 601 | channel_key_to_capacity = dict() 602 | 603 | # For each channel 604 | for channel in channel_names: 605 | channel_non_local_name = channel 606 | if "!" in channel: 607 | channel_non_local_name = self.non_local_name(channel) 608 | # Get its redis key 609 | channel_key = self.prefix + channel_non_local_name 610 | # Have we come across the same redis key?
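# (Several distinct process-local channels can share one redis key, since the local part after "!" is stripped when the key is built, so the same key can recur in this loop.)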
if channel_key not in channel_key_to_message: 612 | # If not, fill the corresponding dicts 613 | message = dict(message.items()) 614 | message["__asgi_channel__"] = [channel] 615 | channel_key_to_message[channel_key] = message 616 | channel_key_to_capacity[channel_key] = self.get_capacity(channel) 617 | idx = self.consistent_hash(channel_non_local_name) 618 | connection_to_channel_keys[idx].append(channel_key) 619 | else: 620 | # Yes: append the channel to this key's message 621 | channel_key_to_message[channel_key]["__asgi_channel__"].append(channel) 622 | 623 | # Now that we know what message needs to be sent on each redis key, we serialize it 624 | for key, value in channel_key_to_message.items(): 625 | # Serialize the message stored for each redis key 626 | channel_key_to_message[key] = self.serialize(value) 627 | 628 | return ( 629 | connection_to_channel_keys, 630 | channel_key_to_message, 631 | channel_key_to_capacity, 632 | ) 633 | 634 | def _group_key(self, group): 635 | """ 636 | Common function to make the storage key for the group. 637 | """ 638 | return f"{self.prefix}:group:{group}".encode("utf8") 639 | 640 | ### Serialization ### 641 | 642 | def serialize(self, message): 643 | """ 644 | Serializes message to a byte string. 645 | """ 646 | return self._serializer.serialize(message) 647 | 648 | def deserialize(self, message): 649 | """ 650 | Deserializes from a byte string. 651 | """ 652 | return self._serializer.deserialize(message) 653 | 654 | ### Internal functions ### 655 | 656 | def consistent_hash(self, value): 657 | return _consistent_hash(value, self.ring_size) 658 | 659 | def __str__(self): 660 | return f"{self.__class__.__name__}(hosts={self.hosts})" 661 | 662 | ### Connection handling ### 663 | 664 | def connection(self, index): 665 | """ 666 | Returns the correct connection for the index given. 667 | Lazily instantiates pools. 668 | """ 669 | # Catch bad indexes 670 | if not 0 <= index < self.ring_size: 671 | raise ValueError( 672 | f"There are only {self.ring_size} hosts - you asked for {index}!"
673 | ) 674 | 675 | loop = asyncio.get_running_loop() 676 | try: 677 | layer = self._layers[loop] 678 | except KeyError: 679 | _wrap_close(self, loop) 680 | layer = self._layers[loop] = RedisLoopLayer(self) 681 | 682 | return layer.get_connection(index) 683 | -------------------------------------------------------------------------------- /channels_redis/pubsub.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import functools 3 | import logging 4 | import uuid 5 | 6 | from redis import asyncio as aioredis 7 | 8 | from .serializers import registry 9 | from .utils import ( 10 | _close_redis, 11 | _consistent_hash, 12 | _wrap_close, 13 | create_pool, 14 | decode_hosts, 15 | ) 16 | 17 | logger = logging.getLogger(__name__) 18 | 19 | 20 | async def _async_proxy(obj, name, *args, **kwargs): 21 | # Must be defined as a function and not a method due to 22 | # https://bugs.python.org/issue38364 23 | layer = obj._get_layer() 24 | return await getattr(layer, name)(*args, **kwargs) 25 | 26 | 27 | class RedisPubSubChannelLayer: 28 | def __init__( 29 | self, 30 | *args, 31 | symmetric_encryption_keys=None, 32 | serializer_format="msgpack", 33 | **kwargs, 34 | ) -> None: 35 | self._args = args 36 | self._kwargs = kwargs 37 | self._layers = {} 38 | # serialization 39 | self._serializer = registry.get_serializer( 40 | serializer_format, 41 | symmetric_encryption_keys=symmetric_encryption_keys, 42 | ) 43 | 44 | def __getattr__(self, name): 45 | if name in ( 46 | "new_channel", 47 | "send", 48 | "receive", 49 | "group_add", 50 | "group_discard", 51 | "group_send", 52 | "flush", 53 | ): 54 | return functools.partial(_async_proxy, self, name) 55 | else: 56 | return getattr(self._get_layer(), name) 57 | 58 | def serialize(self, message): 59 | """ 60 | Serializes message to a byte string. 61 | """ 62 | return self._serializer.serialize(message) 63 | 64 | def deserialize(self, message): 65 | """ 66 | Deserializes from a byte string. 67 | """ 68 | return self._serializer.deserialize(message) 69 | 70 | def _get_layer(self): 71 | loop = asyncio.get_running_loop() 72 | 73 | try: 74 | layer = self._layers[loop] 75 | except KeyError: 76 | layer = RedisPubSubLoopLayer( 77 | *self._args, 78 | **self._kwargs, 79 | channel_layer=self, 80 | ) 81 | self._layers[loop] = layer 82 | _wrap_close(self, loop) 83 | 84 | return layer 85 | 86 | 87 | class RedisPubSubLoopLayer: 88 | """ 89 | Channel Layer that uses Redis's pub/sub functionality. 90 | """ 91 | 92 | def __init__( 93 | self, 94 | hosts=None, 95 | prefix="asgi", 96 | on_disconnect=None, 97 | on_reconnect=None, 98 | channel_layer=None, 99 | **kwargs, 100 | ): 101 | self.prefix = prefix 102 | 103 | self.on_disconnect = on_disconnect 104 | self.on_reconnect = on_reconnect 105 | self.channel_layer = channel_layer 106 | 107 | # Each consumer gets its own *specific* channel, created with the `new_channel()` method. 108 | # This dict maps `channel_name` to a queue of messages for that channel. 109 | self.channels = {} 110 | 111 | # A channel can subscribe to zero or more groups. 112 | # This dict maps `group_name` to the set of channel names that are subscribed to that group. 113 | self.groups = {} 114 | 115 | # For each host, we create a `RedisSingleShardConnection` to manage the connection to that host.
116 |         self._shards = [
117 |             RedisSingleShardConnection(host, self) for host in decode_hosts(hosts)
118 |         ]
119 | 
120 |     def _get_shard(self, channel_or_group_name):
121 |         """
122 |         Return the shard that is used exclusively for this channel or group.
123 |         """
124 |         return self._shards[_consistent_hash(channel_or_group_name, len(self._shards))]
125 | 
126 |     def _get_group_channel_name(self, group):
127 |         """
128 |         Return the channel name used by a group.
129 |         Includes '__group__' in the returned
130 |         string so that these names are distinguished
131 |         from those returned by `new_channel()`.
132 |         Technically collisions are possible, but it
133 |         would take deliberate abuse to
134 |         produce colliding names.
135 |         """
136 |         return f"{self.prefix}__group__{group}"
137 | 
138 |     async def _subscribe_to_channel(self, channel):
139 |         self.channels[channel] = asyncio.Queue()
140 |         shard = self._get_shard(channel)
141 |         await shard.subscribe(channel)
142 | 
143 |     extensions = ["groups", "flush"]
144 | 
145 |     ################################################################################
146 |     # Channel layer API
147 |     ################################################################################
148 | 
149 |     async def send(self, channel, message):
150 |         """
151 |         Send a message onto a (general or specific) channel.
152 |         """
153 |         shard = self._get_shard(channel)
154 |         await shard.publish(channel, self.channel_layer.serialize(message))
155 | 
156 |     async def new_channel(self, prefix="specific."):
157 |         """
158 |         Returns a new channel name that can be used by a consumer in our
159 |         process as a specific channel.
160 |         """
161 |         channel = f"{self.prefix}{prefix}{uuid.uuid4().hex}"
162 |         await self._subscribe_to_channel(channel)
163 |         return channel
164 | 
165 |     async def receive(self, channel):
166 |         """
167 |         Receive the first message that arrives on the channel.
168 |         If more than one coroutine waits on the same channel, a random one
169 |         of the waiting coroutines will get the result.
170 |         """
171 |         if channel not in self.channels:
172 |             await self._subscribe_to_channel(channel)
173 | 
174 |         q = self.channels[channel]
175 |         try:
176 |             message = await q.get()
177 |         except (asyncio.CancelledError, asyncio.TimeoutError, GeneratorExit):
178 |             # We assume here that the reason we are cancelled is because the consumer
179 |             # is exiting, therefore we need to clean up by unsubscribing below. Indeed,
180 |             # given the way that Django Channels currently works, this is a safe assumption.
181 |             # In the future, Django Channels could change to call a *new* method that
182 |             # would serve as the antithesis of `new_channel()`; this new method might
183 |             # be named `delete_channel()`. If that were the case, we would do the
184 |             # following cleanup from that new `delete_channel()` method, but, since
185 |             # that's not how Django Channels works (yet), we do the cleanup below:
186 |             if channel in self.channels:
187 |                 del self.channels[channel]
188 |                 try:
189 |                     shard = self._get_shard(channel)
190 |                     await shard.unsubscribe(channel)
191 |                 except BaseException:
192 |                     logger.exception("Unexpected exception while cleaning-up channel:")
193 |                     # We don't re-raise here because we want the CancelledError to be the one re-raised.
194 | raise 195 | 196 | return self.channel_layer.deserialize(message) 197 | 198 | ################################################################################ 199 | # Groups extension 200 | ################################################################################ 201 | 202 | async def group_add(self, group, channel): 203 | """ 204 | Adds the channel name to a group. 205 | """ 206 | if channel not in self.channels: 207 | raise RuntimeError( 208 | "You can only call group_add() on channels that exist in-process.\n" 209 | "Consumers are encouraged to use the common pattern:\n" 210 | f" self.channel_layer.group_add({repr(group)}, self.channel_name)" 211 | ) 212 | group_channel = self._get_group_channel_name(group) 213 | if group_channel not in self.groups: 214 | self.groups[group_channel] = set() 215 | group_channels = self.groups[group_channel] 216 | if channel not in group_channels: 217 | group_channels.add(channel) 218 | shard = self._get_shard(group_channel) 219 | await shard.subscribe(group_channel) 220 | 221 | async def group_discard(self, group, channel): 222 | """ 223 | Removes the channel from a group if it is in the group; 224 | does nothing otherwise (does not error) 225 | """ 226 | group_channel = self._get_group_channel_name(group) 227 | group_channels = self.groups.get(group_channel, set()) 228 | if channel not in group_channels: 229 | return 230 | 231 | group_channels.remove(channel) 232 | if len(group_channels) == 0: 233 | del self.groups[group_channel] 234 | shard = self._get_shard(group_channel) 235 | await shard.unsubscribe(group_channel) 236 | 237 | async def group_send(self, group, message): 238 | """ 239 | Send the message to all subscribers of the group. 240 | """ 241 | group_channel = self._get_group_channel_name(group) 242 | shard = self._get_shard(group_channel) 243 | await shard.publish(group_channel, self.channel_layer.serialize(message)) 244 | 245 | ################################################################################ 246 | # Flush extension 247 | ################################################################################ 248 | 249 | async def flush(self): 250 | """ 251 | Flush the layer, making it like new. It can continue to be used as if it 252 | was just created. This also closes connections, serving as a clean-up 253 | method; connections will be re-opened if you continue using this layer. 
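
        A minimal shutdown sketch, assuming a locally constructed layer
        (the variable name is illustrative):

            layer = RedisPubSubChannelLayer(hosts=["redis://localhost:6379"])
            await layer.send("some-channel", {"type": "hello"})
            await layer.flush()  # close all shard connections cleanly
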
254 | """ 255 | self.channels = {} 256 | self.groups = {} 257 | for shard in self._shards: 258 | await shard.flush() 259 | 260 | 261 | class RedisSingleShardConnection: 262 | def __init__(self, host, channel_layer): 263 | self.host = host 264 | self.channel_layer = channel_layer 265 | self._subscribed_to = set() 266 | self._lock = asyncio.Lock() 267 | self._redis = None 268 | self._pubsub = None 269 | self._receive_task = None 270 | 271 | async def publish(self, channel, message): 272 | async with self._lock: 273 | self._ensure_redis() 274 | await self._redis.publish(channel, message) 275 | 276 | async def subscribe(self, channel): 277 | async with self._lock: 278 | if channel not in self._subscribed_to: 279 | self._ensure_redis() 280 | self._ensure_receiver() 281 | await self._pubsub.subscribe(channel) 282 | self._subscribed_to.add(channel) 283 | 284 | async def unsubscribe(self, channel): 285 | async with self._lock: 286 | if channel in self._subscribed_to: 287 | self._ensure_redis() 288 | self._ensure_receiver() 289 | await self._pubsub.unsubscribe(channel) 290 | self._subscribed_to.remove(channel) 291 | 292 | async def flush(self): 293 | async with self._lock: 294 | if self._receive_task is not None: 295 | self._receive_task.cancel() 296 | try: 297 | await self._receive_task 298 | except asyncio.CancelledError: 299 | pass 300 | self._receive_task = None 301 | if self._redis is not None: 302 | # The pool was created just for this client, so make sure it is closed, 303 | # otherwise it will schedule the connection to be closed inside the 304 | # __del__ method, which doesn't have a loop running anymore. 305 | await _close_redis(self._redis) 306 | self._redis = None 307 | self._pubsub = None 308 | self._subscribed_to = set() 309 | 310 | async def _do_receiving(self): 311 | while True: 312 | try: 313 | if self._pubsub and self._pubsub.subscribed: 314 | message = await self._pubsub.get_message( 315 | ignore_subscribe_messages=True, timeout=0.1 316 | ) 317 | self._receive_message(message) 318 | else: 319 | await asyncio.sleep(0.1) 320 | except ( 321 | asyncio.CancelledError, 322 | asyncio.TimeoutError, 323 | GeneratorExit, 324 | ): 325 | raise 326 | except BaseException: 327 | logger.exception("Unexpected exception in receive task") 328 | await asyncio.sleep(1) 329 | 330 | def _receive_message(self, message): 331 | if message is not None: 332 | name = message["channel"] 333 | data = message["data"] 334 | if isinstance(name, bytes): 335 | name = name.decode() 336 | if name in self.channel_layer.channels: 337 | self.channel_layer.channels[name].put_nowait(data) 338 | elif name in self.channel_layer.groups: 339 | for channel_name in self.channel_layer.groups[name]: 340 | if channel_name in self.channel_layer.channels: 341 | self.channel_layer.channels[channel_name].put_nowait(data) 342 | 343 | def _ensure_redis(self): 344 | if self._redis is None: 345 | pool = create_pool(self.host) 346 | self._redis = aioredis.Redis(connection_pool=pool) 347 | self._pubsub = self._redis.pubsub() 348 | 349 | def _ensure_receiver(self): 350 | if self._receive_task is None: 351 | self._receive_task = asyncio.ensure_future(self._do_receiving()) 352 | -------------------------------------------------------------------------------- /channels_redis/serializers.py: -------------------------------------------------------------------------------- 1 | import abc 2 | import base64 3 | import hashlib 4 | import json 5 | import random 6 | 7 | try: 8 | from cryptography.fernet import Fernet, MultiFernet 9 | except 
ImportError: 10 | MultiFernet = Fernet = None 11 | 12 | 13 | class SerializerDoesNotExist(KeyError): 14 | """The requested serializer was not found.""" 15 | 16 | 17 | class BaseMessageSerializer(abc.ABC): 18 | def __init__( 19 | self, 20 | symmetric_encryption_keys=None, 21 | random_prefix_length=0, 22 | expiry=None, 23 | ): 24 | self.random_prefix_length = random_prefix_length 25 | self.expiry = expiry 26 | # Set up any encryption objects 27 | self._setup_encryption(symmetric_encryption_keys) 28 | 29 | def _setup_encryption(self, symmetric_encryption_keys): 30 | # See if we can do encryption if they asked 31 | if symmetric_encryption_keys: 32 | if isinstance(symmetric_encryption_keys, (str, bytes)): 33 | raise ValueError( 34 | "symmetric_encryption_keys must be a list of possible keys" 35 | ) 36 | if MultiFernet is None: 37 | raise ValueError( 38 | "Cannot run with encryption without 'cryptography' installed." 39 | ) 40 | sub_fernets = [self.make_fernet(key) for key in symmetric_encryption_keys] 41 | self.crypter = MultiFernet(sub_fernets) 42 | else: 43 | self.crypter = None 44 | 45 | def make_fernet(self, key): 46 | """ 47 | Given a single encryption key, returns a Fernet instance using it. 48 | """ 49 | if Fernet is None: 50 | raise ValueError( 51 | "Cannot run with encryption without 'cryptography' installed." 52 | ) 53 | 54 | if isinstance(key, str): 55 | key = key.encode("utf-8") 56 | formatted_key = base64.urlsafe_b64encode(hashlib.sha256(key).digest()) 57 | return Fernet(formatted_key) 58 | 59 | @abc.abstractmethod 60 | def as_bytes(self, message, *args, **kwargs): 61 | raise NotImplementedError 62 | 63 | @abc.abstractmethod 64 | def from_bytes(self, message, *args, **kwargs): 65 | raise NotImplementedError 66 | 67 | def serialize(self, message): 68 | """ 69 | Serializes message to a byte string. 70 | """ 71 | message = self.as_bytes(message) 72 | if self.crypter: 73 | message = self.crypter.encrypt(message) 74 | 75 | if self.random_prefix_length > 0: 76 | # provide random prefix 77 | message = ( 78 | random.getrandbits(8 * self.random_prefix_length).to_bytes( 79 | self.random_prefix_length, "big" 80 | ) 81 | + message 82 | ) 83 | return message 84 | 85 | def deserialize(self, message): 86 | """ 87 | Deserializes from a byte string. 
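
        A round-trip sketch, using the default msgpack serializer obtained
        from the registry defined at the bottom of this module:

            serializer = registry.get_serializer("msgpack")
            data = serializer.serialize({"type": "hello"})
            assert serializer.deserialize(data) == {"type": "hello"}
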
88 | """ 89 | if self.random_prefix_length > 0: 90 | # Removes the random prefix 91 | message = message[self.random_prefix_length :] # noqa: E203 92 | 93 | if self.crypter: 94 | ttl = self.expiry if self.expiry is None else self.expiry + 10 95 | message = self.crypter.decrypt(message, ttl) 96 | return self.from_bytes(message) 97 | 98 | 99 | class MissingSerializer(BaseMessageSerializer): 100 | exception = None 101 | 102 | def __init__(self, *args, **kwargs): 103 | raise self.exception 104 | 105 | 106 | class JSONSerializer(BaseMessageSerializer): 107 | # json module by default always produces str while loads accepts bytes 108 | # thus we must force bytes conversion 109 | # we use UTF-8 since it is the recommended encoding for interoperability 110 | # see https://docs.python.org/3/library/json.html#character-encodings 111 | def as_bytes(self, message, *args, **kwargs): 112 | message = json.dumps(message, *args, **kwargs) 113 | return message.encode("utf-8") 114 | 115 | from_bytes = staticmethod(json.loads) 116 | 117 | 118 | # code ready for a future in which msgpack may become an optional dependency 119 | try: 120 | import msgpack 121 | except ImportError as exc: 122 | 123 | class MsgPackSerializer(MissingSerializer): 124 | exception = exc 125 | 126 | else: 127 | 128 | class MsgPackSerializer(BaseMessageSerializer): 129 | as_bytes = staticmethod(msgpack.packb) 130 | from_bytes = staticmethod(msgpack.unpackb) 131 | 132 | 133 | class SerializersRegistry: 134 | """ 135 | Serializers registry inspired by that of ``django.core.serializers``. 136 | """ 137 | 138 | def __init__(self): 139 | self._registry = {} 140 | 141 | def register_serializer(self, format, serializer_class): 142 | """ 143 | Register a new serializer for given format 144 | """ 145 | assert isinstance(serializer_class, type) and ( 146 | issubclass(serializer_class, BaseMessageSerializer) 147 | or ( 148 | hasattr(serializer_class, "serialize") 149 | and hasattr(serializer_class, "deserialize") 150 | ) 151 | ), """ 152 | `serializer_class` should be a class which implements `serialize` and `deserialize` method 153 | or a subclass of `channels_redis.serializers.BaseMessageSerializer` 154 | """ 155 | 156 | self._registry[format] = serializer_class 157 | 158 | def get_serializer(self, format, *args, **kwargs): 159 | try: 160 | serializer_class = self._registry[format] 161 | except KeyError: 162 | raise SerializerDoesNotExist(format) 163 | 164 | return serializer_class(*args, **kwargs) 165 | 166 | 167 | registry = SerializersRegistry() 168 | registry.register_serializer("json", JSONSerializer) 169 | registry.register_serializer("msgpack", MsgPackSerializer) 170 | -------------------------------------------------------------------------------- /channels_redis/utils.py: -------------------------------------------------------------------------------- 1 | import binascii 2 | import types 3 | 4 | from redis import asyncio as aioredis 5 | 6 | 7 | def _consistent_hash(value, ring_size): 8 | """ 9 | Maps the value to a node value between 0 and 4095 10 | using CRC, then down to one of the ring nodes. 11 | """ 12 | if ring_size == 1: 13 | # Avoid the overhead of hashing and modulo when it is unnecessary. 
14 |         return 0
15 | 
16 |     if isinstance(value, str):
17 |         value = value.encode("utf8")
18 |     bigval = binascii.crc32(value) & 0xFFF
19 |     ring_divisor = 4096 / float(ring_size)
20 |     return int(bigval / ring_divisor)
21 | 
22 | 
23 | def _wrap_close(proxy, loop):
24 |     original_impl = loop.close
25 | 
26 |     def _wrapper(self, *args, **kwargs):
27 |         if loop in proxy._layers:
28 |             layer = proxy._layers[loop]
29 |             del proxy._layers[loop]
30 |             loop.run_until_complete(layer.flush())
31 | 
32 |         self.close = original_impl
33 |         return self.close(*args, **kwargs)
34 | 
35 |     loop.close = types.MethodType(_wrapper, loop)
36 | 
37 | 
38 | async def _close_redis(connection):
39 |     """
40 |     Handle compatibility with redis-py 4.x and 5.x close methods
41 |     """
42 |     try:
43 |         await connection.aclose(close_connection_pool=True)
44 |     except AttributeError:
45 |         await connection.close(close_connection_pool=True)
46 | 
47 | 
48 | def decode_hosts(hosts):
49 |     """
50 |     Takes the value of the "hosts" argument and returns
51 |     a list of kwargs to use for the Redis connection constructor.
52 |     """
53 |     # If no hosts were provided, return a default value
54 |     if not hosts:
55 |         return [{"address": "redis://localhost:6379"}]
56 |     # If they provided just a string, scold them.
57 |     if isinstance(hosts, (str, bytes)):
58 |         raise ValueError(
59 |             "You must pass a list of Redis hosts, even if there is only one."
60 |         )
61 | 
62 |     # Decode each hosts entry into a kwargs dict
63 |     result = []
64 |     for entry in hosts:
65 |         if isinstance(entry, dict):
66 |             result.append(entry)
67 |         elif isinstance(entry, (tuple, list)):
68 |             result.append({"host": entry[0], "port": entry[1]})
69 |         else:
70 |             result.append({"address": entry})
71 |     return result
72 | 
73 | 
74 | def create_pool(host):
75 |     """
76 |     Takes the value of the "host" argument and returns a suitable connection pool
77 |     for the corresponding redis instance.
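
    A hedged usage sketch with a plain-URL host entry (Sentinel entries take
    the "sentinels"/"master_name" form instead):

        pool = create_pool({"address": "redis://localhost:6379"})
        client = aioredis.Redis(connection_pool=pool)
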
78 | """ 79 | # avoid side-effects from modifying host 80 | host = host.copy() 81 | if "address" in host: 82 | address = host.pop("address") 83 | return aioredis.ConnectionPool.from_url(address, **host) 84 | 85 | master_name = host.pop("master_name", None) 86 | if master_name is not None: 87 | sentinels = host.pop("sentinels") 88 | sentinel_kwargs = host.pop("sentinel_kwargs", None) 89 | return aioredis.sentinel.SentinelConnectionPool( 90 | master_name, 91 | aioredis.sentinel.Sentinel(sentinels, sentinel_kwargs=sentinel_kwargs), 92 | **host 93 | ) 94 | 95 | return aioredis.ConnectionPool(**host) 96 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [tool:pytest] 2 | addopts = -p no:django 3 | testpaths = tests 4 | asyncio_mode = auto 5 | timeout = 10 6 | 7 | [flake8] 8 | exclude = venv/*,tox/*,specs/*,build/* 9 | ignore = E123,E128,E266,E402,W503,E731,W601 10 | max-line-length = 119 11 | 12 | [isort] 13 | profile = black 14 | known_first_party = channels, asgiref, channels_redis, daphne 15 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from os.path import dirname, join 2 | 3 | from setuptools import find_packages, setup 4 | 5 | from channels_redis import __version__ 6 | 7 | # We use the README as the long_description 8 | readme = open(join(dirname(__file__), "README.rst")).read() 9 | 10 | crypto_requires = ["cryptography>=1.3.0"] 11 | 12 | test_requires = crypto_requires + [ 13 | "pytest", 14 | "pytest-asyncio", 15 | "async-timeout", 16 | "pytest-timeout", 17 | ] 18 | 19 | 20 | setup( 21 | name="channels_redis", 22 | version=__version__, 23 | url="http://github.com/django/channels_redis/", 24 | author="Django Software Foundation", 25 | author_email="foundation@djangoproject.com", 26 | description="Redis-backed ASGI channel layer implementation", 27 | long_description=readme, 28 | license="BSD", 29 | zip_safe=False, 30 | packages=find_packages(exclude=["tests"]), 31 | include_package_data=True, 32 | python_requires=">=3.8", 33 | install_requires=[ 34 | "redis>=4.6", 35 | "msgpack~=1.0", 36 | "asgiref>=3.2.10,<4", 37 | "channels", 38 | ], 39 | extras_require={"cryptography": crypto_requires, "tests": test_requires}, 40 | ) 41 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/django/channels_redis/662b90dce03e7592a03b86a8c7bd36f281d2c7a3/tests/__init__.py -------------------------------------------------------------------------------- /tests/test_core.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import random 3 | 4 | import async_timeout 5 | import pytest 6 | 7 | from asgiref.sync import async_to_sync 8 | from channels_redis.core import ChannelFull, RedisChannelLayer 9 | 10 | TEST_HOSTS = ["redis://localhost:6379"] 11 | 12 | MULTIPLE_TEST_HOSTS = [ 13 | "redis://localhost:6379/0", 14 | "redis://localhost:6379/1", 15 | "redis://localhost:6379/2", 16 | "redis://localhost:6379/3", 17 | "redis://localhost:6379/4", 18 | "redis://localhost:6379/5", 19 | "redis://localhost:6379/6", 20 | "redis://localhost:6379/7", 21 | "redis://localhost:6379/8", 22 | "redis://localhost:6379/9", 23 | ] 24 | 25 | 26 | async def 
send_three_messages_with_delay(channel_name, channel_layer, delay): 27 | await channel_layer.send(channel_name, {"type": "test.message", "text": "First!"}) 28 | 29 | await asyncio.sleep(delay) 30 | 31 | await channel_layer.send(channel_name, {"type": "test.message", "text": "Second!"}) 32 | 33 | await asyncio.sleep(delay) 34 | 35 | await channel_layer.send(channel_name, {"type": "test.message", "text": "Third!"}) 36 | 37 | 38 | async def group_send_three_messages_with_delay(group_name, channel_layer, delay): 39 | await channel_layer.group_send( 40 | group_name, {"type": "test.message", "text": "First!"} 41 | ) 42 | 43 | await asyncio.sleep(delay) 44 | 45 | await channel_layer.group_send( 46 | group_name, {"type": "test.message", "text": "Second!"} 47 | ) 48 | 49 | await asyncio.sleep(delay) 50 | 51 | await channel_layer.group_send( 52 | group_name, {"type": "test.message", "text": "Third!"} 53 | ) 54 | 55 | 56 | @pytest.fixture() 57 | async def channel_layer(): 58 | """ 59 | Channel layer fixture that flushes automatically. 60 | """ 61 | channel_layer = RedisChannelLayer( 62 | hosts=TEST_HOSTS, capacity=3, channel_capacity={"tiny": 1} 63 | ) 64 | yield channel_layer 65 | await channel_layer.flush() 66 | 67 | 68 | @pytest.fixture() 69 | async def channel_layer_multiple_hosts(): 70 | """ 71 | Channel layer fixture that flushes automatically. 72 | """ 73 | channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=3) 74 | yield channel_layer 75 | await channel_layer.flush() 76 | 77 | 78 | @pytest.mark.asyncio 79 | async def test_send_receive(channel_layer): 80 | """ 81 | Makes sure we can send a message to a normal channel then receive it. 82 | """ 83 | await channel_layer.send( 84 | "test-channel-1", {"type": "test.message", "text": "Ahoy-hoy!"} 85 | ) 86 | message = await channel_layer.receive("test-channel-1") 87 | assert message["type"] == "test.message" 88 | assert message["text"] == "Ahoy-hoy!" 89 | 90 | 91 | @pytest.mark.parametrize("channel_layer", [None]) # Fixture can't handle sync 92 | def test_double_receive(channel_layer): 93 | """ 94 | Makes sure we can receive from two different event loops using 95 | process-local channel names. 96 | """ 97 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS, capacity=3) 98 | 99 | # Aioredis connections can't be used from different event loops, so 100 | # send and close need to be done in the same async_to_sync call. 
101 |     async def send_and_close(*args, **kwargs):
102 |         await channel_layer.send(*args, **kwargs)
103 |         await channel_layer.close_pools()
104 | 
105 |     channel_name_1 = async_to_sync(channel_layer.new_channel)()
106 |     channel_name_2 = async_to_sync(channel_layer.new_channel)()
107 |     async_to_sync(send_and_close)(channel_name_1, {"type": "test.message.1"})
108 |     async_to_sync(send_and_close)(channel_name_2, {"type": "test.message.2"})
109 | 
110 |     # Make tasks to listen on the loops
111 |     async def listen1():
112 |         message = await channel_layer.receive(channel_name_1)
113 |         assert message["type"] == "test.message.1"
114 |         await channel_layer.close_pools()
115 | 
116 |     async def listen2():
117 |         message = await channel_layer.receive(channel_name_2)
118 |         assert message["type"] == "test.message.2"
119 |         await channel_layer.close_pools()
120 | 
121 |     # Run them inside threads
122 |     async_to_sync(listen2)()
123 |     async_to_sync(listen1)()
124 |     # Clean up
125 |     async_to_sync(channel_layer.flush)()
126 | 
127 | 
128 | @pytest.mark.asyncio
129 | async def test_send_capacity(channel_layer):
130 |     """
131 |     Makes sure we get ChannelFull when we hit the send capacity
132 |     """
133 |     await channel_layer.send("test-channel-1", {"type": "test.message"})
134 |     await channel_layer.send("test-channel-1", {"type": "test.message"})
135 |     await channel_layer.send("test-channel-1", {"type": "test.message"})
136 |     with pytest.raises(ChannelFull):
137 |         await channel_layer.send("test-channel-1", {"type": "test.message"})
138 | 
139 | 
140 | @pytest.mark.asyncio
141 | async def test_send_specific_capacity(channel_layer):
142 |     """
143 |     Makes sure we get ChannelFull when we hit the send capacity on a specific channel
144 |     """
145 |     custom_channel_layer = RedisChannelLayer(
146 |         hosts=TEST_HOSTS, capacity=3, channel_capacity={"one": 1}
147 |     )
148 |     await custom_channel_layer.send("one", {"type": "test.message"})
149 |     with pytest.raises(ChannelFull):
150 |         await custom_channel_layer.send("one", {"type": "test.message"})
151 |     await custom_channel_layer.flush()
152 | 
153 | 
154 | @pytest.mark.asyncio
155 | async def test_process_local_send_receive(channel_layer):
156 |     """
157 |     Makes sure we can send a message to a process-local channel then receive it.
158 |     """
159 |     channel_name = await channel_layer.new_channel()
160 |     await channel_layer.send(
161 |         channel_name, {"type": "test.message", "text": "Local only please"}
162 |     )
163 |     message = await channel_layer.receive(channel_name)
164 |     assert message["type"] == "test.message"
165 |     assert message["text"] == "Local only please"
166 | 
167 | 
168 | @pytest.mark.asyncio
169 | async def test_multi_send_receive(channel_layer):
170 |     """
171 |     Tests overlapping sends and receives, and ordering.
172 |     """
173 |     channel_layer = RedisChannelLayer(hosts=TEST_HOSTS)
174 |     await channel_layer.send("test-channel-3", {"type": "message.1"})
175 |     await channel_layer.send("test-channel-3", {"type": "message.2"})
176 |     await channel_layer.send("test-channel-3", {"type": "message.3"})
177 |     assert (await channel_layer.receive("test-channel-3"))["type"] == "message.1"
178 |     assert (await channel_layer.receive("test-channel-3"))["type"] == "message.2"
179 |     assert (await channel_layer.receive("test-channel-3"))["type"] == "message.3"
180 |     await channel_layer.flush()
181 | 
182 | 
183 | @pytest.mark.asyncio
184 | async def test_reject_bad_channel(channel_layer):
185 |     """
186 |     Makes sure sending/receiving on an invalid channel name fails.
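
    A sketch of the boundary being tested (names follow ASGI's channel name
    rules; the second call is expected to raise):

        await channel_layer.send("test-channel-1", {"type": "ok"})  # accepted
        await channel_layer.send("=+135!", {"type": "nope"})  # raises TypeError
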
187 | """ 188 | with pytest.raises(TypeError): 189 | await channel_layer.send("=+135!", {"type": "foom"}) 190 | with pytest.raises(TypeError): 191 | await channel_layer.receive("=+135!") 192 | 193 | 194 | @pytest.mark.asyncio 195 | async def test_reject_bad_client_prefix(channel_layer): 196 | """ 197 | Makes sure receiving on a non-prefixed local channel is not allowed. 198 | """ 199 | with pytest.raises(AssertionError): 200 | await channel_layer.receive("not-client-prefix!local_part") 201 | 202 | 203 | @pytest.mark.asyncio 204 | async def test_groups_basic(channel_layer): 205 | """ 206 | Tests basic group operation. 207 | """ 208 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) 209 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") 210 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") 211 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") 212 | await channel_layer.group_add("test-group", channel_name1) 213 | await channel_layer.group_add("test-group", channel_name2) 214 | await channel_layer.group_add("test-group", channel_name3) 215 | await channel_layer.group_discard("test-group", channel_name2) 216 | await channel_layer.group_send("test-group", {"type": "message.1"}) 217 | # Make sure we get the message on the two channels that were in 218 | async with async_timeout.timeout(1): 219 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1" 220 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1" 221 | # Make sure the removed channel did not get the message 222 | with pytest.raises(asyncio.TimeoutError): 223 | async with async_timeout.timeout(1): 224 | await channel_layer.receive(channel_name2) 225 | await channel_layer.flush() 226 | 227 | 228 | @pytest.mark.asyncio 229 | async def test_groups_channel_full(channel_layer): 230 | """ 231 | Tests that group_send ignores ChannelFull 232 | """ 233 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) 234 | await channel_layer.group_add("test-group", "test-gr-chan-1") 235 | await channel_layer.group_send("test-group", {"type": "message.1"}) 236 | await channel_layer.group_send("test-group", {"type": "message.1"}) 237 | await channel_layer.group_send("test-group", {"type": "message.1"}) 238 | await channel_layer.group_send("test-group", {"type": "message.1"}) 239 | await channel_layer.group_send("test-group", {"type": "message.1"}) 240 | await channel_layer.flush() 241 | 242 | 243 | @pytest.mark.asyncio 244 | async def test_groups_multiple_hosts(channel_layer_multiple_hosts): 245 | """ 246 | Tests advanced group operation with multiple hosts. 
247 | """ 248 | channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100) 249 | channel_name1 = await channel_layer.new_channel(prefix="channel1") 250 | channel_name2 = await channel_layer.new_channel(prefix="channel2") 251 | channel_name3 = await channel_layer.new_channel(prefix="channel3") 252 | await channel_layer.group_add("test-group", channel_name1) 253 | await channel_layer.group_add("test-group", channel_name2) 254 | await channel_layer.group_add("test-group", channel_name3) 255 | await channel_layer.group_discard("test-group", channel_name2) 256 | await channel_layer.group_send("test-group", {"type": "message.1"}) 257 | await channel_layer.group_send("test-group", {"type": "message.1"}) 258 | 259 | # Make sure we get the message on the two channels that were in 260 | async with async_timeout.timeout(1): 261 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1" 262 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1" 263 | 264 | with pytest.raises(asyncio.TimeoutError): 265 | async with async_timeout.timeout(1): 266 | await channel_layer.receive(channel_name2) 267 | 268 | await channel_layer.flush() 269 | 270 | 271 | @pytest.mark.asyncio 272 | async def test_groups_same_prefix(channel_layer): 273 | """ 274 | Tests group_send with multiple channels with same channel prefix 275 | """ 276 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS) 277 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan") 278 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan") 279 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan") 280 | await channel_layer.group_add("test-group", channel_name1) 281 | await channel_layer.group_add("test-group", channel_name2) 282 | await channel_layer.group_add("test-group", channel_name3) 283 | await channel_layer.group_send("test-group", {"type": "message.1"}) 284 | 285 | # Make sure we get the message on the channels that were in 286 | async with async_timeout.timeout(1): 287 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1" 288 | assert (await channel_layer.receive(channel_name2))["type"] == "message.1" 289 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1" 290 | 291 | await channel_layer.flush() 292 | 293 | 294 | @pytest.mark.parametrize( 295 | "num_channels,timeout", 296 | [ 297 | (1, 1), # Edge cases - make sure we can send to a single channel 298 | (10, 1), 299 | (100, 10), 300 | ], 301 | ) 302 | @pytest.mark.asyncio 303 | async def test_groups_multiple_hosts_performance( 304 | channel_layer_multiple_hosts, num_channels, timeout 305 | ): 306 | """ 307 | Tests advanced group operation: can send efficiently to multiple channels 308 | with multiple hosts within a certain timeout 309 | """ 310 | channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100) 311 | 312 | channels = [] 313 | for i in range(0, num_channels): 314 | channel = await channel_layer.new_channel(prefix="channel%s" % i) 315 | await channel_layer.group_add("test-group", channel) 316 | channels.append(channel) 317 | 318 | async with async_timeout.timeout(timeout): 319 | await channel_layer.group_send("test-group", {"type": "message.1"}) 320 | 321 | # Make sure we get the message all the channels 322 | async with async_timeout.timeout(timeout): 323 | for channel in channels: 324 | assert (await channel_layer.receive(channel))["type"] == "message.1" 325 | 326 | await channel_layer.flush() 327 | 328 | 329 | 
329 | @pytest.mark.asyncio
330 | async def test_group_send_capacity(channel_layer, caplog):
331 |     """
332 |     Makes sure we don't group_send messages to channels that are over capacity.
333 |     Make sure the number of channels over capacity is logged, to help debug errors.
334 |     """
335 | 
336 |     channel = await channel_layer.new_channel()
337 |     await channel_layer.group_add("test-group", channel)
338 | 
339 |     await channel_layer.group_send("test-group", {"type": "message.1"})
340 |     await channel_layer.group_send("test-group", {"type": "message.2"})
341 |     await channel_layer.group_send("test-group", {"type": "message.3"})
342 |     await channel_layer.group_send("test-group", {"type": "message.4"})
343 | 
344 |     # We should receive the first 3 messages
345 |     assert (await channel_layer.receive(channel))["type"] == "message.1"
346 |     assert (await channel_layer.receive(channel))["type"] == "message.2"
347 |     assert (await channel_layer.receive(channel))["type"] == "message.3"
348 | 
349 |     # Make sure we do NOT receive message 4
350 |     with pytest.raises(asyncio.TimeoutError):
351 |         async with async_timeout.timeout(1):
352 |             await channel_layer.receive(channel)
353 | 
354 |     # Make sure the number of channels over capacity is logged
355 |     for record in caplog.records:
356 |         assert record.levelname == "INFO"
357 |         assert (
358 |             record.getMessage() == "1 of 1 channels over capacity in group test-group"
359 |         )
360 | 
361 | 
362 | @pytest.mark.asyncio
363 | async def test_group_send_capacity_multiple_channels(channel_layer, caplog):
364 |     """
365 |     Makes sure we don't group_send messages to channels that are over capacity.
366 |     Make sure the number of channels over capacity is logged, to help debug errors.
367 |     """
368 | 
369 |     channel_1 = await channel_layer.new_channel()
370 |     channel_2 = await channel_layer.new_channel(prefix="channel_2")
371 |     await channel_layer.group_add("test-group", channel_1)
372 |     await channel_layer.group_add("test-group", channel_2)
373 | 
374 |     # Let's put channel_2 over capacity
375 |     await channel_layer.send(channel_2, {"type": "message.0"})
376 | 
377 |     await channel_layer.group_send("test-group", {"type": "message.1"})
378 |     await channel_layer.group_send("test-group", {"type": "message.2"})
379 |     await channel_layer.group_send("test-group", {"type": "message.3"})
380 | 
381 |     # Channel_1 should receive all 3 group messages
382 |     assert (await channel_layer.receive(channel_1))["type"] == "message.1"
383 |     assert (await channel_layer.receive(channel_1))["type"] == "message.2"
384 |     assert (await channel_layer.receive(channel_1))["type"] == "message.3"
385 | 
386 |     # Channel_2 should receive the first message + 2 group messages
387 |     assert (await channel_layer.receive(channel_2))["type"] == "message.0"
388 |     assert (await channel_layer.receive(channel_2))["type"] == "message.1"
389 |     assert (await channel_layer.receive(channel_2))["type"] == "message.2"
390 | 
391 |     # Make sure channel_2 does not receive the 3rd group message
392 |     with pytest.raises(asyncio.TimeoutError):
393 |         async with async_timeout.timeout(1):
394 |             await channel_layer.receive(channel_2)
395 | 
396 |     # Make sure the number of channels over capacity is logged
397 |     for record in caplog.records:
398 |         assert record.levelname == "INFO"
399 |         assert (
400 |             record.getMessage() == "1 of 2 channels over capacity in group test-group"
401 |         )
402 | 
403 | 
404 | def test_repeated_group_send_with_async_to_sync(channel_layer):
405 |     """
406 |     Makes sure repeated group_send calls wrapped in async_to_sync
407 |     don't raise RuntimeError.
408 |     """
409 |     channel_layer = RedisChannelLayer(hosts=TEST_HOSTS, capacity=3)
410 | 
411 |     try:
412 |         async_to_sync(channel_layer.group_send)(
413 |             "channel_name_1", {"type": "test.message.1"}
414 |         )
415 |         async_to_sync(channel_layer.group_send)(
416 |             "channel_name_2", {"type": "test.message.2"}
417 |         )
418 |     except RuntimeError as exc:
419 |         pytest.fail(f"repeated async_to_sync wrapped group_send calls raised {exc}")
420 | 
421 | 
422 | @pytest.mark.xfail(
423 |     reason="""
424 | Fails with error in redis-py: int() argument must be a string, a bytes-like
425 | object or a real number, not 'NoneType'. Refs: #348
426 | """
427 | )
428 | @pytest.mark.asyncio
429 | async def test_receive_cancel(channel_layer):
430 |     """
431 |     Makes sure we can cancel a receive without blocking
432 |     """
433 |     channel_layer = RedisChannelLayer(capacity=30)
434 |     channel = await channel_layer.new_channel()
435 |     delay = 0
436 |     while delay < 0.01:
437 |         await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"})
438 | 
439 |         task = asyncio.ensure_future(channel_layer.receive(channel))
440 |         await asyncio.sleep(delay)
441 |         task.cancel()
442 |         delay += 0.0001
443 | 
444 |         try:
445 |             await asyncio.wait_for(task, None)
446 |         except asyncio.CancelledError:
447 |             pass
448 | 
449 | 
450 | @pytest.mark.asyncio
451 | async def test_random_reset__channel_name(channel_layer):
452 |     """
453 |     Makes sure resetting random seed does not make us reuse channel names.
454 |     """
455 | 
456 |     channel_layer = RedisChannelLayer()
457 |     random.seed(1)
458 |     channel_name_1 = await channel_layer.new_channel()
459 |     random.seed(1)
460 |     channel_name_2 = await channel_layer.new_channel()
461 | 
462 |     assert channel_name_1 != channel_name_2
463 | 
464 | 
465 | @pytest.mark.asyncio
466 | async def test_random_reset__client_prefix(channel_layer):
467 |     """
468 |     Makes sure resetting random seed does not make us reuse client_prefixes.
469 |     """
470 | 
471 |     random.seed(1)
472 |     channel_layer_1 = RedisChannelLayer()
473 |     random.seed(1)
474 |     channel_layer_2 = RedisChannelLayer()
475 |     assert channel_layer_1.client_prefix != channel_layer_2.client_prefix
476 | 
477 | 
478 | @pytest.mark.asyncio
479 | async def test_message_expiry__earliest_message_expires(channel_layer):
480 |     expiry = 3
481 |     delay = 2
482 |     channel_layer = RedisChannelLayer(expiry=expiry)
483 |     channel_name = await channel_layer.new_channel()
484 | 
485 |     task = asyncio.ensure_future(
486 |         send_three_messages_with_delay(channel_name, channel_layer, delay)
487 |     )
488 |     await asyncio.wait_for(task, None)
489 | 
490 |     # the first message should have expired, we should only see the second message and the third
491 |     message = await channel_layer.receive(channel_name)
492 |     assert message["type"] == "test.message"
493 |     assert message["text"] == "Second!"
494 | 
495 |     message = await channel_layer.receive(channel_name)
496 |     assert message["type"] == "test.message"
497 |     assert message["text"] == "Third!"
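
# How the expiry knob exercised above is set from a Django settings file
# (a sketch; the CHANNEL_LAYERS shape follows the channels documentation):
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": ["redis://localhost:6379"], "expiry": 3},
    },
}
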
498 | 499 | # Make sure there's no third message even out of order 500 | with pytest.raises(asyncio.TimeoutError): 501 | async with async_timeout.timeout(1): 502 | await channel_layer.receive(channel_name) 503 | 504 | 505 | @pytest.mark.asyncio 506 | async def test_message_expiry__all_messages_under_expiration_time(channel_layer): 507 | expiry = 3 508 | delay = 1 509 | channel_layer = RedisChannelLayer(expiry=expiry) 510 | channel_name = await channel_layer.new_channel() 511 | 512 | task = asyncio.ensure_future( 513 | send_three_messages_with_delay(channel_name, channel_layer, delay) 514 | ) 515 | await asyncio.wait_for(task, None) 516 | 517 | # expiry = 3, total delay under 3, all messages there 518 | message = await channel_layer.receive(channel_name) 519 | assert message["type"] == "test.message" 520 | assert message["text"] == "First!" 521 | 522 | message = await channel_layer.receive(channel_name) 523 | assert message["type"] == "test.message" 524 | assert message["text"] == "Second!" 525 | 526 | message = await channel_layer.receive(channel_name) 527 | assert message["type"] == "test.message" 528 | assert message["text"] == "Third!" 529 | 530 | 531 | @pytest.mark.asyncio 532 | async def test_message_expiry__group_send(channel_layer): 533 | expiry = 3 534 | delay = 2 535 | channel_layer = RedisChannelLayer(expiry=expiry) 536 | channel_name = await channel_layer.new_channel() 537 | 538 | await channel_layer.group_add("test-group", channel_name) 539 | 540 | task = asyncio.ensure_future( 541 | group_send_three_messages_with_delay("test-group", channel_layer, delay) 542 | ) 543 | await asyncio.wait_for(task, None) 544 | 545 | # the first message should have expired, we should only see the second message and the third 546 | message = await channel_layer.receive(channel_name) 547 | assert message["type"] == "test.message" 548 | assert message["text"] == "Second!" 549 | 550 | message = await channel_layer.receive(channel_name) 551 | assert message["type"] == "test.message" 552 | assert message["text"] == "Third!" 553 | 554 | # Make sure there's no third message even out of order 555 | with pytest.raises(asyncio.TimeoutError): 556 | async with async_timeout.timeout(1): 557 | await channel_layer.receive(channel_name) 558 | 559 | 560 | @pytest.mark.xfail(reason="Fails with timeout. Refs: #348") 561 | @pytest.mark.asyncio 562 | async def test_message_expiry__group_send__one_channel_expires_message(channel_layer): 563 | expiry = 3 564 | delay = 1 565 | 566 | channel_layer = RedisChannelLayer(expiry=expiry) 567 | channel_1 = await channel_layer.new_channel() 568 | channel_2 = await channel_layer.new_channel(prefix="channel_2") 569 | 570 | await channel_layer.group_add("test-group", channel_1) 571 | await channel_layer.group_add("test-group", channel_2) 572 | 573 | # Let's give channel_1 one additional message and then sleep 574 | await channel_layer.send(channel_1, {"type": "test.message", "text": "Zero!"}) 575 | await asyncio.sleep(2) 576 | 577 | task = asyncio.ensure_future( 578 | group_send_three_messages_with_delay("test-group", channel_layer, delay) 579 | ) 580 | await asyncio.wait_for(task, None) 581 | 582 | # message Zero! was sent about 2 + 1 + 1 seconds ago and it should have expired 583 | message = await channel_layer.receive(channel_1) 584 | assert message["type"] == "test.message" 585 | assert message["text"] == "First!" 586 | 587 | message = await channel_layer.receive(channel_1) 588 | assert message["type"] == "test.message" 589 | assert message["text"] == "Second!" 
590 | 591 | message = await channel_layer.receive(channel_1) 592 | assert message["type"] == "test.message" 593 | assert message["text"] == "Third!" 594 | 595 | # Make sure there's no fourth message even out of order 596 | with pytest.raises(asyncio.TimeoutError): 597 | async with async_timeout.timeout(1): 598 | await channel_layer.receive(channel_1) 599 | 600 | # channel_2 should receive all three messages from group_send 601 | message = await channel_layer.receive(channel_2) 602 | assert message["type"] == "test.message" 603 | assert message["text"] == "First!" 604 | 605 | # the first message should have expired, we should only see the second message and the third 606 | message = await channel_layer.receive(channel_2) 607 | assert message["type"] == "test.message" 608 | assert message["text"] == "Second!" 609 | 610 | message = await channel_layer.receive(channel_2) 611 | assert message["type"] == "test.message" 612 | assert message["text"] == "Third!" 613 | 614 | 615 | def test_default_group_key_format(): 616 | channel_layer = RedisChannelLayer() 617 | group_name = channel_layer._group_key("test_group") 618 | assert group_name == b"asgi:group:test_group" 619 | 620 | 621 | def test_custom_group_key_format(): 622 | channel_layer = RedisChannelLayer(prefix="test_prefix") 623 | group_name = channel_layer._group_key("test_group") 624 | assert group_name == b"test_prefix:group:test_group" 625 | 626 | 627 | def test_receive_buffer_respects_capacity(): 628 | channel_layer = RedisChannelLayer() 629 | buff = channel_layer.receive_buffer["test-group"] 630 | for i in range(10000): 631 | buff.put_nowait(i) 632 | 633 | capacity = 100 634 | assert channel_layer.capacity == capacity 635 | assert buff.full() is True 636 | assert buff.qsize() == capacity 637 | messages = [buff.get_nowait() for _ in range(capacity)] 638 | assert list(range(9900, 10000)) == messages 639 | -------------------------------------------------------------------------------- /tests/test_pubsub.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import inspect 3 | import random 4 | import sys 5 | 6 | import async_timeout 7 | import pytest 8 | 9 | from asgiref.sync import async_to_sync 10 | from channels_redis.pubsub import RedisPubSubChannelLayer 11 | from channels_redis.utils import _close_redis 12 | 13 | TEST_HOSTS = ["redis://localhost:6379"] 14 | 15 | 16 | @pytest.fixture() 17 | async def channel_layer(): 18 | """ 19 | Channel layer fixture that flushes automatically. 20 | """ 21 | channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) 22 | yield channel_layer 23 | async with async_timeout.timeout(1): 24 | await channel_layer.flush() 25 | 26 | 27 | @pytest.fixture() 28 | async def other_channel_layer(): 29 | """ 30 | Channel layer fixture that flushes automatically. 31 | """ 32 | channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) 33 | yield channel_layer 34 | await channel_layer.flush() 35 | 36 | 37 | def test_layer_close(): 38 | """ 39 | If the channel layer does not close properly there will be a "Task was destroyed but it is pending!" warning at 40 | process exit. 
41 | """ 42 | 43 | async def do_something_with_layer(): 44 | channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) 45 | await channel_layer.send( 46 | "TestChannel", {"type": "test.message", "text": "Ahoy-hoy!"} 47 | ) 48 | 49 | async_to_sync(do_something_with_layer)() 50 | 51 | 52 | @pytest.mark.asyncio 53 | async def test_send_receive(channel_layer): 54 | """ 55 | Makes sure we can send a message to a normal channel then receive it. 56 | """ 57 | channel = await channel_layer.new_channel() 58 | await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"}) 59 | message = await channel_layer.receive(channel) 60 | assert message["type"] == "test.message" 61 | assert message["text"] == "Ahoy-hoy!" 62 | 63 | 64 | def test_send_receive_sync(channel_layer, event_loop): 65 | _await = event_loop.run_until_complete 66 | channel = _await(channel_layer.new_channel()) 67 | async_to_sync(channel_layer.send, force_new_loop=True)( 68 | channel, {"type": "test.message", "text": "Ahoy-hoy!"} 69 | ) 70 | message = _await(channel_layer.receive(channel)) 71 | assert message["type"] == "test.message" 72 | assert message["text"] == "Ahoy-hoy!" 73 | 74 | 75 | @pytest.mark.asyncio 76 | async def test_multi_send_receive(channel_layer): 77 | """ 78 | Tests overlapping sends and receives, and ordering. 79 | """ 80 | channel = await channel_layer.new_channel() 81 | await channel_layer.send(channel, {"type": "message.1"}) 82 | await channel_layer.send(channel, {"type": "message.2"}) 83 | await channel_layer.send(channel, {"type": "message.3"}) 84 | assert (await channel_layer.receive(channel))["type"] == "message.1" 85 | assert (await channel_layer.receive(channel))["type"] == "message.2" 86 | assert (await channel_layer.receive(channel))["type"] == "message.3" 87 | 88 | 89 | def test_multi_send_receive_sync(channel_layer, event_loop): 90 | _await = event_loop.run_until_complete 91 | channel = _await(channel_layer.new_channel()) 92 | send = async_to_sync(channel_layer.send) 93 | send(channel, {"type": "message.1"}) 94 | send(channel, {"type": "message.2"}) 95 | send(channel, {"type": "message.3"}) 96 | assert _await(channel_layer.receive(channel))["type"] == "message.1" 97 | assert _await(channel_layer.receive(channel))["type"] == "message.2" 98 | assert _await(channel_layer.receive(channel))["type"] == "message.3" 99 | 100 | 101 | @pytest.mark.asyncio 102 | async def test_groups_basic(channel_layer): 103 | """ 104 | Tests basic group operation. 
105 | """ 106 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") 107 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") 108 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") 109 | await channel_layer.group_add("test-group", channel_name1) 110 | await channel_layer.group_add("test-group", channel_name2) 111 | await channel_layer.group_add("test-group", channel_name3) 112 | await channel_layer.group_discard("test-group", channel_name2) 113 | await channel_layer.group_send("test-group", {"type": "message.1"}) 114 | # Make sure we get the message on the two channels that were in 115 | async with async_timeout.timeout(1): 116 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1" 117 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1" 118 | # Make sure the removed channel did not get the message 119 | with pytest.raises(asyncio.TimeoutError): 120 | async with async_timeout.timeout(1): 121 | await channel_layer.receive(channel_name2) 122 | 123 | 124 | @pytest.mark.asyncio 125 | async def test_groups_same_prefix(channel_layer): 126 | """ 127 | Tests group_send with multiple channels with same channel prefix 128 | """ 129 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan") 130 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan") 131 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan") 132 | await channel_layer.group_add("test-group", channel_name1) 133 | await channel_layer.group_add("test-group", channel_name2) 134 | await channel_layer.group_add("test-group", channel_name3) 135 | await channel_layer.group_send("test-group", {"type": "message.1"}) 136 | 137 | # Make sure we get the message on the channels that were in 138 | async with async_timeout.timeout(1): 139 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1" 140 | assert (await channel_layer.receive(channel_name2))["type"] == "message.1" 141 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1" 142 | 143 | 144 | @pytest.mark.asyncio 145 | async def test_receive_on_non_owned_general_channel(channel_layer, other_channel_layer): 146 | """ 147 | Tests receive with general channel that is not owned by the layer 148 | """ 149 | receive_started = asyncio.Event() 150 | 151 | async def receive(): 152 | receive_started.set() 153 | return await other_channel_layer.receive("test-channel") 154 | 155 | receive_task = asyncio.create_task(receive()) 156 | await receive_started.wait() 157 | await asyncio.sleep(0.1) # Need to give time for "receive" to subscribe 158 | await channel_layer.send("test-channel", "message.1") 159 | 160 | try: 161 | # Make sure we get the message on the channels that were in 162 | async with async_timeout.timeout(1): 163 | assert await receive_task == "message.1" 164 | finally: 165 | receive_task.cancel() 166 | 167 | 168 | @pytest.mark.asyncio 169 | async def test_random_reset__channel_name(channel_layer): 170 | """ 171 | Makes sure resetting random seed does not make us reuse channel names. 
172 | """ 173 | random.seed(1) 174 | channel_name_1 = await channel_layer.new_channel() 175 | random.seed(1) 176 | channel_name_2 = await channel_layer.new_channel() 177 | 178 | assert channel_name_1 != channel_name_2 179 | 180 | 181 | @pytest.mark.asyncio 182 | async def test_loop_instance_channel_layer_reference(channel_layer): 183 | redis_pub_sub_loop_layer = channel_layer._get_layer() 184 | 185 | assert redis_pub_sub_loop_layer.channel_layer == channel_layer 186 | 187 | 188 | def test_serialize(channel_layer): 189 | """ 190 | Test default serialization method 191 | """ 192 | message = {"a": True, "b": None, "c": {"d": []}} 193 | serialized = channel_layer.serialize(message) 194 | assert isinstance(serialized, bytes) 195 | assert serialized == b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" 196 | 197 | 198 | def test_deserialize(channel_layer): 199 | """ 200 | Test default deserialization method 201 | """ 202 | message = b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" 203 | deserialized = channel_layer.deserialize(message) 204 | 205 | assert isinstance(deserialized, dict) 206 | assert deserialized == {"a": True, "b": None, "c": {"d": []}} 207 | 208 | 209 | def test_multi_event_loop_garbage_collection(channel_layer): 210 | """ 211 | Test loop closure layer flushing and garbage collection 212 | """ 213 | assert len(channel_layer._layers.values()) == 0 214 | async_to_sync(test_send_receive)(channel_layer) 215 | assert len(channel_layer._layers.values()) == 0 216 | 217 | 218 | @pytest.mark.asyncio 219 | async def test_proxied_methods_coroutine_check(channel_layer): 220 | # inspect.iscoroutinefunction does not work for partial functions 221 | # below Python 3.8. 222 | if sys.version_info >= (3, 8): 223 | assert inspect.iscoroutinefunction(channel_layer.send) 224 | 225 | 226 | @pytest.mark.asyncio 227 | async def test_receive_hang(channel_layer): 228 | channel_name = await channel_layer.new_channel(prefix="test-channel") 229 | with pytest.raises(asyncio.TimeoutError): 230 | await asyncio.wait_for(channel_layer.receive(channel_name), timeout=1) 231 | 232 | 233 | @pytest.mark.asyncio 234 | async def test_auto_reconnect(channel_layer): 235 | """ 236 | Tests redis-py reconnect and resubscribe 237 | """ 238 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1") 239 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2") 240 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3") 241 | await channel_layer.group_add("test-group", channel_name1) 242 | await channel_layer.group_add("test-group", channel_name2) 243 | await _close_redis(channel_layer._shards[0]._redis) 244 | await channel_layer.group_add("test-group", channel_name3) 245 | await channel_layer.group_discard("test-group", channel_name2) 246 | await _close_redis(channel_layer._shards[0]._redis) 247 | await asyncio.sleep(1) 248 | await channel_layer.group_send("test-group", {"type": "message.1"}) 249 | # Make sure we get the message on the two channels that were in 250 | async with async_timeout.timeout(5): 251 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1" 252 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1" 253 | # Make sure the removed channel did not get the message 254 | with pytest.raises(asyncio.TimeoutError): 255 | async with async_timeout.timeout(1): 256 | await channel_layer.receive(channel_name2) 257 | 258 | 259 | @pytest.mark.asyncio 260 | async def test_discard_before_add(channel_layer): 261 | channel_name = await 
channel_layer.new_channel(prefix="test-channel") 262 | # Make sure that we can remove a group before it was ever added without crashing. 263 | await channel_layer.group_discard("test-group", channel_name) 264 | -------------------------------------------------------------------------------- /tests/test_pubsub_sentinel.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import random 3 | 4 | import async_timeout 5 | import pytest 6 | 7 | from asgiref.sync import async_to_sync 8 | from channels_redis.pubsub import RedisPubSubChannelLayer 9 | from channels_redis.utils import _close_redis 10 | 11 | SENTINEL_MASTER = "sentinel" 12 | SENTINEL_KWARGS = {"password": "channels_redis"} 13 | TEST_HOSTS = [ 14 | { 15 | "sentinels": [("localhost", 26379)], 16 | "master_name": SENTINEL_MASTER, 17 | "sentinel_kwargs": SENTINEL_KWARGS, 18 | } 19 | ] 20 | 21 | 22 | @pytest.fixture() 23 | async def channel_layer(): 24 | """ 25 | Channel layer fixture that flushes automatically. 26 | """ 27 | channel_layer = RedisPubSubChannelLayer(hosts=TEST_HOSTS) 28 | yield channel_layer 29 | async with async_timeout.timeout(1): 30 | await channel_layer.flush() 31 | 32 | 33 | @pytest.mark.asyncio 34 | async def test_send_receive(channel_layer): 35 | """ 36 | Makes sure we can send a message to a normal channel then receive it. 37 | """ 38 | channel = await channel_layer.new_channel() 39 | await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"}) 40 | message = await channel_layer.receive(channel) 41 | assert message["type"] == "test.message" 42 | assert message["text"] == "Ahoy-hoy!" 43 | 44 | 45 | def test_send_receive_sync(channel_layer, event_loop): 46 | _await = event_loop.run_until_complete 47 | channel = _await(channel_layer.new_channel()) 48 | async_to_sync(channel_layer.send, force_new_loop=True)( 49 | channel, {"type": "test.message", "text": "Ahoy-hoy!"} 50 | ) 51 | message = _await(channel_layer.receive(channel)) 52 | assert message["type"] == "test.message" 53 | assert message["text"] == "Ahoy-hoy!" 54 | 55 | 56 | @pytest.mark.asyncio 57 | async def test_multi_send_receive(channel_layer): 58 | """ 59 | Tests overlapping sends and receives, and ordering. 60 | """ 61 | channel = await channel_layer.new_channel() 62 | await channel_layer.send(channel, {"type": "message.1"}) 63 | await channel_layer.send(channel, {"type": "message.2"}) 64 | await channel_layer.send(channel, {"type": "message.3"}) 65 | assert (await channel_layer.receive(channel))["type"] == "message.1" 66 | assert (await channel_layer.receive(channel))["type"] == "message.2" 67 | assert (await channel_layer.receive(channel))["type"] == "message.3" 68 | 69 | 70 | def test_multi_send_receive_sync(channel_layer, event_loop): 71 | _await = event_loop.run_until_complete 72 | channel = _await(channel_layer.new_channel()) 73 | send = async_to_sync(channel_layer.send) 74 | send(channel, {"type": "message.1"}) 75 | send(channel, {"type": "message.2"}) 76 | send(channel, {"type": "message.3"}) 77 | assert _await(channel_layer.receive(channel))["type"] == "message.1" 78 | assert _await(channel_layer.receive(channel))["type"] == "message.2" 79 | assert _await(channel_layer.receive(channel))["type"] == "message.3" 80 | 81 | 82 | @pytest.mark.asyncio 83 | async def test_groups_basic(channel_layer): 84 | """ 85 | Tests basic group operation. 
86 | """
87 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1")
88 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2")
89 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3")
90 | await channel_layer.group_add("test-group", channel_name1)
91 | await channel_layer.group_add("test-group", channel_name2)
92 | await channel_layer.group_add("test-group", channel_name3)
93 | await channel_layer.group_discard("test-group", channel_name2)
94 | await channel_layer.group_send("test-group", {"type": "message.1"})
95 | # Make sure we get the message on the two channels that were in the group
96 | async with async_timeout.timeout(1):
97 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1"
98 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1"
99 | # Make sure the removed channel did not get the message
100 | with pytest.raises(asyncio.TimeoutError):
101 | async with async_timeout.timeout(1):
102 | await channel_layer.receive(channel_name2)
103 | 
104 | 
105 | @pytest.mark.asyncio
106 | async def test_groups_same_prefix(channel_layer):
107 | """
108 | Tests group_send to multiple channels with the same channel prefix
109 | """
110 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan")
111 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan")
112 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan")
113 | await channel_layer.group_add("test-group", channel_name1)
114 | await channel_layer.group_add("test-group", channel_name2)
115 | await channel_layer.group_add("test-group", channel_name3)
116 | await channel_layer.group_send("test-group", {"type": "message.1"})
117 | 
118 | # Make sure we get the message on the channels that were in the group
119 | async with async_timeout.timeout(1):
120 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1"
121 | assert (await channel_layer.receive(channel_name2))["type"] == "message.1"
122 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1"
123 | 
124 | 
125 | @pytest.mark.asyncio
126 | async def test_random_reset__channel_name(channel_layer):
127 | """
128 | Makes sure resetting random seed does not make us reuse channel names.
129 | """
130 | random.seed(1)
131 | channel_name_1 = await channel_layer.new_channel()
132 | random.seed(1)
133 | channel_name_2 = await channel_layer.new_channel()
134 | 
135 | assert channel_name_1 != channel_name_2
136 | 
137 | 
138 | @pytest.mark.asyncio
139 | async def test_loop_instance_channel_layer_reference(channel_layer):
140 | redis_pub_sub_loop_layer = channel_layer._get_layer()
141 | 
142 | assert redis_pub_sub_loop_layer.channel_layer == channel_layer
143 | 
144 | 
145 | def test_serialize(channel_layer):
146 | """
147 | Test default serialization method
148 | """
149 | message = {"a": True, "b": None, "c": {"d": []}}
150 | serialized = channel_layer.serialize(message)
151 | assert isinstance(serialized, bytes)
152 | assert serialized == b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90"
153 | 
154 | 
155 | def test_deserialize(channel_layer):
156 | """
157 | Test default deserialization method
158 | """
159 | message = b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90"
160 | deserialized = channel_layer.deserialize(message)
161 | 
162 | assert isinstance(deserialized, dict)
163 | assert deserialized == {"a": True, "b": None, "c": {"d": []}}
164 | 
165 | 
166 | def test_multi_event_loop_garbage_collection(channel_layer):
167 | """
168 | Test that layers are flushed and garbage-collected when their event loop closes
169 | """
170 | assert len(channel_layer._layers.values()) == 0
171 | async_to_sync(test_send_receive)(channel_layer)
172 | assert len(channel_layer._layers.values()) == 0
173 | 
174 | 
175 | @pytest.mark.asyncio
176 | async def test_receive_hang(channel_layer):
177 | channel_name = await channel_layer.new_channel(prefix="test-channel")
178 | with pytest.raises(asyncio.TimeoutError):
179 | await asyncio.wait_for(channel_layer.receive(channel_name), timeout=1)
180 | 
181 | 
182 | @pytest.mark.asyncio
183 | async def test_auto_reconnect(channel_layer):
184 | """
185 | Tests redis-py reconnect and resubscribe
186 | """
187 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1")
188 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2")
189 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3")
190 | await channel_layer.group_add("test-group", channel_name1)
191 | await channel_layer.group_add("test-group", channel_name2)
192 | await _close_redis(channel_layer._shards[0]._redis)
193 | await channel_layer.group_add("test-group", channel_name3)
194 | await channel_layer.group_discard("test-group", channel_name2)
195 | await _close_redis(channel_layer._shards[0]._redis)
196 | await asyncio.sleep(1)
197 | await channel_layer.group_send("test-group", {"type": "message.1"})
198 | # Make sure we get the message on the two channels that were in the group
199 | async with async_timeout.timeout(5):
200 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1"
201 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1"
202 | # Make sure the removed channel did not get the message
203 | with pytest.raises(asyncio.TimeoutError):
204 | async with async_timeout.timeout(1):
205 | await channel_layer.receive(channel_name2)
206 | 
-------------------------------------------------------------------------------- /tests/test_sentinel.py: --------------------------------------------------------------------------------
1 | import asyncio
2 | import random
3 | 
4 | import async_timeout
5 | import pytest
6 | 
7 | from asgiref.sync import async_to_sync
8 | from channels_redis.core import ChannelFull, RedisChannelLayer
9 | 
10 | SENTINEL_MASTER = "sentinel"
11 | 
SENTINEL_KWARGS = {"password": "channels_redis"} 12 | 13 | TEST_HOSTS = [ 14 | { 15 | "sentinels": [("localhost", 26379)], 16 | "master_name": SENTINEL_MASTER, 17 | "sentinel_kwargs": SENTINEL_KWARGS, 18 | } 19 | ] 20 | MULTIPLE_TEST_HOSTS = [ 21 | { 22 | "sentinels": [("localhost", 26379)], 23 | "master_name": SENTINEL_MASTER, 24 | "sentinel_kwargs": SENTINEL_KWARGS, 25 | "db": 0, 26 | }, 27 | { 28 | "sentinels": [("localhost", 26379)], 29 | "master_name": SENTINEL_MASTER, 30 | "sentinel_kwargs": SENTINEL_KWARGS, 31 | "db": 1, 32 | }, 33 | { 34 | "sentinels": [("localhost", 26379)], 35 | "master_name": SENTINEL_MASTER, 36 | "sentinel_kwargs": SENTINEL_KWARGS, 37 | "db": 2, 38 | }, 39 | { 40 | "sentinels": [("localhost", 26379)], 41 | "master_name": SENTINEL_MASTER, 42 | "sentinel_kwargs": SENTINEL_KWARGS, 43 | "db": 3, 44 | }, 45 | { 46 | "sentinels": [("localhost", 26379)], 47 | "master_name": SENTINEL_MASTER, 48 | "sentinel_kwargs": SENTINEL_KWARGS, 49 | "db": 4, 50 | }, 51 | { 52 | "sentinels": [("localhost", 26379)], 53 | "master_name": SENTINEL_MASTER, 54 | "sentinel_kwargs": SENTINEL_KWARGS, 55 | "db": 5, 56 | }, 57 | { 58 | "sentinels": [("localhost", 26379)], 59 | "master_name": SENTINEL_MASTER, 60 | "sentinel_kwargs": SENTINEL_KWARGS, 61 | "db": 6, 62 | }, 63 | { 64 | "sentinels": [("localhost", 26379)], 65 | "master_name": SENTINEL_MASTER, 66 | "sentinel_kwargs": SENTINEL_KWARGS, 67 | "db": 7, 68 | }, 69 | { 70 | "sentinels": [("localhost", 26379)], 71 | "master_name": SENTINEL_MASTER, 72 | "sentinel_kwargs": SENTINEL_KWARGS, 73 | "db": 8, 74 | }, 75 | { 76 | "sentinels": [("localhost", 26379)], 77 | "master_name": SENTINEL_MASTER, 78 | "sentinel_kwargs": SENTINEL_KWARGS, 79 | "db": 9, 80 | }, 81 | ] 82 | 83 | 84 | async def send_three_messages_with_delay(channel_name, channel_layer, delay): 85 | await channel_layer.send(channel_name, {"type": "test.message", "text": "First!"}) 86 | 87 | await asyncio.sleep(delay) 88 | 89 | await channel_layer.send(channel_name, {"type": "test.message", "text": "Second!"}) 90 | 91 | await asyncio.sleep(delay) 92 | 93 | await channel_layer.send(channel_name, {"type": "test.message", "text": "Third!"}) 94 | 95 | 96 | async def group_send_three_messages_with_delay(group_name, channel_layer, delay): 97 | await channel_layer.group_send( 98 | group_name, {"type": "test.message", "text": "First!"} 99 | ) 100 | 101 | await asyncio.sleep(delay) 102 | 103 | await channel_layer.group_send( 104 | group_name, {"type": "test.message", "text": "Second!"} 105 | ) 106 | 107 | await asyncio.sleep(delay) 108 | 109 | await channel_layer.group_send( 110 | group_name, {"type": "test.message", "text": "Third!"} 111 | ) 112 | 113 | 114 | @pytest.fixture() 115 | async def channel_layer(): 116 | """ 117 | Channel layer fixture that flushes automatically. 118 | """ 119 | channel_layer = RedisChannelLayer( 120 | hosts=TEST_HOSTS, capacity=3, channel_capacity={"tiny": 1} 121 | ) 122 | yield channel_layer 123 | await channel_layer.flush() 124 | 125 | 126 | @pytest.fixture() 127 | async def channel_layer_multiple_hosts(): 128 | """ 129 | Channel layer fixture that flushes automatically. 130 | """ 131 | channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=3) 132 | yield channel_layer 133 | await channel_layer.flush() 134 | 135 | 136 | @pytest.mark.asyncio 137 | async def test_send_receive(channel_layer): 138 | """ 139 | Makes sure we can send a message to a normal channel then receive it. 
140 | """
141 | await channel_layer.send(
142 | "test-channel-1", {"type": "test.message", "text": "Ahoy-hoy!"}
143 | )
144 | message = await channel_layer.receive("test-channel-1")
145 | assert message["type"] == "test.message"
146 | assert message["text"] == "Ahoy-hoy!"
147 | 
148 | 
149 | @pytest.mark.parametrize("channel_layer", [None]) # Fixture can't handle sync
150 | def test_double_receive(channel_layer):
151 | """
152 | Makes sure we can receive from two different event loops using
153 | process-local channel names.
154 | """
155 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS, capacity=3)
156 | 
157 | # Redis connections can't be used from different event loops, so
158 | # send and close need to be done in the same async_to_sync call.
159 | async def send_and_close(*args, **kwargs):
160 | await channel_layer.send(*args, **kwargs)
161 | await channel_layer.close_pools()
162 | 
163 | channel_name_1 = async_to_sync(channel_layer.new_channel)()
164 | channel_name_2 = async_to_sync(channel_layer.new_channel)()
165 | async_to_sync(send_and_close)(channel_name_1, {"type": "test.message.1"})
166 | async_to_sync(send_and_close)(channel_name_2, {"type": "test.message.2"})
167 | 
168 | # Set up listeners, one per event loop
169 | async def listen1():
170 | message = await channel_layer.receive(channel_name_1)
171 | assert message["type"] == "test.message.1"
172 | await channel_layer.close_pools()
173 | 
174 | async def listen2():
175 | message = await channel_layer.receive(channel_name_2)
176 | assert message["type"] == "test.message.2"
177 | await channel_layer.close_pools()
178 | 
179 | # Run them inside threads
180 | async_to_sync(listen2)()
181 | async_to_sync(listen1)()
182 | # Clean up
183 | async_to_sync(channel_layer.flush)()
184 | 
185 | 
186 | @pytest.mark.asyncio
187 | async def test_send_capacity(channel_layer):
188 | """
189 | Makes sure we get ChannelFull when we hit the send capacity
190 | """
191 | await channel_layer.send("test-channel-1", {"type": "test.message"})
192 | await channel_layer.send("test-channel-1", {"type": "test.message"})
193 | await channel_layer.send("test-channel-1", {"type": "test.message"})
194 | with pytest.raises(ChannelFull):
195 | await channel_layer.send("test-channel-1", {"type": "test.message"})
196 | 
197 | 
198 | @pytest.mark.asyncio
199 | async def test_send_specific_capacity(channel_layer):
200 | """
201 | Makes sure we get ChannelFull when we hit the send capacity on a specific channel
202 | """
203 | custom_channel_layer = RedisChannelLayer(
204 | hosts=TEST_HOSTS,
205 | capacity=3,
206 | channel_capacity={"one": 1},
207 | )
208 | await custom_channel_layer.send("one", {"type": "test.message"})
209 | with pytest.raises(ChannelFull):
210 | await custom_channel_layer.send("one", {"type": "test.message"})
211 | await custom_channel_layer.flush()
212 | 
213 | 
214 | @pytest.mark.asyncio
215 | async def test_process_local_send_receive(channel_layer):
216 | """
217 | Makes sure we can send a message to a process-local channel then receive it.
218 | """
219 | channel_name = await channel_layer.new_channel()
220 | await channel_layer.send(
221 | channel_name, {"type": "test.message", "text": "Local only please"}
222 | )
223 | message = await channel_layer.receive(channel_name)
224 | assert message["type"] == "test.message"
225 | assert message["text"] == "Local only please"
226 | 
227 | 
228 | @pytest.mark.asyncio
229 | async def test_multi_send_receive(channel_layer):
230 | """
231 | Tests overlapping sends and receives, and ordering.
232 | """
233 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS)
234 | await channel_layer.send("test-channel-3", {"type": "message.1"})
235 | await channel_layer.send("test-channel-3", {"type": "message.2"})
236 | await channel_layer.send("test-channel-3", {"type": "message.3"})
237 | assert (await channel_layer.receive("test-channel-3"))["type"] == "message.1"
238 | assert (await channel_layer.receive("test-channel-3"))["type"] == "message.2"
239 | assert (await channel_layer.receive("test-channel-3"))["type"] == "message.3"
240 | await channel_layer.flush()
241 | 
242 | 
243 | @pytest.mark.asyncio
244 | async def test_reject_bad_channel(channel_layer):
245 | """
246 | Makes sure sending/receiving on an invalid channel name fails.
247 | """
248 | with pytest.raises(TypeError):
249 | await channel_layer.send("=+135!", {"type": "foom"})
250 | with pytest.raises(TypeError):
251 | await channel_layer.receive("=+135!")
252 | 
253 | 
254 | @pytest.mark.asyncio
255 | async def test_reject_bad_client_prefix(channel_layer):
256 | """
257 | Makes sure receiving on a non-prefixed local channel is not allowed.
258 | """
259 | with pytest.raises(AssertionError):
260 | await channel_layer.receive("not-client-prefix!local_part")
261 | 
262 | 
263 | @pytest.mark.asyncio
264 | async def test_groups_basic(channel_layer):
265 | """
266 | Tests basic group operation.
267 | """
268 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS)
269 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan-1")
270 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan-2")
271 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan-3")
272 | await channel_layer.group_add("test-group", channel_name1)
273 | await channel_layer.group_add("test-group", channel_name2)
274 | await channel_layer.group_add("test-group", channel_name3)
275 | await channel_layer.group_discard("test-group", channel_name2)
276 | await channel_layer.group_send("test-group", {"type": "message.1"})
277 | # Make sure we get the message on the two channels that were in the group
278 | async with async_timeout.timeout(1):
279 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1"
280 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1"
281 | # Make sure the removed channel did not get the message
282 | with pytest.raises(asyncio.TimeoutError):
283 | async with async_timeout.timeout(1):
284 | await channel_layer.receive(channel_name2)
285 | await channel_layer.flush()
286 | 
287 | 
288 | @pytest.mark.asyncio
289 | async def test_groups_channel_full(channel_layer):
290 | """
291 | Tests that group_send ignores ChannelFull
292 | """
293 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS)
294 | await channel_layer.group_add("test-group", "test-gr-chan-1")
295 | await channel_layer.group_send("test-group", {"type": "message.1"})
296 | await channel_layer.group_send("test-group", {"type": "message.1"})
297 | await channel_layer.group_send("test-group", {"type": "message.1"})
298 | await channel_layer.group_send("test-group", {"type": "message.1"})
299 | await channel_layer.group_send("test-group", {"type": "message.1"})
300 | await channel_layer.flush()
301 | 
302 | 
303 | @pytest.mark.asyncio
304 | async def test_groups_multiple_hosts(channel_layer_multiple_hosts):
305 | """
306 | Tests advanced group operation with multiple hosts.
307 | """
308 | channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100)
309 | channel_name1 = await channel_layer.new_channel(prefix="channel1")
310 | channel_name2 = await channel_layer.new_channel(prefix="channel2")
311 | channel_name3 = await channel_layer.new_channel(prefix="channel3")
312 | await channel_layer.group_add("test-group", channel_name1)
313 | await channel_layer.group_add("test-group", channel_name2)
314 | await channel_layer.group_add("test-group", channel_name3)
315 | await channel_layer.group_discard("test-group", channel_name2)
316 | await channel_layer.group_send("test-group", {"type": "message.1"})
317 | await channel_layer.group_send("test-group", {"type": "message.1"})
318 | 
319 | # Make sure we get the message on the two channels that were in the group
320 | async with async_timeout.timeout(1):
321 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1"
322 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1"
323 | 
324 | with pytest.raises(asyncio.TimeoutError):
325 | async with async_timeout.timeout(1):
326 | await channel_layer.receive(channel_name2)
327 | 
328 | await channel_layer.flush()
329 | 
330 | 
331 | @pytest.mark.asyncio
332 | async def test_groups_same_prefix(channel_layer):
333 | """
334 | Tests group_send to multiple channels with the same channel prefix
335 | """
336 | channel_layer = RedisChannelLayer(hosts=TEST_HOSTS)
337 | channel_name1 = await channel_layer.new_channel(prefix="test-gr-chan")
338 | channel_name2 = await channel_layer.new_channel(prefix="test-gr-chan")
339 | channel_name3 = await channel_layer.new_channel(prefix="test-gr-chan")
340 | await channel_layer.group_add("test-group", channel_name1)
341 | await channel_layer.group_add("test-group", channel_name2)
342 | await channel_layer.group_add("test-group", channel_name3)
343 | await channel_layer.group_send("test-group", {"type": "message.1"})
344 | 
345 | # Make sure we get the message on the channels that were in the group
346 | async with async_timeout.timeout(1):
347 | assert (await channel_layer.receive(channel_name1))["type"] == "message.1"
348 | assert (await channel_layer.receive(channel_name2))["type"] == "message.1"
349 | assert (await channel_layer.receive(channel_name3))["type"] == "message.1"
350 | 
351 | await channel_layer.flush()
352 | 
353 | 
354 | @pytest.mark.parametrize(
355 | "num_channels,timeout",
356 | [
357 | (1, 1), # Edge cases - make sure we can send to a single channel
358 | (10, 1),
359 | (100, 10),
360 | ],
361 | )
362 | @pytest.mark.asyncio
363 | async def test_groups_multiple_hosts_performance(
364 | channel_layer_multiple_hosts, num_channels, timeout
365 | ):
366 | """
367 | Tests that group_send delivers efficiently to multiple channels
368 | across multiple hosts within a certain timeout
369 | """
370 | channel_layer = RedisChannelLayer(hosts=MULTIPLE_TEST_HOSTS, capacity=100)
371 | 
372 | channels = []
373 | for i in range(0, num_channels):
374 | channel = await channel_layer.new_channel(prefix="channel%s" % i)
375 | await channel_layer.group_add("test-group", channel)
376 | channels.append(channel)
377 | 
378 | async with async_timeout.timeout(timeout):
379 | await channel_layer.group_send("test-group", {"type": "message.1"})
380 | 
381 | # Make sure we get the message on all the channels
382 | async with async_timeout.timeout(timeout):
383 | for channel in channels:
384 | assert (await channel_layer.receive(channel))["type"] == "message.1"
385 | 
386 | await channel_layer.flush()
387 | 
388 | 
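The multiple-host tests above rely on every channel and group name mapping deterministically to one of the configured hosts, so that all processes agree on where a given name lives. A minimal sketch of that routing, using the _consistent_hash helper exercised in tests/test_utils.py further down; pick_host is a hypothetical name for illustration, not the layer's actual API:

from channels_redis.utils import _consistent_hash

def pick_host(name, hosts):
    # Hypothetical illustration: the same name always lands on the same
    # host index, so adds, discards, and sends agree across processes as
    # long as every process is configured with an identical hosts list.
    return hosts[_consistent_hash(name, len(hosts))]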
389 | @pytest.mark.asyncio
390 | async def test_group_send_capacity(channel_layer, caplog):
391 | """
392 | Makes sure we don't group_send messages to channels that are over capacity.
393 | Makes sure the number of over-capacity channels is logged to help debug errors.
394 | """
395 | 
396 | channel = await channel_layer.new_channel()
397 | await channel_layer.group_add("test-group", channel)
398 | 
399 | await channel_layer.group_send("test-group", {"type": "message.1"})
400 | await channel_layer.group_send("test-group", {"type": "message.2"})
401 | await channel_layer.group_send("test-group", {"type": "message.3"})
402 | await channel_layer.group_send("test-group", {"type": "message.4"})
403 | 
404 | # We should receive the first 3 messages
405 | assert (await channel_layer.receive(channel))["type"] == "message.1"
406 | assert (await channel_layer.receive(channel))["type"] == "message.2"
407 | assert (await channel_layer.receive(channel))["type"] == "message.3"
408 | 
409 | # Make sure we do NOT receive message 4
410 | with pytest.raises(asyncio.TimeoutError):
411 | async with async_timeout.timeout(1):
412 | await channel_layer.receive(channel)
413 | 
414 | # Make sure the number of channels over capacity is logged
415 | for record in caplog.records:
416 | assert record.levelname == "INFO"
417 | assert (
418 | record.getMessage() == "1 of 1 channels over capacity in group test-group"
419 | )
420 | 
421 | 
422 | @pytest.mark.asyncio
423 | async def test_group_send_capacity_multiple_channels(channel_layer, caplog):
424 | """
425 | Makes sure we don't group_send messages to channels that are over capacity.
426 | Makes sure the number of over-capacity channels is logged to help debug errors.
427 | """
428 | 
429 | channel_1 = await channel_layer.new_channel()
430 | channel_2 = await channel_layer.new_channel(prefix="channel_2")
431 | await channel_layer.group_add("test-group", channel_1)
432 | await channel_layer.group_add("test-group", channel_2)
433 | 
434 | # Let's put channel_2 over capacity
435 | await channel_layer.send(channel_2, {"type": "message.0"})
436 | 
437 | await channel_layer.group_send("test-group", {"type": "message.1"})
438 | await channel_layer.group_send("test-group", {"type": "message.2"})
439 | await channel_layer.group_send("test-group", {"type": "message.3"})
440 | 
441 | # Channel_1 should receive all 3 group messages
442 | assert (await channel_layer.receive(channel_1))["type"] == "message.1"
443 | assert (await channel_layer.receive(channel_1))["type"] == "message.2"
444 | assert (await channel_layer.receive(channel_1))["type"] == "message.3"
445 | 
446 | # Channel_2 should receive the first message + 2 group messages
447 | assert (await channel_layer.receive(channel_2))["type"] == "message.0"
448 | assert (await channel_layer.receive(channel_2))["type"] == "message.1"
449 | assert (await channel_layer.receive(channel_2))["type"] == "message.2"
450 | 
451 | # Make sure channel_2 does not receive the 3rd group message
452 | with pytest.raises(asyncio.TimeoutError):
453 | async with async_timeout.timeout(1):
454 | await channel_layer.receive(channel_2)
455 | 
456 | # Make sure the number of channels over capacity is logged
457 | for record in caplog.records:
458 | assert record.levelname == "INFO"
459 | assert (
460 | record.getMessage() == "1 of 2 channels over capacity in group test-group"
461 | )
462 | 
463 | 
464 | @pytest.mark.xfail(
465 | reason="""
466 | Fails with error in redis-py: int() argument must be a string, a bytes-like
467 | object or a real number, not 'NoneType'. Refs: #348
468 | """
469 | )
470 | @pytest.mark.asyncio
471 | async def test_receive_cancel(channel_layer):
472 | """
473 | Makes sure we can cancel a receive without blocking
474 | """
475 | channel_layer = RedisChannelLayer(capacity=30)
476 | channel = await channel_layer.new_channel()
477 | delay = 0
478 | while delay < 0.01:
479 | await channel_layer.send(channel, {"type": "test.message", "text": "Ahoy-hoy!"})
480 | 
481 | task = asyncio.ensure_future(channel_layer.receive(channel))
482 | await asyncio.sleep(delay)
483 | task.cancel()
484 | delay += 0.0001
485 | 
486 | try:
487 | await asyncio.wait_for(task, None)
488 | except asyncio.CancelledError:
489 | pass
490 | 
491 | 
492 | @pytest.mark.asyncio
493 | async def test_random_reset__channel_name(channel_layer):
494 | """
495 | Makes sure resetting random seed does not make us reuse channel names.
496 | """
497 | 
498 | channel_layer = RedisChannelLayer()
499 | random.seed(1)
500 | channel_name_1 = await channel_layer.new_channel()
501 | random.seed(1)
502 | channel_name_2 = await channel_layer.new_channel()
503 | 
504 | assert channel_name_1 != channel_name_2
505 | 
506 | 
507 | @pytest.mark.asyncio
508 | async def test_random_reset__client_prefix(channel_layer):
509 | """
510 | Makes sure resetting random seed does not make us reuse client_prefixes.
511 | """
512 | 
513 | random.seed(1)
514 | channel_layer_1 = RedisChannelLayer()
515 | random.seed(1)
516 | channel_layer_2 = RedisChannelLayer()
517 | assert channel_layer_1.client_prefix != channel_layer_2.client_prefix
518 | 
519 | 
520 | @pytest.mark.asyncio
521 | async def test_message_expiry__earliest_message_expires(channel_layer):
522 | expiry = 3
523 | delay = 2
524 | channel_layer = RedisChannelLayer(expiry=expiry)
525 | channel_name = await channel_layer.new_channel()
526 | 
527 | task = asyncio.ensure_future(
528 | send_three_messages_with_delay(channel_name, channel_layer, delay)
529 | )
530 | await asyncio.wait_for(task, None)
531 | 
532 | # The first message should have expired; we should only see the second and third messages
533 | message = await channel_layer.receive(channel_name)
534 | assert message["type"] == "test.message"
535 | assert message["text"] == "Second!"
536 | 
537 | message = await channel_layer.receive(channel_name)
538 | assert message["type"] == "test.message"
539 | assert message["text"] == "Third!"
540 | 
541 | # Make sure there's no further message, even out of order
542 | with pytest.raises(asyncio.TimeoutError):
543 | async with async_timeout.timeout(1):
544 | await channel_layer.receive(channel_name)
545 | 
546 | 
547 | @pytest.mark.asyncio
548 | async def test_message_expiry__all_messages_under_expiration_time(channel_layer):
549 | expiry = 3
550 | delay = 1
551 | channel_layer = RedisChannelLayer(expiry=expiry)
552 | channel_name = await channel_layer.new_channel()
553 | 
554 | task = asyncio.ensure_future(
555 | send_three_messages_with_delay(channel_name, channel_layer, delay)
556 | )
557 | await asyncio.wait_for(task, None)
558 | 
559 | # expiry = 3, total delay under 3, all messages there
560 | message = await channel_layer.receive(channel_name)
561 | assert message["type"] == "test.message"
562 | assert message["text"] == "First!"
563 | 
564 | message = await channel_layer.receive(channel_name)
565 | assert message["type"] == "test.message"
566 | assert message["text"] == "Second!"
567 | 
568 | message = await channel_layer.receive(channel_name)
569 | assert message["type"] == "test.message"
570 | assert message["text"] == "Third!"
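As a worked timeline for the two expiry tests above, derived directly from their expiry and delay values:

# expiry=3, delay=2: "First!" at t=0, "Second!" at t=2, "Third!" at t=4.
# Receiving starts around t=4, when "First!" is about 4s old (older than
# the 3s expiry), so only "Second!" and "Third!" are still available.
#
# expiry=3, delay=1: "First!" at t=0, "Second!" at t=1, "Third!" at t=2.
# Receiving starts around t=2, when every message is under 3s old, so
# all three messages are delivered in order.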
571 | 
572 | 
573 | @pytest.mark.asyncio
574 | async def test_message_expiry__group_send(channel_layer):
575 | expiry = 3
576 | delay = 2
577 | channel_layer = RedisChannelLayer(expiry=expiry)
578 | channel_name = await channel_layer.new_channel()
579 | 
580 | await channel_layer.group_add("test-group", channel_name)
581 | 
582 | task = asyncio.ensure_future(
583 | group_send_three_messages_with_delay("test-group", channel_layer, delay)
584 | )
585 | await asyncio.wait_for(task, None)
586 | 
587 | # The first message should have expired; we should only see the second and third messages
588 | message = await channel_layer.receive(channel_name)
589 | assert message["type"] == "test.message"
590 | assert message["text"] == "Second!"
591 | 
592 | message = await channel_layer.receive(channel_name)
593 | assert message["type"] == "test.message"
594 | assert message["text"] == "Third!"
595 | 
596 | # Make sure there's no further message, even out of order
597 | with pytest.raises(asyncio.TimeoutError):
598 | async with async_timeout.timeout(1):
599 | await channel_layer.receive(channel_name)
600 | 
601 | 
602 | @pytest.mark.xfail(reason="Fails with timeout. Refs: #348")
603 | @pytest.mark.asyncio
604 | async def test_message_expiry__group_send__one_channel_expires_message(channel_layer):
605 | expiry = 3
606 | delay = 1
607 | 
608 | channel_layer = RedisChannelLayer(expiry=expiry)
609 | channel_1 = await channel_layer.new_channel()
610 | channel_2 = await channel_layer.new_channel(prefix="channel_2")
611 | 
612 | await channel_layer.group_add("test-group", channel_1)
613 | await channel_layer.group_add("test-group", channel_2)
614 | 
615 | # Let's give channel_1 one additional message and then sleep
616 | await channel_layer.send(channel_1, {"type": "test.message", "text": "Zero!"})
617 | await asyncio.sleep(2)
618 | 
619 | task = asyncio.ensure_future(
620 | group_send_three_messages_with_delay("test-group", channel_layer, delay)
621 | )
622 | await asyncio.wait_for(task, None)
623 | 
624 | # Message Zero! was sent about 2 + 1 + 1 seconds ago, so it should have expired
625 | message = await channel_layer.receive(channel_1)
626 | assert message["type"] == "test.message"
627 | assert message["text"] == "First!"
628 | 
629 | message = await channel_layer.receive(channel_1)
630 | assert message["type"] == "test.message"
631 | assert message["text"] == "Second!"
632 | 
633 | message = await channel_layer.receive(channel_1)
634 | assert message["type"] == "test.message"
635 | assert message["text"] == "Third!"
636 | 
637 | # Make sure there's no fourth message, even out of order
638 | with pytest.raises(asyncio.TimeoutError):
639 | async with async_timeout.timeout(1):
640 | await channel_layer.receive(channel_1)
641 | 
642 | # channel_2 should receive all three messages from group_send
643 | message = await channel_layer.receive(channel_2)
644 | assert message["type"] == "test.message"
645 | assert message["text"] == "First!"
646 | 
647 | # Nothing expired for channel_2; the second and third messages follow
648 | message = await channel_layer.receive(channel_2)
649 | assert message["type"] == "test.message"
650 | assert message["text"] == "Second!"
651 | 
652 | message = await channel_layer.receive(channel_2)
653 | assert message["type"] == "test.message"
654 | assert message["text"] == "Third!"
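The tests above pass expiry directly to the RedisChannelLayer constructor; in a Django project the same option would come from the channel layer configuration instead. A hedged sketch of the equivalent settings, with option names per the channels_redis README and the value of 3 chosen only to mirror the tests, not as a production recommendation:

# settings.py -- illustrative only
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
            # Seconds a message may wait in a channel before being dropped
            "expiry": 3,
        },
    },
}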
655 | 656 | 657 | def test_default_group_key_format(): 658 | channel_layer = RedisChannelLayer() 659 | group_name = channel_layer._group_key("test_group") 660 | assert group_name == b"asgi:group:test_group" 661 | 662 | 663 | def test_custom_group_key_format(): 664 | channel_layer = RedisChannelLayer(prefix="test_prefix") 665 | group_name = channel_layer._group_key("test_group") 666 | assert group_name == b"test_prefix:group:test_group" 667 | 668 | 669 | def test_receive_buffer_respects_capacity(): 670 | channel_layer = RedisChannelLayer() 671 | buff = channel_layer.receive_buffer["test-group"] 672 | for i in range(10000): 673 | buff.put_nowait(i) 674 | 675 | capacity = 100 676 | assert channel_layer.capacity == capacity 677 | assert buff.full() is True 678 | assert buff.qsize() == capacity 679 | messages = [buff.get_nowait() for _ in range(capacity)] 680 | assert list(range(9900, 10000)) == messages 681 | 682 | 683 | def test_serialize(): 684 | """ 685 | Test default serialization method 686 | """ 687 | message = {"a": True, "b": None, "c": {"d": []}} 688 | channel_layer = RedisChannelLayer() 689 | serialized = channel_layer.serialize(message) 690 | assert isinstance(serialized, bytes) 691 | assert serialized[12:] == b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" 692 | 693 | 694 | def test_deserialize(): 695 | """ 696 | Test default deserialization method 697 | """ 698 | message = b"Q\x0c\xbb?Q\xbc\xe3|D\xfd9\x00\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90" 699 | channel_layer = RedisChannelLayer() 700 | deserialized = channel_layer.deserialize(message) 701 | 702 | assert isinstance(deserialized, dict) 703 | assert deserialized == {"a": True, "b": None, "c": {"d": []}} 704 | -------------------------------------------------------------------------------- /tests/test_serializers.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | from channels_redis.serializers import ( 4 | JSONSerializer, 5 | MsgPackSerializer, 6 | SerializerDoesNotExist, 7 | SerializersRegistry, 8 | ) 9 | 10 | 11 | @pytest.fixture 12 | def registry(): 13 | return SerializersRegistry() 14 | 15 | 16 | class OnlySerialize: 17 | def serialize(self, message): 18 | return message 19 | 20 | 21 | class OnlyDeserialize: 22 | def deserialize(self, message): 23 | return message 24 | 25 | 26 | def bad_serializer(): 27 | pass 28 | 29 | 30 | class NoopSerializer: 31 | def serialize(self, message): 32 | return message 33 | 34 | def deserialize(self, message): 35 | return message 36 | 37 | 38 | @pytest.mark.parametrize( 39 | "serializer_class", (OnlyDeserialize, OnlySerialize, bad_serializer) 40 | ) 41 | def test_refuse_to_register_bad_serializers(registry, serializer_class): 42 | with pytest.raises(AssertionError): 43 | registry.register_serializer("custom", serializer_class) 44 | 45 | 46 | def test_raise_error_for_unregistered_serializer(registry): 47 | with pytest.raises(SerializerDoesNotExist): 48 | registry.get_serializer("unexistent") 49 | 50 | 51 | def test_register_custom_serializer(registry): 52 | registry.register_serializer("custom", NoopSerializer) 53 | serializer = registry.get_serializer("custom") 54 | assert serializer.serialize("message") == "message" 55 | assert serializer.deserialize("message") == "message" 56 | 57 | 58 | @pytest.mark.parametrize( 59 | "serializer_cls,expected", 60 | ( 61 | (MsgPackSerializer, b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90"), 62 | (JSONSerializer, b'{"a": true, "b": null, "c": {"d": []}}'), 63 | ), 64 | ) 65 | 
@pytest.mark.parametrize("prefix_length", (8, 12, 0, -1))
66 | def test_serialize(serializer_cls, expected, prefix_length):
67 | """
68 | Test default serialization method
69 | """
70 | message = {"a": True, "b": None, "c": {"d": []}}
71 | serializer = serializer_cls(random_prefix_length=prefix_length)
72 | serialized = serializer.serialize(message)
73 | assert isinstance(serialized, bytes)
74 | if prefix_length > 0:
75 | assert serialized[prefix_length:] == expected
76 | else:
77 | assert serialized == expected
78 | 
79 | 
80 | @pytest.mark.parametrize(
81 | "serializer_cls,value",
82 | (
83 | (MsgPackSerializer, b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90"),
84 | (JSONSerializer, b'{"a": true, "b": null, "c": {"d": []}}'),
85 | ),
86 | )
87 | @pytest.mark.parametrize(
88 | "prefix_length,prefix",
89 | (
90 | (8, b"Q\x0c\xbb?Q\xbc\xe3|"),
91 | (12, b"Q\x0c\xbb?Q\xbc\xe3|D\xfd9\x00"),
92 | (0, b""),
93 | (-1, b""),
94 | ),
95 | )
96 | def test_deserialize(serializer_cls, value, prefix_length, prefix):
97 | """
98 | Test default deserialization method
99 | """
100 | message = prefix + value
101 | serializer = serializer_cls(random_prefix_length=prefix_length)
102 | deserialized = serializer.deserialize(message)
103 | assert isinstance(deserialized, dict)
104 | assert deserialized == {"a": True, "b": None, "c": {"d": []}}
105 | 
106 | 
107 | @pytest.mark.parametrize(
108 | "serializer_cls,clear_value",
109 | (
110 | (MsgPackSerializer, b"\x83\xa1a\xc3\xa1b\xc0\xa1c\x81\xa1d\x90"),
111 | (JSONSerializer, b'{"a": true, "b": null, "c": {"d": []}}'),
112 | ),
113 | )
114 | def test_serialization_encrypted(serializer_cls, clear_value):
115 | """
116 | Test serialization round-trip with encryption
117 | """
118 | message = {"a": True, "b": None, "c": {"d": []}}
119 | serializer = serializer_cls(
120 | symmetric_encryption_keys=["a-test-key"], random_prefix_length=4
121 | )
122 | serialized = serializer.serialize(message)
123 | assert isinstance(serialized, bytes)
124 | assert serialized[4:] != clear_value
125 | deserialized = serializer.deserialize(serialized)
126 | assert isinstance(deserialized, dict)
127 | assert deserialized == message
128 | 
-------------------------------------------------------------------------------- /tests/test_utils.py: --------------------------------------------------------------------------------
1 | import pytest
2 | 
3 | from channels_redis.utils import _consistent_hash
4 | 
5 | 
6 | @pytest.mark.parametrize(
7 | "value,ring_size,expected",
8 | [
9 | ("key_one", 1, 0),
10 | ("key_two", 1, 0),
11 | ("key_one", 2, 1),
12 | ("key_two", 2, 0),
13 | ("key_one", 10, 6),
14 | ("key_two", 10, 4),
15 | (b"key_one", 10, 6),
16 | (b"key_two", 10, 4),
17 | ],
18 | )
19 | def test_consistent_hash_result(value, ring_size, expected):
20 | assert _consistent_hash(value, ring_size) == expected  # the hashing scheme is sketched after tox.ini below
21 | 
-------------------------------------------------------------------------------- /tox.ini: --------------------------------------------------------------------------------
1 | [tox]
2 | envlist =
3 | py{38,39,310,311,312,313}-ch{30,40,main}-redis50
4 | py311-chmain-redis{45,46,50,main}
5 | qa
6 | 
7 | [testenv]
8 | usedevelop = true
9 | extras = tests
10 | commands =
11 | pytest -v {posargs}
12 | deps =
13 | ch30: channels>=3.0,<3.1
14 | ch40: channels>=4.0,<4.1
15 | chmain: https://github.com/django/channels/archive/main.tar.gz
16 | redis46: redis>=4.6,<4.7
17 | redis50: redis>=5.0,<5.1
18 | redismain: https://github.com/redis/redis-py/archive/master.tar.gz
19 | 
20 | [testenv:qa]
21 | 
skip_install=true 22 | deps = 23 | black 24 | flake8 25 | isort 26 | commands = 27 | flake8 channels_redis tests 28 | black --check channels_redis tests 29 | isort --check-only --diff channels_redis tests 30 | --------------------------------------------------------------------------------
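For reference, the bucketing pinned down by the tests/test_utils.py fixtures above is a CRC32 scheme. A sketch along the lines of channels_redis/utils.py, presented as an assumption about the shipped implementation that the fixture values should confirm, rather than a guarantee:

import binascii

def _consistent_hash(value, ring_size):
    # With a single host there is only one possible bucket.
    if ring_size == 1:
        return 0
    if isinstance(value, str):
        value = value.encode("utf8")
    # Fold the CRC32 of the name into 12 bits, then scale onto the ring,
    # so a given name always maps to the same bucket for a given ring size.
    bigval = binascii.crc32(value) & 0xFFF
    ring_divisor = 4096 / float(ring_size)
    return int(bigval / ring_divisor)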