├── .gitignore
├── LICENSE
├── README.md
├── get_books.py
├── lua
│   ├── apply_book_return.lua
│   ├── books_to_stream.lua
│   └── refill_automated_storage.lua
├── main.py
├── requirements.txt
└── services
    ├── lending_service.py
    └── shelving_service.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
.DS_Store

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2019 Loris Cro

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Redis Microservices for Dummies
This repository contains the source code for the sample app discussed
in the last chapter of the freely available [Redis Microservices for Dummies](https://redislabs.com/docs/redis-microservices-for-dummies/) book.

## Overview
This is a sample application written in Python that models the core
functionality of a library with an automated book request/return system.

The application is made up of two services that communicate via event streams and
it's meant to exemplify how the microservices architecture influences data modeling
and communication.

## Project Structure

### `services/`
Contains the implementation of two services: `LendingService` and `ShelvingService`.
Most of the implemented functionality is in `LendingService`.
`ShelvingService` represents the main storage of the library, while `LendingService`
represents the robotic arm tasked with fetching books from the library. We also assume
that `LendingService` has a small storage for frequently-requested books.

### `lua/`
Contains the [Lua scripts](https://redis.io/commands/eval) that `LendingService`
uses to perform some operations with transactional semantics (isolation, all-or-nothing).

### `main.py`
Loads the application. It's also possible to launch multiple instances in parallel (i.e., it supports horizontal scaling).
```
usage: main.py [-h] [-f] [-a ADDRESS] [--db DB] [--password PASSWORD] [--ssl]
               unique_name

LendingService sample implementation.

positional arguments:
  unique_name           unique name for this instance

optional arguments:
  -h, --help            show this help message and exit
  -f, --force           start even if there is a lock on this instance name
  -a ADDRESS, --address ADDRESS
                        redis address (or unix socket path) defaults to
                        `redis://localhost:6379`
  --db DB               redis database to use, defaults to 0
  --password PASSWORD   redis password
  --ssl                 use ssl
```

### `get_books.py`
Allows you to request and return books. The result of each request will be logged by `main.py`.
```
usage: get_books.py [-h] [-a ADDRESS] [--db DB] [--password PASSWORD] [--ssl]
                    {request,return} username book [book ...]

CLI tool to get and return books.

positional arguments:
  {request,return}      action to perform, either `request` or `return`
  username              name identifying the user
  book                  names identifying a book

optional arguments:
  -h, --help            show this help message and exit
  -a ADDRESS, --address ADDRESS
                        redis address (or unix socket path) defaults to
                        `redis://localhost:6379`
  --db DB               redis database to use, defaults to 0
  --password PASSWORD   redis password
  --ssl                 use ssl
```

## Usage
For convenience we assume that the library has a unique copy of every possible book.
This means that requests for books that are not currently lent out will always succeed.
Requesting a book that is already being lent to another user will not succeed.
Users can have at most 5 books lent to them at any given time.
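
While experimenting (see the usage example below), it can be handy to look at the state
the services keep in Redis. The following is a minimal sketch, not part of the app itself,
that dumps the main keys described in `services/lending_service.py`, assuming a local Redis
on the default port (`user1` is just an example user name):

```python
import asyncio, aioredis

async def dump_state(address="redis://localhost:6379"):
    pool = await aioredis.create_redis_pool(address, encoding='utf8')
    # Hash mapping each lent BookID to the UserID that currently has it
    print("lent_books:", await pool.hgetall("lent_books"))
    # Set of BookIDs sitting in LendingService's automated storage
    print("automated_book_storage:", await pool.smembers("automated_book_storage"))
    # Per-user counter of currently lent books
    print("user:user1:count:", await pool.get("user:user1:count"))
    pool.close()
    await pool.wait_closed()

asyncio.get_event_loop().run_until_complete(dump_state())
```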

### Usage example

First make sure to install all the dependencies:

`pip install -r requirements.txt`

Then launch the main process:

`python main.py worker1`

Finally, in another terminal:

`python get_books.py request user1 alice-in-wonderland geb invisible-cities`

`python get_books.py return user1 invisible-cities`

`python get_books.py request user2 geb invisible-cities selfish-gene`

--------------------------------------------------------------------------------
/get_books.py:
--------------------------------------------------------------------------------
import argparse, signal, asyncio, aioredis
from services.shelving_service import ShelvingService
from services.lending_service import LendingService, BOOKS_FOR_SHELVING_STREAM_KEY

# Main event loop
loop = asyncio.get_event_loop()

# Configuration
LENDING_REQUESTS_STREAM_KEY = "lending_requests_event_stream"
BOOK_RETURN_REQUESTS_STREAM_KEY = "book_return_requests_event_stream"


async def main(action, user, books, address, db, password):
    pool = await aioredis.create_redis_pool(address, db=db, password=password,
                                            minsize=4, maxsize=10, loop=loop, encoding='utf8')

    # Choose the target stream based on `action`
    stream_key = None
    if action == 'request':
        stream_key = LENDING_REQUESTS_STREAM_KEY
    elif action == 'return':
        stream_key = BOOK_RETURN_REQUESTS_STREAM_KEY
    else:
        print("Unexpected action")
        exit(1)

    # Send the request
    await pool.xadd(stream_key, {'user_id': user, 'book_ids': ','.join(books)})
    print("OK")

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='CLI tool to get and return books.')
    parser.add_argument('action', choices=['request', 'return'], type=str,
                        help='action to perform, either `request` or `return`')
    parser.add_argument('name', metavar='username', type=str,
                        help='name identifying the user')
    parser.add_argument('books', metavar='book', type=str, nargs='+',
                        help='names identifying a book')
    parser.add_argument('-a', '--address', type=str, default="redis://localhost:6379",
                        help='redis address (or unix socket path) defaults to `redis://localhost:6379`')
    parser.add_argument('--db', type=int, default=0,
                        help='redis database to use, defaults to 0')
    parser.add_argument('--password', type=str, default=None,
                        help='redis password')
    args = parser.parse_args()

    loop.run_until_complete(main(action=args.action, user=args.name, books=args.books,
                                 address=args.address, db=args.db, password=args.password))

--------------------------------------------------------------------------------
/lua/apply_book_return.lua:
--------------------------------------------------------------------------------
local user_id = ARGV[1]
local lent_books_key = KEYS[1]
local temp_set_key = KEYS[2]
local user_counts_key = KEYS[3]

-- For each book, update `lent_books` if the user_id matches,
-- otherwise just remove it from the set.
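--
-- (For reference: LendingService invokes this script via EVALSHA inside a
-- MULTI/EXEC, passing KEYS = {lent_books, return:<request_id>:temp,
-- user:<user_id>:count} and ARGV = {user_id}; running the logic as a single
-- script keeps the whole check-and-update atomic.)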
for _, book_id in ipairs(redis.call('SMEMBERS', temp_set_key)) do
    if user_id == redis.call('HGET', lent_books_key, book_id) then
        redis.call('HDEL', lent_books_key, book_id)
    else
        redis.call('SREM', temp_set_key, book_id)
    end
end

-- All the books that remain in `temp_set` are the books that
-- were successfully returned. We now need to update the count.
local returned_count = redis.call('SCARD', temp_set_key)
if returned_count > 0 then
    redis.call('INCRBY', user_counts_key, -returned_count)
end
--------------------------------------------------------------------------------
/lua/books_to_stream.lua:
--------------------------------------------------------------------------------
local temp_set_key = KEYS[1]
local books_return_stream_key = KEYS[2]

-- Publish books in temp_set on a stream (for ShelvingService)
local books_to_return = redis.call('SMEMBERS', temp_set_key)
if #books_to_return > 0 then
    redis.call('XADD', books_return_stream_key, '*', 'book_ids', table.concat(books_to_return, ','))
end
--------------------------------------------------------------------------------
/lua/refill_automated_storage.lua:
--------------------------------------------------------------------------------
local max_automated_storage_size = ARGV[1]
local automated_storage_key = KEYS[1]
local temp_set_key = KEYS[2]

-- Move as many books as possible to automated storage
local free_space = max_automated_storage_size - redis.call('SCARD', automated_storage_key)
if free_space > 0 then
    local books_to_move = redis.call('SPOP', temp_set_key, free_space)
    if #books_to_move > 0 then
        redis.call('SADD', automated_storage_key, unpack(books_to_move))
    end
end

--------------------------------------------------------------------------------
/main.py:
--------------------------------------------------------------------------------
import argparse, signal, asyncio, aioredis
from services.shelving_service import ShelvingService
from services.lending_service import LendingService, BOOKS_FOR_SHELVING_STREAM_KEY

# Main event loop
loop = asyncio.get_event_loop()

# Global list of running services, used by `graceful_shutdown`
RUNNING_SERVICES = None
SHUTTING_DOWN = False

# Shutdown signal handler
def graceful_shutdown():
    global SHUTTING_DOWN
    if SHUTTING_DOWN:
        print("\nForcing shutdown...")
        exit(1)
    SHUTTING_DOWN = True
    for service in RUNNING_SERVICES:
        service.shutting_down = True
    print("\nShutting down (might take up to 10s)...")

async def main(instance_name, force, address, db, password):
    pool = await aioredis.create_redis_pool(address, db=db, password=password,
                                            minsize=4, maxsize=10, loop=loop, encoding='utf8')

    lock_key = f"instance_lock:{instance_name}"
    if not force:
        if not await pool.setnx(lock_key, 'locked'):
            print("There might be another instance with the same name running.")
            print("Use the -f option to force launching anyway.")
            print("For the service to work correctly, each running instance must have a unique name.")
            exit(1)
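
    # Note: besides being part of the lock key above, the instance name is also
    # used as the consumer name for the Redis Streams consumer groups in
    # services/, which is another reason each running instance needs its own name.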

    # Set up a signal handler for graceful shutdown
    loop.add_signal_handler(signal.SIGINT, graceful_shutdown)

    # Instantiate LendingService and ShelvingService
    shelving_service = ShelvingService(pool, instance_name, BOOKS_FOR_SHELVING_STREAM_KEY)
    lending_service = LendingService(pool, instance_name, shelving_service)

    # Add services to `RUNNING_SERVICES` list to enable graceful shutdown
    global RUNNING_SERVICES
    RUNNING_SERVICES = [lending_service, shelving_service]

    # Launch all services
    await asyncio.gather(lending_service.launch_service(), shelving_service.launch_service())

    # Release the instance name lock when shutting down
    await pool.delete(lock_key)


if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='LendingService sample implementation.')
    parser.add_argument('name', metavar='unique_name', type=str,
                        help='unique name for this instance')
    parser.add_argument('-f', '--force', action='store_true',
                        help='start even if there is a lock on this instance name')
    parser.add_argument('-a', '--address', type=str, default="redis://localhost:6379",
                        help='redis address (or unix socket path) defaults to `redis://localhost:6379`')
    parser.add_argument('--db', type=int, default=0,
                        help='redis database to use, defaults to 0')
    parser.add_argument('--password', type=str, default=None,
                        help='redis password')
    args = parser.parse_args()

    loop.run_until_complete(main(instance_name=args.name, force=args.force,
                                 address=args.address, db=args.db, password=args.password))

--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
aioredis
--------------------------------------------------------------------------------
/services/lending_service.py:
--------------------------------------------------------------------------------
import asyncio
import aioredis
loop = asyncio.get_event_loop()


## -- SERVICE STATE (stored in Redis) --
# AUTOMATED_BOOK_STORAGE_KEY is a Redis Set that contains BookIDs.
# Books in the automated storage can be handed out to the user
# immediately. The storage has limited space, so when it's full
# books must be handed off to ShelvingService.
AUTOMATED_BOOK_STORAGE_KEY = "automated_book_storage"

# LENT_BOOKS_KEY is a Redis Hash that maps each lent BookID to the
# UserID that presently has it.
LENT_BOOKS_KEY = "lent_books"

# Each user has a key that counts how many books they currently possess
BOOK_COUNTS_KEY_TEMPLATE = "user:{user_id}:count"

# While fulfilling a request, the service will progressively
# reserve books by moving their IDs into a temporary Redis set
# that is unique to each request.
REQUEST_RESERVED_BOOKS_KEY_TEMPLATE = "request:{request_id}:temp"


## -- SERVICE CONFIGURATION --
# Represents the maximum number of books that the automated
# storage can keep.
AUTOMATED_BOOK_STORAGE_CAPACITY = 20

# Maximum number of books a single user can have at the same time.
# Further requests for books will be denied once the limit is reached.
MAX_LENT_BOOKS_PER_USER = 5
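
# As an illustration of the state keys above, after `user1` borrows two books
# the data in Redis would look roughly like this (hypothetical values):
#   lent_books              -> {"geb": "user1", "alice-in-wonderland": "user1"}
#   user:user1:count        -> "2"
#   automated_book_storage  -> Set of BookIDs currently held in local storage
#   request:<id>:temp       -> only exists while a request is being processed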

# Names of required streams and consumer groups
LENDING_SERVICE_CONSUMER_GROUP = 'lending_service'
LENDING_REQUESTS_STREAM_KEY = "lending_requests_event_stream"
BOOK_RETURN_REQUESTS_STREAM_KEY = "book_return_requests_event_stream"
BOOKS_FOR_SHELVING_STREAM_KEY = "books_for_shelving_event_stream"

# Global variables containing the SHA1 digests of the scripts in `lua/`.
# They get populated during startup by `launch_service()`.
REFILL_AUTOMATED_STORAGE_LUA = None
BOOKS_TO_STREAM_LUA = None
APPLY_BOOK_RETURN_LUA = None


class LendingService:
    def __init__(self, pool, instance_name, shelving_service):
        self.pool = pool
        self.instance_name = instance_name
        self.shutting_down = False
        self.shelving_service = shelving_service


    async def launch_service(self):
        # Ensure Redis has a consumer group defined for each relevant stream
        try:
            await self.pool.execute("XGROUP", "CREATE", LENDING_REQUESTS_STREAM_KEY, LENDING_SERVICE_CONSUMER_GROUP, "$", "MKSTREAM")
        except aioredis.errors.ReplyError as e:
            assert e.args[0].startswith("BUSYGROUP")
        try:
            await self.pool.execute("XGROUP", "CREATE", BOOK_RETURN_REQUESTS_STREAM_KEY, LENDING_SERVICE_CONSUMER_GROUP, "$", "MKSTREAM")
        except aioredis.errors.ReplyError as e:
            assert e.args[0].startswith("BUSYGROUP")

        # Ensure Redis has the required Lua scripts
        global REFILL_AUTOMATED_STORAGE_LUA, BOOKS_TO_STREAM_LUA, APPLY_BOOK_RETURN_LUA
        REFILL_AUTOMATED_STORAGE_LUA = await self.pool.script_load(open('lua/refill_automated_storage.lua', 'r').read())
        BOOKS_TO_STREAM_LUA = await self.pool.script_load(open('lua/books_to_stream.lua', 'r').read())
        APPLY_BOOK_RETURN_LUA = await self.pool.script_load(open('lua/apply_book_return.lua', 'r').read())

        # First we retrieve any potential pending messages
        events = await self.pool.xread_group(LENDING_SERVICE_CONSUMER_GROUP, self.instance_name,
                                             [LENDING_REQUESTS_STREAM_KEY, BOOK_RETURN_REQUESTS_STREAM_KEY], latest_ids=["0", "0"])
        if len(events) > 0:
            print("[WARN] Found claimed events that need processing, resuming...")

        # This is the main loop
        print("Ready to process events...")
        with await self.pool as conn:
            while not self.shutting_down:
                tasks = []
                for stream_name, event_id, message in events:
                    if stream_name == LENDING_REQUESTS_STREAM_KEY:
                        tasks.append(self.process_lending_request(event_id, message))
                    elif stream_name == BOOK_RETURN_REQUESTS_STREAM_KEY:
                        tasks.append(self.process_returned_books_request(event_id, message))
                await asyncio.gather(*tasks)

                # Gather new events to process (batch size = 10)
                events = await conn.xread_group(LENDING_SERVICE_CONSUMER_GROUP, self.instance_name,
                                                [LENDING_REQUESTS_STREAM_KEY, BOOK_RETURN_REQUESTS_STREAM_KEY], timeout=10000, count=10, latest_ids=[">", ">"])


    async def process_lending_request(self, request_id, request):
        user_id = request['user_id']
        requested_books = set(request['book_ids'].split(','))
        user_book_counts_key = BOOK_COUNTS_KEY_TEMPLATE.format(user_id=user_id)
        request_reserved_books_key = REQUEST_RESERVED_BOOKS_KEY_TEMPLATE.format(request_id=request_id)
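
        # (For reference: `request_id` is the stream entry ID and `request` is the
        # entry's field map as written by get_books.py, e.g.
        # {'user_id': 'user1', 'book_ids': 'geb,selfish-gene'}.)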

        ## -- PROCESS REQUEST --

        # See if Redis already contains a partial set of reserved books.
        # This can happen in the case of a crash, in which case we fetch
        # the set to resume from where we left off.
        books_found = await self.pool.smembers(request_reserved_books_key)
        if len(books_found) > 0:
            print("[WARN] Found partially-processed transaction, resuming.")

        for book_id in requested_books:
            # Was the book already reserved before a crash?
            if book_id in books_found:
                continue

            # Is the book lent already?
            if await self.pool.hexists(LENT_BOOKS_KEY, book_id):
                continue

            # Try to get the book from automated storage
            if await self.pool.smove(AUTOMATED_BOOK_STORAGE_KEY, request_reserved_books_key, book_id):
                books_found.append(book_id)
                continue

            # Try to get the book from ShelvingService.
            # get_book() is idempotent, so it's ok to call it again
            # in case of a crash.
            if await self.shelving_service.get_book(book_id, request_id):
                await self.pool.sadd(request_reserved_books_key, book_id)
                books_found.append(book_id)
                continue

        # Requests for which we can't find any book get denied.
        # This is an arbitrary choice, but it makes more sense than
        # to accept a request and then give out 0 books.
        if len(books_found) == 0:
            await self.pool.xack(LENDING_REQUESTS_STREAM_KEY, LENDING_SERVICE_CONSUMER_GROUP, request_id)
            print(f"Request: [{request_id}] by {user_id} DENIED.")
            print(f"  Cause: none of the requested books is available.\n")
            return

        # Reserving a connection for the upcoming transaction
        with await self.pool as conn:

            # The transaction can fail if another process is changing `user_book_counts_key`.
            # We retry indefinitely, but we know that the transaction will eventually succeed
            # because at every iteration at least 1 concurrent process will always move forward.
            # More generally, given the domain we chose, we don't normally expect users to
            # perform multiple requests at the same time. The transaction's main purpose is to
            # prevent abuse where one user might try to perform multiple parallel requests
            # with the explicit goal of going over `MAX_LENT_BOOKS_PER_USER`.
            while True:
                # We WATCH `user_book_counts_key` to perform optimistic locking over it.
                await conn.watch(user_book_counts_key)

                books_in_hand = int((await conn.get(user_book_counts_key)) or 0)
                if (len(books_found) + books_in_hand > MAX_LENT_BOOKS_PER_USER):
                    # The user is overdrafting. We deny the request and roll back
                    # all reservations.

                    await conn.unwatch()
                    transaction = conn.multi_exec()

                    # Refill local storage
                    transaction.evalsha(REFILL_AUTOMATED_STORAGE_LUA,
                                        keys=[AUTOMATED_BOOK_STORAGE_KEY, request_reserved_books_key],
                                        args=[AUTOMATED_BOOK_STORAGE_CAPACITY])

                    # Return remaining books
                    transaction.evalsha(BOOKS_TO_STREAM_LUA,
                                        keys=[request_reserved_books_key, BOOKS_FOR_SHELVING_STREAM_KEY])

                    # Cleanup
                    transaction.unlink(request_reserved_books_key)
                    transaction.xack(LENDING_REQUESTS_STREAM_KEY, LENDING_SERVICE_CONSUMER_GROUP, request_id)
                    await transaction.execute()
                    print(f"Request: [{request_id}] by {user_id} DENIED.")
                    print(f"  Cause: too many books (user has {books_in_hand} books, requested {len(requested_books)} of which {len(books_found)} were found).\n")
                else:
                    # The user has enough capacity to get all the found books.
                    # All temporarily reserved books will now be committed.
                    transaction = conn.multi_exec()
                    transaction.incrby(user_book_counts_key, len(books_found))
                    for book_id in books_found:
                        transaction.hset(LENT_BOOKS_KEY, book_id, user_id)
                    transaction.unlink(request_reserved_books_key)
                    transaction.xack(LENDING_REQUESTS_STREAM_KEY, LENDING_SERVICE_CONSUMER_GROUP, request_id)
                    try:
                        await transaction.execute()
                        print(f"Request: [{request_id}] by {user_id} ACCEPTED.")
                        print(f"Books:")
                        for i, book_id in enumerate(books_found):
                            print(f"  {i + 1}) {book_id}")
                        print()
                    except aioredis.WatchError:
                        # If the transaction failed because the watched key
                        # changed in the meantime, we retry the transaction.
                        continue
                break
        return

    async def process_returned_books_request(self, return_request_id, return_request):
        user_id = return_request['user_id']
        book_id_list = set(return_request['book_ids'].split(','))
        user_book_counts_key = BOOK_COUNTS_KEY_TEMPLATE.format(user_id=user_id)
        temp_set_key = f"return:{return_request_id}:temp"

        with await self.pool as conn:
            # Start the transaction and load the data
            transaction = conn.multi_exec()
            transaction.sadd(temp_set_key, *book_id_list)

            # Apply book returns (updates `lent_books` and the user's book count)
            transaction.evalsha(APPLY_BOOK_RETURN_LUA,
                                keys=[LENT_BOOKS_KEY, temp_set_key, user_book_counts_key],
                                args=[user_id])

            # Refill automated storage
            transaction.evalsha(REFILL_AUTOMATED_STORAGE_LUA,
                                keys=[AUTOMATED_BOOK_STORAGE_KEY, temp_set_key],
                                args=[AUTOMATED_BOOK_STORAGE_CAPACITY])

            # Return remaining books
            transaction.evalsha(BOOKS_TO_STREAM_LUA,
                                keys=[temp_set_key, BOOKS_FOR_SHELVING_STREAM_KEY])

            # Cleanup
            transaction.unlink(temp_set_key)
            transaction.xack(BOOK_RETURN_REQUESTS_STREAM_KEY, LENDING_SERVICE_CONSUMER_GROUP, return_request_id)
            await transaction.execute()
            print(f"Book Return [{return_request_id}] PROCESSED\n")

--------------------------------------------------------------------------------
/services/shelving_service.py:
--------------------------------------------------------------------------------
import asyncio
import aioredis

SHELVING_SERVICE_STATE_KEY = "shelving_service_state"
SHELVING_SERVICE_CONSUMER_GROUP = 'shelving_service'

# ShelvingService represents the library subsystem that is supposed
# to put books back on the appropriate shelf.
# ShelvingService has less business logic than LendingService:
# it has been implemented mainly to properly process returned books,
# so that users can exercise LendingService and see it behave correctly.

class ShelvingService:

    def __init__(self, pool, instance_name, lending_service_returns_stream_key):
        self.pool = pool
        self.lending_service_returns_stream_key = lending_service_returns_stream_key
        self.instance_name = instance_name
        self.shutting_down = False

    # Public synchronous API.
    # In a microservices architecture you would probably
    # access this method through an HTTP / gRPC request.
    async def get_book(self, book_id, context_id):
        # Try to get the book
        if 1 == await self.pool.hsetnx(SHELVING_SERVICE_STATE_KEY, book_id, context_id):
            return True
        else:
            # The book is already taken. If it was taken by this same context,
            # return success anyway.
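            # (HSETNX returns 1 only when the field was newly created, so the first
            # context to claim a book wins; a repeated call from the same context
            # falls through to the check below, which keeps get_book() idempotent.)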
            return context_id == await self.pool.hget(SHELVING_SERVICE_STATE_KEY, book_id)

    # Internal APIs
    async def launch_service(self):
        # Ensure we have a consumer group
        try:
            await self.pool.execute("XGROUP", "CREATE", self.lending_service_returns_stream_key, SHELVING_SERVICE_CONSUMER_GROUP, "$", "MKSTREAM")
        except aioredis.errors.ReplyError as e:
            assert e.args[0].startswith("BUSYGROUP")

        # Get pending returns
        events = await self.pool.xread_group(SHELVING_SERVICE_CONSUMER_GROUP, self.instance_name,
                                             [self.lending_service_returns_stream_key], latest_ids=["0"])

        with await self.pool as conn:
            while not self.shutting_down:
                tasks = [self.process_return(event_id, message) for _, event_id, message in events]
                await asyncio.gather(*tasks)

                # Get more returns
                events = await conn.xread_group(SHELVING_SERVICE_CONSUMER_GROUP, self.instance_name,
                                                [self.lending_service_returns_stream_key], timeout=10000, latest_ids=[">"])

    async def process_return(self, event_id, message):
        book_list = message['book_ids'].split(',')
        transaction = self.pool.multi_exec()
        transaction.hdel(SHELVING_SERVICE_STATE_KEY, *book_list)
        transaction.xack(self.lending_service_returns_stream_key, SHELVING_SERVICE_CONSUMER_GROUP, event_id)
        await transaction.execute()
--------------------------------------------------------------------------------