├── .bumpversion.cfg ├── .env ├── .envrc ├── .github └── workflows │ └── tests.yml ├── .gitignore ├── .pylintrc ├── CHANGES.rst ├── LICENSE ├── MANIFEST.in ├── README.rst ├── dynamo3 ├── __init__.py ├── batch.py ├── connection.py ├── constants.py ├── exception.py ├── fields.py ├── rate.py ├── result.py ├── testing.py ├── types.py └── util.py ├── requirements_dev.txt ├── requirements_test.txt ├── setup.cfg ├── setup.py ├── tests ├── __init__.py ├── test_fields.py ├── test_rate_limit.py ├── test_read.py └── test_write.py └── tox.ini /.bumpversion.cfg: -------------------------------------------------------------------------------- 1 | [bumpversion] 2 | current_version = 1.0.0 3 | tag_name = {new_version} 4 | files = setup.py dynamo3/__init__.py 5 | commit = True 6 | tag = False 7 | parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(\.(?P<release>[a-z]+)(?P<build>\d+))? 8 | serialize = 9 | {major}.{minor}.{patch}.{release}{build} 10 | {major}.{minor}.{patch} 11 | 12 | [bumpversion:part:release] 13 | optional_value = prod 14 | first_value = dev 15 | values = 16 | dev 17 | prod 18 | 19 | [bumpversion:part:build] 20 | -------------------------------------------------------------------------------- /.env: -------------------------------------------------------------------------------- 1 | _new_root=$(dirname "$1") 2 | source $_new_root/dynamo3_env/bin/activate 3 | -------------------------------------------------------------------------------- /.envrc: -------------------------------------------------------------------------------- 1 | layout python 2 | -------------------------------------------------------------------------------- /.github/workflows/tests.yml: -------------------------------------------------------------------------------- 1 | name: Python package 2 | 3 | on: 4 | - push 5 | - pull_request 6 | 7 | jobs: 8 | build: 9 | runs-on: ubuntu-latest 10 | strategy: 11 | matrix: 12 | python-version: [3.6, 3.7, 3.8, 3.9] 13 | 14 | steps: 15 | - uses: actions/checkout@v1 16 | - name: Set up Python
${{ matrix.python-version }} 17 | uses: actions/setup-python@v2 18 | with: 19 | python-version: ${{ matrix.python-version }} 20 | - name: Install dependencies 21 | run: | 22 | python -m pip install --upgrade pip 23 | pip install tox tox-gh-actions 24 | - name: Test with tox 25 | run: tox 26 | - name: Publish coverage 27 | if: ${{ matrix.python-version == 3.8 }} 28 | run: tox -e coveralls 29 | env: 30 | GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} 31 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.py[cod] 2 | *.swp 3 | 4 | # C extensions 5 | *.so 6 | 7 | # Packages 8 | *.egg 9 | *.egg-info 10 | dist 11 | build 12 | eggs 13 | parts 14 | bin 15 | var 16 | sdist 17 | develop-eggs 18 | .installed.cfg 19 | lib 20 | lib64 21 | __pycache__ 22 | 23 | # Installer logs 24 | pip-log.txt 25 | 26 | # Unit test / coverage reports 27 | .coverage 28 | htmlcov 29 | .tox 30 | nosetests.xml 31 | 32 | # Translations 33 | *.mo 34 | 35 | # Mr Developer 36 | .mr.developer.cfg 37 | .project 38 | .pydevproject 39 | .ropeproject 40 | 41 | dynamo3_env 42 | *.db 43 | *.log 44 | .direnv/ 45 | -------------------------------------------------------------------------------- /.pylintrc: -------------------------------------------------------------------------------- 1 | [MESSAGES CONTROL] 2 | disable=E0102,W0104,E1101,W0511,W0612,W0613,W0212,W0221,W0703,W0622,C,R 3 | 4 | [BASIC] 5 | argument-rgx=[a-z_][a-z0-9_]{0,30}$ 6 | variable-rgx=[a-z_][a-z0-9_]{0,30}$ 7 | function-rgx=[a-z_][a-z0-9_]{0,30}$ 8 | attr-rgx=[a-z_][a-z0-9_]{0,30}$ 9 | method-rgx=([a-z_][a-z0-9_]{0,50}|setUp|tearDown|setUpClass|tearDownClass)$ 10 | no-docstring-rgx=((__.*__)|setUp|tearDown)$ 11 | 12 | [REPORTS] 13 | reports=no 14 | msg-template={path}:{line}: [{msg_id}({symbol}), {obj}] {msg} 15 | -------------------------------------------------------------------------------- /CHANGES.rst: 
-------------------------------------------------------------------------------- 1 | Changelog 2 | ========= 3 | 4 | 1.0.0 5 | ----- 6 | * Removed the legacy API (scan, query, update_item, delete_item, put_item, get_item) 7 | * Renamed the new API methods to match the old ones (e.g. scan2 -> scan, query2 -> query) 8 | * Moved constant values into ``dynamo3.constants``. This is where you can now find STRING, BINARY, etc 9 | * Added mypy typing where possible 10 | * Drop support for Python 2 11 | * Add support for table billing mode (aka on-demand tables) 12 | * Add support for SSE, TTL, and transactions 13 | 14 | 0.4.10 15 | ------ 16 | * Fixed DynamoDB Local link in testing framework 17 | 18 | 0.4.9 19 | ----- 20 | * Feature: Result objects from get_item have an ``exists`` flag 21 | * Feature: ``wait`` keyword for create and delete table 22 | 23 | 0.4.8 24 | ----- 25 | * Bug fix: Scans/Queries could return incomplete results if AWS returned an empty Items list 26 | 27 | 0.4.7 28 | ----- 29 | * New ``RateLimit`` class to avoid blowing through your provisioned throughput 30 | 31 | 0.4.6 32 | ----- 33 | * New ``Limit`` class for more complex query limit behavior 34 | * Bug fix: Scan and Query with ``Select='COUNT'`` will page results properly 35 | 36 | 0.4.5 37 | ----- 38 | * batch_get supports ``alias`` arg for ExpressionAttributeNames 39 | 40 | 0.4.4 41 | ----- 42 | * Make connection stateless again. Puts consumed_capacity into response object and fixes mystery crash. 43 | 44 | 0.4.3 45 | ----- 46 | * Bug fix: getting ConsumedCapacity doesn't crash for BatchGetItem and BatchWriteItem 47 | * Feature: connection.default_return_capacity 48 | * Feature: hooks for ``precall``, ``postcall``, and ``capacity`` 49 | * Better handling of ConsumedCapacity results 50 | 51 | 0.4.2 52 | ----- 53 | * Feature: New methods to take advantage of the newer expression API. See get_item2, put_item2. 54 | * Feature: Shortcut ``use_version`` for switching over to the new APIs. 
55 | 56 | 0.4.1 57 | ----- 58 | * Feature: update_table can create and delete global indexes 59 | * Feature: New methods to take advantage of the newer expression API. See scan2, query2, update_item2, and delete_item2. 60 | 61 | 0.4.0 62 | ----- 63 | * Migrating to botocore client API since services will be deprecated soon 64 | 65 | 0.3.2 66 | ----- 67 | * Bug fix: Serialization of blobs broken with botocore 0.85.0 68 | 69 | 0.3.1 70 | ----- 71 | * Bug fix: Crash when parsing description of table being deleted 72 | 73 | 0.3.0 74 | ----- 75 | * **Breakage**: Dropping support for python 3.2 due to lack of botocore support 76 | * Feature: Support JSON document data types 77 | 78 | Features thanks to DynamoDB upgrades: https://aws.amazon.com/blogs/aws/dynamodb-update-json-and-more/ 79 | 80 | 0.2.2 81 | ----- 82 | * Tweak: Nose plugin allows setting region when connecting to DynamoDB Local 83 | 84 | 0.2.1 85 | ----- 86 | * Feature: New, unified ``connect`` method 87 | 88 | 0.2.0 89 | ----- 90 | * Feature: More expressive 'expected' conditionals 91 | * Feature: Queries can filter on non-indexed fields 92 | * Feature: Filter constraints may be OR'd together 93 | 94 | Features thanks to DynamoDB upgrades: http://aws.amazon.com/blogs/aws/improved-queries-and-updates-for-dynamodb/ 95 | 96 | 0.1.3 97 | ----- 98 | * Bug fix: sometimes crash after deleting table 99 | * Bug fix: DynamoDB Local nose plugin fails 100 | 101 | 0.1.2 102 | ----- 103 | * Bug fix: serializing ints fails 104 | 105 | 0.1.1 106 | ----- 107 | * Feature: Allow ``access_key`` and ``secret_key`` to be passed to the ``DynamoDBConnection.connect_to_*`` methods 108 | 109 | 0.1.0 110 | ----- 111 | * First public release 112 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2014 Steven Arcangeli 4 | 5 | Permission is hereby granted, free of 
charge, to any person obtaining a copy of 6 | this software and associated documentation files (the "Software"), to deal in 7 | the Software without restriction, including without limitation the rights to 8 | use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of 9 | the Software, and to permit persons to whom the Software is furnished to do so, 10 | subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS 17 | FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR 18 | COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER 19 | IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN 20 | CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 21 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include README.rst CHANGES.rst requirements_test.txt 2 | recursive-exclude tests * 3 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | Dynamo3 2 | ======= 3 | :Build: |build|_ |coverage|_ 4 | :Downloads: http://pypi.python.org/pypi/dynamo3 5 | :Source: https://github.com/stevearc/dynamo3 6 | 7 | .. |build| image:: https://travis-ci.org/stevearc/dynamo3.png?branch=master 8 | .. _build: https://travis-ci.org/stevearc/dynamo3 9 | .. |coverage| image:: https://coveralls.io/repos/stevearc/dynamo3/badge.png?branch=master 10 | .. 
_coverage: https://coveralls.io/r/stevearc/dynamo3?branch=master 11 | 12 | Dynamo3 is a library for querying DynamoDB. It is designed to be higher-level 13 | than boto (it's built on top of botocore), to make simple operations easier to 14 | perform and understand. 15 | 16 | Features 17 | -------- 18 | * Mypy-typed API 19 | * Python object wrappers for most AWS data structures 20 | * Automatic serialization of built-in types, with hooks for custom types 21 | * Automatic paging of results 22 | * Automatic batching for batch_write_item 23 | * Exponential backoff of requests when throughput is exceeded 24 | * Throughput limits to self-throttle requests to a certain rate 25 | * Nose plugin for running DynamoDB Local 26 | 27 | DynamoDB features that are not yet supported 28 | -------------------------------------------- 29 | * Reading from streams 30 | * Adding/removing tags on a table 31 | * Table backups 32 | * Scanning with segments 33 | * Table replicas (Global tables version 2019.11.21) 34 | * Table auto scaling 35 | * DAX 36 | 37 | Note that you can still access these APIs by using ``DynamoDBConnection.call``, 38 | though you may prefer to go straight to boto3/botocore. 
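The "exponential backoff" feature listed above (exposed in ``batch.py`` via ``connection.exponential_sleep``) follows the standard capped-doubling pattern. A minimal self-contained sketch of that pattern — the function name and default values here are illustrative, not dynamo3's actual signature:

```python
import random


def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0,
                  jitter: bool = False) -> float:
    """Return the sleep time (in seconds) before retry number `attempt`.

    The delay doubles with each attempt and is clamped at `cap`; optional
    jitter spreads out retries from concurrent clients.
    """
    delay = min(cap, base * (2 ** attempt))
    if jitter:
        delay = random.uniform(0.0, delay)
    return delay


# Delays grow geometrically until hitting the cap.
print([backoff_delay(n) for n in range(4)])  # [0.5, 1.0, 2.0, 4.0]
```

With these defaults a client that keeps receiving throughput errors sleeps at most 30 seconds between attempts, which is why repeated ``UnprocessedItems`` responses slow writes down rather than failing them.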
39 | -------------------------------------------------------------------------------- /dynamo3/__init__.py: -------------------------------------------------------------------------------- 1 | """ An API that wraps DynamoDB calls """ 2 | from .connection import DynamoDBConnection 3 | from .exception import ( 4 | CheckFailed, 5 | ConditionalCheckFailedException, 6 | DynamoDBError, 7 | ProvisionedThroughputExceededException, 8 | ThroughputException, 9 | TransactionCanceledException, 10 | ) 11 | from .fields import DynamoKey, GlobalIndex, IndexUpdate, LocalIndex, Table, Throughput 12 | from .rate import RateLimit 13 | from .result import Capacity, Limit 14 | from .types import Binary, Dynamizer, is_null 15 | 16 | __all__ = [ 17 | "Binary", 18 | "Capacity", 19 | "CheckFailed", 20 | "ConditionalCheckFailedException", 21 | "Dynamizer", 22 | "DynamoDBConnection", 23 | "DynamoDBError", 24 | "DynamoKey", 25 | "GlobalIndex", 26 | "IndexUpdate", 27 | "Limit", 28 | "LocalIndex", 29 | "ProvisionedThroughputExceededException", 30 | "RateLimit", 31 | "Table", 32 | "Throughput", 33 | "ThroughputException", 34 | "TransactionCanceledException", 35 | "is_null", 36 | ] 37 | 38 | __version__ = "1.0.0" 39 | -------------------------------------------------------------------------------- /dynamo3/batch.py: -------------------------------------------------------------------------------- 1 | """ Code for batch processing """ 2 | import logging 3 | from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple 4 | 5 | from .constants import ( 6 | MAX_WRITE_BATCH, 7 | ReturnCapacityType, 8 | ReturnItemCollectionMetricsType, 9 | ) 10 | from .result import ConsumedCapacity 11 | from .types import ( 12 | Dynamizer, 13 | DynamoObject, 14 | ExpressionAttributeNamesType, 15 | ExpressionValuesType, 16 | ExpressionValueType, 17 | build_expression_values, 18 | is_null, 19 | ) 20 | 21 | if TYPE_CHECKING: 22 | from .connection import DynamoDBConnection 23 | 24 | LOG = logging.getLogger(__name__) 25 
| 26 | 27 | def _encode_write( 28 | dynamizer: Dynamizer, data: DynamoObject, action: str, key: str 29 | ) -> Dict: 30 | """Encode an item write command""" 31 | # Strip null values out of data 32 | data = dict(((k, dynamizer.encode(v)) for k, v in data.items() if not is_null(v))) 33 | return { 34 | action: { 35 | key: data, 36 | } 37 | } 38 | 39 | 40 | def encode_put(dynamizer: Dynamizer, data: DynamoObject) -> Dict: 41 | """Encode an item put command""" 42 | return _encode_write(dynamizer, data, "PutRequest", "Item") 43 | 44 | 45 | def encode_delete(dynamizer: Dynamizer, data: DynamoObject) -> Dict: 46 | """Encode an item delete command""" 47 | return _encode_write(dynamizer, data, "DeleteRequest", "Key") 48 | 49 | 50 | class BatchWriter(object): 51 | 52 | """Context manager for writing a large number of items to a table""" 53 | 54 | def __init__( 55 | self, 56 | connection: "DynamoDBConnection", 57 | return_capacity: Optional[ReturnCapacityType] = None, 58 | return_item_collection_metrics: Optional[ 59 | ReturnItemCollectionMetricsType 60 | ] = None, 61 | ): 62 | self.connection = connection 63 | self.return_capacity = return_capacity 64 | self.return_item_collection_metrics = return_item_collection_metrics 65 | self._to_put: List[Tuple[str, DynamoObject]] = [] 66 | self._to_delete: List[Tuple[str, DynamoObject]] = [] 67 | self._unprocessed: List[Tuple[str, DynamoObject]] = [] 68 | self._attempt = 0 69 | self.consumed_capacity: Optional[Dict[str, ConsumedCapacity]] = None 70 | 71 | def __enter__(self): 72 | return self 73 | 74 | def __exit__(self, exc_type, *_): 75 | # Don't try to flush remaining if we hit an exception 76 | if exc_type is not None: 77 | return 78 | # Flush anything that's left. 79 | if self._to_put or self._to_delete: 80 | self.flush() 81 | 82 | # Finally, handle anything that wasn't processed. 
83 | if self._unprocessed: 84 | self.resend_unprocessed() 85 | 86 | def put(self, tablename: str, data: DynamoObject) -> None: 87 | """ 88 | Write an item (will overwrite existing data) 89 | 90 | Parameters 91 | ---------- 92 | data : dict 93 | Item data 94 | 95 | """ 96 | self._to_put.append((tablename, data)) 97 | 98 | if self.should_flush: 99 | self.flush() 100 | 101 | def delete(self, tablename: str, kwargs: DynamoObject) -> None: 102 | """ 103 | Delete an item 104 | 105 | Parameters 106 | ---------- 107 | kwargs : dict 108 | The primary key of the item to delete 109 | 110 | """ 111 | self._to_delete.append((tablename, kwargs)) 112 | 113 | if self.should_flush: 114 | self.flush() 115 | 116 | @property 117 | def should_flush(self) -> bool: 118 | """True if a flush is needed""" 119 | return ( 120 | len(self._to_put) + len(self._to_delete) + len(self._unprocessed) 121 | >= MAX_WRITE_BATCH 122 | ) 123 | 124 | def flush(self) -> None: 125 | """Flush pending items to Dynamo""" 126 | table_map: Dict[str, List[Dict]] = {} 127 | count = 0 128 | 129 | for tablename, data in self._unprocessed: 130 | items = table_map.setdefault(tablename, []) 131 | items.append(data) 132 | count += 1 133 | self._unprocessed = [] 134 | 135 | put_items, self._to_put = ( 136 | self._to_put[0 : MAX_WRITE_BATCH - count], 137 | self._to_put[MAX_WRITE_BATCH - count :], 138 | ) 139 | for tablename, data in put_items: 140 | count += 1 141 | items = table_map.setdefault(tablename, []) 142 | items.append(encode_put(self.connection.dynamizer, data)) 143 | 144 | delete_items, self._to_delete = ( 145 | self._to_delete[0 : MAX_WRITE_BATCH - count], 146 | self._to_delete[MAX_WRITE_BATCH - count :], 147 | ) 148 | for tablename, data in delete_items: 149 | items = table_map.setdefault(tablename, []) 150 | items.append(encode_delete(self.connection.dynamizer, data)) 151 | if table_map: 152 | self._write(table_map) 153 | # This will only happen if we're getting throttled hard. 
We shouldn't hit the 154 | # recursion limit because we'll be sleeping exponentially 155 | if self.should_flush: 156 | self.flush() 157 | 158 | def _write(self, table_map: Dict[str, List[Dict]]) -> None: 159 | """Perform a batch write and handle the response""" 160 | response = self._batch_write_item(table_map) 161 | if "consumed_capacity" in response: 162 | self.consumed_capacity = self.consumed_capacity or {} 163 | for cap in response["consumed_capacity"]: 164 | self.consumed_capacity[ 165 | cap.tablename 166 | ] = cap + self.consumed_capacity.get(cap.tablename) 167 | 168 | if "UnprocessedItems" in response: 169 | for tablename, unprocessed in response["UnprocessedItems"].items(): 170 | # Some items have not been processed. Stow them for now & 171 | # re-attempt on the next try 172 | LOG.info( 173 | "%d items were unprocessed. Storing for later.", len(unprocessed) 174 | ) 175 | for item in unprocessed: 176 | self._unprocessed.append((tablename, item)) 177 | # Getting UnprocessedItems indicates that we are exceeding our 178 | # throughput. So sleep for a bit. 179 | self._attempt += 1 180 | self.connection.exponential_sleep(self._attempt) 181 | else: 182 | # No UnprocessedItems means our request rate is fine, so we can 183 | # reset the attempt number. 
184 | self._attempt = 0 185 | 186 | def resend_unprocessed(self): 187 | """Resend all unprocessed items""" 188 | LOG.info("Re-sending %d unprocessed items.", len(self._unprocessed)) 189 | 190 | while self._unprocessed: 191 | self.flush() 192 | LOG.info("%d unprocessed items left", len(self._unprocessed)) 193 | 194 | def _batch_write_item(self, table_map: Dict[str, List[Dict]]) -> Dict[str, Any]: 195 | """Make a BatchWriteItem call to Dynamo""" 196 | kwargs: Dict[str, Any] = {"RequestItems": table_map} 197 | if self.return_capacity is not None: 198 | kwargs["ReturnConsumedCapacity"] = self.return_capacity 199 | if self.return_item_collection_metrics is not None: 200 | kwargs["ReturnItemCollectionMetrics"] = self.return_item_collection_metrics 201 | return self.connection.call("batch_write_item", **kwargs) 202 | 203 | 204 | class BatchWriterSingleTable(object): 205 | def __init__(self, tablename: str, writer: BatchWriter): 206 | self._tablename = tablename 207 | self._writer = writer 208 | 209 | def __enter__(self): 210 | return self 211 | 212 | def __exit__(self, *args): 213 | self._writer.__exit__(*args) 214 | 215 | def put(self, data: DynamoObject) -> None: 216 | self._writer.put(self._tablename, data) 217 | 218 | def delete(self, kwargs: DynamoObject) -> None: 219 | self._writer.delete(self._tablename, kwargs) 220 | 221 | def flush(self) -> None: 222 | self._writer.flush() 223 | 224 | @property 225 | def consumed_capacity(self) -> Optional[ConsumedCapacity]: 226 | """Getter for consumed_capacity""" 227 | cap_map = self._writer.consumed_capacity 228 | if cap_map is None: 229 | return None 230 | return cap_map[self._tablename] 231 | 232 | 233 | class TransactionWriter(object): 234 | def __init__( 235 | self, 236 | connection: "DynamoDBConnection", 237 | token: Optional[str] = None, 238 | return_capacity: Optional[ReturnCapacityType] = None, 239 | return_item_collection_metrics: Optional[ 240 | ReturnItemCollectionMetricsType 241 | ] = None, 242 | ): 243 | 
self._connection = connection 244 | self.token = token 245 | self._return_capacity = return_capacity 246 | self._return_item_collection_metrics = return_item_collection_metrics 247 | self._items: List[Dict] = [] 248 | self.consumed_capacity: Optional[Dict[str, ConsumedCapacity]] = None 249 | 250 | def __enter__(self): 251 | return self 252 | 253 | def __exit__(self, exc_type, *_): 254 | # Don't try to flush remaining if we hit an exception 255 | if exc_type is not None: 256 | return 257 | self.execute() 258 | 259 | def _encode_action( 260 | self, 261 | __item_key: str, 262 | tablename: str, 263 | key: DynamoObject, 264 | condition: Optional[str], 265 | expr_values: Optional[ExpressionValuesType] = None, 266 | alias: Optional[ExpressionAttributeNamesType] = None, 267 | **kwargs: ExpressionValueType, 268 | ) -> Dict[str, Any]: 269 | action = { 270 | "TableName": tablename, 271 | __item_key: self._connection.dynamizer.encode_keys(key), 272 | } 273 | if condition is not None: 274 | action["ConditionExpression"] = condition 275 | values = build_expression_values( 276 | self._connection.dynamizer, expr_values, kwargs 277 | ) 278 | if values: 279 | action["ExpressionAttributeValues"] = values 280 | if alias: 281 | action["ExpressionAttributeNames"] = alias 282 | return action 283 | 284 | def check( 285 | self, 286 | tablename: str, 287 | key: DynamoObject, 288 | condition: str, 289 | expr_values: Optional[ExpressionValuesType] = None, 290 | alias: Optional[ExpressionAttributeNamesType] = None, 291 | **kwargs: ExpressionValueType, 292 | ) -> None: 293 | action = self._encode_action( 294 | "Key", tablename, key, condition, expr_values, alias, **kwargs 295 | ) 296 | self._items.append({"ConditionCheck": action}) 297 | 298 | def delete( 299 | self, 300 | tablename: str, 301 | key: DynamoObject, 302 | condition: Optional[str] = None, 303 | expr_values: Optional[ExpressionValuesType] = None, 304 | alias: Optional[ExpressionAttributeNamesType] = None, 305 | **kwargs: 
ExpressionValueType, 306 | ) -> None: 307 | action = self._encode_action( 308 | "Key", tablename, key, condition, expr_values, alias, **kwargs 309 | ) 310 | self._items.append({"Delete": action}) 311 | 312 | def put( 313 | self, 314 | tablename: str, 315 | item: DynamoObject, 316 | condition: Optional[str] = None, 317 | expr_values: Optional[ExpressionValuesType] = None, 318 | alias: Optional[ExpressionAttributeNamesType] = None, 319 | **kwargs: ExpressionValueType, 320 | ) -> None: 321 | action = self._encode_action( 322 | "Item", tablename, item, condition, expr_values, alias, **kwargs 323 | ) 324 | self._items.append({"Put": action}) 325 | 326 | def update( 327 | self, 328 | tablename: str, 329 | key: DynamoObject, 330 | expression: str, 331 | condition: Optional[str] = None, 332 | expr_values: Optional[ExpressionValuesType] = None, 333 | alias: Optional[ExpressionAttributeNamesType] = None, 334 | **kwargs: ExpressionValueType, 335 | ) -> None: 336 | action = self._encode_action( 337 | "Key", tablename, key, condition, expr_values, alias, **kwargs 338 | ) 339 | action["UpdateExpression"] = expression 340 | self._items.append({"Update": action}) 341 | 342 | def execute(self) -> None: 343 | kwargs: Dict[str, Any] = {"TransactItems": self._items} 344 | if self.token is not None: 345 | kwargs["ClientRequestToken"] = self.token 346 | if self._return_capacity is not None: 347 | kwargs["ReturnConsumedCapacity"] = self._return_capacity 348 | if self._return_item_collection_metrics is not None: 349 | kwargs["ReturnItemCollectionMetrics"] = self._return_item_collection_metrics 350 | response = self._connection.call("transact_write_items", **kwargs) 351 | 352 | if "consumed_capacity" in response: 353 | self.consumed_capacity = self.consumed_capacity or {} 354 | for cap in response["consumed_capacity"]: 355 | self.consumed_capacity[ 356 | cap.tablename 357 | ] = cap + self.consumed_capacity.get(cap.tablename) 358 | 
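``BatchWriter.flush`` above fills each request up to ``MAX_WRITE_BATCH`` items by draining the unprocessed, put, and delete queues in turn, using the same list-slicing pattern each time. A simplified, self-contained sketch of that pattern — the ``drain`` helper is illustrative, not part of dynamo3:

```python
from typing import List, Tuple

MAX_WRITE_BATCH = 25  # DynamoDB's per-request BatchWriteItem limit


def drain(queue: List[str], budget: int) -> Tuple[List[str], List[str]]:
    """Split `queue` into (items that fit in `budget`, remainder),
    mirroring the `self._to_put[0:MAX_WRITE_BATCH - count]` slices
    in BatchWriter.flush()."""
    return queue[:budget], queue[budget:]


unprocessed = ["u1", "u2"]                 # retried items go first
puts = [f"p{i}" for i in range(30)]        # then pending puts
deletes = ["d1", "d2", "d3"]               # then pending deletes

batch = list(unprocessed)
taken, puts = drain(puts, MAX_WRITE_BATCH - len(batch))
batch.extend(taken)
taken, deletes = drain(deletes, MAX_WRITE_BATCH - len(batch))
batch.extend(taken)

print(len(batch), len(puts), len(deletes))  # 25 7 3
```

Anything left over (here 7 puts and 3 deletes) stays queued, which is why ``flush`` calls itself again when ``should_flush`` is still true after a write.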
-------------------------------------------------------------------------------- /dynamo3/constants.py: -------------------------------------------------------------------------------- 1 | """ Constant values """ 2 | from typing import FrozenSet 3 | 4 | from typing_extensions import Final, Literal 5 | 6 | # Data types 7 | NUMBER: Final[Literal["N"]] = "N" 8 | STRING: Final[Literal["S"]] = "S" 9 | BINARY: Final[Literal["B"]] = "B" 10 | NUMBER_SET: Final[Literal["NS"]] = "NS" 11 | STRING_SET: Final[Literal["SS"]] = "SS" 12 | BINARY_SET: Final[Literal["BS"]] = "BS" 13 | LIST: Final[Literal["L"]] = "L" 14 | BOOL: Final[Literal["BOOL"]] = "BOOL" 15 | MAP: Final[Literal["M"]] = "M" 16 | NULL: Final[Literal["NULL"]] = "NULL" 17 | KeyType = Literal[Literal["S"], Literal["N"], Literal["B"]] 18 | 19 | NONE: Final[Literal["NONE"]] = "NONE" 20 | 21 | # SELECT 22 | COUNT: Final[Literal["COUNT"]] = "COUNT" 23 | ALL_ATTRIBUTES: Final[Literal["ALL_ATTRIBUTES"]] = "ALL_ATTRIBUTES" 24 | ALL_PROJECTED_ATTRIBUTES: Final[ 25 | Literal["ALL_PROJECTED_ATTRIBUTES"] 26 | ] = "ALL_PROJECTED_ATTRIBUTES" 27 | SPECIFIC_ATTRIBUTES: Final[Literal["SPECIFIC_ATTRIBUTES"]] = "SPECIFIC_ATTRIBUTES" 28 | SelectType = Literal[ 29 | Literal["COUNT"], 30 | Literal["ALL_ATTRIBUTES"], 31 | Literal["ALL_PROJECTED_ATTRIBUTES"], 32 | Literal["SPECIFIC_ATTRIBUTES"], 33 | ] 34 | NonCountSelectType = Literal[ 35 | Literal["ALL_ATTRIBUTES"], 36 | Literal["ALL_PROJECTED_ATTRIBUTES"], 37 | Literal["SPECIFIC_ATTRIBUTES"], 38 | ] 39 | 40 | # ReturnValues 41 | ALL_OLD: Final[Literal["ALL_OLD"]] = "ALL_OLD" 42 | ALL_NEW: Final[Literal["ALL_NEW"]] = "ALL_NEW" 43 | UPDATED_OLD: Final[Literal["UPDATED_OLD"]] = "UPDATED_OLD" 44 | UPDATED_NEW: Final[Literal["UPDATED_NEW"]] = "UPDATED_NEW" 45 | 46 | # ReturnConsumedCapacity 47 | INDEXES: Final[Literal["INDEXES"]] = "INDEXES" 48 | TOTAL: Final[Literal["TOTAL"]] = "TOTAL" 49 | ReturnCapacityType = Literal[Literal["NONE"], Literal["INDEXES"], Literal["TOTAL"]] 50 | 51 | # 
ReturnItemCollectionMetrics 52 | SIZE: Final[Literal["SIZE"]] = "SIZE" 53 | ReturnItemCollectionMetricsType = Literal[Literal["SIZE"], Literal["NONE"]] 54 | 55 | TableStatusType = Literal[ 56 | Literal["CREATING"], 57 | Literal["UPDATING"], 58 | Literal["DELETING"], 59 | Literal["ACTIVE"], 60 | Literal["INACCESSIBLE_ENCRYPTION_CREDENTIALS"], 61 | Literal["ARCHIVING"], 62 | Literal["ARCHIVED"], 63 | ] 64 | 65 | IndexStatusType = Literal[ 66 | Literal["CREATING"], 67 | Literal["UPDATING"], 68 | Literal["DELETING"], 69 | Literal["ACTIVE"], 70 | ] 71 | 72 | # Billing mode 73 | PROVISIONED: Final[Literal["PROVISIONED"]] = "PROVISIONED" 74 | PAY_PER_REQUEST: Final[Literal["PAY_PER_REQUEST"]] = "PAY_PER_REQUEST" 75 | BillingModeType = Literal[Literal["PROVISIONED"], Literal["PAY_PER_REQUEST"]] 76 | 77 | # Stream view type 78 | KEYS_ONLY: Final[Literal["KEYS_ONLY"]] = "KEYS_ONLY" 79 | NEW_IMAGE: Final[Literal["NEW_IMAGE"]] = "NEW_IMAGE" 80 | OLD_IMAGE: Final[Literal["OLD_IMAGE"]] = "OLD_IMAGE" 81 | NEW_AND_OLD_IMAGES: Final[Literal["NEW_AND_OLD_IMAGES"]] = "NEW_AND_OLD_IMAGES" 82 | StreamViewType = Literal[ 83 | Literal["KEYS_ONLY"], 84 | Literal["NEW_IMAGE"], 85 | Literal["OLD_IMAGE"], 86 | Literal["NEW_AND_OLD_IMAGES"], 87 | ] 88 | 89 | # TTL 90 | TimeToLiveStatusType = Literal[ 91 | Literal["ENABLING"], 92 | Literal["DISABLING"], 93 | Literal["ENABLED"], 94 | Literal["DISABLED"], 95 | ] 96 | 97 | # Maximum number of keys in a BatchGetItem request 98 | MAX_GET_BATCH: Final[Literal[100]] = 100 99 | # Maximum number of items in a BatchWriteItem request 100 | MAX_WRITE_BATCH: Final[Literal[25]] = 25 101 | 102 | READ_COMMANDS: Final[FrozenSet[str]] = frozenset( 103 | ["batch_get_item", "get_item", "query", "scan", "transact_get_items"] 104 | ) 105 | 106 | # Last fetched on 2015-11-10 107 | # http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ReservedWords.html 108 | RESERVED_WORDS: Final[FrozenSet[str]] = frozenset( 109 | [ 110 | "ABORT", 111 | "ABSOLUTE", 112 | 
"ACTION", 113 | "ADD", 114 | "AFTER", 115 | "AGENT", 116 | "AGGREGATE", 117 | "ALL", 118 | "ALLOCATE", 119 | "ALTER", 120 | "ANALYZE", 121 | "AND", 122 | "ANY", 123 | "ARCHIVE", 124 | "ARE", 125 | "ARRAY", 126 | "AS", 127 | "ASC", 128 | "ASCII", 129 | "ASENSITIVE", 130 | "ASSERTION", 131 | "ASYMMETRIC", 132 | "AT", 133 | "ATOMIC", 134 | "ATTACH", 135 | "ATTRIBUTE", 136 | "AUTH", 137 | "AUTHORIZATION", 138 | "AUTHORIZE", 139 | "AUTO", 140 | "AVG", 141 | "BACK", 142 | "BACKUP", 143 | "BASE", 144 | "BATCH", 145 | "BEFORE", 146 | "BEGIN", 147 | "BETWEEN", 148 | "BIGINT", 149 | "BINARY", 150 | "BIT", 151 | "BLOB", 152 | "BLOCK", 153 | "BOOLEAN", 154 | "BOTH", 155 | "BREADTH", 156 | "BUCKET", 157 | "BULK", 158 | "BY", 159 | "BYTE", 160 | "CALL", 161 | "CALLED", 162 | "CALLING", 163 | "CAPACITY", 164 | "CASCADE", 165 | "CASCADED", 166 | "CASE", 167 | "CAST", 168 | "CATALOG", 169 | "CHAR", 170 | "CHARACTER", 171 | "CHECK", 172 | "CLASS", 173 | "CLOB", 174 | "CLOSE", 175 | "CLUSTER", 176 | "CLUSTERED", 177 | "CLUSTERING", 178 | "CLUSTERS", 179 | "COALESCE", 180 | "COLLATE", 181 | "COLLATION", 182 | "COLLECTION", 183 | "COLUMN", 184 | "COLUMNS", 185 | "COMBINE", 186 | "COMMENT", 187 | "COMMIT", 188 | "COMPACT", 189 | "COMPILE", 190 | "COMPRESS", 191 | "CONDITION", 192 | "CONFLICT", 193 | "CONNECT", 194 | "CONNECTION", 195 | "CONSISTENCY", 196 | "CONSISTENT", 197 | "CONSTRAINT", 198 | "CONSTRAINTS", 199 | "CONSTRUCTOR", 200 | "CONSUMED", 201 | "CONTINUE", 202 | "CONVERT", 203 | "COPY", 204 | "CORRESPONDING", 205 | "COUNT", 206 | "COUNTER", 207 | "CREATE", 208 | "CROSS", 209 | "CUBE", 210 | "CURRENT", 211 | "CURSOR", 212 | "CYCLE", 213 | "DATA", 214 | "DATABASE", 215 | "DATE", 216 | "DATETIME", 217 | "DAY", 218 | "DEALLOCATE", 219 | "DEC", 220 | "DECIMAL", 221 | "DECLARE", 222 | "DEFAULT", 223 | "DEFERRABLE", 224 | "DEFERRED", 225 | "DEFINE", 226 | "DEFINED", 227 | "DEFINITION", 228 | "DELETE", 229 | "DELIMITED", 230 | "DEPTH", 231 | "DEREF", 232 | "DESC", 233 | "DESCRIBE", 
234 | "DESCRIPTOR", 235 | "DETACH", 236 | "DETERMINISTIC", 237 | "DIAGNOSTICS", 238 | "DIRECTORIES", 239 | "DISABLE", 240 | "DISCONNECT", 241 | "DISTINCT", 242 | "DISTRIBUTE", 243 | "DO", 244 | "DOMAIN", 245 | "DOUBLE", 246 | "DROP", 247 | "DUMP", 248 | "DURATION", 249 | "DYNAMIC", 250 | "EACH", 251 | "ELEMENT", 252 | "ELSE", 253 | "ELSEIF", 254 | "EMPTY", 255 | "ENABLE", 256 | "END", 257 | "EQUAL", 258 | "EQUALS", 259 | "ERROR", 260 | "ESCAPE", 261 | "ESCAPED", 262 | "EVAL", 263 | "EVALUATE", 264 | "EXCEEDED", 265 | "EXCEPT", 266 | "EXCEPTION", 267 | "EXCEPTIONS", 268 | "EXCLUSIVE", 269 | "EXEC", 270 | "EXECUTE", 271 | "EXISTS", 272 | "EXIT", 273 | "EXPLAIN", 274 | "EXPLODE", 275 | "EXPORT", 276 | "EXPRESSION", 277 | "EXTENDED", 278 | "EXTERNAL", 279 | "EXTRACT", 280 | "FAIL", 281 | "FALSE", 282 | "FAMILY", 283 | "FETCH", 284 | "FIELDS", 285 | "FILE", 286 | "FILTER", 287 | "FILTERING", 288 | "FINAL", 289 | "FINISH", 290 | "FIRST", 291 | "FIXED", 292 | "FLATTERN", 293 | "FLOAT", 294 | "FOR", 295 | "FORCE", 296 | "FOREIGN", 297 | "FORMAT", 298 | "FORWARD", 299 | "FOUND", 300 | "FREE", 301 | "FROM", 302 | "FULL", 303 | "FUNCTION", 304 | "FUNCTIONS", 305 | "GENERAL", 306 | "GENERATE", 307 | "GET", 308 | "GLOB", 309 | "GLOBAL", 310 | "GO", 311 | "GOTO", 312 | "GRANT", 313 | "GREATER", 314 | "GROUP", 315 | "GROUPING", 316 | "HANDLER", 317 | "HASH", 318 | "HAVE", 319 | "HAVING", 320 | "HEAP", 321 | "HIDDEN", 322 | "HOLD", 323 | "HOUR", 324 | "IDENTIFIED", 325 | "IDENTITY", 326 | "IF", 327 | "IGNORE", 328 | "IMMEDIATE", 329 | "IMPORT", 330 | "IN", 331 | "INCLUDING", 332 | "INCLUSIVE", 333 | "INCREMENT", 334 | "INCREMENTAL", 335 | "INDEX", 336 | "INDEXED", 337 | "INDEXES", 338 | "INDICATOR", 339 | "INFINITE", 340 | "INITIALLY", 341 | "INLINE", 342 | "INNER", 343 | "INNTER", 344 | "INOUT", 345 | "INPUT", 346 | "INSENSITIVE", 347 | "INSERT", 348 | "INSTEAD", 349 | "INT", 350 | "INTEGER", 351 | "INTERSECT", 352 | "INTERVAL", 353 | "INTO", 354 | "INVALIDATE", 355 | "IS", 356 | 
"ISOLATION", 357 | "ITEM", 358 | "ITEMS", 359 | "ITERATE", 360 | "JOIN", 361 | "KEY", 362 | "KEYS", 363 | "LAG", 364 | "LANGUAGE", 365 | "LARGE", 366 | "LAST", 367 | "LATERAL", 368 | "LEAD", 369 | "LEADING", 370 | "LEAVE", 371 | "LEFT", 372 | "LENGTH", 373 | "LESS", 374 | "LEVEL", 375 | "LIKE", 376 | "LIMIT", 377 | "LIMITED", 378 | "LINES", 379 | "LIST", 380 | "LOAD", 381 | "LOCAL", 382 | "LOCALTIME", 383 | "LOCALTIMESTAMP", 384 | "LOCATION", 385 | "LOCATOR", 386 | "LOCK", 387 | "LOCKS", 388 | "LOG", 389 | "LOGED", 390 | "LONG", 391 | "LOOP", 392 | "LOWER", 393 | "MAP", 394 | "MATCH", 395 | "MATERIALIZED", 396 | "MAX", 397 | "MAXLEN", 398 | "MEMBER", 399 | "MERGE", 400 | "METHOD", 401 | "METRICS", 402 | "MIN", 403 | "MINUS", 404 | "MINUTE", 405 | "MISSING", 406 | "MOD", 407 | "MODE", 408 | "MODIFIES", 409 | "MODIFY", 410 | "MODULE", 411 | "MONTH", 412 | "MULTI", 413 | "MULTISET", 414 | "NAME", 415 | "NAMES", 416 | "NATIONAL", 417 | "NATURAL", 418 | "NCHAR", 419 | "NCLOB", 420 | "NEW", 421 | "NEXT", 422 | "NO", 423 | "NONE", 424 | "NOT", 425 | "NULL", 426 | "NULLIF", 427 | "NUMBER", 428 | "NUMERIC", 429 | "OBJECT", 430 | "OF", 431 | "OFFLINE", 432 | "OFFSET", 433 | "OLD", 434 | "ON", 435 | "ONLINE", 436 | "ONLY", 437 | "OPAQUE", 438 | "OPEN", 439 | "OPERATOR", 440 | "OPTION", 441 | "OR", 442 | "ORDER", 443 | "ORDINALITY", 444 | "OTHER", 445 | "OTHERS", 446 | "OUT", 447 | "OUTER", 448 | "OUTPUT", 449 | "OVER", 450 | "OVERLAPS", 451 | "OVERRIDE", 452 | "OWNER", 453 | "PAD", 454 | "PARALLEL", 455 | "PARAMETER", 456 | "PARAMETERS", 457 | "PARTIAL", 458 | "PARTITION", 459 | "PARTITIONED", 460 | "PARTITIONS", 461 | "PATH", 462 | "PERCENT", 463 | "PERCENTILE", 464 | "PERMISSION", 465 | "PERMISSIONS", 466 | "PIPE", 467 | "PIPELINED", 468 | "PLAN", 469 | "POOL", 470 | "POSITION", 471 | "PRECISION", 472 | "PREPARE", 473 | "PRESERVE", 474 | "PRIMARY", 475 | "PRIOR", 476 | "PRIVATE", 477 | "PRIVILEGES", 478 | "PROCEDURE", 479 | "PROCESSED", 480 | "PROJECT", 481 | "PROJECTION", 
482 | "PROPERTY", 483 | "PROVISIONING", 484 | "PUBLIC", 485 | "PUT", 486 | "QUERY", 487 | "QUIT", 488 | "QUORUM", 489 | "RAISE", 490 | "RANDOM", 491 | "RANGE", 492 | "RANK", 493 | "RAW", 494 | "READ", 495 | "READS", 496 | "REAL", 497 | "REBUILD", 498 | "RECORD", 499 | "RECURSIVE", 500 | "REDUCE", 501 | "REF", 502 | "REFERENCE", 503 | "REFERENCES", 504 | "REFERENCING", 505 | "REGEXP", 506 | "REGION", 507 | "REINDEX", 508 | "RELATIVE", 509 | "RELEASE", 510 | "REMAINDER", 511 | "RENAME", 512 | "REPEAT", 513 | "REPLACE", 514 | "REQUEST", 515 | "RESET", 516 | "RESIGNAL", 517 | "RESOURCE", 518 | "RESPONSE", 519 | "RESTORE", 520 | "RESTRICT", 521 | "RESULT", 522 | "RETURN", 523 | "RETURNING", 524 | "RETURNS", 525 | "REVERSE", 526 | "REVOKE", 527 | "RIGHT", 528 | "ROLE", 529 | "ROLES", 530 | "ROLLBACK", 531 | "ROLLUP", 532 | "ROUTINE", 533 | "ROW", 534 | "ROWS", 535 | "RULE", 536 | "RULES", 537 | "SAMPLE", 538 | "SATISFIES", 539 | "SAVE", 540 | "SAVEPOINT", 541 | "SCAN", 542 | "SCHEMA", 543 | "SCOPE", 544 | "SCROLL", 545 | "SEARCH", 546 | "SECOND", 547 | "SECTION", 548 | "SEGMENT", 549 | "SEGMENTS", 550 | "SELECT", 551 | "SELF", 552 | "SEMI", 553 | "SENSITIVE", 554 | "SEPARATE", 555 | "SEQUENCE", 556 | "SERIALIZABLE", 557 | "SESSION", 558 | "SET", 559 | "SETS", 560 | "SHARD", 561 | "SHARE", 562 | "SHARED", 563 | "SHORT", 564 | "SHOW", 565 | "SIGNAL", 566 | "SIMILAR", 567 | "SIZE", 568 | "SKEWED", 569 | "SMALLINT", 570 | "SNAPSHOT", 571 | "SOME", 572 | "SOURCE", 573 | "SPACE", 574 | "SPACES", 575 | "SPARSE", 576 | "SPECIFIC", 577 | "SPECIFICTYPE", 578 | "SPLIT", 579 | "SQL", 580 | "SQLCODE", 581 | "SQLERROR", 582 | "SQLEXCEPTION", 583 | "SQLSTATE", 584 | "SQLWARNING", 585 | "START", 586 | "STATE", 587 | "STATIC", 588 | "STATUS", 589 | "STORAGE", 590 | "STORE", 591 | "STORED", 592 | "STREAM", 593 | "STRING", 594 | "STRUCT", 595 | "STYLE", 596 | "SUB", 597 | "SUBMULTISET", 598 | "SUBPARTITION", 599 | "SUBSTRING", 600 | "SUBTYPE", 601 | "SUM", 602 | "SUPER", 603 | "SYMMETRIC", 
604 | "SYNONYM", 605 | "SYSTEM", 606 | "TABLE", 607 | "TABLESAMPLE", 608 | "TEMP", 609 | "TEMPORARY", 610 | "TERMINATED", 611 | "TEXT", 612 | "THAN", 613 | "THEN", 614 | "THROUGHPUT", 615 | "TIME", 616 | "TIMESTAMP", 617 | "TIMEZONE", 618 | "TINYINT", 619 | "TO", 620 | "TOKEN", 621 | "TOTAL", 622 | "TOUCH", 623 | "TRAILING", 624 | "TRANSACTION", 625 | "TRANSFORM", 626 | "TRANSLATE", 627 | "TRANSLATION", 628 | "TREAT", 629 | "TRIGGER", 630 | "TRIM", 631 | "TRUE", 632 | "TRUNCATE", 633 | "TTL", 634 | "TUPLE", 635 | "TYPE", 636 | "UNDER", 637 | "UNDO", 638 | "UNION", 639 | "UNIQUE", 640 | "UNIT", 641 | "UNKNOWN", 642 | "UNLOGGED", 643 | "UNNEST", 644 | "UNPROCESSED", 645 | "UNSIGNED", 646 | "UNTIL", 647 | "UPDATE", 648 | "UPPER", 649 | "URL", 650 | "USAGE", 651 | "USE", 652 | "USER", 653 | "USERS", 654 | "USING", 655 | "UUID", 656 | "VACUUM", 657 | "VALUE", 658 | "VALUED", 659 | "VALUES", 660 | "VARCHAR", 661 | "VARIABLE", 662 | "VARIANCE", 663 | "VARINT", 664 | "VARYING", 665 | "VIEW", 666 | "VIEWS", 667 | "VIRTUAL", 668 | "VOID", 669 | "WAIT", 670 | "WHEN", 671 | "WHENEVER", 672 | "WHERE", 673 | "WHILE", 674 | "WINDOW", 675 | "WITH", 676 | "WITHIN", 677 | "WITHOUT", 678 | "WORK", 679 | "WRAPPED", 680 | "WRITE", 681 | "YEAR", 682 | "ZONE", 683 | ] 684 | ) 685 | -------------------------------------------------------------------------------- /dynamo3/exception.py: -------------------------------------------------------------------------------- 1 | """ Exceptions and exception logic for DynamoDBConnection """ 2 | import sys 3 | from pprint import pformat 4 | 5 | import botocore 6 | 7 | 8 | class DynamoDBError(botocore.exceptions.BotoCoreError): 9 | 10 | """Base error that we get back from Dynamo""" 11 | 12 | fmt = "{Code}: {Message}\nArgs: {args}" 13 | 14 | def __init__(self, status_code, exc_info=None, **kwargs): 15 | self.exc_info = exc_info 16 | self.status_code = status_code 17 | super(DynamoDBError, self).__init__(**kwargs) 18 | 19 | def re_raise(self): 20 | 
"""Raise this exception with the original traceback""" 21 | if self.exc_info is not None: 22 | traceback = self.exc_info[2] 23 | if self.__traceback__ != traceback: 24 | raise self.with_traceback(traceback) 25 | raise self 26 | 27 | 28 | class ConditionalCheckFailedException(DynamoDBError): 29 | 30 | """Raised when an item field value fails the expected value check""" 31 | 32 | fmt = "{Code}: {Message}" 33 | 34 | 35 | CheckFailed = ConditionalCheckFailedException 36 | 37 | 38 | class TransactionCanceledException(DynamoDBError): 39 | 40 | """Raised when a transaction fails""" 41 | 42 | fmt = "{Code}: {Message}" 43 | 44 | 45 | class ProvisionedThroughputExceededException(DynamoDBError): 46 | 47 | """Raised when an item field value fails the expected value check""" 48 | 49 | fmt = "{Code}: {Message}" 50 | 51 | 52 | ThroughputException = ProvisionedThroughputExceededException 53 | 54 | EXC = { 55 | "ConditionalCheckFailedException": ConditionalCheckFailedException, 56 | "ProvisionedThroughputExceededException": ThroughputException, 57 | "TransactionCanceledException": TransactionCanceledException, 58 | } 59 | 60 | 61 | def translate_exception(exc, kwargs): 62 | """Translate a botocore.exceptions.ClientError into a dynamo3 error""" 63 | error = exc.response["Error"] 64 | error.setdefault("Message", "") 65 | err_class = EXC.get(error["Code"], DynamoDBError) 66 | return err_class( 67 | exc.response["ResponseMetadata"]["HTTPStatusCode"], 68 | exc_info=sys.exc_info(), 69 | args=pformat(kwargs), 70 | **error 71 | ) 72 | -------------------------------------------------------------------------------- /dynamo3/fields.py: -------------------------------------------------------------------------------- 1 | """ Objects for defining fields and indexes """ 2 | from abc import ABC, abstractmethod 3 | from typing import Any, Dict, List, NamedTuple, Optional, Set, Tuple, Union 4 | 5 | from typing_extensions import Final, Literal 6 | 7 | from .constants import ( 8 | PAY_PER_REQUEST, 9 
| STRING, 10 | BillingModeType, 11 | IndexStatusType, 12 | KeyType, 13 | StreamViewType, 14 | TableStatusType, 15 | TimeToLiveStatusType, 16 | ) 17 | from .types import TYPES_REV 18 | 19 | 20 | class TTL(NamedTuple): 21 | attribute_name: Optional[str] 22 | status: TimeToLiveStatusType 23 | 24 | @classmethod 25 | def default(cls) -> "TTL": 26 | return cls(None, "DISABLED") 27 | 28 | 29 | class DynamoKey(object): 30 | 31 | """ 32 | A single field inside a Dynamo table 33 | 34 | Parameters 35 | ---------- 36 | name : str 37 | The name of the field 38 | data_type : {STRING, NUMBER, BINARY} 39 | The Dynamo data type of the field 40 | 41 | """ 42 | 43 | def __init__(self, name: str, data_type: KeyType = STRING): 44 | self.name = name 45 | self.data_type = data_type 46 | 47 | def definition(self) -> Dict[str, str]: 48 | """Returns the attribute definition""" 49 | return { 50 | "AttributeName": self.name, 51 | "AttributeType": self.data_type, 52 | } 53 | 54 | def hash_schema(self): 55 | """Get the schema definition with this field as a hash key""" 56 | return self._schema("HASH") 57 | 58 | def range_schema(self): 59 | """Get the schema definition with this field as a range key""" 60 | return self._schema("RANGE") 61 | 62 | def _schema( 63 | self, key_type: Literal[Literal["HASH"], Literal["RANGE"]] 64 | ) -> Dict[str, str]: 65 | """Construct the schema definition for this table""" 66 | return { 67 | "AttributeName": self.name, 68 | "KeyType": key_type, 69 | } 70 | 71 | def __str__(self): 72 | return "DynamoKey(%s, %s)" % (self.name, TYPES_REV[self.data_type]) 73 | 74 | def __repr__(self): 75 | return str(self) 76 | 77 | def __hash__(self): 78 | return hash(self.name) 79 | 80 | def __eq__(self, other): 81 | return ( 82 | isinstance(other, DynamoKey) 83 | and self.name == other.name 84 | and self.data_type == other.data_type 85 | ) 86 | 87 | def __ne__(self, other): 88 | return not self.__eq__(other) 89 | 90 | 91 | ThroughputOrTuple = Union["Throughput", Tuple[int, int]] 92 
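
As a quick illustration of the dicts ``DynamoKey`` builds (a standalone sketch, not the library itself — the field names ``user_id``/``created`` are hypothetical, and ``"S"``/``"N"`` are the DynamoDB scalar type codes that the ``STRING`` and ``NUMBER`` constants map to):

```python
# Standalone mirror of DynamoKey.definition() and DynamoKey._schema():
# the payload fragments CreateTable expects for attribute definitions
# and the key schema. Names "user_id" and "created" are made up.
from typing import Dict, List


def definition(name: str, data_type: str) -> Dict[str, str]:
    # mirrors DynamoKey.definition()
    return {"AttributeName": name, "AttributeType": data_type}


def schema(name: str, key_type: str) -> Dict[str, str]:
    # mirrors DynamoKey.hash_schema() / range_schema()
    return {"AttributeName": name, "KeyType": key_type}


# A string hash key plus a numeric range key
attribute_definitions: List[Dict[str, str]] = [
    definition("user_id", "S"),
    definition("created", "N"),
]
key_schema = [schema("user_id", "HASH"), schema("created", "RANGE")]
```
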
| 93 | 94 | class Throughput(object): 95 | 96 | """ 97 | Representation of table or global index throughput 98 | 99 | Parameters 100 | ---------- 101 | read : int, optional 102 | Read capacity throughput (default 5) 103 | write : int, optional 104 | Write capacity throughput (default 5) 105 | 106 | """ 107 | 108 | def __init__(self, read: int = 5, write: int = 5): 109 | self.read = read 110 | self.write = write 111 | 112 | def __str__(self): 113 | return "Throughput({0}, {1})".format(self.read, self.write) 114 | 115 | def __repr__(self): 116 | return str(self) 117 | 118 | def __bool__(self): 119 | return bool(self.read and self.write) 120 | 121 | def schema(self) -> Dict[str, int]: 122 | """Construct the schema definition for the throughput""" 123 | return { 124 | "ReadCapacityUnits": self.read, 125 | "WriteCapacityUnits": self.write, 126 | } 127 | 128 | @classmethod 129 | def normalize(cls, instance: Optional[ThroughputOrTuple]) -> "Throughput": 130 | if instance is None: 131 | return cls(0, 0) 132 | if isinstance(instance, Throughput): 133 | return instance 134 | return cls(instance[0], instance[1]) 135 | 136 | @classmethod 137 | def from_response(cls, response: Dict[str, int]) -> "Throughput": 138 | """Create Throughput from returned Dynamo data""" 139 | return cls( 140 | response["ReadCapacityUnits"], 141 | response["WriteCapacityUnits"], 142 | ) 143 | 144 | def __hash__(self): 145 | return self.read + self.write 146 | 147 | def __eq__(self, other): 148 | return ( 149 | isinstance(other, Throughput) 150 | and self.read == other.read 151 | and self.write == other.write 152 | ) 153 | 154 | def __ne__(self, other): 155 | return not self.__eq__(other) 156 | 157 | 158 | ProjectionType = Literal[Literal["ALL"], Literal["KEYS_ONLY"], Literal["INCLUDE"]] 159 | 160 | 161 | class BaseIndex(object): 162 | 163 | """Base class for indexes""" 164 | 165 | ALL: Final[Literal["ALL"]] = "ALL" 166 | KEYS: Final[Literal["KEYS_ONLY"]] = "KEYS_ONLY" 167 | INCLUDE: 
Final[Literal["INCLUDE"]] = "INCLUDE" 168 | 169 | def __init__( 170 | self, 171 | projection_type: ProjectionType, 172 | name: str, 173 | range_key: Optional[DynamoKey], 174 | includes: Optional[List[str]], 175 | ): 176 | self.projection_type = projection_type 177 | self.name = name 178 | self.range_key = range_key 179 | self.include_fields = includes 180 | self.response: Dict[str, Any] = {} 181 | 182 | def __getitem__(self, key: str) -> Any: 183 | return self.response[key] 184 | 185 | def get(self, key: str, default: Any = None) -> Any: 186 | return self.response.get(key, default) 187 | 188 | def __contains__(self, key: str) -> bool: 189 | return key in self.response 190 | 191 | def _schema(self, hash_key: DynamoKey) -> Dict[str, Any]: 192 | """ 193 | Create the index schema 194 | 195 | Parameters 196 | ---------- 197 | hash_key : :class:`~.DynamoKey` 198 | The hash key of the table 199 | 200 | """ 201 | key_schema = [hash_key.hash_schema()] 202 | if self.range_key is not None: 203 | key_schema.append(self.range_key.range_schema()) 204 | schema_data = { 205 | "IndexName": self.name, 206 | "KeySchema": key_schema, 207 | } 208 | projection: Any = { 209 | "ProjectionType": self.projection_type, 210 | } 211 | if self.include_fields is not None: 212 | projection["NonKeyAttributes"] = self.include_fields 213 | schema_data["Projection"] = projection 214 | return schema_data 215 | 216 | def __hash__(self): 217 | return hash(self.projection_type) + hash(self.name) + hash(self.range_key) 218 | 219 | def __eq__(self, other): 220 | return ( 221 | type(other) == type(self) 222 | and self.projection_type == other.projection_type 223 | and self.name == other.name 224 | and self.range_key == other.range_key 225 | and self.include_fields == other.include_fields 226 | ) 227 | 228 | def __ne__(self, other): 229 | return not self.__eq__(other) 230 | 231 | 232 | class LocalIndex(BaseIndex): 233 | 234 | """ 235 | A local secondary index for a table 236 | 237 | You should generally use 
the factory methods :meth:`~.all`, :meth:`~.keys`, 238 | and :meth:`~.include` instead of the constructor. 239 | 240 | """ 241 | 242 | range_key: DynamoKey 243 | 244 | def __init__( 245 | self, 246 | projection_type: ProjectionType, 247 | name: str, 248 | range_key: DynamoKey, 249 | includes: Optional[List[str]] = None, 250 | ): 251 | super(LocalIndex, self).__init__(projection_type, name, range_key, includes) 252 | 253 | @classmethod 254 | def all(cls, name: str, range_key: DynamoKey) -> "LocalIndex": 255 | """Create an index that projects all attributes""" 256 | return cls(cls.ALL, name, range_key) 257 | 258 | @classmethod 259 | def keys(cls, name: str, range_key: DynamoKey) -> "LocalIndex": 260 | """Create an index that projects only key attributes""" 261 | return cls(cls.KEYS, name, range_key) 262 | 263 | @classmethod 264 | def include( 265 | cls, name: str, range_key: DynamoKey, includes: List[str] 266 | ) -> "LocalIndex": 267 | """Create an index that projects key attributes plus some others""" 268 | return cls(cls.INCLUDE, name, range_key, includes) 269 | 270 | def schema(self, hash_key: DynamoKey) -> Dict[str, Any]: 271 | return super()._schema(hash_key) 272 | 273 | @classmethod 274 | def from_response( 275 | cls, response: Dict[str, Any], attrs: Dict[Any, Any] 276 | ) -> "LocalIndex": 277 | """Create an index from returned Dynamo data""" 278 | proj = response["Projection"] 279 | range_key = None 280 | for key_schema in response["KeySchema"]: 281 | if key_schema["KeyType"] == "RANGE": 282 | range_key = attrs[key_schema["AttributeName"]] 283 | if range_key is None: 284 | raise ValueError("No range key in local index definition") 285 | index = cls( 286 | proj["ProjectionType"], 287 | response["IndexName"], 288 | range_key, 289 | proj.get("NonKeyAttributes"), 290 | ) 291 | index.response = response 292 | return index 293 | 294 | def __str__(self): 295 | if self.include_fields: 296 | return "LocalIndex(%s, %s, %s, [%s])" % ( 297 | self.name, 298 | 
self.projection_type, 299 | self.range_key, 300 | ", ".join(self.include_fields), 301 | ) 302 | else: 303 | return "LocalIndex(%s, %s, %s)" % ( 304 | self.name, 305 | self.projection_type, 306 | self.range_key, 307 | ) 308 | 309 | def __repr__(self): 310 | return "LocalIndex(%s)" % self.name 311 | 312 | 313 | class GlobalIndex(BaseIndex): 314 | 315 | """ 316 | A global secondary index for a table 317 | 318 | You should generally use the factory methods :meth:`~.all`, :meth:`~.keys`, 319 | and :meth:`~.include` instead of the constructor. 320 | 321 | """ 322 | 323 | def __init__( 324 | self, 325 | projection_type: ProjectionType, 326 | name: str, 327 | hash_key: Optional[DynamoKey], 328 | range_key: Optional[DynamoKey] = None, 329 | includes: Optional[List[str]] = None, 330 | throughput: Optional[ThroughputOrTuple] = None, 331 | status: Optional[IndexStatusType] = None, 332 | backfilling: bool = False, 333 | item_count: int = 0, 334 | size: int = 0, 335 | ): 336 | super(GlobalIndex, self).__init__(projection_type, name, range_key, includes) 337 | self.hash_key = hash_key 338 | self.throughput = Throughput.normalize(throughput) 339 | self.status: Optional[IndexStatusType] = status 340 | self.backfilling = backfilling 341 | self.item_count = item_count 342 | self.size = size 343 | 344 | @classmethod 345 | def all( 346 | cls, 347 | name: str, 348 | hash_key: DynamoKey, 349 | range_key: Optional[DynamoKey] = None, 350 | throughput: Optional[ThroughputOrTuple] = None, 351 | ) -> "GlobalIndex": 352 | """Create an index that projects all attributes""" 353 | return cls(cls.ALL, name, hash_key, range_key, throughput=throughput) 354 | 355 | @classmethod 356 | def keys( 357 | cls, 358 | name: str, 359 | hash_key: DynamoKey, 360 | range_key: Optional[DynamoKey] = None, 361 | throughput: Optional[ThroughputOrTuple] = None, 362 | ) -> "GlobalIndex": 363 | """Create an index that projects only key attributes""" 364 | return cls(cls.KEYS, name, hash_key, range_key, 
throughput=throughput) 365 | 366 | @classmethod 367 | def include( 368 | cls, 369 | name: str, 370 | hash_key: DynamoKey, 371 | range_key: Optional[DynamoKey] = None, 372 | includes: Optional[List[str]] = None, 373 | throughput: Optional[ThroughputOrTuple] = None, 374 | ) -> "GlobalIndex": 375 | """Create an index that projects key attributes plus some others""" 376 | return cls( 377 | cls.INCLUDE, name, hash_key, range_key, includes, throughput=throughput 378 | ) 379 | 380 | def schema(self) -> Dict[str, Any]: 381 | """Construct the schema definition for this index""" 382 | if self.hash_key is None: 383 | raise ValueError( 384 | "Cannot construct schema for index %r. Missing hash key" % self.name 385 | ) 386 | schema_data = super(GlobalIndex, self)._schema(self.hash_key) 387 | if self.throughput: 388 | schema_data["ProvisionedThroughput"] = self.throughput.schema() 389 | return schema_data 390 | 391 | @classmethod 392 | def from_response( 393 | cls, response: Dict[str, Any], attrs: Dict[str, Any] 394 | ) -> "GlobalIndex": 395 | """Create an index from returned Dynamo data""" 396 | proj = response["Projection"] 397 | hash_key = None 398 | range_key = None 399 | for key_schema in response["KeySchema"]: 400 | key_attr = attrs.get(key_schema["AttributeName"]) 401 | if key_schema["KeyType"] == "HASH": 402 | hash_key = key_attr 403 | if key_schema["KeyType"] == "RANGE": 404 | range_key = key_attr 405 | throughput = None 406 | if "ProvisionedThroughput" in response: 407 | throughput = Throughput.from_response(response["ProvisionedThroughput"]) 408 | index = cls( 409 | proj["ProjectionType"], 410 | response["IndexName"], 411 | hash_key, 412 | range_key, 413 | proj.get("NonKeyAttributes"), 414 | throughput, 415 | response["IndexStatus"], 416 | response.get("Backfilling", False), 417 | response.get("ItemCount", 0), 418 | response.get("IndexSizeBytes", 0), 419 | ) 420 | index.response = response 421 | return index 422 | 423 | def __str__(self): 424 | lines = 
["GlobalIndex(%s, %s)" % (self.name, self.projection_type)] 425 | if self.hash_key: 426 | lines.append("Hash key: %s" % self.hash_key) 427 | if self.range_key: 428 | lines.append("Hash key: %s" % self.range_key) 429 | if self.include_fields: 430 | lines.append("Includes: %s" % ", ".join(self.include_fields)) 431 | if self.throughput: 432 | lines.append("Throughput: %s" % self.throughput) 433 | 434 | return "\n ".join(lines) 435 | 436 | def __repr__(self): 437 | return "GlobalIndex(%s)" % self.name 438 | 439 | def __hash__(self): # pylint: disable=W0235 440 | return super().__hash__() 441 | 442 | def __eq__(self, other): 443 | return ( 444 | super(GlobalIndex, self).__eq__(other) 445 | and self.hash_key == other.hash_key 446 | and self.throughput == other.throughput 447 | ) 448 | 449 | 450 | class RestoreSummary(NamedTuple): 451 | in_progress: bool 452 | time: int 453 | source_backup_arn: Optional[str] 454 | source_table_arn: Optional[str] 455 | 456 | @classmethod 457 | def from_response(cls, response: Dict[str, Any]) -> Optional["RestoreSummary"]: 458 | summary = response.get("RestoreSummary") 459 | if summary is None: 460 | return None 461 | return cls( 462 | response["RestoreInProgress"], 463 | response["RestoreDateTime"], 464 | response.get("SourceBackupArn"), 465 | response.get("SourceTableArn"), 466 | ) 467 | 468 | @classmethod 469 | def default(cls): 470 | return cls(False, 0, None, None) 471 | 472 | 473 | class SSEDescription(NamedTuple): 474 | status: Optional[ 475 | Literal[ 476 | Literal["ENABLING"], 477 | Literal["ENABLED"], 478 | Literal["DISABLING"], 479 | Literal["DISABLED"], 480 | Literal["UPDATING"], 481 | ] 482 | ] 483 | type: Optional[Literal[Literal["AES256"], Literal["KMS"]]] 484 | kms_arn: Optional[str] 485 | inaccessible_encryption_time: Optional[int] 486 | 487 | @classmethod 488 | def from_response(cls, response: Dict[str, Any]) -> Optional["SSEDescription"]: 489 | summary = response.get("SSEDescription") 490 | if summary is None: 491 | 
return None 492 | return cls( 493 | summary.get("Status"), 494 | summary.get("SSEType"), 495 | summary.get("KMSMasterKeyArn"), 496 | summary.get("InaccessibleEncryptionDateTime"), 497 | ) 498 | 499 | @classmethod 500 | def default(cls): 501 | return cls(None, None, None, None) 502 | 503 | 504 | class Table(object): 505 | 506 | """Representation of a DynamoDB table""" 507 | 508 | def __init__( 509 | self, 510 | name: str, 511 | # Hash key will not exist if table is DELETING 512 | hash_key: Optional[DynamoKey], 513 | range_key: Optional[DynamoKey] = None, 514 | indexes: Optional[List[LocalIndex]] = None, 515 | global_indexes: Optional[List[GlobalIndex]] = None, 516 | throughput: Optional[ThroughputOrTuple] = None, 517 | status: TableStatusType = "ACTIVE", 518 | billing_mode: Optional[BillingModeType] = None, 519 | arn: Optional[str] = None, 520 | stream_type: Optional[StreamViewType] = None, 521 | restore_summary: Optional[RestoreSummary] = None, 522 | sse_description: Optional[SSEDescription] = None, 523 | decreases_today: Optional[int] = None, 524 | item_count: int = 0, 525 | size: int = 0, 526 | ): 527 | self.name = name 528 | self.hash_key = hash_key 529 | self.range_key = range_key 530 | self.indexes = indexes or [] 531 | self.global_indexes = global_indexes or [] 532 | self.throughput = Throughput.normalize(throughput) 533 | self.status = status 534 | self.billing_mode = billing_mode 535 | self.arn = arn 536 | self.stream_type = stream_type 537 | self.restore_summary = restore_summary or RestoreSummary.default() 538 | self.sse_description = sse_description or SSEDescription.default() 539 | self.decreases_today = decreases_today 540 | self.item_count = item_count 541 | self.size = size 542 | self.response: Dict[str, Any] = {} 543 | self.ttl: Optional[TTL] = None 544 | 545 | @property 546 | def attribute_definitions(self) -> Set[DynamoKey]: 547 | """Getter for attribute_definitions""" 548 | attrs = set() 549 | if self.hash_key is not None: 550 |
attrs.add(self.hash_key) 551 | if self.range_key is not None: 552 | attrs.add(self.range_key) 553 | for index in self.indexes: 554 | attrs.add(index.range_key) 555 | for gindex in self.global_indexes: 556 | if gindex.hash_key is not None: 557 | attrs.add(gindex.hash_key) 558 | if gindex.range_key is not None: 559 | attrs.add(gindex.range_key) 560 | return attrs 561 | 562 | def __getitem__(self, key: str) -> Any: 563 | return self.response[key] 564 | 565 | def get(self, key: str, default: Any = None) -> Any: 566 | return self.response.get(key, default) 567 | 568 | def __contains__(self, key: str) -> bool: 569 | return key in self.response 570 | 571 | @classmethod 572 | def from_response(cls, response: Dict[str, Any]) -> "Table": 573 | """Create a Table from returned Dynamo data""" 574 | hash_key = None 575 | range_key = None 576 | # KeySchema may not be in the response if the TableStatus is DELETING. 577 | if "KeySchema" in response: 578 | attrs = dict( 579 | ( 580 | ( 581 | d["AttributeName"], 582 | DynamoKey(d["AttributeName"], d["AttributeType"]), 583 | ) 584 | for d in response["AttributeDefinitions"] 585 | ) 586 | ) 587 | for key_schema in response["KeySchema"]: 588 | key_attr = attrs[key_schema["AttributeName"]] 589 | if key_schema["KeyType"] == "HASH": 590 | hash_key = key_attr 591 | if key_schema["KeyType"] == "RANGE": 592 | range_key = key_attr 593 | 594 | indexes = [] 595 | for idx in response.get("LocalSecondaryIndexes", []): 596 | indexes.append(LocalIndex.from_response(idx, attrs)) 597 | global_indexes = [] 598 | for idx in response.get("GlobalSecondaryIndexes", []): 599 | global_indexes.append(GlobalIndex.from_response(idx, attrs)) 600 | throughput = None 601 | decreases_today = None 602 | if "ProvisionedThroughput" in response: 603 | throughput = Throughput.from_response(response["ProvisionedThroughput"]) 604 | decreases_today = response["ProvisionedThroughput"][ 605 | "NumberOfDecreasesToday" 606 | ] 607 | stream_type = None 608 | if ( 609 | 
"StreamSpecification" in response 610 | and response["StreamSpecification"]["StreamEnabled"] 611 | ): 612 | stream_type = response["StreamSpecification"]["StreamViewType"] 613 | 614 | # TODO Replicas 615 | 616 | table = cls( 617 | name=response["TableName"], 618 | hash_key=hash_key, 619 | range_key=range_key, 620 | indexes=indexes, 621 | global_indexes=global_indexes, 622 | throughput=throughput, 623 | status=response["TableStatus"], 624 | billing_mode=response.get("BillingModeSummary", {}).get("BillingMode"), 625 | arn=response.get("TableArn"), 626 | stream_type=stream_type, 627 | restore_summary=RestoreSummary.from_response(response), 628 | sse_description=SSEDescription.from_response(response), 629 | decreases_today=decreases_today, 630 | item_count=response["ItemCount"], 631 | size=response["TableSizeBytes"], 632 | ) 633 | table.response = response 634 | return table 635 | 636 | @property 637 | def is_on_demand(self): 638 | """Getter for is_on_demand""" 639 | return self.billing_mode == PAY_PER_REQUEST 640 | 641 | def __str__(self): 642 | lines = [ 643 | "Table(%s)" % self.name, 644 | ] 645 | if self.hash_key is not None: 646 | lines.append("Hash key: %s" % self.hash_key) 647 | if self.range_key is not None: 648 | lines.append("Range key: %s" % self.range_key) 649 | if self.indexes: 650 | lines.append("Local indexes:") 651 | for index in self.indexes: 652 | lines.append(" %s" % index) 653 | if self.global_indexes: 654 | lines.append("Global indexes:") 655 | for gindex in self.global_indexes: 656 | lines.append(" %s" % gindex) 657 | return "\n ".join(lines) 658 | 659 | def __repr__(self): 660 | return "Table(%s)" % self.name 661 | 662 | def __hash__(self): 663 | return hash(self.name) 664 | 665 | def __eq__(self, other): 666 | return ( 667 | isinstance(other, Table) 668 | and self.name == other.name 669 | and self.hash_key == other.hash_key 670 | and self.range_key == other.range_key 671 | and self.indexes == other.indexes 672 | and self.global_indexes == 
other.global_indexes 673 | ) 674 | 675 | def __ne__(self, other): 676 | return not self.__eq__(other) 677 | 678 | 679 | class IndexUpdate(ABC): 680 | 681 | """ 682 | An update to a GlobalSecondaryIndex to be passed to update_table 683 | 684 | You should generally use the factory methods :meth:`~update`, 685 | :meth:`~create`, and :meth:`~delete` instead of the constructor. 686 | 687 | """ 688 | 689 | def __init__( 690 | self, 691 | action: Literal[Literal["Create"], Literal["Update"], Literal["Delete"]], 692 | ): 693 | self.action = action 694 | 695 | @staticmethod 696 | def update(index_name: str, throughput: ThroughputOrTuple) -> "IndexUpdateUpdate": 697 | """Update the throughput on the index""" 698 | return IndexUpdateUpdate(index_name, throughput) 699 | 700 | @staticmethod 701 | def create(index: GlobalIndex) -> "IndexUpdateCreate": 702 | """Create a new index""" 703 | return IndexUpdateCreate(index) 704 | 705 | @staticmethod 706 | def delete(index_name: str) -> "IndexUpdateDelete": 707 | """Delete an index""" 708 | return IndexUpdateDelete(index_name) 709 | 710 | @abstractmethod 711 | def get_attrs(self) -> List[DynamoKey]: 712 | """Get all attrs necessary for the update (empty unless Create)""" 713 | 714 | def serialize(self) -> Dict[str, Any]: 715 | """Get the serialized Dynamo format for the update""" 716 | return {self.action: self._get_schema()} 717 | 718 | @abstractmethod 719 | def _get_schema(self) -> Any: 720 | raise NotImplementedError() 721 | 722 | @abstractmethod 723 | def __hash__(self) -> int: 724 | raise NotImplementedError() 725 | 726 | @abstractmethod 727 | def __eq__(self, other: Any) -> bool: 728 | raise NotImplementedError() 729 | 730 | def __ne__(self, other: Any) -> bool: 731 | return not self.__eq__(other) 732 | 733 | 734 | class IndexUpdateCreate(IndexUpdate): 735 | def __init__( 736 | self, 737 | index: GlobalIndex, 738 | ): 739 | super().__init__("Create") 740 | self.index = index 741 | 742 | def get_attrs(self) -> List[DynamoKey]: 
743 | ret = [] 744 | if self.index.hash_key is not None: 745 | ret.append(self.index.hash_key) 746 | if self.index.range_key is not None: 747 | ret.append(self.index.range_key) 748 | return ret 749 | 750 | def _get_schema(self): 751 | return self.index.schema() 752 | 753 | def __hash__(self): 754 | return hash(self.action) + hash(self.index) 755 | 756 | def __eq__(self, other): 757 | return ( 758 | type(other) == type(self) 759 | and self.action == other.action 760 | and self.index == other.index 761 | ) 762 | 763 | 764 | class IndexUpdateUpdate(IndexUpdate): 765 | def __init__( 766 | self, 767 | index_name: str, 768 | throughput: ThroughputOrTuple, 769 | ): 770 | super().__init__("Update") 771 | self.index_name = index_name 772 | self.throughput = Throughput.normalize(throughput) 773 | 774 | def get_attrs(self) -> List[DynamoKey]: 775 | return [] 776 | 777 | def _get_schema(self): 778 | return { 779 | "IndexName": self.index_name, 780 | "ProvisionedThroughput": self.throughput.schema(), 781 | } 782 | 783 | def __hash__(self): 784 | return hash(self.action) + hash(self.index_name) + hash(self.throughput) 785 | 786 | def __eq__(self, other): 787 | return ( 788 | type(other) == type(self) 789 | and self.action == other.action 790 | and self.index_name == other.index_name 791 | and self.throughput == other.throughput 792 | ) 793 | 794 | 795 | class IndexUpdateDelete(IndexUpdate): 796 | def __init__( 797 | self, 798 | index_name: str, 799 | ): 800 | super().__init__("Delete") 801 | self.index_name = index_name 802 | 803 | def get_attrs(self) -> List[DynamoKey]: 804 | return [] 805 | 806 | def _get_schema(self): 807 | return { 808 | "IndexName": self.index_name, 809 | } 810 | 811 | def __hash__(self): 812 | return hash(self.action) + hash(self.index_name) 813 | 814 | def __eq__(self, other): 815 | return ( 816 | type(other) == type(self) 817 | and self.action == other.action 818 | and self.index_name == other.index_name 819 | ) 820 | 
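
A sketch of the UpdateTable payloads that the ``IndexUpdate`` helpers above serialize to, using plain dicts (the index name ``by-email`` and the 10/5 throughput figures are made up for illustration):

```python
# Standalone mirror of IndexUpdateUpdate.serialize() and
# IndexUpdateDelete.serialize(): each produces an {action: schema} dict
# destined for the GlobalSecondaryIndexUpdates list of UpdateTable.
from typing import Any, Dict


def update_payload(index_name: str, read: int, write: int) -> Dict[str, Any]:
    # mirrors IndexUpdate.update(index_name, (read, write)).serialize()
    return {
        "Update": {
            "IndexName": index_name,
            "ProvisionedThroughput": {
                "ReadCapacityUnits": read,
                "WriteCapacityUnits": write,
            },
        }
    }


def delete_payload(index_name: str) -> Dict[str, Any]:
    # mirrors IndexUpdate.delete(index_name).serialize()
    return {"Delete": {"IndexName": index_name}}
```
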
-------------------------------------------------------------------------------- /dynamo3/rate.py: -------------------------------------------------------------------------------- 1 | """ Tools for rate limiting """ 2 | import logging 3 | import math 4 | import time 5 | from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Tuple 6 | 7 | from typing_extensions import TypedDict 8 | 9 | from .result import Capacity, ConsumedCapacity 10 | 11 | if TYPE_CHECKING: 12 | from .connection import DynamoDBConnection 13 | 14 | LOG = logging.getLogger(__name__) 15 | 16 | 17 | class DecayingCapacityStore(object): 18 | """ 19 | Store for time series data that expires 20 | 21 | Parameters 22 | ---------- 23 | window : int, optional 24 | Number of seconds in the window (default 1) 25 | 26 | """ 27 | 28 | def __init__(self, window: int = 1): 29 | self.window = window 30 | self.points: List[Tuple[float, int]] = [] 31 | 32 | def add(self, now: float, num: int) -> None: 33 | """Add a data point (timestamp, value) to the store""" 34 | if num == 0: 35 | return 36 | self.points.append((now, num)) 37 | 38 | @property 39 | def value(self) -> int: 40 | """Get the summation of all non-expired points""" 41 | now = time.time() 42 | cutoff = now - self.window 43 | while self.points and self.points[0][0] < cutoff: 44 | self.points.pop(0) 45 | return sum([p[1] for p in self.points]) 46 | 47 | 48 | RemainingCapacity = TypedDict( 49 | "RemainingCapacity", {"read": DecayingCapacityStore, "write": DecayingCapacityStore} 50 | ) 51 | 52 | 53 | class RateLimit(object): 54 | """ 55 | Class for rate limiting requests to DynamoDB 56 | 57 | Parameters 58 | ---------- 59 | total_read : float, optional 60 | The overall maximum read units per second 61 | total_write : float, optional 62 | The overall maximum write units per second 63 | total : :class:`~dynamo3.result.Capacity`, optional 64 | A Capacity object. You can provide this instead of ``total_read`` and 65 | ``total_write``.
66 | default_read : float, optional 67 | The default read unit cap for tables 68 | default_write : float, optional 69 | The default write unit cap for tables 70 | default : :class:`~dynamo3.result.Capacity`, optional 71 | A Capacity object. You can provide this instead of ``default_read`` and 72 | ``default_write``. 73 | table_caps : dict, optional 74 | Mapping of table name to dicts with two optional keys ('read' and 75 | 'write') that provide the maximum units for the table and local 76 | indexes. You can specify global indexes by adding a key to the dict or 77 | by providing another key in ``table_caps`` that is the table name and 78 | index name joined by a ``:``. 79 | callback : callable, optional 80 | A function that will be called if the rate limit is exceeded. It will 81 | be called with (connection, command, query, response, 82 | consumed_capacity, seconds). If this function returns True, RateLimit 83 | will skip the sleep. 84 | """ 85 | 86 | def __init__( 87 | self, 88 | total_read: int = 0, 89 | total_write: int = 0, 90 | total: Optional[Capacity] = None, 91 | default_read: int = 0, 92 | default_write: int = 0, 93 | default: Optional[Capacity] = None, 94 | table_caps: Optional[Dict[str, Any]] = None, 95 | callback: Optional[Callable] = None, 96 | ): 97 | if total is not None: 98 | self.total_cap = total 99 | else: 100 | self.total_cap = Capacity(total_read, total_write) 101 | if default is not None: 102 | self.default_cap = default 103 | else: 104 | self.default_cap = Capacity(default_read, default_write) 105 | self.table_caps = table_caps or {} 106 | self._old_default_return_capacity = False 107 | self._consumed: Dict[str, RemainingCapacity] = {} 108 | self._total_consumed = { 109 | "read": DecayingCapacityStore(), 110 | "write": DecayingCapacityStore(), 111 | } 112 | self.callback = callback 113 | 114 | def get_consumed(self, key: str) -> RemainingCapacity: 115 | """Getter for a consumed capacity storage dict""" 116 | if key not in self._consumed: 117 
| self._consumed[key] = { 118 | "read": DecayingCapacityStore(), 119 | "write": DecayingCapacityStore(), 120 | } 121 | return self._consumed[key] 122 | 123 | def on_capacity( 124 | self, 125 | connection: "DynamoDBConnection", 126 | command: str, 127 | query_kwargs: Dict[str, Any], 128 | response: Dict[str, Any], 129 | capacity: ConsumedCapacity, 130 | ) -> None: 131 | """Hook that runs in response to a 'returned capacity' event""" 132 | now = time.time() 133 | args = (connection, command, query_kwargs, response, capacity) 134 | # Check total against the total_cap 135 | self._wait(args, now, self.total_cap, self._total_consumed, capacity.total) 136 | 137 | # Increment table consumed capacity & check it 138 | if capacity.tablename in self.table_caps: 139 | table_cap = self.table_caps[capacity.tablename] 140 | else: 141 | table_cap = self.default_cap 142 | consumed_history = self.get_consumed(capacity.tablename) 143 | if capacity.table_capacity is not None: 144 | self._wait(args, now, table_cap, consumed_history, capacity.table_capacity) 145 | # The local index consumed capacity also counts against the table 146 | if capacity.local_index_capacity is not None: 147 | for consumed in capacity.local_index_capacity.values(): 148 | self._wait(args, now, table_cap, consumed_history, consumed) 149 | 150 | # Increment global indexes 151 | # check global indexes against the table+index cap or default 152 | gic = capacity.global_index_capacity 153 | if gic is not None: 154 | for index_name, consumed in gic.items(): 155 | full_name = capacity.tablename + ":" + index_name 156 | if index_name in table_cap: 157 | index_cap = table_cap[index_name] 158 | elif full_name in self.table_caps: 159 | index_cap = self.table_caps[full_name] 160 | else: 161 | # If there's no specified capacity for the index, 162 | # use the cap on the table 163 | index_cap = table_cap 164 | consumed_history = self.get_consumed(full_name) 165 | self._wait(args, now, index_cap, consumed_history, consumed) 166 | 
167 | def _wait(self, args, now, cap, consumed_history, consumed_capacity): 168 | """Check the consumed capacity against the limit and sleep""" 169 | for key in ["read", "write"]: 170 | if key in cap and cap[key] > 0: 171 | consumed_history[key].add(now, consumed_capacity[key]) 172 | consumed = consumed_history[key].value 173 | if consumed > 0 and consumed >= cap[key]: 174 | seconds = math.ceil(float(consumed) / cap[key]) 175 | LOG.debug( 176 | "Rate limited throughput exceeded. Sleeping " "for %d seconds.", 177 | seconds, 178 | ) 179 | if callable(self.callback): 180 | callback_args = args + (seconds,) 181 | if self.callback(*callback_args): 182 | continue 183 | time.sleep(seconds) 184 | -------------------------------------------------------------------------------- /dynamo3/result.py: -------------------------------------------------------------------------------- 1 | """ Wrappers for result objects and iterators """ 2 | from abc import ABC, abstractmethod, abstractproperty 3 | from typing import ( 4 | TYPE_CHECKING, 5 | Any, 6 | Callable, 7 | Dict, 8 | ItemsView, 9 | Iterable, 10 | Iterator, 11 | KeysView, 12 | List, 13 | Optional, 14 | Tuple, 15 | Union, 16 | ValuesView, 17 | overload, 18 | ) 19 | 20 | from .constants import MAX_GET_BATCH, ReturnCapacityType 21 | from .types import ( 22 | Dynamizer, 23 | DynamoObject, 24 | EncodedDynamoObject, 25 | ExpressionAttributeNamesType, 26 | ) 27 | 28 | if TYPE_CHECKING: 29 | from .connection import DynamoDBConnection 30 | 31 | 32 | def add_dicts(d1, d2): 33 | """Merge two dicts of addable values""" 34 | if d1 is None: 35 | return d2 36 | if d2 is None: 37 | return d1 38 | keys = set(d1) 39 | keys.update(set(d2)) 40 | ret = {} 41 | for key in keys: 42 | v1 = d1.get(key) 43 | v2 = d2.get(key) 44 | if v1 is None: 45 | ret[key] = v2 46 | elif v2 is None: 47 | ret[key] = v1 48 | else: 49 | ret[key] = v1 + v2 50 | return ret 51 | 52 | 53 | class Count(int): 54 | 55 | """Wrapper for response to query with Select=COUNT""" 56 
| 57 | count: int 58 | scanned_count: int 59 | consumed_capacity: Optional["Capacity"] 60 | 61 | def __new__( 62 | cls, 63 | count: int, 64 | scanned_count: int, 65 | consumed_capacity: Optional["Capacity"] = None, 66 | ) -> "Count": 67 | ret = super(Count, cls).__new__(cls, count) 68 | ret.count = count 69 | ret.scanned_count = scanned_count 70 | ret.consumed_capacity = consumed_capacity 71 | return ret 72 | 73 | @classmethod 74 | def from_response(cls, response: Dict[str, Any]) -> "Count": 75 | """Factory method""" 76 | return cls( 77 | response["Count"], 78 | response["ScannedCount"], 79 | response.get("consumed_capacity"), 80 | ) 81 | 82 | def __add__(self, other): 83 | if other is None: 84 | return self 85 | if not isinstance(other, Count): 86 | return self.count + other 87 | if self.consumed_capacity is None: 88 | capacity = other.consumed_capacity 89 | else: 90 | capacity = self.consumed_capacity + other.consumed_capacity 91 | return Count( 92 | self.count + other.count, self.scanned_count + other.scanned_count, capacity 93 | ) 94 | 95 | def __radd__(self, other): 96 | return self.__add__(other) 97 | 98 | def __repr__(self): 99 | return "Count(%d)" % self 100 | 101 | 102 | class Capacity(object): 103 | """Wrapper for the capacity of a table or index""" 104 | 105 | def __init__(self, read: float, write: float): 106 | self._read = read 107 | self._write = write 108 | 109 | @property 110 | def read(self) -> float: 111 | """The read capacity""" 112 | return self._read 113 | 114 | @property 115 | def write(self) -> float: 116 | """The write capacity""" 117 | return self._write 118 | 119 | @classmethod 120 | def from_response( 121 | cls, response: Dict[str, Any], is_read: Optional[bool] 122 | ) -> "Capacity": 123 | read = response.get("ReadCapacityUnits") 124 | if read is None: 125 | read = response["CapacityUnits"] if is_read else 0 126 | write = response.get("WriteCapacityUnits") 127 | if write is None: 128 | write = 0 if is_read else response["CapacityUnits"] 
129 | return cls(read, write) 130 | 131 | def __getitem__(self, key): 132 | return getattr(self, key) 133 | 134 | def __contains__(self, key): 135 | return key in ["read", "write"] 136 | 137 | def __hash__(self): 138 | return hash((self._read, self._write)) 139 | 140 | def __eq__(self, other): 141 | if isinstance(other, tuple): 142 | return self.read == other[0] and self.write == other[1] 143 | return self.read == getattr(other, "read", None) and self.write == getattr( 144 | other, "write", None 145 | ) 146 | 147 | def __ne__(self, other): 148 | return not self.__eq__(other) 149 | 150 | def __add__(self, other): 151 | if isinstance(other, tuple): 152 | return Capacity(self.read + other[0], self.write + other[1]) 153 | return Capacity(self.read + other.read, self.write + other.write) 154 | 155 | def __radd__(self, other): 156 | return self.__add__(other) 157 | 158 | def __str__(self): 159 | pieces = [] 160 | if self.read: 161 | pieces.append("R:{0:.1f}".format(self.read)) 162 | if self.write: 163 | pieces.append("W:{0:.1f}".format(self.write)) 164 | if not pieces: 165 | return "0" 166 | return " ".join(pieces) 167 | 168 | 169 | class ConsumedCapacity(object): 170 | """Record of the consumed capacity of a request""" 171 | 172 | def __init__( 173 | self, 174 | tablename: str, 175 | total: Capacity, 176 | table_capacity: Optional[Capacity] = None, 177 | local_index_capacity: Optional[Dict[str, Capacity]] = None, 178 | global_index_capacity: Optional[Dict[str, Capacity]] = None, 179 | ): 180 | self.tablename = tablename 181 | self.total = total 182 | self.table_capacity = table_capacity 183 | self.local_index_capacity = local_index_capacity 184 | self.global_index_capacity = global_index_capacity 185 | 186 | @classmethod 187 | def build_indexes( 188 | cls, response: Dict[str, Dict[str, Any]], key: str, is_read: Optional[bool] 189 | ) -> Optional[Dict[str, Capacity]]: 190 | """Construct index capacity map from a request fragment""" 191 | if key not in response: 192 | return
None 193 | indexes = {} 194 | for key, val in response[key].items(): 195 | indexes[key] = Capacity.from_response(val, is_read) 196 | return indexes 197 | 198 | @classmethod 199 | def from_response( 200 | cls, response: Dict[str, Any], is_read: Optional[bool] = None 201 | ) -> "ConsumedCapacity": 202 | """Factory method for ConsumedCapacity from a response object""" 203 | kwargs = { 204 | "tablename": response["TableName"], 205 | "total": Capacity.from_response(response, is_read), 206 | } 207 | local = cls.build_indexes(response, "LocalSecondaryIndexes", is_read) 208 | kwargs["local_index_capacity"] = local 209 | gindex = cls.build_indexes(response, "GlobalSecondaryIndexes", is_read) 210 | kwargs["global_index_capacity"] = gindex 211 | if "Table" in response: 212 | kwargs["table_capacity"] = Capacity.from_response( 213 | response["Table"], is_read 214 | ) 215 | return cls(**kwargs) 216 | 217 | def __hash__(self): 218 | return hash(self.tablename) + hash(self.total) 219 | 220 | def __eq__(self, other): 221 | properties = [ 222 | "tablename", 223 | "total", 224 | "table_capacity", 225 | "local_index_capacity", 226 | "global_index_capacity", 227 | ] 228 | for prop in properties: 229 | if getattr(self, prop) != getattr(other, prop, None): 230 | return False 231 | return True 232 | 233 | def __ne__(self, other): 234 | return not self.__eq__(other) 235 | 236 | def __radd__(self, other): 237 | return self.__add__(other) 238 | 239 | def __add__(self, other): 240 | # Handle identity cases when added to empty values 241 | if other is None: 242 | return self 243 | if self.tablename != other.tablename: 244 | raise TypeError("Cannot add capacities from different tables") 245 | kwargs = { 246 | "total": self.total + other.total, 247 | } 248 | if self.table_capacity is not None: 249 | kwargs["table_capacity"] = self.table_capacity + other.table_capacity 250 | kwargs["local_index_capacity"] = add_dicts( 251 | self.local_index_capacity, other.local_index_capacity 252 | ) 253 | 
kwargs["global_index_capacity"] = add_dicts( 254 | self.global_index_capacity, other.global_index_capacity 255 | ) 256 | 257 | return ConsumedCapacity(self.tablename, **kwargs) 258 | 259 | def __str__(self): 260 | lines = [] 261 | if self.table_capacity: 262 | lines.append("Table: %s" % self.table_capacity) 263 | if self.local_index_capacity: 264 | for name, cap in self.local_index_capacity.items(): 265 | lines.append("Local index '%s': %s" % (name, cap)) 266 | if self.global_index_capacity: 267 | for name, cap in self.global_index_capacity.items(): 268 | lines.append("Global index '%s': %s" % (name, cap)) 269 | lines.append("Total: %s" % self.total) 270 | return "\n".join(lines) 271 | 272 | 273 | class PagedIterator(ABC): 274 | 275 | """An iterator that iterates over paged results from Dynamo""" 276 | 277 | def __init__(self): 278 | self.iterator = None 279 | 280 | @abstractproperty 281 | def can_fetch_more(self) -> bool: # pragma: no cover 282 | """Return True if more results can be fetched from the server""" 283 | raise NotImplementedError 284 | 285 | @abstractmethod 286 | def _fetch(self) -> Iterator: # pragma: no cover 287 | """Fetch additional results from the server and return an iterator""" 288 | raise NotImplementedError 289 | 290 | def __iter__(self): 291 | return self 292 | 293 | def __next__(self): 294 | if self.iterator is None: 295 | self.iterator = self._fetch() 296 | while True: 297 | try: 298 | return next(self.iterator) 299 | except StopIteration: 300 | if self.can_fetch_more: 301 | self.iterator = self._fetch() 302 | else: 303 | raise 304 | 305 | 306 | class ResultSet(PagedIterator): 307 | 308 | """Iterator that pages results from Dynamo""" 309 | 310 | def __init__( 311 | self, 312 | connection: "DynamoDBConnection", 313 | limit: "Limit", 314 | *args: Any, 315 | **kwargs: Any 316 | ): 317 | super(ResultSet, self).__init__() 318 | self.connection = connection 319 | # The limit will be mutated, so copy it and leave the original intact 320 | 
self.limit = limit.copy() 321 | self.args = args 322 | self.kwargs = kwargs 323 | self.last_evaluated_key: Optional[dict] = None 324 | self.consumed_capacity: Optional[ConsumedCapacity] = None 325 | 326 | @property 327 | def can_fetch_more(self) -> bool: 328 | """True if there are more results on the server""" 329 | return self.last_evaluated_key is not None and not self.limit.complete 330 | 331 | def _fetch(self) -> Iterator: 332 | """Fetch more results from Dynamo""" 333 | self.limit.set_request_args(self.kwargs) 334 | data = self.connection.call(*self.args, **self.kwargs) 335 | self.limit.post_fetch(data) 336 | self.last_evaluated_key = data.get("LastEvaluatedKey") 337 | if self.last_evaluated_key is None: 338 | self.kwargs.pop("ExclusiveStartKey", None) 339 | else: 340 | self.kwargs["ExclusiveStartKey"] = self.last_evaluated_key 341 | if "consumed_capacity" in data: 342 | self.consumed_capacity += data["consumed_capacity"] 343 | for raw_item in data["Items"]: 344 | item = self.connection.dynamizer.decode_keys(raw_item) 345 | if self.limit.accept(item): 346 | yield item 347 | 348 | def __next__(self) -> DynamoObject: # pylint: disable=W0235 349 | return super().__next__() 350 | 351 | 352 | class GetResultSet(PagedIterator): 353 | 354 | """Iterator that pages the results of a BatchGetItem""" 355 | 356 | def __init__( 357 | self, 358 | connection: "DynamoDBConnection", 359 | keymap: Dict[str, Iterable[DynamoObject]], 360 | consistent: bool = False, 361 | attributes: Optional[str] = None, 362 | alias: Optional[ExpressionAttributeNamesType] = None, 363 | return_capacity: Optional[ReturnCapacityType] = None, 364 | ): 365 | super(GetResultSet, self).__init__() 366 | self.connection = connection 367 | self.keymap: Dict[str, Iterator[DynamoObject]] = { 368 | t: iter(keys) for t, keys in keymap.items() 369 | } 370 | self.consistent = consistent 371 | self.attributes = attributes 372 | self.alias = alias 373 | self.return_capacity = return_capacity 374 | 
self._pending_keys: Dict[str, List[EncodedDynamoObject]] = {} 375 | self._attempt = 0 376 | self.consumed_capacity: Optional[Dict[str, ConsumedCapacity]] = None 377 | self._cached_dict: Optional[Dict[str, List[DynamoObject]]] = None 378 | self._started_iterator = False 379 | 380 | @property 381 | def can_fetch_more(self) -> bool: 382 | return bool(self.keymap) or bool(self._pending_keys) 383 | 384 | def build_kwargs(self): 385 | """Construct the kwargs to pass to batch_get_item""" 386 | num_pending = sum([len(v) for v in self._pending_keys.values()]) 387 | if num_pending < MAX_GET_BATCH: 388 | tablenames_to_remove = [] 389 | for tablename, key_iter in self.keymap.items(): 390 | for key in key_iter: 391 | pending_keys = self._pending_keys.setdefault(tablename, []) 392 | pending_keys.append(self.connection.dynamizer.encode_keys(key)) 393 | num_pending += 1 394 | if num_pending == MAX_GET_BATCH: 395 | break 396 | else: 397 | tablenames_to_remove.append(tablename) 398 | if num_pending == MAX_GET_BATCH: 399 | break 400 | for tablename in tablenames_to_remove: 401 | self.keymap.pop(tablename, None) 402 | 403 | if not self._pending_keys: 404 | return None 405 | request_items = {} 406 | for tablename, keys in self._pending_keys.items(): 407 | query: Dict[str, Any] = {"ConsistentRead": self.consistent} 408 | if self.attributes: 409 | query["ProjectionExpression"] = self.attributes 410 | if self.alias: 411 | query["ExpressionAttributeNames"] = self.alias 412 | query["Keys"] = keys 413 | request_items[tablename] = query 414 | self._pending_keys = {} 415 | return { 416 | "RequestItems": request_items, 417 | "ReturnConsumedCapacity": self.return_capacity, 418 | } 419 | 420 | def _fetch(self) -> Iterator: 421 | """Fetch a set of items from their keys""" 422 | kwargs = self.build_kwargs() 423 | if kwargs is None: 424 | return iter([]) 425 | data = self.connection.call("batch_get_item", **kwargs) 426 | if "UnprocessedKeys" in data: 427 | for tablename, items in 
data["UnprocessedKeys"].items(): 428 | keys = self._pending_keys.setdefault(tablename, []) 429 | keys.extend(items["Keys"]) 430 | # Getting UnprocessedKeys indicates that we are exceeding our 431 | # throughput. So sleep for a bit. 432 | self._attempt += 1 433 | self.connection.exponential_sleep(self._attempt) 434 | else: 435 | # No UnprocessedKeys means our request rate is fine, so we can 436 | # reset the attempt number. 437 | self._attempt = 0 438 | if "consumed_capacity" in data: 439 | self.consumed_capacity = self.consumed_capacity or {} 440 | for cap in data["consumed_capacity"]: 441 | self.consumed_capacity[ 442 | cap.tablename 443 | ] = cap + self.consumed_capacity.get(cap.tablename) 444 | for tablename, items in data["Responses"].items(): 445 | for item in items: 446 | yield tablename, item 447 | 448 | def __getitem__(self, key: str) -> List[DynamoObject]: 449 | return self.asdict()[key] 450 | 451 | def items(self) -> ItemsView[str, List[DynamoObject]]: 452 | return self.asdict().items() 453 | 454 | def keys(self) -> KeysView[str]: 455 | return self.asdict().keys() 456 | 457 | def values(self) -> ValuesView[List[DynamoObject]]: 458 | return self.asdict().values() 459 | 460 | def __next__(self) -> Tuple[str, DynamoObject]: 461 | self._started_iterator = True 462 | tablename, result = super().__next__() 463 | return tablename, self.connection.dynamizer.decode_keys(result) 464 | 465 | def asdict(self) -> Dict[str, List[DynamoObject]]: 466 | if self._cached_dict is None: 467 | if self._started_iterator: 468 | raise ValueError( 469 | "Cannot use asdict if also using GetResultSet as an iterator" 470 | ) 471 | self._cached_dict = {} 472 | for tablename, item in self: 473 | items = self._cached_dict.setdefault(tablename, []) 474 | items.append(item) 475 | return self._cached_dict 476 | 477 | 478 | class SingleTableGetResultSet(object): 479 | def __init__(self, result_set: GetResultSet): 480 | self.result_set = result_set 481 | 482 | @property 483 | def 
consumed_capacity(self) -> Optional[ConsumedCapacity]: 484 | """Getter for consumed_capacity""" 485 | cap_map = self.result_set.consumed_capacity 486 | if cap_map is None: 487 | return None 488 | return next(iter(cap_map.values())) 489 | 490 | def __iter__(self): 491 | return self 492 | 493 | def __next__(self) -> DynamoObject: 494 | return next(self.result_set)[1] 495 | 496 | 497 | class TableResultSet(PagedIterator): 498 | 499 | """Iterator that pages table names from ListTables""" 500 | 501 | def __init__(self, connection: "DynamoDBConnection", limit: Optional[int] = None): 502 | super(TableResultSet, self).__init__() 503 | self.connection = connection 504 | self.limit = limit 505 | self.last_evaluated_table_name: Optional[str] = None 506 | 507 | @property 508 | def can_fetch_more(self) -> bool: 509 | if self.last_evaluated_table_name is None: 510 | return False 511 | return self.limit is None or self.limit > 0 512 | 513 | def _fetch(self) -> Iterator: 514 | kwargs: Dict[str, Any] = {} 515 | if self.limit is None: 516 | kwargs["Limit"] = 100 517 | else: 518 | kwargs["Limit"] = min(self.limit, 100) 519 | if self.last_evaluated_table_name is not None: 520 | kwargs["ExclusiveStartTableName"] = self.last_evaluated_table_name 521 | data = self.connection.call("list_tables", **kwargs) 522 | self.last_evaluated_table_name = data.get("LastEvaluatedTableName") 523 | tables = data["TableNames"] 524 | if self.limit is not None: 525 | self.limit -= len(tables) 526 | return iter(tables) 527 | 528 | def __next__(self) -> str: # pylint: disable=W0235 529 | return super().__next__() 530 | 531 | 532 | class Result(dict): 533 | 534 | """ 535 | A wrapper for an item returned from Dynamo 536 | 537 | Attributes 538 | ---------- 539 | consumed_capacity : :class:`~dynamo3.result.ConsumedCapacity`, optional 540 | Consumed capacity on the table 541 | exists : bool 542 | False if the result is empty (i.e. 
no result was returned from dynamo) 543 | 544 | """ 545 | 546 | def __init__(self, dynamizer: Dynamizer, response: Dict[str, Any], item_key: str): 547 | super(Result, self).__init__() 548 | self.exists = item_key in response 549 | for k, v in response.get(item_key, {}).items(): 550 | self[k] = dynamizer.decode(v) 551 | 552 | self.consumed_capacity: Optional[ConsumedCapacity] = response.get( 553 | "consumed_capacity" 554 | ) 555 | 556 | def __repr__(self): 557 | return "Result({0})".format(super(Result, self).__repr__()) 558 | 559 | 560 | class Limit(object): 561 | 562 | """ 563 | Class that defines query/scan limit behavior 564 | 565 | Parameters 566 | ---------- 567 | scan_limit : int, optional 568 | The maximum number of items for DynamoDB to scan. This will not 569 | necessarily be the number of items returned. 570 | item_limit : int, optional 571 | The maximum number of items to return. Fetches will continue until this 572 | number is reached or there are no results left. See also: ``strict`` 573 | min_scan_limit : int, optional 574 | This only matters when ``item_limit`` is set and ``scan_limit`` is not. 575 | After doing multiple fetches, the ``item_limit`` may drop to a low 576 | value. The ``item_limit`` will be passed up as the query ``Limit``, but 577 | if your ``item_limit`` is down to 1 you may want to fetch more than 1 578 | item at a time. ``min_scan_limit`` determines the minimum ``Limit`` to 579 | send up when ``scan_limit`` is None. (default 20) 580 | strict : bool, optional 581 | This modifies the behavior of ``item_limit``. If True, the query will 582 | never return more items than ``item_limit``. If False, the query will 583 | fetch until it hits the ``item_limit``, and then return the rest of the 584 | page as well. (default False) 585 | filter : callable, optional 586 | Function that takes a single item dict and returns a boolean. If True, 587 | the item will be counted towards the ``item_limit`` and returned from 588 | the iterator. 
If False, it will be skipped. 589 | 590 | """ 591 | 592 | def __init__( 593 | self, 594 | scan_limit: Optional[int] = None, 595 | item_limit: Optional[int] = None, 596 | min_scan_limit: int = 20, 597 | strict: bool = False, 598 | filter: Callable[[DynamoObject], bool] = lambda x: True, 599 | ): 600 | self.scan_limit = scan_limit 601 | if item_limit is None: 602 | self.item_limit = scan_limit 603 | else: 604 | self.item_limit = item_limit 605 | self.min_scan_limit = min_scan_limit 606 | self.strict = strict 607 | self.filter = filter 608 | 609 | def copy(self) -> "Limit": 610 | """Return a copy of the limit""" 611 | return Limit( 612 | self.scan_limit, 613 | self.item_limit, 614 | self.min_scan_limit, 615 | self.strict, 616 | self.filter, 617 | ) 618 | 619 | def set_request_args(self, args: Dict[str, Any]) -> None: 620 | """Set the Limit parameter into the request args""" 621 | if self.scan_limit is not None: 622 | args["Limit"] = self.scan_limit 623 | elif self.item_limit is not None: 624 | args["Limit"] = max(self.item_limit, self.min_scan_limit) 625 | else: 626 | args.pop("Limit", None) 627 | 628 | @property 629 | def complete(self) -> bool: 630 | """Return True if the limit has been reached""" 631 | if self.scan_limit is not None and self.scan_limit == 0: 632 | return True 633 | if self.item_limit is not None and self.item_limit == 0: 634 | return True 635 | return False 636 | 637 | def post_fetch(self, response: Dict[str, Any]) -> None: 638 | """Called after a fetch. 
Decrements the scan limit by the response's ScannedCount""" 639 | if self.scan_limit is not None: 640 | self.scan_limit -= response["ScannedCount"] 641 | 642 | def accept(self, item: DynamoObject) -> bool: 643 | """Apply the filter and item_limit, and return True to accept""" 644 | accept = self.filter(item) 645 | if accept and self.item_limit is not None: 646 | if self.item_limit > 0: 647 | self.item_limit -= 1 648 | elif self.strict: 649 | return False 650 | return accept 651 | 652 | 653 | class TransactionGet(object): 654 | def __init__( 655 | self, 656 | connection: "DynamoDBConnection", 657 | return_capacity: Optional[ReturnCapacityType] = None, 658 | ): 659 | self._connection = connection 660 | self._return_capacity = return_capacity 661 | self._cached_list: Optional[List[DynamoObject]] = None 662 | self.consumed_capacity: Optional[Dict[str, ConsumedCapacity]] = None 663 | self._items: List[ 664 | Tuple[ 665 | str, 666 | DynamoObject, 667 | Optional[Union[str, Iterable[str]]], 668 | Optional[ExpressionAttributeNamesType], 669 | ] 670 | ] = [] 671 | 672 | def add_key( 673 | self, 674 | tablename: str, 675 | key: DynamoObject, 676 | attributes: Optional[Union[str, Iterable[str]]] = None, 677 | alias: Optional[ExpressionAttributeNamesType] = None, 678 | ) -> None: 679 | self._items.append((tablename, key, attributes, alias)) 680 | 681 | def __iter__(self): 682 | return iter(self.aslist()) 683 | 684 | @overload 685 | def __getitem__(self, index: int) -> DynamoObject: 686 | ... 687 | 688 | @overload 689 | def __getitem__(self, index: slice) -> List[DynamoObject]: 690 | ...
691 | 692 | def __getitem__( 693 | self, index: Union[int, slice] 694 | ) -> Union[DynamoObject, List[DynamoObject]]: 695 | return self.aslist()[index] 696 | 697 | def __len__(self): 698 | return len(self.aslist()) 699 | 700 | def _fetch(self) -> List[DynamoObject]: 701 | if self._cached_list is not None or not self._items: 702 | return self._cached_list or [] 703 | transact_items = [] 704 | for (tablename, key, attributes, alias) in self._items: 705 | item = { 706 | "TableName": tablename, 707 | "Key": self._connection.dynamizer.encode_keys(key), 708 | } 709 | if attributes: 710 | if not isinstance(attributes, str): 711 | attributes = ", ".join(attributes) 712 | item["ProjectionExpression"] = attributes 713 | if alias is not None: 714 | item["ExpressionAttributeNames"] = alias 715 | transact_items.append({"Get": item}) 716 | kwargs: Dict[str, Any] = {"TransactItems": transact_items} 717 | if self._return_capacity is not None: 718 | kwargs["ReturnConsumedCapacity"] = self._return_capacity 719 | response = self._connection.call("transact_get_items", **kwargs) 720 | if "consumed_capacity" in response: 721 | self.consumed_capacity = self.consumed_capacity or {} 722 | for cap in response["consumed_capacity"]: 723 | self.consumed_capacity[ 724 | cap.tablename 725 | ] = cap + self.consumed_capacity.get(cap.tablename) 726 | decoded = [] 727 | for response_item in response["Responses"]: 728 | decoded.append( 729 | self._connection.dynamizer.decode_keys(response_item["Item"]) 730 | ) 731 | return decoded 732 | 733 | def aslist(self) -> List[DynamoObject]: 734 | if self._cached_list is None: 735 | self._cached_list = self._fetch() 736 | return self._cached_list 737 | -------------------------------------------------------------------------------- /dynamo3/testing.py: -------------------------------------------------------------------------------- 1 | """ 2 | Testing tools for DynamoDB 3 | 4 | To use the DynamoDB Local service in your unit tests, enable the nose plugin by 5 | 
running with '--with-dynamo'. Your tests can access the 6 | :class:`~dynamo3.DynamoDBConnection` object by putting a 'dynamo' attribute on 7 | the test class. For example:: 8 | 9 | class TestDynamo(unittest.TestCase): 10 | dynamo = None 11 | 12 | def test_delete_table(self): 13 | self.dynamo.delete_table('foobar') 14 | 15 | The 'dynamo' object will be detected and replaced by the nose plugin at 16 | runtime. 17 | 18 | """ 19 | import argparse 20 | import inspect 21 | import locale 22 | import logging 23 | import os 24 | import subprocess 25 | import tarfile 26 | import tempfile 27 | from contextlib import closing 28 | from typing import List, Optional 29 | from urllib.request import urlretrieve 30 | 31 | import nose 32 | 33 | from . import DynamoDBConnection 34 | 35 | DYNAMO_LOCAL = ( 36 | "https://s3-us-west-2.amazonaws.com/dynamodb-local/dynamodb_local_latest.tar.gz" 37 | ) 38 | 39 | DEFAULT_REGION = "us-east-1" 40 | DEFAULT_PORT = 8000 41 | 42 | 43 | class DynamoLocalPlugin(nose.plugins.Plugin): 44 | 45 | """ 46 | Nose plugin to run the Dynamo Local service 47 | 48 | See: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Tools.html 49 | 50 | """ 51 | 52 | name = "dynamo" 53 | 54 | def __init__(self): 55 | super(DynamoLocalPlugin, self).__init__() 56 | self._dynamo_local: Optional[subprocess.Popen] = None 57 | self._dynamo: Optional[DynamoDBConnection] = None 58 | self.port: int = DEFAULT_PORT 59 | self.path: str = "" 60 | self.link: str = "" 61 | self.region: str = DEFAULT_REGION 62 | self.live: bool = False 63 | 64 | def options(self, parser, env): 65 | super(DynamoLocalPlugin, self).options(parser, env) 66 | parser.add_option( 67 | "--dynamo-port", 68 | type=int, 69 | default=DEFAULT_PORT, 70 | help="Run the DynamoDB Local service on this port " "(default %(default)s)", 71 | ) 72 | default_path = os.path.join(tempfile.gettempdir(), "dynamolocal") 73 | parser.add_option( 74 | "--dynamo-path", 75 | default=default_path, 76 | help="Download the 
Dynamo Local server to this " 77 | "directory (default '%(default)s')", 78 | ) 79 | parser.add_option( 80 | "--dynamo-link", 81 | default=DYNAMO_LOCAL, 82 | help="The link to the dynamodb local server code " 83 | "(default '%(default)s')", 84 | ) 85 | parser.add_option( 86 | "--dynamo-region", 87 | default=DEFAULT_REGION, 88 | help="Connect to this AWS region (default %(default)s)", 89 | ) 90 | parser.add_option( 91 | "--dynamo-live", 92 | action="store_true", 93 | help="Run tests on ACTUAL DynamoDB region. " 94 | "Standard AWS charges apply. " 95 | "This will destroy all tables you have in the " 96 | "region.", 97 | ) 98 | 99 | def configure(self, options, conf): 100 | super(DynamoLocalPlugin, self).configure(options, conf) 101 | self.port = options.dynamo_port 102 | self.path = options.dynamo_path 103 | self.link = options.dynamo_link 104 | self.region = options.dynamo_region 105 | self.live = options.dynamo_live 106 | logging.getLogger("botocore").setLevel(logging.WARNING) 107 | 108 | @property 109 | def dynamo(self): 110 | """Lazy loading of the dynamo connection""" 111 | if self._dynamo is None: 112 | if self.live: # pragma: no cover 113 | # Connect to live DynamoDB Region 114 | self._dynamo = DynamoDBConnection.connect(self.region) 115 | else: 116 | cmd = _dynamo_local_cmd(self.path, self.link, self.port, in_memory=True) 117 | self._dynamo_local = subprocess.Popen( 118 | cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT 119 | ) 120 | self._dynamo = DynamoDBConnection.connect( 121 | self.region, 122 | access_key="", 123 | secret_key="", 124 | host="localhost", 125 | port=self.port, 126 | is_secure=False, 127 | ) 128 | return self._dynamo 129 | 130 | def startContext(self, context): # pylint: disable=C0103 131 | """Called at the beginning of modules and TestCases""" 132 | # If this is a TestCase, dynamically set the dynamo connection 133 | if inspect.isclass(context) and hasattr(context, "dynamo"): 134 | context.dynamo = self.dynamo 135 | 136 | def 
finalize(self, result): 137 | """Terminate the DynamoDB Local service""" 138 | if self._dynamo_local is not None: 139 | self._dynamo_local.terminate() 140 | if ( 141 | not result.wasSuccessful() and self._dynamo_local.stdout is not None 142 | ): # pragma: no cover 143 | output = self._dynamo_local.stdout.read() 144 | encoding = locale.getdefaultlocale()[1] or "utf-8" 145 | print("DynamoDB Local output:") 146 | print(output.decode(encoding)) 147 | 148 | 149 | def run_dynamo_local(argv=None): 150 | """Run DynamoDB Local""" 151 | parser = argparse.ArgumentParser(description=run_dynamo_local.__doc__) 152 | default_path = os.path.join(tempfile.gettempdir(), "dynamolocal") 153 | parser.add_argument( 154 | "--path", 155 | help="Path to download DynamoDB local (default %(default)s)", 156 | default=default_path, 157 | ) 158 | parser.add_argument( 159 | "--link", 160 | help="URL to use to download DynamoDB local (default %(default)s)", 161 | default=DYNAMO_LOCAL, 162 | ) 163 | parser.add_argument( 164 | "-p", 165 | "--port", 166 | type=int, 167 | help="Port to listen on (default %(default)d)", 168 | default=DEFAULT_PORT, 169 | ) 170 | parser.add_argument( 171 | "--inMemory", 172 | action="store_true", 173 | help="When specified, DynamoDB Local will run in memory.", 174 | ) 175 | parser.add_argument( 176 | "--cors", 177 | help="Enable CORS support for JavaScript against a specific allow-list. List the domains separated by commas, or use '*' for public access (default '*')", 178 | ) 179 | parser.add_argument( 180 | "--dbPath", 181 | help="Specify the location of your database file. 
Default is the current directory", 182 | ) 183 | parser.add_argument( 184 | "--optimizeDbBeforeStartup", 185 | action="store_true", 186 | help="Optimize the underlying backing store database tables before starting up the server", 187 | ) 188 | parser.add_argument( 189 | "--sharedDb", 190 | action="store_true", 191 | help="When specified, DynamoDB Local will use a single database instead of separate databases for each credential and region. As a result, all clients will interact with the same set of tables, regardless of their region and credential configuration. (Useful for interacting with Local through the JS Shell in addition to other SDKs)", 192 | ) 193 | parser.add_argument( 194 | "--delayTransientStatuses", 195 | action="store_true", 196 | help="When specified, DynamoDB Local will introduce delays to hold various transient table and index statuses so that it simulates actual service more closely.", 197 | ) 198 | 199 | args = parser.parse_args(argv) 200 | cmd = _dynamo_local_cmd( 201 | args.path, 202 | args.link, 203 | args.port, 204 | args.inMemory, 205 | args.cors, 206 | args.dbPath, 207 | args.optimizeDbBeforeStartup, 208 | args.delayTransientStatuses, 209 | args.sharedDb, 210 | ) 211 | subprocess.call(cmd) 212 | 213 | 214 | def _dynamo_local_cmd( 215 | path: str, 216 | link: str, 217 | port: int, 218 | in_memory: bool = False, 219 | cors: Optional[str] = None, 220 | db_path: Optional[str] = None, 221 | optimize_before_startup: bool = False, 222 | delay_transient_statuses: bool = False, 223 | shared_db: bool = False, 224 | ) -> List[str]: 225 | # Download DynamoDB Local 226 | if not os.path.exists(path): 227 | tarball = urlretrieve(link)[0] 228 | with closing(tarfile.open(tarball, "r:gz")) as archive: 229 | archive.extractall(path) 230 | os.unlink(tarball) 231 | 232 | # Run the jar 233 | lib_path = os.path.join(path, "DynamoDBLocal_lib") 234 | jar_path = os.path.join(path, "DynamoDBLocal.jar") 235 | cmd = [ 236 | "java", 237 | "-Djava.library.path=" + 
lib_path, 238 | "-jar", 239 | jar_path, 240 | "-port", 241 | str(port), 242 | ] 243 | if in_memory: 244 | cmd.append("-inMemory") 245 | if cors is not None: 246 | cmd.extend(["-cors", cors]) 247 | if db_path is not None: 248 | cmd.extend(["-dbPath", db_path]) 249 | if optimize_before_startup: 250 | cmd.append("-optimizeDbBeforeStartup") 251 | if delay_transient_statuses: 252 | cmd.append("-delayTransientStatuses") 253 | if shared_db: 254 | cmd.append("-sharedDb") 255 | return cmd 256 | -------------------------------------------------------------------------------- /dynamo3/types.py: -------------------------------------------------------------------------------- 1 | """ DynamoDB types and type logic """ 2 | from decimal import Clamped, Context, Decimal, Overflow, Underflow 3 | from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Type, Union 4 | 5 | from typing_extensions import Literal, TypedDict 6 | 7 | from .constants import ( 8 | BINARY, 9 | BINARY_SET, 10 | BOOL, 11 | LIST, 12 | MAP, 13 | NULL, 14 | NUMBER, 15 | NUMBER_SET, 16 | STRING, 17 | STRING_SET, 18 | ) 19 | 20 | DECIMAL_CONTEXT = Context( 21 | Emin=-128, Emax=126, rounding=None, prec=38, traps=[Clamped, Overflow, Underflow] 22 | ) 23 | 24 | # Dynamo values after we have encoded them 25 | DynamoNumber = TypedDict("DynamoNumber", {"N": Decimal}) 26 | DynamoString = TypedDict("DynamoString", {"S": str}) 27 | DynamoBinary = TypedDict("DynamoBinary", {"B": "Binary"}) 28 | DynamoSetNumber = TypedDict("DynamoSetNumber", {"NS": List[Decimal]}) 29 | DynamoSetString = TypedDict("DynamoSetString", {"SS": List[str]}) 30 | DynamoSetBinary = TypedDict("DynamoSetBinary", {"BS": List["Binary"]}) 31 | DynamoList = TypedDict( 32 | "DynamoList", {"L": List[Any]} # mypy doesn't support recursion yet 33 | ) 34 | DynamoBool = TypedDict("DynamoBool", {"BOOL": bool}) 35 | DynamoNull = TypedDict("DynamoNull", {"NULL": Literal[True]}) 36 | DynamoMap = TypedDict( 37 | "DynamoMap", {"M": Dict[str, Any]} # 
mypy doesn't support recursion yet 38 | ) 39 | EncodedDynamoValue = Union[ 40 | DynamoNumber, 41 | DynamoString, 42 | DynamoBinary, 43 | DynamoSetNumber, 44 | DynamoSetString, 45 | DynamoSetBinary, 46 | DynamoList, 47 | DynamoBool, 48 | DynamoMap, 49 | DynamoNull, 50 | ] 51 | EncodedDynamoObject = Dict[str, EncodedDynamoValue] 52 | 53 | 54 | # Dynamo values decoded from the API 55 | DecodedDynamoNumber = TypedDict("DecodedDynamoNumber", {"N": str}) 56 | DecodedDynamoBinary = TypedDict("DecodedDynamoBinary", {"B": bytes}) 57 | DecodedDynamoSetNumber = TypedDict("DecodedDynamoSetNumber", {"NS": List[str]}) 58 | DecodedDynamoSetBinary = TypedDict("DecodedDynamoSetBinary", {"BS": List[bytes]}) 59 | DecodedDynamoObject = Union[ 60 | DecodedDynamoNumber, 61 | DynamoString, 62 | DecodedDynamoBinary, 63 | DecodedDynamoSetNumber, 64 | DynamoSetString, 65 | DecodedDynamoSetBinary, 66 | DynamoList, 67 | DynamoBool, 68 | DynamoMap, 69 | DynamoNull, 70 | ] 71 | 72 | # Encoders return a tuple of (str, value) 73 | DynamoEncoderNumber = Tuple[Literal["N"], Decimal] 74 | DynamoEncoderString = Tuple[Literal["S"], str] 75 | DynamoEncoderBinary = Tuple[Literal["B"], "Binary"] 76 | DynamoEncoderSetNumber = Tuple[Literal["NS"], List[Decimal]] 77 | DynamoEncoderSetString = Tuple[Literal["SS"], List[str]] 78 | DynamoEncoderSetBinary = Tuple[Literal["BS"], List["Binary"]] 79 | DynamoEncoderList = Tuple[Literal["L"], List[EncodedDynamoValue]] 80 | DynamoEncoderBool = Tuple[Literal["BOOL"], bool] 81 | DynamoEncoderMap = Tuple[Literal["M"], EncodedDynamoObject] 82 | DynamoEncoderNull = Tuple[Literal["NULL"], Literal[True]] 83 | EncoderReturn = Union[ 84 | DynamoEncoderNumber, 85 | DynamoEncoderString, 86 | DynamoEncoderBinary, 87 | DynamoEncoderSetNumber, 88 | DynamoEncoderSetString, 89 | DynamoEncoderSetBinary, 90 | DynamoEncoderList, 91 | DynamoEncoderBool, 92 | DynamoEncoderMap, 93 | DynamoEncoderNull, 94 | ] 95 | 96 | # An object in python that can be stored to DynamoDB 97 | DynamoObject 
= Dict[str, Any] 98 | 99 | # Expression types 100 | ExpressionValueType = Any 101 | ExpressionValuesType = Dict[str, ExpressionValueType] 102 | ExpressionAttributeNamesType = Dict[str, str] 103 | 104 | 105 | def float_to_decimal(f: float) -> Decimal: 106 | """Convert a float to a 38-precision Decimal""" 107 | n, d = f.as_integer_ratio() 108 | numerator, denominator = Decimal(n), Decimal(d) 109 | return DECIMAL_CONTEXT.divide(numerator, denominator) 110 | 111 | 112 | TYPES = { 113 | "NUMBER": NUMBER, 114 | "STRING": STRING, 115 | "BINARY": BINARY, 116 | "NUMBER_SET": NUMBER_SET, 117 | "STRING_SET": STRING_SET, 118 | "BINARY_SET": BINARY_SET, 119 | "LIST": LIST, 120 | "BOOL": BOOL, 121 | "MAP": MAP, 122 | "NULL": NULL, 123 | } 124 | TYPES_REV = dict(((v, k) for k, v in TYPES.items())) 125 | 126 | 127 | def is_dynamo_value(value: Any) -> bool: 128 | """Returns True if the value is a Dynamo-formatted value""" 129 | if not isinstance(value, dict) or len(value) != 1: 130 | return False 131 | subkey = next(iter(value.keys())) 132 | return subkey in TYPES_REV 133 | 134 | 135 | def is_null(value: Any) -> bool: 136 | """Check if a value is equivalent to null in Dynamo""" 137 | return value is None or (isinstance(value, (set, frozenset)) and len(value) == 0) 138 | 139 | 140 | class Binary(object): 141 | 142 | """Wrap a binary string""" 143 | 144 | def __init__(self, value: Union[str, bytes]): 145 | if isinstance(value, str): 146 | value = value.encode("utf-8") 147 | if not isinstance(value, bytes): 148 | raise TypeError("Value must be a string of binary data!") 149 | 150 | self.value = value 151 | 152 | def __hash__(self): 153 | return hash(self.value) 154 | 155 | def __eq__(self, other): 156 | if isinstance(other, Binary): 157 | return self.value == other.value 158 | else: 159 | return self.value == other 160 | 161 | def __ne__(self, other): 162 | return not self.__eq__(other) 163 | 164 | def __repr__(self): 165 | return "Binary(%r)" % self.value 166 | 167 | 168 | def 
encode_set( 169 | dynamizer: "Dynamizer", 170 | value: Iterable[Union[int, float, Decimal, str, bytes, "Binary"]], 171 | ) -> EncoderReturn: 172 | """Encode a set for the DynamoDB format""" 173 | inner_value = next(iter(value)) 174 | inner_type = dynamizer.raw_encode(inner_value)[0] 175 | encoded_set: Any = [dynamizer.raw_encode(v)[1] for v in value] 176 | set_type: Any = inner_type + "S" 177 | return set_type, encoded_set 178 | 179 | 180 | def encode_list(dynamizer: "Dynamizer", value: List[Any]) -> DynamoEncoderList: 181 | """Encode a list for the DynamoDB format""" 182 | encoded_list: Any = [] 184 | for v in value: 185 | encoded_type, encoded_value = dynamizer.raw_encode(v) 186 | encoded_list.append( 187 | { 188 | encoded_type: encoded_value, 189 | } 190 | ) 191 | return "L", encoded_list 192 | 193 | 194 | def encode_dict(dynamizer: "Dynamizer", value: Any) -> DynamoEncoderMap: 195 | """Encode a dict for the DynamoDB format""" 196 | encoded_dict: Any = {} 197 | for k, v in value.items(): 198 | encoded_type, encoded_value = dynamizer.raw_encode(v) 199 | encoded_dict[k] = { 200 | encoded_type: encoded_value, 201 | } 202 | return "M", encoded_dict 203 | 204 | 205 | TagDict = TypedDict("TagDict", {"Key": str, "Value": str}) 206 | 207 | 208 | def encode_tags(tags: Dict[str, str]) -> List[TagDict]: 209 | return [{"Key": k, "Value": v} for k, v in tags.items()] 210 | 211 | 212 | def build_expression_values( 213 | dynamizer: "Dynamizer", 214 | expr_values: Optional[ExpressionValuesType], 215 | kwargs: ExpressionValueType, 216 | ) -> Optional[EncodedDynamoObject]: 217 | """Build ExpressionAttributeValues from a value or kwargs""" 218 | if expr_values: 219 | values = expr_values 220 | return dynamizer.encode_keys(values) 221 | elif kwargs: 222 | values = dict(((":" + k, v) for k, v in kwargs.items())) 223 | return dynamizer.encode_keys(values) 224 | return None 225 | 226 | 227 | class Dynamizer(object): 228 | 229 | """Handles 
the encoding/decoding of Dynamo values""" 230 | 231 | def __init__(self): 232 | self.encoders = {} 233 | self.register_encoder(str, lambda _, v: (STRING, v)) 234 | self.register_encoder(bytes, lambda _, v: (BINARY, v)) 235 | self.register_encoder(int, lambda _, v: (NUMBER, str(v))) 236 | self.register_encoder(float, lambda _, v: (NUMBER, str(float_to_decimal(v)))) 237 | self.register_encoder( 238 | Decimal, 239 | lambda _, v: (NUMBER, str(DECIMAL_CONTEXT.create_decimal(v))), 240 | ) 241 | self.register_encoder(set, encode_set) 242 | self.register_encoder(frozenset, encode_set) 243 | self.register_encoder(Binary, lambda _, v: (BINARY, v.value)) 244 | self.register_encoder(bool, lambda _, v: (BOOL, v)) 245 | self.register_encoder(list, encode_list) 246 | self.register_encoder(dict, encode_dict) 247 | self.register_encoder(type(None), lambda _, v: (NULL, True)) 248 | 249 | def register_encoder(self, type: Type, encoder: Callable) -> None: 250 | """ 251 | Set an encoder method for a data type 252 | 253 | Parameters 254 | ---------- 255 | type : object 256 | The class of the data type to encode 257 | encoder : callable 258 | Accepts a (Dynamizer, value) and returns 259 | (dynamo_type, dynamo_value) 260 | 261 | """ 262 | self.encoders[type] = encoder 263 | 264 | def raw_encode(self, value: Any) -> EncoderReturn: 265 | """Run the encoder on a value""" 266 | if type(value) in self.encoders: 267 | encoder = self.encoders[type(value)] 268 | return encoder(self, value) 269 | raise ValueError( 270 | "No encoder for value '%s' of type '%s'" % (value, type(value)) 271 | ) 272 | 273 | def encode_keys(self, keys: DynamoObject) -> EncodedDynamoObject: 274 | """Run the encoder on a dict of values""" 275 | return dict(((k, self.encode(v)) for k, v in keys.items() if not is_null(v))) 276 | 277 | def maybe_encode_keys( 278 | self, keys: Union[DynamoObject, EncodedDynamoObject] 279 | ) -> EncodedDynamoObject: 280 | """Same as encode_keys but a no-op if already in Dynamo 
format""" 281 | ret = {} 282 | for k, v in keys.items(): 283 | if is_dynamo_value(v): 284 | return keys 285 | elif not is_null(v): 286 | ret[k] = self.encode(v) 287 | return ret 288 | 289 | def encode(self, value: Any) -> EncodedDynamoValue: 290 | """Encode a value into the Dynamo dict format""" 291 | return dict([self.raw_encode(value)]) # type: ignore 292 | 293 | def decode_keys(self, keys: dict) -> DynamoObject: 294 | """Run the decoder on a dict of values""" 295 | return {k: self.decode(v) for k, v in keys.items()} 296 | 297 | def decode(self, dynamo_value: dict) -> Optional[Any]: 298 | """Decode a dynamo value into a python value""" 299 | # mypy can't do the type refinement needed here :( 300 | type, value = next(iter(dynamo_value.items())) 301 | if type == STRING: 302 | return value 303 | elif type == BINARY: 304 | return Binary(value) 305 | elif type == NUMBER: 306 | return Decimal(value) 307 | elif type == STRING_SET: 308 | return set(value) 309 | elif type == BINARY_SET: 310 | return set((Binary(v) for v in value)) 311 | elif type == NUMBER_SET: 312 | return set((Decimal(v) for v in value)) 313 | elif type == BOOL: 314 | return value 315 | elif type == LIST: 316 | return [self.decode(v) for v in value] 317 | elif type == MAP: 318 | decoded_dict = {} 319 | for k, v in value.items(): 320 | decoded_dict[k] = self.decode(v) 321 | return decoded_dict 322 | elif type == NULL: 323 | return None 324 | else: 325 | raise TypeError("Received unrecognized type %r from dynamo" % type) 326 | -------------------------------------------------------------------------------- /dynamo3/util.py: -------------------------------------------------------------------------------- 1 | """ Utility methods """ 2 | 3 | 4 | def snake_to_camel(name): 5 | """Convert snake_case to CamelCase""" 6 | return "".join([piece.capitalize() for piece in name.split("_")]) 7 | -------------------------------------------------------------------------------- /requirements_dev.txt: 
-------------------------------------------------------------------------------- 1 | -r requirements_test.txt 2 | tox 3 | bump2version 4 | -------------------------------------------------------------------------------- /requirements_test.txt: -------------------------------------------------------------------------------- 1 | black 2 | docutils 3 | isort 4 | mock 5 | mypy 6 | nose 7 | pylint==2.7.1 8 | types-mock 9 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [nosetests] 2 | match = ^test 3 | with-dynamo=true 4 | 5 | [wheel] 6 | 7 | [pycodestyle] 8 | ignore=E,W 9 | 10 | [mypy] 11 | ignore_missing_imports = True 12 | disallow_incomplete_defs = True 13 | check_untyped_defs = True 14 | warn_unused_ignores = True 15 | warn_unreachable = True 16 | no_implicit_reexport = True 17 | 18 | [isort] 19 | multi_line_output=3 20 | include_trailing_comma=True 21 | force_grid_wrap=0 22 | use_parentheses=True 23 | line_length=88 24 | ignore_whitespace=True 25 | -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | """ Setup file """ 2 | import os 3 | 4 | from setuptools import find_packages, setup 5 | 6 | HERE = os.path.abspath(os.path.dirname(__file__)) 7 | 8 | with open(os.path.join(HERE, "README.rst")) as f: 9 | README = f.read() 10 | 11 | with open(os.path.join(HERE, "CHANGES.rst")) as f: 12 | CHANGES = f.read() 13 | 14 | REQUIREMENTS_TEST = open(os.path.join(HERE, "requirements_test.txt")).readlines() 15 | 16 | REQUIREMENTS = [ 17 | "botocore>=0.89.0", 18 | ] 19 | 20 | if __name__ == "__main__": 21 | setup( 22 | name="dynamo3", 23 | version="1.0.0", 24 | description="Python 3 compatible library for DynamoDB", 25 | long_description=README + "\n\n" + CHANGES, 26 | classifiers=[ 27 | "Development Status :: 4 - Beta", 28 | 
"Intended Audience :: Developers", 29 | "License :: OSI Approved :: MIT License", 30 | "Operating System :: OS Independent", 31 | "Programming Language :: Python", 32 | "Programming Language :: Python :: 3", 33 | "Programming Language :: Python :: 3.6", 34 | "Programming Language :: Python :: 3.7", 35 | "Programming Language :: Python :: 3.8", 36 | "Programming Language :: Python :: 3.9", 37 | ], 38 | author="Steven Arcangeli", 39 | author_email="stevearc@stevearc.com", 40 | url="http://github.com/stevearc/dynamo3", 41 | keywords="aws dynamo dynamodb", 42 | include_package_data=True, 43 | packages=find_packages(exclude=("tests",)), 44 | license="MIT", 45 | entry_points={ 46 | "console_scripts": [ 47 | "dynamodb-local = dynamo3.testing:run_dynamo_local", 48 | ], 49 | "nose.plugins": [ 50 | "dynamolocal=dynamo3.testing:DynamoLocalPlugin", 51 | ], 52 | }, 53 | python_requires=">=3.6", 54 | install_requires=REQUIREMENTS, 55 | tests_require=REQUIREMENTS + REQUIREMENTS_TEST, 56 | ) 57 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | """ Tests for Dynamo3 """ 2 | 3 | import sys 4 | import unittest 5 | from decimal import Decimal 6 | from pickle import dumps, loads 7 | from urllib.parse import urlparse 8 | 9 | from botocore.exceptions import ClientError 10 | from mock import ANY, MagicMock, patch 11 | 12 | from dynamo3 import ( 13 | Binary, 14 | Dynamizer, 15 | DynamoDBConnection, 16 | DynamoDBError, 17 | DynamoKey, 18 | GlobalIndex, 19 | Limit, 20 | Table, 21 | ThroughputException, 22 | ) 23 | from dynamo3.constants import STRING 24 | from dynamo3.result import Capacity, ConsumedCapacity, Count, ResultSet, add_dicts 25 | 26 | 27 | class BaseSystemTest(unittest.TestCase): 28 | 29 | """Base class for system tests""" 30 | 31 | dynamo: DynamoDBConnection = None # type: ignore 32 | 33 | def setUp(self): 34 | super(BaseSystemTest, 
self).setUp() 35 | # Clear out any pre-existing tables 36 | for tablename in self.dynamo.list_tables(): 37 | self.dynamo.delete_table(tablename) 38 | 39 | def tearDown(self): 40 | super(BaseSystemTest, self).tearDown() 41 | for tablename in self.dynamo.list_tables(): 42 | self.dynamo.delete_table(tablename) 43 | self.dynamo.clear_hooks() 44 | 45 | 46 | class TestMisc(BaseSystemTest): 47 | 48 | """Tests that don't fit anywhere else""" 49 | 50 | def tearDown(self): 51 | super(TestMisc, self).tearDown() 52 | self.dynamo.default_return_capacity = False 53 | 54 | def test_connection_host(self): 55 | """Connection can access host of endpoint""" 56 | urlparse(self.dynamo.host) 57 | 58 | def test_connection_region(self): 59 | """Connection can access name of connected region""" 60 | self.assertTrue(isinstance(self.dynamo.region, str)) 61 | 62 | def test_connect_to_region(self): 63 | """Can connect to a dynamo region""" 64 | conn = DynamoDBConnection.connect("us-west-1") 65 | self.assertIsNotNone(conn.host) 66 | 67 | def test_connect_to_region_creds(self): 68 | """Can connect to a dynamo region with credentials""" 69 | conn = DynamoDBConnection.connect( 70 | "us-west-1", access_key="abc", secret_key="12345" 71 | ) 72 | self.assertIsNotNone(conn.host) 73 | 74 | def test_connect_to_host_without_session(self): 75 | """Can connect to a dynamo host without passing in a session""" 76 | conn = DynamoDBConnection.connect("us-west-1", host="localhost") 77 | self.assertIsNotNone(conn.host) 78 | 79 | @patch("dynamo3.connection.time") 80 | def test_retry_on_throughput_error(self, time): 81 | """Throughput exceptions trigger a retry of the request""" 82 | 83 | def call(*_, **__): 84 | """Dummy service call""" 85 | response = { 86 | "ResponseMetadata": { 87 | "HTTPStatusCode": 400, 88 | }, 89 | "Error": { 90 | "Code": "ProvisionedThroughputExceededException", 91 | "Message": "Does not matter", 92 | }, 93 | } 94 | raise ClientError(response, "list_tables") 95 | 96 | with 
patch.object(self.dynamo, "client") as client: 97 | client.list_tables.side_effect = call 98 | with self.assertRaises(ThroughputException): 99 | self.dynamo.call("list_tables") 100 | self.assertEqual(len(time.sleep.mock_calls), self.dynamo.request_retries - 1) 101 | self.assertTrue(time.sleep.called) 102 | 103 | def test_describe_missing(self): 104 | """Describing a missing table returns None""" 105 | ret = self.dynamo.describe_table("foobar") 106 | self.assertIsNone(ret) 107 | 108 | def test_magic_table_props(self): 109 | """Table can look up properties on response object""" 110 | hash_key = DynamoKey("id") 111 | self.dynamo.create_table("foobar", hash_key=hash_key) 112 | ret = self.dynamo.describe_table("foobar") 113 | assert ret is not None 114 | self.assertEqual(ret.item_count, ret["ItemCount"]) 115 | with self.assertRaises(KeyError): 116 | self.assertIsNotNone(ret["Missing"]) 117 | 118 | def test_magic_index_props(self): 119 | """Index can look up properties on response object""" 120 | index = GlobalIndex.all("idx-name", DynamoKey("id")) 121 | index.response = {"FooBar": 2} 122 | self.assertEqual(index["FooBar"], 2) 123 | with self.assertRaises(KeyError): 124 | self.assertIsNotNone(index["Missing"]) 125 | 126 | def test_describe_during_delete(self): 127 | """Describing a table during a delete operation should not crash""" 128 | response = { 129 | "ItemCount": 0, 130 | "ProvisionedThroughput": { 131 | "NumberOfDecreasesToday": 0, 132 | "ReadCapacityUnits": 5, 133 | "WriteCapacityUnits": 5, 134 | }, 135 | "TableName": "myTableName", 136 | "TableSizeBytes": 0, 137 | "TableStatus": "DELETING", 138 | } 139 | table = Table.from_response(response) 140 | self.assertEqual(table.status, "DELETING") 141 | 142 | def test_delete_missing(self): 143 | """Deleting a missing table returns False""" 144 | ret = self.dynamo.delete_table("foobar") 145 | self.assertTrue(not ret) 146 | 147 | def test_re_raise_passthrough(self): 148 | """DynamoDBError can re-raise itself if missing 
original exception""" 149 | err = DynamoDBError(400, Code="ErrCode", Message="Ouch", args={}) 150 | caught = False 151 | try: 152 | err.re_raise() 153 | except DynamoDBError as e: 154 | caught = True 155 | self.assertEqual(err, e) 156 | self.assertTrue(caught) 157 | 158 | def test_re_raise(self): 159 | """DynamoDBError can re-raise itself with stacktrace of original exc""" 160 | caught = False 161 | try: 162 | try: 163 | raise Exception("Hello") 164 | except Exception as e1: 165 | err = DynamoDBError( 166 | 400, 167 | Code="ErrCode", 168 | Message="Ouch", 169 | args={}, 170 | exc_info=sys.exc_info(), 171 | ) 172 | err.re_raise() 173 | except DynamoDBError as e: 174 | caught = True 175 | import traceback 176 | 177 | tb = traceback.format_tb(e.__traceback__) 178 | self.assertIn("Hello", tb[-1]) 179 | self.assertEqual(e.status_code, 400) 180 | self.assertTrue(caught) 181 | 182 | def test_default_return_capacity(self): 183 | """When default_return_capacity=True, always return capacity""" 184 | self.dynamo.default_return_capacity = True 185 | with patch.object(self.dynamo, "call") as call: 186 | call().get.return_value = None 187 | rs = self.dynamo.scan("foobar") 188 | list(rs) 189 | call.assert_called_with( 190 | "scan", 191 | TableName="foobar", 192 | ReturnConsumedCapacity="INDEXES", 193 | ConsistentRead=False, 194 | ) 195 | 196 | def test_list_tables_page(self): 197 | """Call to ListTables should page results""" 198 | hash_key = DynamoKey("id") 199 | for i in range(120): 200 | self.dynamo.create_table("table%d" % i, hash_key=hash_key) 201 | tables = list(self.dynamo.list_tables(110)) 202 | self.assertEqual(len(tables), 110) 203 | 204 | def test_limit_complete(self): 205 | """A limit with item_capacity = 0 is 'complete'""" 206 | limit = Limit(item_limit=0) 207 | self.assertTrue(limit.complete) 208 | 209 | def test_wait_create_table(self): 210 | """Create table shall wait for the table to come online.""" 211 | tablename = "foobar_wait" 212 | hash_key = DynamoKey("id") 
213 | self.dynamo.create_table(tablename, hash_key=hash_key, wait=True) 214 | self.assertIsNotNone(self.dynamo.describe_table(tablename)) 215 | 216 | def test_wait_delete_table(self): 217 | """Delete table shall wait for the table to go offline.""" 218 | tablename = "foobar_wait" 219 | hash_key = DynamoKey("id") 220 | self.dynamo.create_table(tablename, hash_key=hash_key, wait=True) 221 | result = self.dynamo.delete_table(tablename, wait=True) 222 | self.assertTrue(result) 223 | 224 | 225 | class TestDataTypes(BaseSystemTest): 226 | 227 | """Tests for Dynamo data types""" 228 | 229 | def make_table(self): 230 | """Convenience method for making a table""" 231 | hash_key = DynamoKey("id") 232 | self.dynamo.create_table("foobar", hash_key=hash_key) 233 | 234 | def test_string(self): 235 | """Store and retrieve a string""" 236 | self.make_table() 237 | self.dynamo.put_item("foobar", {"id": "abc"}) 238 | item = list(self.dynamo.scan("foobar"))[0] 239 | self.assertEqual(item["id"], "abc") 240 | self.assertTrue(isinstance(item["id"], str)) 241 | 242 | def test_int(self): 243 | """Store and retrieve an int""" 244 | self.make_table() 245 | self.dynamo.put_item("foobar", {"id": "a", "num": 1}) 246 | item = list(self.dynamo.scan("foobar"))[0] 247 | self.assertEqual(item["num"], 1) 248 | 249 | def test_float(self): 250 | """Store and retrieve a float""" 251 | self.make_table() 252 | self.dynamo.put_item("foobar", {"id": "a", "num": 1.1}) 253 | item = list(self.dynamo.scan("foobar"))[0] 254 | self.assertAlmostEqual(float(item["num"]), 1.1) 255 | 256 | def test_decimal(self): 257 | """Store and retrieve a Decimal""" 258 | self.make_table() 259 | self.dynamo.put_item("foobar", {"id": "a", "num": Decimal("1.1")}) 260 | item = list(self.dynamo.scan("foobar"))[0] 261 | self.assertEqual(item["num"], Decimal("1.1")) 262 | 263 | def test_binary(self): 264 | """Store and retrieve a binary""" 265 | self.make_table() 266 | self.dynamo.put_item("foobar", {"id": "a", "data": Binary("abc")}) 
267 | item = list(self.dynamo.scan("foobar"))[0] 268 | self.assertEqual(item["data"].value, b"abc") 269 | 270 | def test_binary_bytes(self): 271 | """Store and retrieve bytes as a binary""" 272 | self.make_table() 273 | data = {"a": 1, "b": 2} 274 | self.dynamo.put_item("foobar", {"id": "a", "data": Binary(dumps(data))}) 275 | item = list(self.dynamo.scan("foobar"))[0] 276 | self.assertEqual(loads(item["data"].value), data) 277 | 278 | def test_string_set(self): 279 | """Store and retrieve a string set""" 280 | self.make_table() 281 | item = { 282 | "id": "a", 283 | "datas": set(["a", "b"]), 284 | } 285 | self.dynamo.put_item("foobar", item) 286 | ret = list(self.dynamo.scan("foobar"))[0] 287 | self.assertEqual(ret, item) 288 | 289 | def test_number_set(self): 290 | """Store and retrieve a number set""" 291 | self.make_table() 292 | item = { 293 | "id": "a", 294 | "datas": set([1, 2, 3]), 295 | } 296 | self.dynamo.put_item("foobar", item) 297 | ret = list(self.dynamo.scan("foobar"))[0] 298 | self.assertEqual(ret, item) 299 | 300 | def test_binary_set(self): 301 | """Store and retrieve a binary set""" 302 | self.make_table() 303 | item = { 304 | "id": "a", 305 | "datas": set([Binary("a"), Binary("b")]), 306 | } 307 | self.dynamo.put_item("foobar", item) 308 | ret = list(self.dynamo.scan("foobar"))[0] 309 | self.assertEqual(ret, item) 310 | 311 | def test_binary_equal(self): 312 | """Binary should eq other Binaries and also raw bytestrings""" 313 | self.assertEqual(Binary("a"), Binary("a")) 314 | self.assertEqual(Binary("a"), b"a") 315 | self.assertFalse(Binary("a") != Binary("a")) 316 | 317 | def test_binary_repr(self): 318 | """Binary repr should wrap the contained value""" 319 | self.assertEqual(repr(Binary("a")), "Binary(%r)" % b"a") 320 | 321 | def test_binary_converts_unicode(self): 322 | """Binary will convert unicode to bytes""" 323 | b = Binary("a") 324 | self.assertTrue(isinstance(b.value, bytes)) 325 | 326 | def test_binary_force_string(self): 327 | 
"""Binary must wrap a string type""" 328 | with self.assertRaises(TypeError): 329 | Binary(2) # type: ignore 330 | 331 | def test_bool(self): 332 | """Store and retrieve a boolean""" 333 | self.make_table() 334 | self.dynamo.put_item("foobar", {"id": "abc", "b": True}) 335 | item = list(self.dynamo.scan("foobar"))[0] 336 | self.assertEqual(item["b"], True) 337 | self.assertTrue(isinstance(item["b"], bool)) 338 | 339 | def test_list(self): 340 | """Store and retrieve a list""" 341 | self.make_table() 342 | self.dynamo.put_item("foobar", {"id": "abc", "l": ["a", 1, False]}) 343 | item = list(self.dynamo.scan("foobar"))[0] 344 | self.assertEqual(item["l"], ["a", 1, False]) 345 | 346 | def test_dict(self): 347 | """Store and retrieve a dict""" 348 | self.make_table() 349 | data = { 350 | "i": 1, 351 | "s": "abc", 352 | "n": None, 353 | "l": ["a", 1, True], 354 | "b": False, 355 | } 356 | self.dynamo.put_item("foobar", {"id": "abc", "d": data}) 357 | item = list(self.dynamo.scan("foobar"))[0] 358 | self.assertEqual(item["d"], data) 359 | 360 | def test_nested_dict(self): 361 | """Store and retrieve a nested dict""" 362 | self.make_table() 363 | data = { 364 | "s": "abc", 365 | "d": { 366 | "i": 42, 367 | }, 368 | } 369 | self.dynamo.put_item("foobar", {"id": "abc", "d": data}) 370 | item = list(self.dynamo.scan("foobar"))[0] 371 | self.assertEqual(item["d"], data) 372 | 373 | def test_nested_list(self): 374 | """Store and retrieve a nested list""" 375 | self.make_table() 376 | data = [ 377 | 1, 378 | [ 379 | True, 380 | None, 381 | "abc", 382 | ], 383 | ] 384 | self.dynamo.put_item("foobar", {"id": "abc", "l": data}) 385 | item = list(self.dynamo.scan("foobar"))[0] 386 | self.assertEqual(item["l"], data) 387 | 388 | def test_unrecognized_type(self): 389 | """Dynamizer throws error on unrecognized type""" 390 | value = { 391 | "ASDF": "abc", 392 | } 393 | with self.assertRaises(TypeError): 394 | self.dynamo.dynamizer.decode(value) 395 | 396 | 397 | class 
TestDynamizer(unittest.TestCase): 398 | 399 | """Tests for the Dynamizer""" 400 | 401 | def test_register_encoder(self): 402 | """Can register a custom encoder""" 403 | from datetime import datetime 404 | 405 | dynamizer = Dynamizer() 406 | dynamizer.register_encoder(datetime, lambda d, v: (STRING, v.isoformat())) 407 | now = datetime.utcnow() 408 | self.assertEqual(dynamizer.raw_encode(now), (STRING, now.isoformat())) 409 | 410 | def test_encoder_missing(self): 411 | """If no encoder is found, raise ValueError""" 412 | from datetime import datetime 413 | 414 | dynamizer = Dynamizer() 415 | with self.assertRaises(ValueError): 416 | dynamizer.encode(datetime.utcnow()) 417 | 418 | 419 | class TestResultModels(unittest.TestCase): 420 | 421 | """Tests for the model classes in results.py""" 422 | 423 | def test_add_dicts_base_case(self): 424 | """add_dict where one argument is None returns the other""" 425 | f = object() 426 | self.assertEqual(add_dicts(f, None), f) 427 | self.assertEqual(add_dicts(None, f), f) 428 | 429 | def test_add_dicts(self): 430 | """Merge two dicts of values together""" 431 | a = { 432 | "a": 1, 433 | "b": 2, 434 | } 435 | b = { 436 | "a": 3, 437 | "c": 4, 438 | } 439 | ret = add_dicts(a, b) 440 | self.assertEqual( 441 | ret, 442 | { 443 | "a": 4, 444 | "b": 2, 445 | "c": 4, 446 | }, 447 | ) 448 | 449 | def test_count_repr(self): 450 | """Count repr""" 451 | count = Count(0, 0) 452 | self.assertEqual(repr(count), "Count(0)") 453 | 454 | def test_count_addition(self): 455 | """Count addition""" 456 | count = Count(4, 2) 457 | self.assertEqual(count + 5, 9) 458 | 459 | def test_count_subtraction(self): 460 | """Count subtraction""" 461 | count = Count(4, 2) 462 | self.assertEqual(count - 2, 2) 463 | 464 | def test_count_multiplication(self): 465 | """Count multiplication""" 466 | count = Count(4, 2) 467 | self.assertEqual(2 * count, 8) 468 | 469 | def test_count_division(self): 470 | """Count division""" 471 | count = Count(4, 2) 472 | 
self.assertEqual(count / 2, 2) 473 | 474 | def test_count_add_none_capacity(self): 475 | """Count addition with one None consumed_capacity""" 476 | cap = Capacity(3, 0) 477 | count = Count(4, 2) 478 | count2 = Count(5, 3, cap) 479 | ret = count + count2 480 | self.assertEqual(ret, 9) 481 | self.assertEqual(ret.scanned_count, 5) 482 | self.assertEqual(ret.consumed_capacity, cap) 483 | 484 | def test_count_add_capacity(self): 485 | """Count addition with consumed_capacity""" 486 | count = Count(4, 2, Capacity(3, 0)) 487 | count2 = Count(5, 3, Capacity(2, 0)) 488 | ret = count + count2 489 | self.assertEqual(ret, 9) 490 | self.assertEqual(ret.scanned_count, 5) 491 | self.assertEqual(ret.consumed_capacity.read, 5) 492 | 493 | def test_capacity_math(self): 494 | """Capacity addition and equality""" 495 | cap = Capacity(2, 4) 496 | s = set([cap]) 497 | self.assertIn(Capacity(2, 4), s) 498 | self.assertNotEqual(Capacity(1, 4), cap) 499 | self.assertEqual(Capacity(1, 1) + Capacity(2, 2), Capacity(3, 3)) 500 | 501 | def test_capacity_format(self): 502 | """String formatting for Capacity""" 503 | c = Capacity(1, 3) 504 | self.assertEqual(str(c), "R:1.0 W:3.0") 505 | c = Capacity(0, 0) 506 | self.assertEqual(str(c), "0") 507 | 508 | def test_total_consumed_capacity(self): 509 | """ConsumedCapacity can parse results with only Total""" 510 | response = { 511 | "TableName": "foobar", 512 | "ReadCapacityUnits": 4, 513 | "WriteCapacityUnits": 5, 514 | } 515 | cap = ConsumedCapacity.from_response(response) 516 | self.assertEqual(cap.total, (4, 5)) 517 | self.assertIsNone(cap.table_capacity) 518 | 519 | def test_consumed_capacity_equality(self): 520 | """ConsumedCapacity addition and equality""" 521 | cap = ConsumedCapacity( 522 | "foobar", 523 | Capacity(0, 10), 524 | Capacity(0, 2), 525 | { 526 | "l-index": Capacity(0, 4), 527 | }, 528 | { 529 | "g-index": Capacity(0, 3), 530 | }, 531 | ) 532 | c2 = ConsumedCapacity( 533 | "foobar", 534 | Capacity(0, 10), 535 | Capacity(0, 2), 536 
| { 537 | "l-index": Capacity(0, 4), 538 | "l-index2": Capacity(0, 7), 539 | }, 540 | ) 541 | 542 | self.assertNotEqual(cap, c2) 543 | c3 = ConsumedCapacity( 544 | "foobar", 545 | Capacity(0, 10), 546 | Capacity(0, 2), 547 | { 548 | "l-index": Capacity(0, 4), 549 | }, 550 | { 551 | "g-index": Capacity(0, 3), 552 | }, 553 | ) 554 | self.assertIn(cap, set([c3])) 555 | combined = cap + c2 556 | self.assertEqual( 557 | cap + c2, 558 | ConsumedCapacity( 559 | "foobar", 560 | Capacity(0, 20), 561 | Capacity(0, 4), 562 | { 563 | "l-index": Capacity(0, 8), 564 | "l-index2": Capacity(0, 7), 565 | }, 566 | { 567 | "g-index": Capacity(0, 3), 568 | }, 569 | ), 570 | ) 571 | self.assertIn(str(Capacity(0, 3)), str(combined)) 572 | 573 | def test_add_different_tables(self): 574 | """Cannot add ConsumedCapacity of two different tables""" 575 | c1 = ConsumedCapacity("foobar", Capacity(1, 28)) 576 | c2 = ConsumedCapacity("boofar", Capacity(3, 0)) 577 | with self.assertRaises(TypeError): 578 | c1 += c2 579 | 580 | def test_always_continue_query(self): 581 | """Regression test. 582 | If result has no items but does have LastEvaluatedKey, keep querying. 
583 | """ 584 | conn = MagicMock() 585 | conn.dynamizer.decode_keys.side_effect = lambda x: x 586 | items = ["a", "b"] 587 | results = [ 588 | {"Items": [], "LastEvaluatedKey": {"foo": 1, "bar": 2}}, 589 | {"Items": [], "LastEvaluatedKey": {"foo": 1, "bar": 2}}, 590 | {"Items": items}, 591 | ] 592 | conn.call.side_effect = lambda *_, **__: results.pop(0) 593 | rs = ResultSet(conn, Limit()) 594 | results = list(rs) 595 | self.assertEqual(results, items) 596 | 597 | 598 | class TestHooks(BaseSystemTest): 599 | 600 | """Tests for connection callback hooks""" 601 | 602 | def tearDown(self): 603 | super(TestHooks, self).tearDown() 604 | for hooks in self.dynamo._hooks.values(): 605 | while hooks: 606 | hooks.pop() 607 | 608 | def test_precall(self): 609 | """precall hooks are called before an API call""" 610 | hook = MagicMock() 611 | self.dynamo.subscribe("precall", hook) 612 | 613 | def throw(**_): 614 | """Throw an exception to terminate the request""" 615 | raise Exception() 616 | 617 | with patch.object(self.dynamo, "client") as client: 618 | client.describe_table.side_effect = throw 619 | with self.assertRaises(Exception): 620 | self.dynamo.describe_table("foobar") 621 | hook.assert_called_with(self.dynamo, "describe_table", {"TableName": "foobar"}) 622 | 623 | def test_postcall(self): 624 | """postcall hooks are called after API call""" 625 | hash_key = DynamoKey("id") 626 | self.dynamo.create_table("foobar", hash_key=hash_key) 627 | calls = [] 628 | 629 | def hook(*args): 630 | """Log the call into a list""" 631 | calls.append(args) 632 | 633 | self.dynamo.subscribe("postcall", hook) 634 | self.dynamo.describe_table("foobar") 635 | self.assertEqual(len(calls), 1) 636 | args = calls[0] 637 | self.assertEqual(len(args), 4) 638 | conn, command, kwargs, response = args 639 | self.assertEqual(conn, self.dynamo) 640 | self.assertEqual(command, "describe_table") 641 | self.assertEqual(kwargs["TableName"], "foobar") 642 | self.assertEqual(response["Table"]["TableName"], 
"foobar") 643 | 644 | def test_capacity(self): 645 | """capacity hooks are called whenever response has ConsumedCapacity""" 646 | hash_key = DynamoKey("id") 647 | self.dynamo.create_table("foobar", hash_key=hash_key) 648 | hook = MagicMock() 649 | self.dynamo.subscribe("capacity", hook) 650 | with patch.object(self.dynamo, "client") as client: 651 | client.scan.return_value = { 652 | "Items": [], 653 | "ConsumedCapacity": { 654 | "TableName": "foobar", 655 | "ReadCapacityUnits": 4, 656 | }, 657 | } 658 | rs = self.dynamo.scan("foobar") 659 | list(rs) 660 | cap = ConsumedCapacity("foobar", Capacity(4, 0)) 661 | hook.assert_called_with(self.dynamo, "scan", ANY, ANY, cap) 662 | 663 | def test_subscribe(self): 664 | """Can subscribe and unsubscribe from hooks""" 665 | hook = lambda: None 666 | self.dynamo.subscribe("precall", hook) 667 | self.assertEqual(len(self.dynamo._hooks["precall"]), 1) 668 | self.dynamo.unsubscribe("precall", hook) 669 | self.assertEqual(len(self.dynamo._hooks["precall"]), 0) 670 | -------------------------------------------------------------------------------- /tests/test_fields.py: -------------------------------------------------------------------------------- 1 | """ Tests for dynamo3.fields """ 2 | import unittest 3 | 4 | from dynamo3 import DynamoKey, GlobalIndex, LocalIndex, Table, Throughput 5 | 6 | 7 | class TestEqHash(unittest.TestCase): 8 | 9 | """Tests for equality and hash methods""" 10 | 11 | def test_dynamo_key_eq(self): 12 | """Dynamo keys should be equal if names are equal""" 13 | a, b = DynamoKey("foo"), DynamoKey("foo") 14 | self.assertEqual(a, b) 15 | self.assertEqual(hash(a), hash(b)) 16 | self.assertFalse(a != b) 17 | 18 | def test_local_index_eq(self): 19 | """Local indexes should be equal""" 20 | range_key = DynamoKey("foo") 21 | a, b = LocalIndex.all("a", range_key), LocalIndex.all("a", range_key) 22 | self.assertEqual(a, b) 23 | self.assertEqual(hash(a), hash(b)) 24 | self.assertFalse(a != b) 25 | 26 | def 
test_global_index_eq(self): 27 | """Global indexes should be equal""" 28 | hash_key = DynamoKey("foo") 29 | a, b = GlobalIndex.all("a", hash_key), GlobalIndex.all("a", hash_key) 30 | self.assertEqual(a, b) 31 | self.assertEqual(hash(a), hash(b)) 32 | self.assertFalse(a != b) 33 | 34 | def test_global_local_ne(self): 35 | """Global indexes should not equal local indexes""" 36 | field = DynamoKey("foo") 37 | a, b = LocalIndex.all("a", field), GlobalIndex.all("a", field, field) 38 | self.assertNotEqual(a, b) 39 | 40 | def test_throughput_eq(self): 41 | """Throughputs should be equal""" 42 | a, b = Throughput(), Throughput() 43 | self.assertEqual(a, b) 44 | self.assertEqual(hash(a), hash(b)) 45 | self.assertFalse(a != b) 46 | 47 | def test_throughput_repr(self): 48 | """Throughput repr should wrap read/write values""" 49 | a = Throughput(1, 1) 50 | self.assertEqual(repr(a), "Throughput(1, 1)") 51 | 52 | def test_table_eq(self): 53 | """Tables should be equal""" 54 | field = DynamoKey("foo") 55 | a, b = Table("a", field), Table("a", field) 56 | self.assertEqual(a, b) 57 | self.assertEqual(hash(a), hash(b)) 58 | self.assertFalse(a != b) 59 | -------------------------------------------------------------------------------- /tests/test_rate_limit.py: -------------------------------------------------------------------------------- 1 | """ Tests for rate limiting """ 2 | import time 3 | from contextlib import contextmanager 4 | 5 | from mock import MagicMock, patch 6 | 7 | from dynamo3 import DynamoKey, GlobalIndex, RateLimit 8 | from dynamo3.rate import DecayingCapacityStore 9 | from dynamo3.result import Capacity, ConsumedCapacity 10 | 11 | from . 
import BaseSystemTest 12 | 13 | 14 | class TestRateLimit(BaseSystemTest): 15 | 16 | """Tests for rate limiting""" 17 | 18 | def setUp(self): 19 | super(TestRateLimit, self).setUp() 20 | hash_key = DynamoKey("id") 21 | index_key = DynamoKey("bar") 22 | index = GlobalIndex.all("bar", index_key) 23 | self.dynamo.create_table("foobar", hash_key, global_indexes=[index]) 24 | 25 | @contextmanager 26 | def inject_capacity(self, capacity, limiter): 27 | """Install limiter and inject consumed_capacity into response""" 28 | 29 | def injector(connection, command, kwargs, data): 30 | """Hook that injects consumed_capacity""" 31 | data.pop("ConsumedCapacity", None) 32 | data["consumed_capacity"] = capacity 33 | 34 | self.dynamo.subscribe("postcall", injector) 35 | try: 36 | with self.dynamo.limit(limiter): 37 | with patch.object(time, "sleep") as sleep: 38 | yield sleep 39 | finally: 40 | self.dynamo.unsubscribe("postcall", injector) 41 | 42 | def test_no_throttle(self): 43 | """Don't sleep if consumed capacity is within limits""" 44 | limiter = RateLimit(3, 3) 45 | cap = ConsumedCapacity("foobar", Capacity(0, 2)) 46 | with self.inject_capacity(cap, limiter) as sleep: 47 | list(self.dynamo.query("foobar", "id = :id", id="a")) 48 | sleep.assert_not_called() 49 | 50 | def test_throttle_total(self): 51 | """Sleep if consumed capacity exceeds total""" 52 | limiter = RateLimit(3, 3) 53 | cap = ConsumedCapacity("foobar", Capacity(3, 0)) 54 | with self.inject_capacity(cap, limiter) as sleep: 55 | list(self.dynamo.query("foobar", "id = :id", id="a")) 56 | sleep.assert_called_with(1) 57 | 58 | def test_throttle_total_cap(self): 59 | """Sleep if consumed capacity exceeds total""" 60 | limiter = RateLimit(total=Capacity(3, 3)) 61 | cap = ConsumedCapacity("foobar", Capacity(3, 0)) 62 | with self.inject_capacity(cap, limiter) as sleep: 63 | list(self.dynamo.query("foobar", "id = :id", id="a")) 64 | sleep.assert_called_with(1) 65 | 66 | def test_throttle_multiply(self): 67 | """Seconds to 
sleep is increased to match limit delta""" 68 | limiter = RateLimit(3, 3) 69 | cap = ConsumedCapacity("foobar", Capacity(8, 0)) 70 | with self.inject_capacity(cap, limiter) as sleep: 71 | list(self.dynamo.query("foobar", "id = :id", id="a")) 72 | sleep.assert_called_with(3) 73 | 74 | def test_throttle_multiple(self): 75 | """Sleep if the limit is exceeded by multiple calls""" 76 | limiter = RateLimit(4, 4) 77 | cap = ConsumedCapacity("foobar", Capacity(3, 0)) 78 | with self.inject_capacity(cap, limiter) as sleep: 79 | list(self.dynamo.query("foobar", "id = :id", id="a")) 80 | list(self.dynamo.query("foobar", "id = :id", id="a")) 81 | sleep.assert_called_with(2) 82 | 83 | def test_throttle_table(self): 84 | """Sleep if table limit is exceeded""" 85 | limiter = RateLimit( 86 | 3, 87 | 3, 88 | table_caps={ 89 | "foobar": Capacity(0, 4), 90 | }, 91 | ) 92 | cap = ConsumedCapacity("foobar", Capacity(8, 0), Capacity(0, 8)) 93 | with self.inject_capacity(cap, limiter) as sleep: 94 | list(self.dynamo.query("foobar", "id = :id", id="a")) 95 | sleep.assert_called_with(2) 96 | 97 | def test_throttle_table_default(self): 98 | """If no table limit provided, use the default""" 99 | limiter = RateLimit(default_read=4, default_write=4) 100 | cap = ConsumedCapacity("foobar", Capacity(8, 0), Capacity(8, 0)) 101 | with self.inject_capacity(cap, limiter) as sleep: 102 | list(self.dynamo.query("foobar", "id = :id", id="a")) 103 | sleep.assert_called_with(2) 104 | 105 | def test_throttle_table_default_cap(self): 106 | """If no table limit provided, use the default""" 107 | limiter = RateLimit(default=Capacity(4, 4)) 108 | cap = ConsumedCapacity("foobar", Capacity(8, 0), Capacity(8, 0)) 109 | with self.inject_capacity(cap, limiter) as sleep: 110 | list(self.dynamo.query("foobar", "id = :id", id="a")) 111 | sleep.assert_called_with(2) 112 | 113 | def test_local_index(self): 114 | """Local index capacities count towards the table limit""" 115 | limiter = RateLimit( 116 | table_caps={ 117 | 
"foobar": Capacity(4, 0), 118 | } 119 | ) 120 | cap = ConsumedCapacity( 121 | "foobar", 122 | Capacity(8, 0), 123 | local_index_capacity={ 124 | "local": Capacity(4, 0), 125 | }, 126 | ) 127 | with self.inject_capacity(cap, limiter) as sleep: 128 | list(self.dynamo.query("foobar", "id = :id", id="a")) 129 | sleep.assert_called_with(1) 130 | 131 | def test_global_index(self): 132 | """Sleep when global index limit is exceeded""" 133 | limiter = RateLimit( 134 | table_caps={ 135 | "foobar": { 136 | "baz": Capacity(4, 0), 137 | } 138 | } 139 | ) 140 | cap = ConsumedCapacity( 141 | "foobar", 142 | Capacity(8, 0), 143 | global_index_capacity={ 144 | "baz": Capacity(8, 0), 145 | }, 146 | ) 147 | with self.inject_capacity(cap, limiter) as sleep: 148 | list(self.dynamo.query("foobar", "id = :id", id="a")) 149 | sleep.assert_called_with(2) 150 | 151 | def test_global_index_by_name(self): 152 | """Global index limit can be specified as tablename:index_name""" 153 | limiter = RateLimit( 154 | table_caps={ 155 | "foobar:baz": Capacity(4, 0), 156 | } 157 | ) 158 | cap = ConsumedCapacity( 159 | "foobar", 160 | Capacity(8, 0), 161 | global_index_capacity={ 162 | "baz": Capacity(8, 0), 163 | }, 164 | ) 165 | with self.inject_capacity(cap, limiter) as sleep: 166 | list(self.dynamo.query("foobar", "id = :id", id="a")) 167 | sleep.assert_called_with(2) 168 | 169 | def test_global_default_table(self): 170 | """Global index limit defaults to table limit if not present""" 171 | limiter = RateLimit( 172 | table_caps={ 173 | "foobar": Capacity(4, 0), 174 | } 175 | ) 176 | cap = ConsumedCapacity( 177 | "foobar", 178 | Capacity(8, 0), 179 | global_index_capacity={ 180 | "baz": Capacity(8, 0), 181 | }, 182 | ) 183 | with self.inject_capacity(cap, limiter) as sleep: 184 | list(self.dynamo.query("foobar", "id = :id", id="a")) 185 | sleep.assert_called_with(2) 186 | 187 | def test_global_default(self): 188 | """Global index limit will fall back to table default limit""" 189 | limiter = 
RateLimit(default_read=4, default_write=4) 190 | cap = ConsumedCapacity( 191 | "foobar", 192 | Capacity(8, 0), 193 | global_index_capacity={ 194 | "baz": Capacity(8, 0), 195 | }, 196 | ) 197 | with self.inject_capacity(cap, limiter) as sleep: 198 | list(self.dynamo.query("foobar", "id = :id", id="a")) 199 | sleep.assert_called_with(2) 200 | 201 | def test_store_decays(self): 202 | """DecayingCapacityStore should drop points after time""" 203 | store = DecayingCapacityStore() 204 | store.add(time.time() - 2, 4) 205 | self.assertEqual(store.value, 0) 206 | 207 | def test_throttle_callback(self): 208 | """Callback is called when a query is throttled""" 209 | callback = MagicMock() 210 | callback.return_value = True 211 | limiter = RateLimit(3, 3, callback=callback) 212 | cap = ConsumedCapacity("foobar", Capacity(3, 0)) 213 | with self.inject_capacity(cap, limiter) as sleep: 214 | list(self.dynamo.query("foobar", "id = :id", id="a")) 215 | sleep.assert_not_called() 216 | self.assertTrue(callback.called) 217 | -------------------------------------------------------------------------------- /tests/test_write.py: -------------------------------------------------------------------------------- 1 | """ Test the write functions of Dynamo """ 2 | from mock import MagicMock, call, patch 3 | 4 | from dynamo3 import ( 5 | CheckFailed, 6 | DynamoKey, 7 | GlobalIndex, 8 | IndexUpdate, 9 | LocalIndex, 10 | Table, 11 | Throughput, 12 | TransactionCanceledException, 13 | ) 14 | from dynamo3.batch import BatchWriter 15 | from dynamo3.constants import ( 16 | ALL_NEW, 17 | ALL_OLD, 18 | NEW_AND_OLD_IMAGES, 19 | NUMBER, 20 | PAY_PER_REQUEST, 21 | PROVISIONED, 22 | STRING, 23 | TOTAL, 24 | ) 25 | from dynamo3.fields import TTL 26 | from dynamo3.result import Capacity 27 | 28 | from . 
import BaseSystemTest 29 | 30 | 31 | class TestCreate(BaseSystemTest): 32 | 33 | """Test creating a table""" 34 | 35 | def test_create_hash_table(self): 36 | """Create a table with just a hash key""" 37 | hash_key = DynamoKey("id", data_type=STRING) 38 | table = Table("foobar", hash_key) 39 | self.dynamo.create_table("foobar", hash_key=hash_key) 40 | desc = self.dynamo.describe_table("foobar") 41 | self.assertEqual(desc, table) 42 | 43 | def test_create_hash_range_table(self): 44 | """Create a table with a hash and range key""" 45 | hash_key = DynamoKey("id", data_type=STRING) 46 | range_key = DynamoKey("num", data_type=NUMBER) 47 | table = Table("foobar", hash_key, range_key) 48 | self.dynamo.create_table("foobar", hash_key, range_key) 49 | desc = self.dynamo.describe_table("foobar") 50 | self.assertEqual(desc, table) 51 | 52 | def test_create_local_index(self): 53 | """Create a table with a local index""" 54 | hash_key = DynamoKey("id", data_type=STRING) 55 | range_key = DynamoKey("num", data_type=NUMBER) 56 | index_field = DynamoKey("name") 57 | index = LocalIndex.all("name-index", index_field) 58 | table = Table("foobar", hash_key, range_key, [index]) 59 | self.dynamo.create_table("foobar", hash_key, range_key, indexes=[index]) 60 | desc = self.dynamo.describe_table("foobar") 61 | self.assertEqual(desc, table) 62 | 63 | def test_create_local_keys_index(self): 64 | """Create a table with a local KeysOnly index""" 65 | hash_key = DynamoKey("id", data_type=STRING) 66 | range_key = DynamoKey("num", data_type=NUMBER) 67 | index_field = DynamoKey("name") 68 | index = LocalIndex.keys("name-index", index_field) 69 | table = Table("foobar", hash_key, range_key, [index]) 70 | self.dynamo.create_table("foobar", hash_key, range_key, indexes=[index]) 71 | desc = self.dynamo.describe_table("foobar") 72 | self.assertEqual(desc, table) 73 | 74 | def test_create_local_includes_index(self): 75 | """Create a table with a local Includes index""" 76 | hash_key = DynamoKey("id", 
data_type=STRING) 77 | range_key = DynamoKey("num", data_type=NUMBER) 78 | index_field = DynamoKey("name") 79 | index = LocalIndex.include("name-index", index_field, includes=["foo", "bar"]) 80 | table = Table("foobar", hash_key, range_key, [index]) 81 | self.dynamo.create_table("foobar", hash_key, range_key, indexes=[index]) 82 | desc = self.dynamo.describe_table("foobar") 83 | self.assertEqual(desc, table) 84 | 85 | def test_create_global_index(self): 86 | """Create a table with a global index""" 87 | hash_key = DynamoKey("id", data_type=STRING) 88 | index_field = DynamoKey("name") 89 | index = GlobalIndex.all("name-index", index_field) 90 | table = Table("foobar", hash_key, global_indexes=[index]) 91 | self.dynamo.create_table("foobar", hash_key, global_indexes=[index]) 92 | desc = self.dynamo.describe_table("foobar") 93 | self.assertEqual(desc, table) 94 | 95 | def test_create_global_keys_index(self): 96 | """Create a table with a global KeysOnly index""" 97 | hash_key = DynamoKey("id", data_type=STRING) 98 | index_field = DynamoKey("name") 99 | index = GlobalIndex.keys("name-index", index_field) 100 | table = Table("foobar", hash_key, global_indexes=[index]) 101 | self.dynamo.create_table("foobar", hash_key, global_indexes=[index]) 102 | desc = self.dynamo.describe_table("foobar") 103 | self.assertEqual(desc, table) 104 | 105 | def test_create_global_includes_index(self): 106 | """Create a table with a global Includes index""" 107 | hash_key = DynamoKey("id", data_type=STRING) 108 | index_field = DynamoKey("name") 109 | index = GlobalIndex.include("name-index", index_field, includes=["foo", "bar"]) 110 | table = Table("foobar", hash_key, global_indexes=[index]) 111 | self.dynamo.create_table("foobar", hash_key, global_indexes=[index]) 112 | desc = self.dynamo.describe_table("foobar") 113 | self.assertEqual(desc, table) 114 | 115 | def test_create_global_hash_range_index(self): 116 | """Create a global index with a hash and range key""" 117 | hash_key = 
DynamoKey("id", data_type=STRING) 118 | index_hash = DynamoKey("foo") 119 | index_range = DynamoKey("bar") 120 | index = GlobalIndex.all("foo-index", index_hash, index_range) 121 | table = Table("foobar", hash_key, global_indexes=[index]) 122 | self.dynamo.create_table("foobar", hash_key, global_indexes=[index]) 123 | desc = self.dynamo.describe_table("foobar") 124 | self.assertEqual(desc, table) 125 | 126 | def test_create_table_throughput(self): 127 | """Create a table and set throughput""" 128 | hash_key = DynamoKey("id", data_type=STRING) 129 | throughput = Throughput(8, 2) 130 | table = Table("foobar", hash_key, throughput=throughput) 131 | self.dynamo.create_table("foobar", hash_key=hash_key, throughput=throughput) 132 | desc = self.dynamo.describe_table("foobar") 133 | self.assertEqual(desc, table) 134 | 135 | def test_create_global_index_throughput(self): 136 | """Create a table and set throughput on global index""" 137 | hash_key = DynamoKey("id", data_type=STRING) 138 | throughput = Throughput(8, 2) 139 | index_field = DynamoKey("name") 140 | index = GlobalIndex.all("name-index", index_field, throughput=throughput) 141 | table = Table("foobar", hash_key, global_indexes=[index], throughput=throughput) 142 | self.dynamo.create_table( 143 | "foobar", hash_key=hash_key, global_indexes=[index], throughput=throughput 144 | ) 145 | desc = self.dynamo.describe_table("foobar") 146 | self.assertEqual(desc, table) 147 | 148 | 149 | class TestUpdateTable(BaseSystemTest): 150 | 151 | """Test updating table/index throughput""" 152 | 153 | def test_update_table_throughput(self): 154 | """Update the table throughput""" 155 | hash_key = DynamoKey("id", data_type=STRING) 156 | self.dynamo.create_table("foobar", hash_key=hash_key, throughput=(1, 1)) 157 | tp = Throughput(3, 4) 158 | self.dynamo.update_table("foobar", throughput=tp) 159 | table = self.dynamo.describe_table("foobar") 160 | assert table is not None 161 | self.assertEqual(table.throughput, tp) 162 | 163 | def 
test_update_multiple_throughputs(self): 164 | """Update table and global index throughputs""" 165 | hash_key = DynamoKey("id", data_type=STRING) 166 | index_field = DynamoKey("name") 167 | index = GlobalIndex.all("name-index", index_field, throughput=(2, 3)) 168 | self.dynamo.create_table( 169 | "foobar", 170 | hash_key=hash_key, 171 | global_indexes=[index], 172 | throughput=Throughput(1, 1), 173 | ) 174 | tp = Throughput(3, 4) 175 | self.dynamo.update_table( 176 | "foobar", 177 | throughput=tp, 178 | index_updates=[IndexUpdate.update("name-index", tp)], 179 | ) 180 | table = self.dynamo.describe_table("foobar") 181 | assert table is not None 182 | self.assertEqual(table.throughput, tp) 183 | self.assertEqual(table.global_indexes[0].throughput, tp) 184 | 185 | def test_update_index_throughput(self): 186 | """Update the throughput on a global index""" 187 | hash_key = DynamoKey("id", data_type=STRING) 188 | index_field = DynamoKey("name") 189 | index = GlobalIndex.all("name-index", index_field) 190 | self.dynamo.create_table("foobar", hash_key=hash_key, global_indexes=[index]) 191 | tp = Throughput(2, 1) 192 | self.dynamo.update_table( 193 | "foobar", index_updates=[IndexUpdate.update("name-index", tp)] 194 | ) 195 | table = self.dynamo.describe_table("foobar") 196 | assert table is not None 197 | self.assertEqual(table.global_indexes[0].throughput, tp) 198 | 199 | def test_delete_index(self): 200 | """Delete a global index""" 201 | hash_key = DynamoKey("id", data_type=STRING) 202 | index_field = DynamoKey("name") 203 | index = GlobalIndex.all("name-index", index_field) 204 | self.dynamo.create_table("foobar", hash_key=hash_key, global_indexes=[index]) 205 | self.dynamo.update_table( 206 | "foobar", index_updates=[IndexUpdate.delete("name-index")] 207 | ) 208 | table = self.dynamo.describe_table("foobar") 209 | assert table is not None 210 | self.assertTrue( 211 | len(table.global_indexes) == 0 212 | or table.global_indexes[0].status == "DELETING" 213 | ) 214 | 215 
| def test_create_index(self): 216 | """Create a global index""" 217 | hash_key = DynamoKey("id", data_type=STRING) 218 | self.dynamo.create_table("foobar", hash_key=hash_key) 219 | index_field = DynamoKey("name") 220 | index = GlobalIndex.all("name-index", index_field, hash_key) 221 | self.dynamo.update_table("foobar", index_updates=[IndexUpdate.create(index)]) 222 | table = self.dynamo.describe_table("foobar") 223 | assert table is not None 224 | self.assertEqual(len(table.global_indexes), 1) 225 | 226 | def test_index_update_equality(self): 227 | """IndexUpdates should have sane == behavior""" 228 | self.assertEqual(IndexUpdate.delete("foo"), IndexUpdate.delete("foo")) 229 | collection = set([IndexUpdate.delete("foo")]) 230 | self.assertIn(IndexUpdate.delete("foo"), collection) 231 | self.assertNotEqual(IndexUpdate.delete("foo"), IndexUpdate.delete("bar")) 232 | 233 | def test_update_billing_mode(self): 234 | """Update a table billing mode""" 235 | hash_key = DynamoKey("id", data_type=STRING) 236 | table = self.dynamo.create_table( 237 | "foobar", hash_key=hash_key, billing_mode=PAY_PER_REQUEST 238 | ) 239 | assert table is not None 240 | self.assertEqual(table.billing_mode, PAY_PER_REQUEST) 241 | new_table = self.dynamo.update_table( 242 | "foobar", billing_mode=PROVISIONED, throughput=(2, 3) 243 | ) 244 | assert new_table is not None 245 | self.assertEqual(new_table.billing_mode, PROVISIONED) 246 | self.assertEqual(new_table.throughput, Throughput(2, 3)) 247 | 248 | def test_update_streams(self): 249 | """Update a table streams""" 250 | hash_key = DynamoKey("id", data_type=STRING) 251 | table = self.dynamo.create_table( 252 | "foobar", 253 | hash_key=hash_key, 254 | ) 255 | assert table is not None 256 | self.assertIsNone(table.stream_type) 257 | table = self.dynamo.update_table("foobar", stream=NEW_AND_OLD_IMAGES) 258 | assert table is not None 259 | self.assertEqual(table.stream_type, NEW_AND_OLD_IMAGES) 260 | table = self.dynamo.update_table("foobar", 
stream=False) 261 | assert table is not None 262 | self.assertIsNone(table.stream_type) 263 | 264 | 265 | class TestTTL(BaseSystemTest): 266 | 267 | """Test the TTL features""" 268 | 269 | def test_missing_ttl(self): 270 | """If no table, TTL should be None""" 271 | ttl = self.dynamo.describe_ttl("foobar") 272 | self.assertIsNone(ttl) 273 | 274 | def test_default_ttl(self): 275 | """If no TTL configured, should be default value (disabled)""" 276 | hash_key = DynamoKey("id", data_type=STRING) 277 | self.dynamo.create_table("foobar", hash_key=hash_key) 278 | ttl = self.dynamo.describe_ttl("foobar") 279 | self.assertEqual(ttl, TTL.default()) 280 | 281 | def test_set_ttl(self): 282 | """Can set the TTL for a table""" 283 | hash_key = DynamoKey("id", data_type=STRING) 284 | self.dynamo.create_table("foobar", hash_key=hash_key) 285 | self.dynamo.update_ttl("foobar", "expire", True) 286 | ttl = self.dynamo.describe_ttl("foobar") 287 | self.assertEqual(ttl, TTL("expire", "ENABLED")) 288 | self.dynamo.update_ttl("foobar", "expire", False) 289 | ttl = self.dynamo.describe_ttl("foobar") 290 | self.assertEqual(ttl, TTL.default()) 291 | 292 | def test_describe_table_ttl(self): 293 | """Can make describe_table include the TTL""" 294 | hash_key = DynamoKey("id", data_type=STRING) 295 | self.dynamo.create_table("foobar", hash_key=hash_key) 296 | table = self.dynamo.describe_table("foobar") 297 | assert table is not None 298 | self.assertIsNone(table.ttl) 299 | table = self.dynamo.describe_table("foobar", include_ttl=True) 300 | assert table is not None 301 | self.assertEqual(table.ttl, TTL.default()) 302 | 303 | 304 | class TestBatchWrite(BaseSystemTest): 305 | 306 | """Test the batch write operation""" 307 | 308 | def test_write_items(self): 309 | """Batch write items to table""" 310 | hash_key = DynamoKey("id", data_type=STRING) 311 | self.dynamo.create_table("foobar", hash_key=hash_key) 312 | with self.dynamo.batch_write("foobar") as batch: 313 | batch.put({"id": "a"}) 314 | 
ret = list(self.dynamo.scan("foobar")) 315 | self.assertCountEqual(ret, [{"id": "a"}]) 316 | 317 | def test_delete_items(self): 318 | """Batch write can delete items from table""" 319 | hash_key = DynamoKey("id", data_type=STRING) 320 | self.dynamo.create_table("foobar", hash_key=hash_key) 321 | with self.dynamo.batch_write("foobar") as batch: 322 | batch.put({"id": "a"}) 323 | batch.put({"id": "b"}) 324 | with self.dynamo.batch_write("foobar") as batch: 325 | batch.delete({"id": "b"}) 326 | ret = list(self.dynamo.scan("foobar")) 327 | self.assertCountEqual(ret, [{"id": "a"}]) 328 | 329 | def test_write_many(self): 330 | """Can batch write arbitrary numbers of items""" 331 | hash_key = DynamoKey("id", data_type=STRING) 332 | self.dynamo.create_table("foobar", hash_key=hash_key) 333 | with self.dynamo.batch_write("foobar") as batch: 334 | for i in range(50): 335 | batch.put({"id": str(i)}) 336 | count = self.dynamo.scan("foobar", select="COUNT") 337 | self.assertEqual(count, 50) 338 | with self.dynamo.batch_write("foobar") as batch: 339 | for i in range(50): 340 | batch.delete({"id": str(i)}) 341 | count = self.dynamo.scan("foobar", select="COUNT") 342 | self.assertEqual(count, 0) 343 | 344 | def test_write_converts_none(self): 345 | """Write operation converts None values to a DELETE""" 346 | hash_key = DynamoKey("id", data_type=STRING) 347 | self.dynamo.create_table("foobar", hash_key=hash_key) 348 | self.dynamo.put_item("foobar", {"id": "a", "foo": "bar"}) 349 | with self.dynamo.batch_write("foobar") as batch: 350 | batch.put({"id": "a", "foo": None}) 351 | ret = list(self.dynamo.scan("foobar")) 352 | self.assertCountEqual(ret, [{"id": "a"}]) 353 | 354 | def test_handle_unprocessed(self): 355 | """Retry all unprocessed items""" 356 | conn = MagicMock() 357 | writer = BatchWriter(conn) 358 | action1, action2 = object(), object() 359 | unprocessed = [[action1], [action2], None] 360 | 361 | def replace_call(*_, **kwargs): 362 | actions = unprocessed.pop(0) 363 | ret 
= {} 364 | if actions is not None: 365 | ret["UnprocessedItems"] = { 366 | "foo": actions, 367 | } 368 | return ret 369 | 370 | conn.call.side_effect = replace_call 371 | with writer: 372 | writer.put("foo", {"id": "a"}) 373 | # Should insert the first item, and then the two sets we marked as 374 | # unprocessed 375 | self.assertEqual(len(conn.call.mock_calls), 3) 376 | kwargs = { 377 | "RequestItems": { 378 | "foo": [action1], 379 | }, 380 | } 381 | self.assertEqual(conn.call.mock_calls[1], call("batch_write_item", **kwargs)) 382 | kwargs["RequestItems"]["foo"][0] = action2 383 | self.assertEqual(conn.call.mock_calls[2], call("batch_write_item", **kwargs)) 384 | 385 | def test_exc_aborts(self): 386 | """Exception during a write will not flush data""" 387 | hash_key = DynamoKey("id", data_type=STRING) 388 | self.dynamo.create_table("foobar", hash_key=hash_key) 389 | try: 390 | with self.dynamo.batch_write("foobar") as batch: 391 | batch.put({"id": "a"}) 392 | raise Exception 393 | except Exception: 394 | pass 395 | ret = list(self.dynamo.scan("foobar")) 396 | self.assertEqual(len(ret), 0) 397 | 398 | def test_capacity(self): 399 | """Can return consumed capacity""" 400 | ret = { 401 | "Responses": { 402 | "foo": [], 403 | }, 404 | "ConsumedCapacity": [ 405 | { 406 | "TableName": "foobar", 407 | "ReadCapacityUnits": 6, 408 | "WriteCapacityUnits": 7, 409 | "Table": { 410 | "ReadCapacityUnits": 1, 411 | "WriteCapacityUnits": 2, 412 | }, 413 | "LocalSecondaryIndexes": { 414 | "l-index": { 415 | "ReadCapacityUnits": 2, 416 | "WriteCapacityUnits": 3, 417 | }, 418 | }, 419 | "GlobalSecondaryIndexes": { 420 | "g-index": { 421 | "ReadCapacityUnits": 3, 422 | "WriteCapacityUnits": 4, 423 | }, 424 | }, 425 | } 426 | ], 427 | } 428 | with patch.object(self.dynamo.client, "batch_write_item", return_value=ret): 429 | batch = self.dynamo.batch_write("foobar", return_capacity="INDEXES") 430 | with batch: 431 | batch.put({"id": "a"}) 432 | cap = batch.consumed_capacity 433 | assert 
cap is not None 434 | assert cap.table_capacity is not None 435 | assert cap.local_index_capacity is not None 436 | assert cap.global_index_capacity is not None 437 | self.assertEqual(cap.total, Throughput(6, 7)) 438 | self.assertEqual(cap.table_capacity, Throughput(1, 2)) 439 | self.assertEqual(cap.local_index_capacity["l-index"], Throughput(2, 3)) 440 | self.assertEqual(cap.global_index_capacity["g-index"], Throughput(3, 4)) 441 | 442 | 443 | class TestUpdateItem2(BaseSystemTest): 444 | 445 | """Test the new UpdateItem API""" 446 | 447 | def make_table(self): 448 | """Convenience method for creating a table""" 449 | hash_key = DynamoKey("id") 450 | self.dynamo.create_table("foobar", hash_key=hash_key) 451 | 452 | def test_update_field(self): 453 | """Update an item field""" 454 | self.make_table() 455 | self.dynamo.put_item("foobar", {"id": "a"}) 456 | self.dynamo.update_item("foobar", {"id": "a"}, "SET foo = :bar", bar="bar") 457 | item = list(self.dynamo.scan("foobar"))[0] 458 | self.assertEqual(item, {"id": "a", "foo": "bar"}) 459 | 460 | def test_atomic_add_num(self): 461 | """Update can atomically add to a number""" 462 | self.make_table() 463 | self.dynamo.put_item("foobar", {"id": "a"}) 464 | self.dynamo.update_item("foobar", {"id": "a"}, "ADD foo :foo", foo=1) 465 | self.dynamo.update_item("foobar", {"id": "a"}, "ADD foo :foo", foo=2) 466 | item = list(self.dynamo.scan("foobar"))[0] 467 | self.assertEqual(item, {"id": "a", "foo": 3}) 468 | 469 | def test_atomic_add_set(self): 470 | """Update can atomically add to a set""" 471 | self.make_table() 472 | self.dynamo.put_item("foobar", {"id": "a"}) 473 | self.dynamo.update_item("foobar", {"id": "a"}, "ADD foo :foo", foo=set([1])) 474 | self.dynamo.update_item("foobar", {"id": "a"}, "ADD foo :foo", foo=set([1, 2])) 475 | item = list(self.dynamo.scan("foobar"))[0] 476 | self.assertEqual(item, {"id": "a", "foo": set([1, 2])}) 477 | 478 | def test_delete_field(self): 479 | """Update can delete fields from an 
item""" 480 | self.make_table() 481 | self.dynamo.put_item("foobar", {"id": "a", "foo": "bar"}) 482 | self.dynamo.update_item("foobar", {"id": "a"}, "REMOVE foo") 483 | item = list(self.dynamo.scan("foobar"))[0] 484 | self.assertEqual(item, {"id": "a"}) 485 | 486 | def test_return_item(self): 487 | """Update can return the updated item""" 488 | self.make_table() 489 | self.dynamo.put_item("foobar", {"id": "a"}) 490 | ret = self.dynamo.update_item( 491 | "foobar", {"id": "a"}, "SET foo = :foo", returns=ALL_NEW, foo="bar" 492 | ) 493 | self.assertEqual(ret, {"id": "a", "foo": "bar"}) 494 | 495 | def test_return_metadata(self): 496 | """The Update return value contains capacity metadata""" 497 | self.make_table() 498 | self.dynamo.put_item("foobar", {"id": "a"}) 499 | ret = self.dynamo.update_item( 500 | "foobar", 501 | {"id": "a"}, 502 | "SET foo = :foo", 503 | returns=ALL_NEW, 504 | return_capacity=TOTAL, 505 | foo="bar", 506 | ) 507 | assert ret is not None 508 | assert ret.consumed_capacity is not None 509 | self.assertTrue(isinstance(ret.consumed_capacity.total, Capacity)) 510 | 511 | def test_expect_condition(self): 512 | """Update can expect a field to meet a condition""" 513 | self.make_table() 514 | self.dynamo.put_item("foobar", {"id": "a", "foo": 5}) 515 | with self.assertRaises(CheckFailed): 516 | self.dynamo.update_item( 517 | "foobar", 518 | {"id": "a"}, 519 | "SET foo = :foo", 520 | condition="foo < :max", 521 | foo=10, 522 | max=5, 523 | ) 524 | 525 | def test_expect_condition_or(self): 526 | """Expected conditionals can be OR'd together""" 527 | self.make_table() 528 | self.dynamo.put_item("foobar", {"id": "a", "foo": 5}) 529 | self.dynamo.update_item( 530 | "foobar", 531 | {"id": "a"}, 532 | "SET foo = :foo", 533 | condition="foo < :max OR NOT attribute_exists(baz)", 534 | foo=10, 535 | max=5, 536 | ) 537 | 538 | def test_expression_values(self): 539 | """Can pass in expression values directly""" 540 | self.make_table() 541 | 
self.dynamo.put_item("foobar", {"id": "a", "foo": 5}) 542 | self.dynamo.update_item( 543 | "foobar", 544 | {"id": "a"}, 545 | "SET #f = :foo", 546 | alias={"#f": "foo"}, 547 | expr_values={":foo": 10}, 548 | ) 549 | item = list(self.dynamo.scan("foobar"))[0] 550 | self.assertEqual(item, {"id": "a", "foo": 10}) 551 | 552 | 553 | class TestPutItem2(BaseSystemTest): 554 | 555 | """Tests for new PutItem API""" 556 | 557 | def make_table(self): 558 | """Convenience method for creating a table""" 559 | hash_key = DynamoKey("id") 560 | self.dynamo.create_table("foobar", hash_key=hash_key) 561 | 562 | def test_new_item(self): 563 | """Can Put new item into table""" 564 | self.make_table() 565 | self.dynamo.put_item("foobar", {"id": "a"}) 566 | ret = list(self.dynamo.scan("foobar"))[0] 567 | self.assertEqual(ret, {"id": "a"}) 568 | 569 | def test_overwrite_item(self): 570 | """Can overwrite an existing item""" 571 | self.make_table() 572 | self.dynamo.put_item("foobar", {"id": "a", "foo": "bar"}) 573 | self.dynamo.put_item("foobar", {"id": "a", "foo": "baz"}) 574 | ret = self.dynamo.get_item("foobar", {"id": "a"}) 575 | self.assertEqual(ret, {"id": "a", "foo": "baz"}) 576 | 577 | def test_expect_condition(self): 578 | """Put can expect a field to meet a condition""" 579 | self.make_table() 580 | self.dynamo.put_item("foobar", {"id": "a", "foo": 5}) 581 | with self.assertRaises(CheckFailed): 582 | self.dynamo.put_item( 583 | "foobar", 584 | {"id": "a", "foo": 13}, 585 | condition="#f < :v", 586 | alias={"#f": "foo"}, 587 | v=4, 588 | ) 589 | 590 | def test_expect_condition_or(self): 591 | """Expected conditionals can be OR'd together""" 592 | self.make_table() 593 | self.dynamo.put_item("foobar", {"id": "a", "foo": 5}) 594 | self.dynamo.put_item( 595 | "foobar", 596 | {"id": "a", "foo": 13}, 597 | condition="foo < :v OR attribute_not_exists(baz)", 598 | v=4, 599 | ) 600 | 601 | def test_return_item(self): 602 | """PutItem can return the item that was Put""" 603 | 
self.make_table() 604 | self.dynamo.put_item("foobar", {"id": "a"}) 605 | ret = self.dynamo.put_item("foobar", {"id": "a"}, returns=ALL_OLD) 606 | self.assertEqual(ret, {"id": "a"}) 607 | 608 | def test_return_capacity(self): 609 | """PutItem can return the consumed capacity""" 610 | self.make_table() 611 | self.dynamo.put_item("foobar", {"id": "a"}) 612 | ret = self.dynamo.put_item( 613 | "foobar", {"id": "a"}, returns=ALL_OLD, return_capacity=TOTAL 614 | ) 615 | assert ret.consumed_capacity is not None 616 | self.assertTrue(isinstance(ret.consumed_capacity.total, Capacity)) 617 | 618 | 619 | class TestDeleteItem2(BaseSystemTest): 620 | 621 | """Tests for the new DeleteItem API""" 622 | 623 | def make_table(self): 624 | """Convenience method for creating a table""" 625 | hash_key = DynamoKey("id") 626 | self.dynamo.create_table("foobar", hash_key=hash_key) 627 | 628 | def test_delete(self): 629 | """Delete an item""" 630 | self.make_table() 631 | self.dynamo.put_item("foobar", {"id": "a"}) 632 | self.dynamo.delete_item("foobar", {"id": "a"}) 633 | num = self.dynamo.scan("foobar", select="COUNT") 634 | self.assertEqual(num, 0) 635 | 636 | def test_return_item(self): 637 | """Delete can return the deleted item""" 638 | self.make_table() 639 | self.dynamo.put_item("foobar", {"id": "a", "foo": "bar"}) 640 | ret = self.dynamo.delete_item("foobar", {"id": "a"}, returns=ALL_OLD) 641 | self.assertEqual(ret, {"id": "a", "foo": "bar"}) 642 | 643 | def test_return_metadata(self): 644 | """The Delete return value contains capacity metadata""" 645 | self.make_table() 646 | self.dynamo.put_item("foobar", {"id": "a"}) 647 | ret = self.dynamo.delete_item( 648 | "foobar", {"id": "a"}, returns=ALL_OLD, return_capacity=TOTAL 649 | ) 650 | assert ret.consumed_capacity is not None 651 | self.assertTrue(isinstance(ret.consumed_capacity.total, Capacity)) 652 | 653 | def test_expect_not_exists(self): 654 | """Delete can expect a field to not exist""" 655 | self.make_table() 656 | 
self.dynamo.put_item("foobar", {"id": "a", "foo": "bar"}) 657 | with self.assertRaises(CheckFailed): 658 | self.dynamo.delete_item( 659 | "foobar", {"id": "a"}, condition="NOT attribute_exists(foo)" 660 | ) 661 | 662 | def test_expect_field(self): 663 | """Delete can expect a field to have a value""" 664 | self.make_table() 665 | self.dynamo.put_item("foobar", {"id": "a", "foo": "bar"}) 666 | with self.assertRaises(CheckFailed): 667 | self.dynamo.delete_item( 668 | "foobar", 669 | {"id": "a"}, 670 | condition="#f = :foo", 671 | alias={"#f": "foo"}, 672 | foo="baz", 673 | ) 674 | 675 | def test_expect_condition(self): 676 | """Delete can expect a field to meet a condition""" 677 | self.make_table() 678 | self.dynamo.put_item("foobar", {"id": "a", "foo": 5}) 679 | with self.assertRaises(CheckFailed): 680 | self.dynamo.delete_item( 681 | "foobar", {"id": "a"}, condition="foo < :low", expr_values={":low": 4} 682 | ) 683 | 684 | def test_expect_condition_or(self): 685 | """Expected conditionals can be OR'd together""" 686 | self.make_table() 687 | self.dynamo.put_item("foobar", {"id": "a", "foo": 5}) 688 | self.dynamo.delete_item( 689 | "foobar", 690 | {"id": "a"}, 691 | condition="foo < :foo OR NOT attribute_exists(baz)", 692 | foo=4, 693 | ) 694 | 695 | 696 | class TestTransactWrite(BaseSystemTest): 697 | 698 | """Test the TransactWriteItems operation""" 699 | 700 | def test_put_items(self): 701 | """Can put items""" 702 | hash_key = DynamoKey("id", data_type=STRING) 703 | self.dynamo.create_table("foobar", hash_key=hash_key) 704 | with self.dynamo.txn_write() as batch: 705 | batch.put("foobar", {"id": "a"}) 706 | ret = list(self.dynamo.scan("foobar")) 707 | self.assertCountEqual(ret, [{"id": "a"}]) 708 | 709 | def test_delete_items(self): 710 | """Can delete items from table""" 711 | hash_key = DynamoKey("id", data_type=STRING) 712 | self.dynamo.create_table("foobar", hash_key=hash_key) 713 | with self.dynamo.batch_write("foobar") as batch: 714 | batch.put({"id": 
"a"}) 715 | batch.put({"id": "b"}) 716 | with self.dynamo.txn_write() as batch: 717 | batch.delete("foobar", {"id": "b"}) 718 | ret = list(self.dynamo.scan("foobar")) 719 | self.assertCountEqual(ret, [{"id": "a"}]) 720 | 721 | def test_update(self): 722 | """Can update items in table""" 723 | hash_key = DynamoKey("id", data_type=STRING) 724 | self.dynamo.create_table("foobar", hash_key=hash_key) 725 | self.dynamo.put_item("foobar", {"id": "a"}) 726 | with self.dynamo.txn_write() as batch: 727 | batch.update("foobar", {"id": "a"}, "SET foo = :bar", bar="bar") 728 | item = list(self.dynamo.scan("foobar"))[0] 729 | self.assertEqual(item, {"id": "a", "foo": "bar"}) 730 | 731 | def test_check(self): 732 | """Will only perform actions if check passes""" 733 | hash_key = DynamoKey("id", data_type=STRING) 734 | self.dynamo.create_table("foobar", hash_key=hash_key) 735 | self.dynamo.put_item("foobar", {"id": "a"}) 736 | with self.assertRaises(TransactionCanceledException): 737 | with self.dynamo.txn_write() as batch: 738 | batch.update("foobar", {"id": "a"}, "SET foo = :bar", bar="bar") 739 | batch.check("foobar", {"id": "b"}, "attribute_exists(id)") 740 | item = list(self.dynamo.scan("foobar"))[0] 741 | self.assertEqual(item, {"id": "a"}) 742 | 743 | def test_fail_if_any_fail(self): 744 | """Transaction fails if any condition fails""" 745 | hash_key = DynamoKey("id", data_type=STRING) 746 | self.dynamo.create_table("foobar", hash_key=hash_key) 747 | items = [ 748 | {"id": "a", "foo": 1}, 749 | {"id": "b", "foo": 2}, 750 | {"id": "c", "foo": 3}, 751 | ] 752 | with self.dynamo.txn_write() as batch: 753 | for item in items: 754 | batch.put("foobar", item) 755 | with self.assertRaises(TransactionCanceledException): 756 | with self.dynamo.txn_write() as batch: 757 | batch.update( 758 | "foobar", 759 | {"id": "a"}, 760 | "SET bar = :bar", 761 | "foo < :limit", 762 | bar="bar", 763 | limit=3, 764 | ) 765 | batch.update( 766 | "foobar", 767 | {"id": "b"}, 768 | "SET bar = :bar", 
769 | "foo < :limit", 770 | bar="bar", 771 | limit=3, 772 | ) 773 | batch.update( 774 | "foobar", 775 | {"id": "c"}, 776 | "SET bar = :bar", 777 | "foo < :limit", 778 | bar="bar", 779 | limit=3, 780 | ) 781 | scan_items = list(self.dynamo.scan("foobar")) 782 | self.assertCountEqual(scan_items, items) 783 | 784 | def test_idempotent(self): 785 | """If using token, operation is idempotent""" 786 | hash_key = DynamoKey("id", data_type=STRING) 787 | self.dynamo.create_table("foobar", hash_key=hash_key) 788 | self.dynamo.put_item("foobar", {"id": "a"}) 789 | token = "asdf" 790 | with self.dynamo.txn_write(token=token) as batch: 791 | batch.update("foobar", {"id": "a"}, "ADD foo :bar", bar=1) 792 | with self.dynamo.txn_write(token=token) as batch: 793 | batch.update("foobar", {"id": "a"}, "ADD foo :bar", bar=1) 794 | item = list(self.dynamo.scan("foobar"))[0] 795 | self.assertEqual(item, {"id": "a", "foo": 1}) 796 | 797 | def test_return_capacity(self): 798 | """Can return consumed capacity""" 799 | hash_key = DynamoKey("id", data_type=STRING) 800 | self.dynamo.create_table("foobar", hash_key=hash_key) 801 | self.dynamo.put_item("foobar", {"id": "a"}) 802 | with self.dynamo.txn_write(return_capacity=TOTAL) as batch: 803 | batch.update("foobar", {"id": "a"}, "ADD foo :bar", bar=1) 804 | assert batch.consumed_capacity is not None 805 | cap = batch.consumed_capacity["foobar"] 806 | self.assertTrue(cap.total.write > 0) 807 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | [tox] 2 | envlist = py36, py37, py38, py39, lint 3 | 4 | [testenv] 5 | deps = -rrequirements_test.txt 6 | commands = 7 | {envpython} setup.py nosetests --verbosity=2 8 | 9 | [testenv:py38] 10 | deps = 11 | {[testenv]deps} 12 | coverage 13 | commands = 14 | coverage run --source=dynamo3 --branch setup.py nosetests 15 | 16 | [testenv:lint] 17 | basepython = python3 18 | ignore_errors 
= true 19 | commands = 20 | {envpython} setup.py check -m -s 21 | black --check dynamo3 tests setup.py 22 | isort -c dynamo3 tests setup.py 23 | mypy dynamo3 tests 24 | pylint --rcfile=.pylintrc dynamo3 tests 25 | 26 | [testenv:coverage] 27 | deps = 28 | {[testenv]deps} 29 | coverage 30 | commands = 31 | coverage run --source=dynamo3 --branch setup.py nosetests 32 | coverage html 33 | 34 | [testenv:format] 35 | basepython = python3 36 | commands = 37 | isort --atomic dynamo3 tests setup.py 38 | black dynamo3 tests setup.py 39 | 40 | [testenv:coveralls] 41 | deps = 42 | wheel 43 | coveralls 44 | passenv = 45 | GITHUB_ACTIONS 46 | GITHUB_TOKEN 47 | GITHUB_REF 48 | GITHUB_HEAD_REF 49 | commands = 50 | ls -lh .coverage 51 | coveralls --service=github 52 | 53 | [gh-actions] 54 | python = 55 | 3.6: py36 56 | 3.7: py37 57 | 3.8: py38 58 | 3.9: py39, lint 59 | --------------------------------------------------------------------------------