├── .gitignore ├── CHANGELOG.md ├── LICENSE ├── MANIFEST.in ├── README.md ├── queryish ├── __init__.py └── rest.py ├── setup.cfg ├── setup.py └── tests ├── __init__.py ├── test.py └── test_rest.py /.gitignore: -------------------------------------------------------------------------------- 1 | /queryish.egg-info 2 | /dist 3 | /build 4 | __pycache__ 5 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | Changelog 2 | ========= 3 | 4 | 0.2 (2023-09-05) 5 | ---------------- 6 | 7 | * Introduce virtual models as a closer drop-in replacement for model classes 8 | * Support `detail_url` endpoints on `APIQuerySet` for retrieving individual records 9 | * Implement `in_bulk` on `APIQuerySet` 10 | * Allow customising HTTP headers on `APIQuerySet` 11 | * Documentation 12 | 13 | 14 | 0.1 (2023-05-30) 15 | ---------------- 16 | 17 | * Initial release 18 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2023-present Torchbox Ltd and individual contributors. 2 | All rights reserved. 3 | 4 | Redistribution and use in source and binary forms, with or without modification, 5 | are permitted provided that the following conditions are met: 6 | 7 | * Redistributions of source code must retain the above copyright notice, 8 | this list of conditions and the following disclaimer. 9 | 10 | * Redistributions in binary form must reproduce the above copyright 11 | notice, this list of conditions and the following disclaimer in the 12 | documentation and/or other materials provided with the distribution. 13 | 14 | * Neither the name of Torchbox nor the names of its contributors may be used 15 | to endorse or promote products derived from this software without 16 | specific prior written permission. 17 | 18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 19 | ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 20 | WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 21 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR 22 | ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 23 | (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 24 | LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 25 | ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 26 | (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 27 | SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 28 | -------------------------------------------------------------------------------- /MANIFEST.in: -------------------------------------------------------------------------------- 1 | include LICENSE *.md 2 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # queryish 2 | 3 | A Python library for constructing queries on arbitrary data sources following Django's QuerySet API. 4 | 5 | ## Motivation 6 | 7 | Django's QuerySet API is a powerful tool for constructing queries on a database. 
It allows you to compose queries incrementally, with the query only being executed when the results are needed: 8 | 9 | ```python 10 | books = Book.objects.all() 11 | python_books = books.filter(topic='python') 12 | latest_python_books = python_books.order_by('-publication_date')[:5] 13 | print(latest_python_books) # Query is executed here 14 | ``` 15 | 16 | This pattern is a good fit for building web interfaces for listing data, as it allows filtering, ordering and pagination to be handled as separate steps. 17 | 18 | We may often be required to implement similar interfaces for data taken from sources other than a database, such as a REST API or a search engine. In these cases, we would like to have a similarly rich API for constructing queries to these data sources. Even better would be to follow the QuerySet API as closely as possible, so that we can take advantage of ready-made tools such as [Django's generic class-based views](https://docs.djangoproject.com/en/stable/topics/class-based-views/) that are designed to work with this API. 19 | 20 | _queryish_ is a library for building wrappers around data sources that replicate the QuerySet API, allowing you to work with the data in the same way that you would with querysets and models. 21 | 22 | ## Installation 23 | 24 | Install using pip: 25 | 26 | ```bash 27 | pip install queryish 28 | ``` 29 | 30 | ## Usage - REST APIs 31 | 32 | _queryish_ provides a base class `queryish.rest.APIModel` for wrapping REST APIs. By default, this follows the out-of-the-box structure served by [Django REST Framework](https://www.django-rest-framework.org/), but various options are available to customise this. 33 | 34 | ```python 35 | from queryish.rest import APIModel 36 | 37 | class Party(APIModel): 38 | class Meta: 39 | base_url = "https://demozoo.org/api/v1/parties/" 40 | fields = ["id", "name", "start_date", "end_date", "location", "country_code"] 41 | pagination_style = "page-number" 42 | page_size = 100 43 | 44 | def __str__(self): 45 | return self.name 46 | ``` 47 | 48 | The resulting class has an `objects` property that supports the usual filtering, ordering and slicing operations familiar from Django's QuerySet API, although these may be limited by the capabilities of the REST API being accessed. 49 | 50 | ```python 51 | >>> Party.objects.count() 52 | 4623 53 | >>> Party.objects.filter(country_code="GB")[:10] 54 | <PartyQuerySet [<Party: ...>, <Party: ...>, <Party: ...>, <Party: ...>, <Party: ...>, <Party: ...>, <Party: ...>, <Party: ...>, <Party: ...>, <Party: ...>]> 55 | >>> Party.objects.get(name="Nova 2023") 56 | <Party: Nova 2023> 57 | ``` 58 | 59 | Methods supported include `all`, `count`, `filter`, `order_by`, `get`, `first`, and `in_bulk`. The result set can be sliced at arbitrary indices - these do not have to match the pagination supported by the underlying API. `APIModel` will automatically make multiple API requests as required. 60 | 61 | The following attributes are available on `APIModel.Meta`: 62 | 63 | * `base_url`: The base URL of the API from where results can be fetched. 64 | * `pk_field_name`: The name of the primary key field. Defaults to `"id"`. Lookups on the field name `"pk"` will be mapped to this field. 65 | * `detail_url`: A string template for the URL of a single object, such as `"https://demozoo.org/api/v1/parties/%s/"`. If this is specified, lookups on the primary key and no other fields will be directed to this URL rather than `base_url`. 66 | * `fields`: A list of field names defined in the API response that will be copied to attributes of the returned object. 67 | * `pagination_style`: The style of pagination used by the API.
Recognised values are `"page-number"` and `"offset-limit"`; all others (including the default of `None`) indicate no pagination. 68 | * `page_size`: Required if `pagination_style` is `"page-number"` - the number of results per page returned by the API. 69 | * `page_query_param`: The name of the URL query parameter used to specify the page number. Defaults to `"page"`. 70 | * `offset_query_param`: The name of the URL query parameter used to specify the offset. Defaults to `"offset"`. 71 | * `limit_query_param`: The name of the URL query parameter used to specify the limit. Defaults to `"limit"`. 72 | * `ordering_query_param`: The name of the URL query parameter used to specify the ordering. Defaults to `"ordering"`. 73 | 74 | To accommodate APIs where the returned JSON does not map cleanly to the intended set of model attributes, the class methods `from_query_data` and `from_individual_data` on `APIModel` can be overridden: 75 | 76 | ```python 77 | class Pokemon(APIModel): 78 | class Meta: 79 | base_url = "https://pokeapi.co/api/v2/pokemon/" 80 | detail_url = "https://pokeapi.co/api/v2/pokemon/%s/" 81 | fields = ["id", "name"] 82 | pagination_style = "offset-limit" 83 | verbose_name_plural = "pokemon" 84 | 85 | @classmethod 86 | def from_query_data(cls, data): 87 | """ 88 | Given a record returned from the listing endpoint (base_url), return an instance of the model. 89 | """ 90 | # Records within the listing endpoint return a `url` field, from which we want to extract the ID 91 | return cls( 92 | id=int(re.match(r'https://pokeapi.co/api/v2/pokemon/(\d+)/', data['url']).group(1)), 93 | name=data['name'], 94 | ) 95 | 96 | @classmethod 97 | def from_individual_data(cls, data): 98 | """ 99 | Given a record returned from the detail endpoint (detail_url), return an instance of the model. 100 | """ 101 | return cls( 102 | id=data['id'], 103 | name=data['name'], 104 | ) 105 | 106 | def __str__(self): 107 | return self.name 108 | ``` 109 | 110 | ## Customising the REST API queryset class 111 | 112 | The `objects` attribute of an `APIModel` subclass is an instance of `queryish.rest.APIQuerySet` which initially consists of the complete set of records. As with Django's QuerySet, methods such as `filter` return a new instance. 113 | 114 | It may be necessary to subclass `APIQuerySet` and override methods in order to support certain API responses. For example, the base implementation expects unpaginated API endpoints to return a list as the top-level JSON object, and paginated API endpoints to return a dict with a `results` item. 
If the API you are working with returns a different structure, you can override the `get_results_from_response` method to extract the list of results from the response: 115 | 116 | ```python 117 | from queryish.rest import APIQuerySet 118 | 119 | class TreeQuerySet(APIQuerySet): 120 | base_url = "https://api.data.amsterdam.nl/v1/bomen/stamgegevens/" 121 | pagination_style = "page-number" 122 | page_size = 20 123 | http_headers = {"Accept": "application/hal+json"} 124 | 125 | def get_results_from_response(self, response): 126 | return response["_embedded"]["stamgegevens"] 127 | ``` 128 | 129 | `APIQuerySet` subclasses can be instantiated independently of an `APIModel`, but results will be returned as plain JSON values: 130 | 131 | ```python 132 | >>> TreeQuerySet().filter(jaarVanAanleg=1986).first() 133 | {'_links': {'schema': 'https://schemas.data.amsterdam.nl/datasets/bomen/dataset#stamgegevens', 'self': {'href': 'https://api.data.amsterdam.nl/v1/bomen/stamgegevens/1101570/', 'title': '1101570', 'id': 1101570}, 'gbdBuurt': {'href': 'https://api.data.amsterdam.nl/v1/gebieden/buurten/03630980000211/', 'title': '03630980000211', 'identificatie': '03630980000211'}}, 'id': 1101570, 'gbdBuurtId': '03630980000211', 'geometrie': {'type': 'Point', 'coordinates': [115162.72, 485972.68]}, 'boomhoogteklasseActueel': 'c. 9 tot 12 m.', 'jaarVanAanleg': 1986, 'soortnaam': "Salix alba 'Chermesina'", 'stamdiameterklasse': '0,5 tot 1 m.', 'typeObject': 'Gekandelaberde boom', 'typeSoortnaam': 'Bomen', 'soortnaamKort': 'Salix', 'soortnaamTop': 'Wilg (Salix)'} 134 | ``` 135 | 136 | This can be overridden by defining a `model` attribute on the queryset, or overriding the `get_instance` / `get_individual_instance` methods. To use a customised queryset with an `APIModel`, define the `base_query_class` attribute on the model class: 137 | 138 | ```python 139 | class Tree(APIModel): 140 | base_query_class = TreeQuerySet 141 | class Meta: 142 | fields = ["id", "geometrie", "boomhoogteklasseActueel", "jaarVanAanleg", "soortnaam", "soortnaamKort"] 143 | 144 | # >>> Tree.objects.filter(jaarVanAanleg=1986).first() 145 | # 146 | ``` 147 | 148 | ## Other data sources 149 | 150 | _queryish_ is not limited to REST APIs - the base class `queryish.Queryish` can be used to build a QuerySet-like API around any data source. At minimum, this requires defining a `run_query` method that returns an iterable of records that is filtered, ordered and sliced according to the queryset's attributes. For example, a queryset implementation that works from a simple in-memory list of objects might look like this: 151 | 152 | ```python 153 | from queryish import Queryish 154 | 155 | class CountryQuerySet(Queryish): 156 | def run_query(self): 157 | countries = [ 158 | {"code": "nl", "name": "Netherlands"}, 159 | {"code": "de", "name": "Germany"}, 160 | {"code": "fr", "name": "France"}, 161 | {"code": "gb", "name": "United Kingdom"}, 162 | {"code": "us", "name": "United States"}, 163 | ] 164 | 165 | # Filter the list of countries by `self.filters` - a list of (key, value) tuples 166 | for (key, val) in self.filters: 167 | countries = [c for c in countries if c[key] == val] 168 | 169 | # Sort the list of countries by `self.ordering` - a tuple of field names 170 | countries.sort(key=lambda c: [c.get(field, None) for field in self.ordering]) 171 | 172 | # Slice the list of countries by `self.offset` and `self.limit`. `offset` is always numeric 173 | # and defaults to 0 for an unsliced list; `limit` is either numeric or None (denoting no limit). 
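# Note: the conditional expression in the slice below groups as `(self.offset + self.limit) if self.limit else None`, so a falsy limit of 0 would be treated the same as no limit; testing `self.limit is not None` would be the stricter check.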
174 | return countries[self.offset : self.offset + self.limit if self.limit else None] 175 | ``` 176 | 177 | Subclasses will also typically override the method `run_count`, which returns the number of records in the queryset accounting for any filtering and slicing. If this is not overridden, the default implementation will call `run_query` and count the results. 178 | -------------------------------------------------------------------------------- /queryish/__init__.py: -------------------------------------------------------------------------------- 1 | import copy 2 | import re 3 | 4 | 5 | class Queryish: 6 | def __init__(self): 7 | self._results = None 8 | self._count = None 9 | self.offset = 0 10 | self.limit = None 11 | self.filters = [] 12 | self.filter_fields = None 13 | self.ordering = () 14 | self.ordering_fields = None 15 | 16 | def run_query(self): 17 | raise NotImplementedError 18 | 19 | def run_count(self): 20 | count = 0 21 | for i in self: 22 | count += 1 23 | return count 24 | 25 | def __iter__(self): 26 | if self._results is None: 27 | results = self.run_query() 28 | if isinstance(results, list): 29 | self._results = results 30 | for result in results: 31 | yield result 32 | else: 33 | results_list = [] 34 | for result in results: 35 | results_list.append(result) 36 | yield result 37 | self._results = results_list 38 | else: 39 | yield from self._results 40 | 41 | def count(self): 42 | if self._count is None: 43 | if self._results is not None: 44 | self._count = len(self._results) 45 | else: 46 | self._count = self.run_count() 47 | return self._count 48 | 49 | def __len__(self): 50 | # __len__ must run the full query 51 | if self._results is None: 52 | self._results = list(self.run_query()) 53 | return len(self._results) 54 | 55 | def clone(self, **kwargs): 56 | clone = copy.copy(self) 57 | clone._results = None 58 | clone._count = None 59 | clone.filters = self.filters.copy() 60 | for key, value in kwargs.items(): 61 | setattr(clone, key, value) 62 | return clone 63 | 64 | def filter_is_valid(self, key, val): 65 | if self.filter_fields is not None and key not in self.filter_fields: 66 | return False 67 | return True 68 | 69 | def filter(self, **kwargs): 70 | clone = self.clone() 71 | for key, val in kwargs.items(): 72 | if self.filter_is_valid(key, val): 73 | clone.filters.append((key, val)) 74 | else: 75 | raise ValueError("Invalid filter field: %s" % key) 76 | return clone 77 | 78 | def ordering_is_valid(self, key): 79 | if self.ordering_fields is not None and key not in self.ordering_fields: 80 | return False 81 | return True 82 | 83 | def order_by(self, *args): 84 | ordering = [] 85 | for key in args: 86 | if self.ordering_is_valid(key): 87 | ordering.append(key) 88 | else: 89 | raise ValueError("Invalid ordering field: %s" % key) 90 | return self.clone(ordering=tuple(ordering)) 91 | 92 | def get(self, **kwargs): 93 | results = list(self.filter(**kwargs)[:2]) 94 | if len(results) == 0: 95 | raise ValueError("No results found") 96 | elif len(results) > 1: 97 | raise ValueError("Multiple results found") 98 | else: 99 | return results[0] 100 | 101 | def first(self): 102 | results = list(self[:1]) 103 | try: 104 | return results[0] 105 | except IndexError: 106 | return None 107 | 108 | def all(self): 109 | return self 110 | 111 | @property 112 | def ordered(self): 113 | return bool(self.ordering) 114 | 115 | def __getitem__(self, key): 116 | if isinstance(key, slice): 117 | if key.step is not None: 118 | raise ValueError("%r does not support slicing with a step" % 
self.__class__.__name__) 119 | 120 | # Adjust the requested start/stop values to be relative to the full queryset 121 | absolute_start = (key.start or 0) + self.offset 122 | if key.stop is None: 123 | absolute_stop = None 124 | else: 125 | absolute_stop = key.stop + self.offset 126 | 127 | # find the absolute stop value corresponding to the current limit 128 | if self.limit is None: 129 | current_absolute_stop = None 130 | else: 131 | current_absolute_stop = self.offset + self.limit 132 | 133 | if absolute_stop is None: 134 | final_absolute_stop = current_absolute_stop 135 | elif current_absolute_stop is None: 136 | final_absolute_stop = absolute_stop 137 | else: 138 | final_absolute_stop = min(current_absolute_stop, absolute_stop) 139 | 140 | if final_absolute_stop is None: 141 | new_limit = None 142 | else: 143 | new_limit = final_absolute_stop - absolute_start 144 | 145 | clone = self.clone(offset=absolute_start, limit=new_limit) 146 | if self._results: 147 | clone._results = self._results[key] 148 | return clone 149 | elif isinstance(key, int): 150 | if key < 0: 151 | raise IndexError("Negative indexing is not supported") 152 | if self._results is None: 153 | self._results = list(self.run_query()) 154 | return self._results[key] 155 | else: 156 | raise TypeError( 157 | "%r indices must be integers or slices, not %s" 158 | % (self.__class__.__name__, type(key).__name__) 159 | ) 160 | 161 | def __repr__(self): 162 | items = list(self[:21]) 163 | if len(items) > 20: 164 | items[-1] = "...(remaining elements truncated)..." 165 | return "<%s %r>" % (self.__class__.__name__, items) 166 | 167 | 168 | class VirtualModelOptions: 169 | def __init__(self, model_name, fields, verbose_name, verbose_name_plural): 170 | self.model_name = model_name 171 | self.fields = fields 172 | self.verbose_name = verbose_name 173 | self.verbose_name_plural = verbose_name_plural 174 | 175 | 176 | class VirtualModelMetaclass(type): 177 | def __new__(cls, name, bases, attrs): 178 | model = super().__new__(cls, name, bases, attrs) 179 | meta = getattr(model, "Meta", None) 180 | 181 | if model.base_query_class: 182 | # construct a queryset subclass with a 'model' attribute 183 | # and any additional attributes defined on the Meta class 184 | dct = { 185 | "model": model, 186 | } 187 | if meta: 188 | for attr in dir(meta): 189 | # attr must be defined on base_query_class to be valid 190 | if hasattr(model.base_query_class, attr) and not attr.startswith("_"): 191 | dct[attr] = getattr(meta, attr) 192 | 193 | # create the queryset subclass 194 | model.query_class = type("%sQuerySet" % name, (model.base_query_class,), dct) 195 | 196 | # Make an `objects` attribute available on the class 197 | model.objects = model._default_manager = model.query_class() 198 | 199 | # construct a VirtualModelOptions instance to use as the _meta attribute 200 | verbose_name = getattr(meta, "verbose_name", None) 201 | if verbose_name is None: 202 | re_camel_case = re.compile(r"(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))") 203 | verbose_name = re_camel_case.sub(r" \1", name).strip().lower() 204 | 205 | model._meta = VirtualModelOptions( 206 | model_name=name.lower(), 207 | fields=getattr(meta, "fields", []), 208 | verbose_name=verbose_name, 209 | verbose_name_plural=getattr(meta, "verbose_name_plural", verbose_name + "s"), 210 | ) 211 | 212 | return model 213 | 214 | 215 | class VirtualModel(metaclass=VirtualModelMetaclass): 216 | base_query_class = None 217 | pk_field_name = "id" 218 | 219 | @classmethod 220 | def from_query_data(cls, data): 221 | 
return cls(**data) 222 | 223 | @classmethod 224 | def from_individual_data(cls, data): 225 | return cls.from_query_data(data) 226 | 227 | def __init__(self, **kwargs): 228 | for field in self._meta.fields: 229 | setattr(self, field, kwargs.get(field)) 230 | self.pk = kwargs.get(self.pk_field_name) 231 | 232 | def __str__(self): 233 | return f"{self.__class__.__name__} object ({self.pk})" 234 | 235 | def __repr__(self): 236 | return f"<{self.__class__.__name__}: {str(self)}>" 237 | -------------------------------------------------------------------------------- /queryish/rest.py: -------------------------------------------------------------------------------- 1 | from functools import cached_property 2 | import requests 3 | 4 | from queryish import Queryish, VirtualModel 5 | 6 | 7 | class APIQuerySet(Queryish): 8 | base_url = None 9 | detail_url = None 10 | pagination_style = None 11 | pk_field_name = "id" 12 | limit_query_param = "limit" 13 | offset_query_param = "offset" 14 | page_query_param = "page" 15 | ordering_query_param = "ordering" 16 | model = None 17 | page_size = None 18 | http_headers = {"Accept": "application/json"} 19 | 20 | def __init__(self): 21 | super().__init__() 22 | self._responses = {} # cache for API responses 23 | 24 | @cached_property 25 | def filter_field_aliases(self): 26 | return {"pk": self.pk_field_name} 27 | 28 | def filter_is_valid(self, key, val): 29 | if key in self.filter_field_aliases: 30 | key = self.filter_field_aliases[key] 31 | return super().filter_is_valid(key, val) 32 | 33 | def get_filters_as_query_dict(self): 34 | params = {} 35 | for key, val in self.filters: 36 | # map key to the real API field name, if present in filter_field_aliases 37 | key = self.filter_field_aliases.get(key, key) 38 | 39 | if key in params: 40 | if isinstance(params[key], list): 41 | params[key].append(val) 42 | else: 43 | params[key] = [params[key], val] 44 | else: 45 | params[key] = val 46 | return params 47 | 48 | def get_instance(self, val): 49 | if self.model: 50 | return self.model.from_query_data(val) 51 | else: 52 | return val 53 | 54 | def get_individual_instance(self, val): 55 | if self.model: 56 | return self.model.from_individual_data(val) 57 | else: 58 | return val 59 | 60 | def get_detail_url(self, pk): 61 | return self.detail_url % pk 62 | 63 | def run_query(self): 64 | params = self.get_filters_as_query_dict() 65 | 66 | if list(params.keys()) == [self.pk_field_name] and self.detail_url: 67 | # if the only filter is the pk, we can use the detail view 68 | # to fetch the single instance 69 | yield self.get_individual_instance(self.fetch_api_response( 70 | url=self.get_detail_url(params[self.pk_field_name]), 71 | )) 72 | return 73 | 74 | if self.ordering: 75 | params[self.ordering_query_param] = ",".join(self.ordering) 76 | 77 | if self.pagination_style == "offset-limit": 78 | offset = self.offset 79 | limit = self.limit 80 | returned_result_count = 0 81 | 82 | while True: 83 | # continue fetching pages of results until we reach either 84 | # the end of the result set or the end of the slice 85 | response_json = self.fetch_api_response(params={ 86 | self.offset_query_param: offset, 87 | self.limit_query_param: limit, 88 | **params, 89 | }) 90 | results_page = self.get_results_from_response(response_json) 91 | for result in results_page: 92 | yield self.get_instance(result) 93 | returned_result_count += 1 94 | if limit is not None and returned_result_count >= limit: 95 | return 96 | if len(results_page) == 0 or offset + len(results_page) >= 
response_json["count"]: 97 | # we've reached the end of the result set 98 | return 99 | 100 | offset += len(results_page) 101 | if limit is not None: 102 | limit -= len(results_page) 103 | elif self.pagination_style == "page-number": 104 | offset = self.offset 105 | limit = self.limit 106 | returned_result_count = 0 107 | 108 | while True: 109 | # continue fetching pages of results until we reach either 110 | # the end of the result set or the end of the slice 111 | page = 1 + offset // self.page_size 112 | response_json = self.fetch_api_response(params={ 113 | self.page_query_param: page, 114 | **params, 115 | }) 116 | results_page = self.get_results_from_response(response_json) 117 | results_page_offset = offset % self.page_size 118 | for result in results_page[results_page_offset:]: 119 | yield self.get_instance(result) 120 | returned_result_count += 1 121 | if self.limit is not None and returned_result_count >= self.limit: 122 | return 123 | if len(results_page) == 0 or offset + len(results_page) >= response_json["count"]: 124 | # we've reached the end of the result set 125 | return 126 | 127 | offset += len(results_page) 128 | if limit is not None: 129 | limit -= len(results_page) 130 | else: 131 | response_json = self.fetch_api_response(params=params) 132 | if self.limit is None: 133 | stop = None 134 | else: 135 | stop = self.offset + self.limit 136 | results = self.get_results_from_response(response_json) 137 | for item in results[self.offset:stop]: 138 | yield self.get_instance(item) 139 | 140 | def run_count(self): 141 | params = self.get_filters_as_query_dict() 142 | 143 | if self.pagination_style == "offset-limit" or self.pagination_style == "page-number": 144 | if self.pagination_style == "offset-limit": 145 | params[self.limit_query_param] = 1 146 | else: 147 | params[self.page_query_param] = 1 148 | 149 | response_json = self.fetch_api_response(params=params) 150 | count = response_json["count"] 151 | # count is the full result set without considering slicing; 152 | # we need to adjust it to the slice 153 | if self.limit is not None: 154 | count = min(count, self.limit) 155 | count = max(0, count - self.offset) 156 | return count 157 | 158 | else: 159 | # default to standard behaviour of getting all results and counting them 160 | return super().run_count() 161 | 162 | def fetch_api_response(self, url=None, params=None): 163 | # construct a hashable key for the params 164 | if url is None: 165 | url = self.base_url 166 | 167 | if params is None: 168 | params = {} 169 | key = tuple([url] + sorted(params.items())) 170 | if key not in self._responses: 171 | self._responses[key] = requests.get( 172 | url, 173 | params=params, 174 | headers=self.http_headers, 175 | ).json() 176 | return self._responses[key] 177 | 178 | def get_results_from_response(self, response): 179 | if self.pagination_style == "offset-limit" or self.pagination_style == "page-number": 180 | return response["results"] 181 | else: 182 | return response 183 | 184 | def in_bulk(self, id_list=None, field_name="pk"): 185 | return { 186 | id: self.get(**{field_name: id}) 187 | for id in (id_list or []) 188 | } 189 | 190 | 191 | class APIModel(VirtualModel): 192 | base_query_class = APIQuerySet 193 | -------------------------------------------------------------------------------- /setup.cfg: -------------------------------------------------------------------------------- 1 | [bdist_wheel] 2 | python-tag = py3 3 | -------------------------------------------------------------------------------- /setup.py: 
-------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | from setuptools import setup, find_packages 4 | 5 | with open("README.md", "r", encoding="utf-8") as fh: 6 | long_description = fh.read() 7 | 8 | 9 | setup( 10 | name='queryish', 11 | version='0.2', 12 | description="A library for constructing queries on arbitrary data sources following Django's QuerySet API", 13 | author='Matthew Westcott', 14 | author_email='matthew.westcott@torchbox.com', 15 | url='https://github.com/wagtail/queryish', 16 | packages=["queryish"], 17 | include_package_data=True, 18 | license='BSD', 19 | long_description=long_description, 20 | long_description_content_type="text/markdown", 21 | python_requires=">=3.7", 22 | install_requires=[ 23 | "requests>=2.28,<3.0", 24 | ], 25 | extras_require={ 26 | "testing": [ 27 | "responses>=0.23,<1.0", 28 | ] 29 | }, 30 | classifiers=[ 31 | 'Development Status :: 3 - Alpha', 32 | 'Intended Audience :: Developers', 33 | 'License :: OSI Approved :: BSD License', 34 | 'Operating System :: OS Independent', 35 | 'Programming Language :: Python', 36 | 'Programming Language :: Python :: 3', 37 | 'Programming Language :: Python :: 3 :: Only', 38 | 'Framework :: Django', 39 | ], 40 | ) 41 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/wagtail/queryish/556e69a86fe24b0033ca528f58237c1a11bbdc70/tests/__init__.py -------------------------------------------------------------------------------- /tests/test.py: -------------------------------------------------------------------------------- 1 | from unittest import TestCase 2 | 3 | from queryish import Queryish 4 | 5 | 6 | class CounterQuerySetWithoutCount(Queryish): 7 | def __init__(self, max_count=10): 8 | super().__init__() 9 | self.max_count = max_count 10 | self.run_query_call_count = 0 11 | 12 | def _get_real_limits(self): 13 | start = min(self.offset, self.max_count) 14 | if self.limit is not None: 15 | stop = min(self.offset + self.limit, self.max_count) 16 | else: 17 | stop = self.max_count 18 | 19 | return (start, stop) 20 | 21 | def run_query(self): 22 | self.run_query_call_count += 1 23 | start, stop = self._get_real_limits() 24 | for i in range(start, stop): 25 | yield i 26 | 27 | def clone(self, **kwargs): 28 | clone = super().clone(**kwargs) 29 | clone.run_query_call_count = 0 30 | return clone 31 | 32 | 33 | class CounterQuerySet(CounterQuerySetWithoutCount): 34 | def __init__(self, **kwargs): 35 | super().__init__(**kwargs) 36 | self.run_count_call_count = 0 37 | 38 | def run_count(self): 39 | self.run_count_call_count += 1 40 | start, stop = self._get_real_limits() 41 | return stop - start 42 | 43 | def clone(self, **kwargs): 44 | clone = super().clone(**kwargs) 45 | clone.run_count_call_count = 0 46 | return clone 47 | 48 | 49 | class TestQueryish(TestCase): 50 | def test_get_results_as_list(self): 51 | qs = CounterQuerySet() 52 | self.assertEqual(list(qs), list(range(0, 10))) 53 | self.assertEqual(qs.run_query_call_count, 1) 54 | 55 | def test_all(self): 56 | qs = CounterQuerySet() 57 | self.assertEqual(list(qs.all()), list(range(0, 10))) 58 | self.assertEqual(qs.run_query_call_count, 1) 59 | 60 | def test_query_is_only_run_once(self): 61 | qs = CounterQuerySet() 62 | list(qs) 63 | list(qs) 64 | self.assertEqual(qs.run_query_call_count, 1) 65 | 66 | def test_count_uses_results_by_default(self): 
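# CounterQuerySetWithoutCount does not override run_count, so count() falls back to the base Queryish behaviour of running the query and counting the results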
67 | qs = CounterQuerySetWithoutCount() 68 | self.assertEqual(qs.count(), 10) 69 | self.assertEqual(qs.count(), 10) 70 | self.assertEqual(qs.run_query_call_count, 1) 71 | 72 | def test_count_does_not_use_results_when_run_count_provided(self): 73 | qs = CounterQuerySet() 74 | self.assertEqual(qs.count(), 10) 75 | self.assertEqual(qs.count(), 10) 76 | self.assertEqual(qs.run_count_call_count, 1) 77 | self.assertEqual(qs.run_query_call_count, 0) 78 | 79 | def test_count_uses_results_when_available(self): 80 | qs = CounterQuerySet() 81 | list(qs) 82 | self.assertEqual(qs.count(), 10) 83 | self.assertEqual(qs.count(), 10) 84 | self.assertEqual(qs.run_count_call_count, 0) 85 | self.assertEqual(qs.run_query_call_count, 1) 86 | 87 | def test_len_does_not_use_count(self): 88 | qs = CounterQuerySet() 89 | self.assertEqual(len(qs), 10) 90 | self.assertEqual(qs.run_count_call_count, 0) 91 | self.assertEqual(qs.run_query_call_count, 1) 92 | 93 | def test_slicing(self): 94 | qs = CounterQuerySet()[1:3] 95 | self.assertEqual(qs.offset, 1) 96 | self.assertEqual(qs.limit, 2) 97 | self.assertEqual(list(qs), [1, 2]) 98 | self.assertEqual(qs.run_query_call_count, 1) 99 | 100 | def test_slicing_without_start(self): 101 | qs = CounterQuerySet()[:3] 102 | self.assertEqual(qs.offset, 0) 103 | self.assertEqual(qs.limit, 3) 104 | self.assertEqual(list(qs), [0, 1, 2]) 105 | self.assertEqual(qs.run_query_call_count, 1) 106 | 107 | def test_slicing_without_stop(self): 108 | qs = CounterQuerySet()[3:] 109 | self.assertEqual(qs.offset, 3) 110 | self.assertEqual(qs.limit, None) 111 | self.assertEqual(list(qs), [3, 4, 5, 6, 7, 8, 9]) 112 | self.assertEqual(qs.run_query_call_count, 1) 113 | 114 | def test_multiple_slicing(self): 115 | qs1 = CounterQuerySet() 116 | qs2 = qs1[1:9] 117 | self.assertEqual(qs2.offset, 1) 118 | self.assertEqual(qs2.limit, 8) 119 | qs3 = qs2[2:4] 120 | self.assertEqual(qs3.offset, 3) 121 | self.assertEqual(qs3.limit, 2) 122 | 123 | self.assertEqual(list(qs3), [3, 4]) 124 | self.assertEqual(qs1.run_query_call_count, 0) 125 | self.assertEqual(qs2.run_query_call_count, 0) 126 | self.assertEqual(qs3.run_query_call_count, 1) 127 | 128 | def test_multiple_slicing_without_start(self): 129 | qs1 = CounterQuerySet() 130 | qs2 = qs1[1:9] 131 | self.assertEqual(qs2.offset, 1) 132 | self.assertEqual(qs2.limit, 8) 133 | qs3 = qs2[:4] 134 | self.assertEqual(qs3.offset, 1) 135 | self.assertEqual(qs3.limit, 4) 136 | 137 | self.assertEqual(list(qs3), [1, 2, 3, 4]) 138 | self.assertEqual(qs1.run_query_call_count, 0) 139 | self.assertEqual(qs2.run_query_call_count, 0) 140 | self.assertEqual(qs3.run_query_call_count, 1) 141 | 142 | def test_multiple_slicing_without_stop(self): 143 | qs1 = CounterQuerySet() 144 | qs2 = qs1[1:9] 145 | self.assertEqual(qs2.offset, 1) 146 | self.assertEqual(qs2.limit, 8) 147 | qs3 = qs2[2:] 148 | self.assertEqual(qs3.offset, 3) 149 | self.assertEqual(qs3.limit, 6) 150 | 151 | self.assertEqual(list(qs3), [3, 4, 5, 6, 7, 8]) 152 | self.assertEqual(qs1.run_query_call_count, 0) 153 | self.assertEqual(qs2.run_query_call_count, 0) 154 | self.assertEqual(qs3.run_query_call_count, 1) 155 | 156 | def test_multiple_slicing_is_limited_by_first_slice(self): 157 | qs1 = CounterQuerySet() 158 | qs2 = qs1[1:3] 159 | self.assertEqual(qs2.offset, 1) 160 | self.assertEqual(qs2.limit, 2) 161 | qs3 = qs2[1:10] 162 | self.assertEqual(qs3.offset, 2) 163 | self.assertEqual(qs3.limit, 1) 164 | 165 | self.assertEqual(list(qs3), [2]) 166 | self.assertEqual(qs1.run_query_call_count, 0) 167 | 
self.assertEqual(qs2.run_query_call_count, 0) 168 | self.assertEqual(qs3.run_query_call_count, 1) 169 | 170 | def test_slice_reuses_results(self): 171 | qs1 = CounterQuerySet() 172 | list(qs1) 173 | qs2 = qs1[1:9] 174 | self.assertEqual(list(qs2), [1, 2, 3, 4, 5, 6, 7, 8]) 175 | self.assertEqual(qs1.run_query_call_count, 1) 176 | self.assertEqual(qs2.run_query_call_count, 0) 177 | 178 | def test_indexing(self): 179 | qs = CounterQuerySet() 180 | self.assertEqual(qs[1], 1) 181 | self.assertEqual(qs.run_query_call_count, 1) 182 | self.assertEqual(qs[2], 2) 183 | self.assertEqual(qs.run_query_call_count, 1) 184 | 185 | def test_indexing_after_slice(self): 186 | qs = CounterQuerySet()[1:5] 187 | self.assertEqual(qs[1], 2) 188 | self.assertEqual(qs.run_query_call_count, 1) 189 | self.assertEqual(qs[2], 3) 190 | self.assertEqual(qs.run_query_call_count, 1) 191 | 192 | def test_invalid_index_type(self): 193 | qs = CounterQuerySet() 194 | with self.assertRaises(TypeError): 195 | qs['a'] 196 | 197 | def test_repr(self): 198 | qs = CounterQuerySet() 199 | self.assertEqual(repr(qs), "<CounterQuerySet [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]>") 200 | qs = CounterQuerySet(max_count=30) 201 | self.assertEqual( 202 | repr(qs), 203 | "<CounterQuerySet [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, '...(remaining elements truncated)...']>" 204 | ) 205 | 206 | def test_first(self): 207 | qs = CounterQuerySet() 208 | self.assertEqual(qs.first(), 0) 209 | self.assertEqual(qs[20:30].first(), None) 210 | -------------------------------------------------------------------------------- /tests/test_rest.py: -------------------------------------------------------------------------------- 1 | import re 2 | from unittest import TestCase 3 | import responses 4 | from responses import matchers 5 | 6 | from queryish.rest import APIModel, APIQuerySet 7 | 8 | 9 | class CountryAPIQuerySet(APIQuerySet): 10 | base_url = "http://example.com/api/countries/" 11 | filter_fields = ["id", "name", "continent"] 12 | ordering_fields = ["id", "name", "continent"] 13 | 14 | 15 | class UnpaginatedCountryAPIQuerySet(CountryAPIQuerySet): 16 | pass 17 | 18 | 19 | class LimitOffsetPaginatedCountryAPIQuerySet(CountryAPIQuerySet): 20 | pagination_style = "offset-limit" 21 | 22 | 23 | class PageNumberPaginatedCountryAPIQuerySet(CountryAPIQuerySet): 24 | pagination_style = "page-number" 25 | page_size = 2 26 | 27 | 28 | class Country(APIModel): 29 | class Meta: 30 | base_url = "http://example.com/api/countries/" 31 | fields = ["id", "name", "continent"] 32 | 33 | def __str__(self): 34 | return self.name 35 | 36 | 37 | class Pokemon(APIModel): 38 | class Meta: 39 | base_url = "https://pokeapi.co/api/v2/pokemon/" 40 | detail_url = "https://pokeapi.co/api/v2/pokemon/%d/" 41 | fields = ["id", "name"] 42 | pagination_style = "offset-limit" 43 | 44 | @classmethod 45 | def from_query_data(cls, data): 46 | return cls( 47 | id=int(re.match(r'https://pokeapi.co/api/v2/pokemon/(\d+)/', data['url']).group(1)), 48 | name=data['name'], 49 | ) 50 | 51 | @classmethod 52 | def from_individual_data(cls, data): 53 | return cls( 54 | id=data['id'], 55 | name=data['name'], 56 | ) 57 | 58 | def __str__(self): 59 | return self.name 60 | 61 | 62 | class TestAPIQuerySet(TestCase): 63 | @responses.activate 64 | def test_fetch_unpaginated(self): 65 | responses.add( 66 | responses.GET, "http://example.com/api/countries/", 67 | body=""" 68 | [ 69 | { 70 | "id": 1, 71 | "name": "France", 72 | "continent": "europe" 73 | }, 74 | { 75 | "id": 2, 76 | "name": "Germany", 77 | "continent": "europe" 78 | }, 79 | { 80 | "id": 3, 81 | "name": "Italy", 82 | "continent": "europe" 83 | }, 84 | { 85 | "id": 4, 86 | "name": "Japan", 87 |
"continent": "asia" 88 | }, 89 | { 90 | "id": 5, 91 | "name": "China", 92 | "continent": "asia" 93 | } 94 | ] 95 | """ 96 | ) 97 | 98 | self.assertEqual(UnpaginatedCountryAPIQuerySet().count(), 5) 99 | 100 | results = UnpaginatedCountryAPIQuerySet()[1:3] 101 | self.assertFalse(results.ordered) 102 | self.assertEqual(list(results), [ 103 | {"id": 2, "name": "Germany", "continent": "europe"}, 104 | {"id": 3, "name": "Italy", "continent": "europe"}, 105 | ]) 106 | 107 | @responses.activate 108 | def test_fetch_limit_offset_paginated(self): 109 | responses.add( 110 | responses.GET, "http://example.com/api/countries/", 111 | match=[matchers.query_param_matcher({"limit": 1})], 112 | body=""" 113 | { 114 | "count": 5, 115 | "next": "http://example.com/api/countries/?limit=1&offset=1", 116 | "previous": null, 117 | "results": [ 118 | { 119 | "id": 1, 120 | "name": "France", 121 | "continent": "europe" 122 | } 123 | ] 124 | } 125 | """ 126 | ) 127 | 128 | responses.add( 129 | responses.GET, "http://example.com/api/countries/", 130 | match=[matchers.query_param_matcher({"offset": 0})], 131 | body=""" 132 | { 133 | "count": 5, 134 | "next": "http://example.com/api/countries/?limit=2&offset=2", 135 | "previous": null, 136 | "results": [ 137 | { 138 | "id": 1, 139 | "name": "France", 140 | "continent": "europe" 141 | }, 142 | { 143 | "id": 2, 144 | "name": "Germany", 145 | "continent": "europe" 146 | } 147 | ] 148 | } 149 | """ 150 | ) 151 | 152 | responses.add( 153 | responses.GET, "http://example.com/api/countries/", 154 | match=[matchers.query_param_matcher({"offset": 2})], 155 | body=""" 156 | { 157 | "count": 5, 158 | "next": "http://example.com/api/countries/?limit=2&offset=4", 159 | "previous": "http://example.com/api/countries/?limit=2", 160 | "results": [ 161 | { 162 | "id": 3, 163 | "name": "Italy", 164 | "continent": "europe" 165 | }, 166 | { 167 | "id": 4, 168 | "name": "Japan", 169 | "continent": "asia" 170 | } 171 | ] 172 | } 173 | """ 174 | ) 175 | 176 | responses.add( 177 | responses.GET, "http://example.com/api/countries/", 178 | match=[matchers.query_param_matcher({"offset": 4})], 179 | body=""" 180 | { 181 | "count": 5, 182 | "next": null, 183 | "previous": "http://example.com/api/countries/?limit=2&offset=2", 184 | "results": [ 185 | { 186 | "id": 5, 187 | "name": "China", 188 | "continent": "asia" 189 | } 190 | ] 191 | } 192 | """ 193 | ) 194 | 195 | responses.add( 196 | responses.GET, "http://example.com/api/countries/", 197 | match=[matchers.query_param_matcher({"limit": 2, "offset": 2})], 198 | body=""" 199 | { 200 | "count": 5, 201 | "next": "http://example.com/api/countries/?limit=2&offset=4", 202 | "previous": "http://example.com/api/countries/?limit=2", 203 | "results": [ 204 | { 205 | "id": 3, 206 | "name": "Italy", 207 | "continent": "europe" 208 | }, 209 | { 210 | "id": 4, 211 | "name": "Japan", 212 | "continent": "asia" 213 | } 214 | ] 215 | } 216 | """ 217 | ) 218 | 219 | self.assertEqual(LimitOffsetPaginatedCountryAPIQuerySet().count(), 5) 220 | 221 | full_results = list(LimitOffsetPaginatedCountryAPIQuerySet()) 222 | self.assertEqual(full_results[2], {"id": 3, "name": "Italy", "continent": "europe"}) 223 | 224 | partial_results = list(LimitOffsetPaginatedCountryAPIQuerySet()[2:4]) 225 | self.assertEqual(partial_results, [ 226 | {"id": 3, "name": "Italy", "continent": "europe"}, 227 | {"id": 4, "name": "Japan", "continent": "asia"}, 228 | ]) 229 | 230 | @responses.activate 231 | def test_fetch_page_number_paginated(self): 232 | responses.add( 233 | responses.GET, 
"http://example.com/api/countries/", 234 | match=[matchers.query_param_matcher({"page": 1})], 235 | body=""" 236 | { 237 | "count": 5, 238 | "next": "http://example.com/api/countries/?page=2", 239 | "previous": null, 240 | "results": [ 241 | { 242 | "id": 1, 243 | "name": "France", 244 | "continent": "europe" 245 | }, 246 | { 247 | "id": 2, 248 | "name": "Germany", 249 | "continent": "europe" 250 | } 251 | ] 252 | } 253 | """ 254 | ) 255 | responses.add( 256 | responses.GET, "http://example.com/api/countries/", 257 | match=[matchers.query_param_matcher({"page": 2})], 258 | body=""" 259 | { 260 | "count": 5, 261 | "next": "http://example.com/api/countries/?page=3", 262 | "previous": "http://example.com/api/countries/", 263 | "results": [ 264 | { 265 | "id": 3, 266 | "name": "Italy", 267 | "continent": "europe" 268 | }, 269 | { 270 | "id": 4, 271 | "name": "Japan", 272 | "continent": "asia" 273 | } 274 | ] 275 | } 276 | """ 277 | ) 278 | responses.add( 279 | responses.GET, "http://example.com/api/countries/", 280 | match=[matchers.query_param_matcher({"page": 3})], 281 | body=""" 282 | { 283 | "count": 5, 284 | "next": null, 285 | "previous": "http://example.com/api/countries/?page=2", 286 | "results": [ 287 | { 288 | "id": 5, 289 | "name": "China", 290 | "continent": "asia" 291 | } 292 | ] 293 | } 294 | """ 295 | ) 296 | 297 | self.assertEqual(PageNumberPaginatedCountryAPIQuerySet().count(), 5) 298 | 299 | full_results = list(PageNumberPaginatedCountryAPIQuerySet()) 300 | self.assertEqual(full_results[2], {"id": 3, "name": "Italy", "continent": "europe"}) 301 | 302 | partial_results = list(PageNumberPaginatedCountryAPIQuerySet()[2:4]) 303 | self.assertEqual(partial_results, [ 304 | {"id": 3, "name": "Italy", "continent": "europe"}, 305 | {"id": 4, "name": "Japan", "continent": "asia"}, 306 | ]) 307 | 308 | @responses.activate 309 | def test_filter(self): 310 | responses.add( 311 | responses.GET, "http://example.com/api/countries/", 312 | match=[matchers.query_param_matcher({"continent": "asia"})], 313 | body=""" 314 | [ 315 | { 316 | "id": 4, 317 | "name": "Japan", 318 | "continent": "asia" 319 | }, 320 | { 321 | "id": 5, 322 | "name": "China", 323 | "continent": "asia" 324 | } 325 | ] 326 | """ 327 | ) 328 | responses.add( 329 | responses.GET, "http://example.com/api/countries/", 330 | match=[matchers.query_param_matcher({})], 331 | body=""" 332 | [ 333 | { 334 | "id": 1, 335 | "name": "France", 336 | "continent": "europe" 337 | }, 338 | { 339 | "id": 2, 340 | "name": "Germany", 341 | "continent": "europe" 342 | }, 343 | { 344 | "id": 3, 345 | "name": "Italy", 346 | "continent": "europe" 347 | }, 348 | { 349 | "id": 4, 350 | "name": "Japan", 351 | "continent": "asia" 352 | }, 353 | { 354 | "id": 5, 355 | "name": "China", 356 | "continent": "asia" 357 | } 358 | ] 359 | """ 360 | ) 361 | 362 | all_results = UnpaginatedCountryAPIQuerySet() 363 | results = all_results.filter(continent="asia") 364 | self.assertEqual(results.count(), 2) 365 | # filter should not affect the original queryset 366 | self.assertEqual(all_results.count(), 5) 367 | self.assertEqual(list(results), [ 368 | {"id": 4, "name": "Japan", "continent": "asia"}, 369 | {"id": 5, "name": "China", "continent": "asia"}, 370 | ]) 371 | 372 | @responses.activate 373 | def test_multiple_filters(self): 374 | # multiple filters should be ANDed together 375 | responses.add( 376 | responses.GET, "http://example.com/api/countries/", 377 | match=[matchers.query_param_matcher({"continent": "asia", "name": "Japan"})], 378 | body=""" 379 | [ 
380 | { 381 | "id": 4, 382 | "name": "Japan", 383 | "continent": "asia" 384 | } 385 | ] 386 | """ 387 | ) 388 | 389 | results = UnpaginatedCountryAPIQuerySet().filter(continent="asia", name="Japan") 390 | self.assertEqual(results.count(), 1) 391 | self.assertEqual(list(results), [{"id": 4, "name": "Japan", "continent": "asia"}]) 392 | 393 | # filters can also be chained 394 | results = UnpaginatedCountryAPIQuerySet().filter(continent="asia").filter(name="Japan") 395 | self.assertEqual(results.count(), 1) 396 | self.assertEqual(list(results), [{"id": 4, "name": "Japan", "continent": "asia"}]) 397 | 398 | @responses.activate 399 | def test_filter_by_field_alias(self): 400 | responses.add( 401 | responses.GET, "http://example.com/api/countries/", 402 | match=[matchers.query_param_matcher({"id": 4})], 403 | body=""" 404 | [ 405 | { 406 | "id": 4, 407 | "name": "Japan", 408 | "continent": "asia" 409 | } 410 | ] 411 | """ 412 | ) 413 | 414 | results = UnpaginatedCountryAPIQuerySet().filter(pk=4) 415 | self.assertEqual(results.count(), 1) 416 | self.assertEqual(list(results), [{"id": 4, "name": "Japan", "continent": "asia"}]) 417 | 418 | @responses.activate 419 | def test_ordering(self): 420 | responses.add( 421 | responses.GET, "http://example.com/api/countries/", 422 | match=[matchers.query_param_matcher({"continent": "asia", "ordering": "name"})], 423 | body=""" 424 | [ 425 | { 426 | "id": 5, 427 | "name": "China", 428 | "continent": "asia" 429 | }, 430 | { 431 | "id": 4, 432 | "name": "Japan", 433 | "continent": "asia" 434 | } 435 | ] 436 | """ 437 | ) 438 | 439 | results = UnpaginatedCountryAPIQuerySet().filter(continent="asia").order_by("name") 440 | self.assertTrue(results.ordered) 441 | self.assertEqual(results.count(), 2) 442 | self.assertEqual(list(results), [ 443 | {"id": 5, "name": "China", "continent": "asia"}, 444 | {"id": 4, "name": "Japan", "continent": "asia"}, 445 | ]) 446 | 447 | @responses.activate 448 | def test_get(self): 449 | responses.add( 450 | responses.GET, "http://example.com/api/countries/", 451 | match=[matchers.query_param_matcher({"name": "France"})], 452 | body=""" 453 | [ 454 | { 455 | "id": 1, 456 | "name": "France", 457 | "continent": "europe" 458 | } 459 | ] 460 | """ 461 | ) 462 | 463 | responses.add( 464 | responses.GET, "http://example.com/api/countries/", 465 | match=[matchers.query_param_matcher({"name": "Wakanda"})], 466 | body=""" 467 | [] 468 | """ 469 | ) 470 | 471 | responses.add( 472 | responses.GET, "http://example.com/api/countries/", 473 | match=[matchers.query_param_matcher({"continent": "europe"})], 474 | body=""" 475 | [ 476 | { 477 | "id": 1, 478 | "name": "France", 479 | "continent": "europe" 480 | }, 481 | { 482 | "id": 2, 483 | "name": "Germany", 484 | "continent": "europe" 485 | }, 486 | { 487 | "id": 3, 488 | "name": "Italy", 489 | "continent": "europe" 490 | } 491 | ] 492 | """ 493 | ) 494 | 495 | self.assertEqual( 496 | UnpaginatedCountryAPIQuerySet().get(name="France"), 497 | {"id": 1, "name": "France", "continent": "europe"} 498 | ) 499 | with self.assertRaises(ValueError): 500 | UnpaginatedCountryAPIQuerySet().get(name="Wakanda") 501 | 502 | with self.assertRaises(ValueError): 503 | UnpaginatedCountryAPIQuerySet().get(continent="europe") 504 | 505 | 506 | class TestAPIModel(TestCase): 507 | @responses.activate 508 | def test_query(self): 509 | responses.add( 510 | responses.GET, "http://example.com/api/countries/", 511 | match=[matchers.query_param_matcher({"continent": "europe", "ordering": "-name"})], 512 | body=""" 513 | [ 514 | 
{ 515 | "id": 3, 516 | "name": "Italy", 517 | "continent": "europe" 518 | }, 519 | { 520 | "id": 2, 521 | "name": "Germany", 522 | "continent": "europe" 523 | }, 524 | { 525 | "id": 1, 526 | "name": "France", 527 | "continent": "europe" 528 | } 529 | ] 530 | """ 531 | ) 532 | 533 | results = Country.objects.filter(continent="europe").order_by("-name") 534 | self.assertEqual(results.count(), 3) 535 | self.assertIsInstance(results[0], Country) 536 | country_names = [country.name for country in results] 537 | self.assertEqual(country_names, ["Italy", "Germany", "France"]) 538 | self.assertEqual(repr(results[0]), "<Country: Italy>") 539 | self.assertEqual(str(results[0]), "Italy") 540 | 541 | @responses.activate 542 | def test_instance_from_query_data(self): 543 | responses.add( 544 | responses.GET, "https://pokeapi.co/api/v2/pokemon/", 545 | match=[matchers.query_param_matcher({"offset": "0", "limit": "1"})], 546 | body=""" 547 | {"count":1281,"next":"https://pokeapi.co/api/v2/pokemon/?offset=1&limit=1","previous":null,"results":[{"name":"bulbasaur","url":"https://pokeapi.co/api/v2/pokemon/1/"}]} 548 | """ 549 | ) 550 | result = Pokemon.objects.first() 551 | self.assertEqual(result.name, "bulbasaur") 552 | self.assertEqual(result.id, 1) 553 | 554 | @responses.activate 555 | def test_instance_from_detail_lookup(self): 556 | responses.add( 557 | responses.GET, "https://pokeapi.co/api/v2/pokemon/3/", 558 | body=""" 559 | {"name":"venusaur", "id":3} 560 | """ 561 | ) 562 | result = Pokemon.objects.get(id=3) 563 | self.assertEqual(result.name, "venusaur") 564 | self.assertEqual(result.id, 3) 565 | 566 | 567 | @responses.activate 568 | def test_in_bulk(self): 569 | responses.add( 570 | responses.GET, "https://pokeapi.co/api/v2/pokemon/3/", 571 | body=""" 572 | {"name":"venusaur", "id":3} 573 | """ 574 | ) 575 | responses.add( 576 | responses.GET, "https://pokeapi.co/api/v2/pokemon/6/", 577 | body=""" 578 | {"name":"charizard", "id":6} 579 | """ 580 | ) 581 | result = Pokemon.objects.in_bulk([3, 6]) 582 | self.assertEqual(len(result), 2) 583 | self.assertEqual(result[3].name, "venusaur") 584 | self.assertEqual(result[3].id, 3) 585 | self.assertEqual(result[6].name, "charizard") 586 | self.assertEqual(result[6].id, 6) 587 | --------------------------------------------------------------------------------