├── LICENSE
├── README.md
├── framework-specific-patterns.md
├── intermediate-python-testing.md
├── intro-to-javascript-testing.md
└── intro-to-python-testing.md
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2017 datamade
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Writing Tests, the DataMade Way
2 |
3 | These are the principles, procedure and patterns that guide test writing at DataMade. Looking for something a little more didactic? [Skip to the tutorials](#tutorials).
4 |
5 | ## Testing principles
6 |
7 | 1. Tests address **what a method should do, given specific contexts**. This philosophy is called behavior-driven development, or BDD (supplementary reading [here](https://dannorth.net/introducing-bdd/) and [here](https://www.toptal.com/freelance/your-boss-won-t-appreciate-tdd-try-bdd)). BDD testing, as the name indicates, centers on the behavior of the code: "this function should send an email," "this function should hyphenate a PIN," "this function should return a queryset," etc. BDD reduces the number of unhandled edge cases by requiring the developer to think about code functionality from a multidimensional perspective. Likewise, BDD rewards early communication and deliberate boundary setting with our clients, which helps us better understand and communicate the scope of our work and the time needed to complete it.
8 |
9 | 2. Testing is an **essential part** of dynamic applications. Include time for writing and running tests in the scope and statement of work for projects, where appropriate.
10 |
11 | 3. Tests **speed up the feedback loop** during development, enabling rapid iteration on a feature prior to deployment.
12 |
13 | 4. Tests and your codebase are **reciprocally beneficial**. In other words, a good test enables you to write more modular code, which enables you to write more modular tests, and so on.
14 |
15 | 5. Tests are conducted on **real data**, whenever possible, and realistic data otherwise.
16 |
17 | ## Testing procedure
18 |
19 | **What should I test?**
20 |
21 | Behavior-driven development (BDD) is based on an understanding of what should happen, given certain parameters. This principle is also useful to consider when writing tests: If a user does X, this application should do Y. Examples of test-worthy functionality include:
22 |
23 | * Actions that alter state, e.g. session or database transactions
24 | * Exception handling
25 | * Dynamically displayed user interface elements
26 | * User interactions
27 |
28 | Note that compatibility and style are also elements of testing. If a feature doesn’t work across environments and browsers, it doesn’t work. If the code doesn’t adhere to agreed-upon style conventions, it isn’t merged. Use of `flake8` and `jshint` will get you well on your way to compatibility and style compliance.
29 |
30 | Also note that it is only necessary to test your own code because languages and libraries (ideally) have tests of their own. For instance, you don't need to test that `print('foo')` prints "foo". (Seriously, [it's covered](https://github.com/python/cpython/blob/6f0eb93183519024cb360162bdd81b9faec97ba6/Lib/test/test_print.py).) On the other hand, if you write a method that prints "foo" if certain conditions are met, you should test that your method prints "foo" provided those conditions are met and doesn't print "foo" otherwise.
31 |
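32 | For a sketch of what that test might look like – assuming a hypothetical `maybe_print_foo(condition)` helper, and using pytest's built-in `capsys` fixture to capture stdout:
33 |
34 | ```python
35 | def test_prints_foo(capsys):
36 |     maybe_print_foo(True)
37 |     assert capsys.readouterr().out == 'foo\n'
38 |
39 | def test_does_not_print_foo(capsys):
40 |     maybe_print_foo(False)
41 |     assert capsys.readouterr().out == ''
42 | ```
43 |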
32 | **How do I write tests?**
33 |
34 | Practicing BDD involves **creating a [technical specification](https://www.joelonsoftware.com/2000/10/03/painless-functional-specifications-part-2-whats-a-spec/) (or "spec") for each feature prior to implementation**. A spec can exist as a separate document, in the context of an issue, or in the comments of the code, outside or within a testing environment. Outlining a spec with tests in mind can help identify the goals of the code. Development can then be done at the developer’s own pace – writing a piece of code or implementing the entire feature – then testing that it behaves as expected.
35 |
36 | Whether or not you use tests to spec, **document what you are testing in docstrings or comments**, particularly where queries or clever assertions are involved.
37 |
38 | Tests should operate independently of the outside world, and of each other. **Use fixtures to create reusable data**, whether it’s in a database or available through the fixture itself. Ideally, all fixtures should be scoped to the function level and your test environment should be automatically returned to its original state after each test. Where this is not possible, document where fixtures violate this expectation, and be sure your tests clean up after themselves. For instance, if you use a module-scoped database table fixture in a test that alters said table, any alterations should be reverted at the end of the test.
39 |
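40 | For instance, a minimal sketch – assuming a hypothetical database `connection` fixture – of a fixture that cleans up after itself:
41 |
42 | ```python
43 | import pytest
44 |
45 | @pytest.fixture
46 | def cat_table(connection):
47 |     # set up: insert the state the test will rely on
48 |     connection.execute("INSERT INTO cats (name) VALUES ('Felix')")
49 |
50 |     yield 'cats'
51 |
52 |     # tear down: return the table to its original state
53 |     connection.execute("DELETE FROM cats WHERE name = 'Felix'")
54 | ```
55 |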
40 | **When should I run tests?**
41 |
42 | Tests should be run as part of the continuous integration setup. (DataMaders: See [our internal guide to configuring deploy hooks](https://github.com/datamade/deploy-a-site/blob/master/Setup-deployment-hook.md#setup-codedeploy-hook-for-github--travis-ci).) Code that does not pass the tests is neither merged nor deployed.
43 |
44 | In the event that a test fails, unless the functionality of the method has materially changed, fix the method, not the test.
45 |
46 | ## Tutorials
47 |
48 | * [`pytest` 101](/intro-to-python-testing.md)
49 | * Organizing tests
50 | * Configuring tests
51 | * Running tests
52 | * A note on filepaths
53 | * [`pytest` 201](/intermediate-python-testing.md)
54 | * Intro to fixtures
55 | * Parameterized fixtures
56 | * Database fixtures
57 | * Scope, or How to avoid confusing dependencies between your tests
58 | * mock
59 | * [`pytest` 301](/framework-specific-patterns.md)
60 | * Testing Flask applications
61 | * Testing Django applications
62 | * Setup
63 | * Interacting with the database
64 | * Transactional context
65 | * Model object fixtures
66 | * Management commands
67 | * [Front-end testing](/intro-to-javascript-testing.md)
68 | * The right style
69 | * Test your syntax with `jshint`
70 | * Test your logic with `jasmine`
71 |
72 | ## Examples
73 |
74 | ### Django
75 |
76 | * [`django-councilmatic`](https://github.com/datamade/django-councilmatic/blob/master/tests/test_routes.py) – test that all endpoints in a Councilmatic instance are valid
77 | * [`la-metro-councilmatic`](https://github.com/datamade/la-metro-councilmatic/tree/master/tests) - test assorted formatting and display behaviors in a custom Councilmatic instance
78 | * [`large-lots`](https://github.com/datamade/large-lots/tree/master/tests) - test a variety of functionality in a Django app, from form submission and validation, to distributed review tasks, to Django management commands
79 |
80 | ### Flask
81 |
82 | * [`occrp`](https://github.com/datamade/occrp-timeline-tool/tree/master/tests) – test Flask form submission and page content rendering
83 |
84 | ### Python libraries
85 |
86 | * [`dedupe`](https://github.com/dedupeio/dedupe) – test functionality of DataMade's fuzzy matching library
87 |
--------------------------------------------------------------------------------
/framework-specific-patterns.md:
--------------------------------------------------------------------------------
1 | # `pytest` 301
2 |
3 | * [Testing Flask applications](#flask)
4 | * [Testing Django applications](#django)
5 | * [Setup](#setup)
6 | * [Interacting with the database](#interacting-with-the-database)
7 | * [Transactional context](#transactional-context)
8 | * [Model object fixtures](#model-object-fixtures)
9 | * [Management commands](#management-commands)
10 |
11 | ## Flask
12 |
13 | Vanilla Flask tests can quickly become a confusing jungle of context managers and cryptic jargon, without which you are plagued by `working outside of [ request / application ] context` errors.
14 |
15 | `pytest-flask` is a `pytest` extension that does the heavy lifting for you. All you need to do is define an `app` fixture.
16 |
17 | To get started,
18 |
19 | ```bash
20 | pip install pytest-flask
21 | ```
22 |
23 | Add `pytest-flask` to your `requirements.txt` file.
24 |
25 | Next, add an `app` fixture to the relevant `conftest.py` file or files. The pytest-flask docs provide a very basic example:
26 |
27 | ```python
28 | @pytest.fixture
29 | def app():
30 | app = create_app()
31 | return app
32 | ```
33 |
34 | If your Flask app has a settings file, add a file containing the required settings to your test directory and load it into your app fixture by passing the location of your test config to the `config` keyword argument of `create_app`:
35 |
36 | ```python
37 | @pytest.fixture
38 | def app():
39 | # Assumes settings file called `tests/test_config.py`, adjust accordingly
40 | app = create_app(config='tests.test_config')
41 | return app
42 | ```
43 |
44 | Once you complete this setup, you do not need to include the `app` fixture in any other fixtures or methods; it just lives there in the ether as context.
45 |
46 | Meanwhile, `pytest-flask` also includes a built-in `client` fixture. Simply include it in tests where you'd like to GET or POST to Flask routes –
47 |
48 | ```python
49 | def test_get_index(client):
50 | rv = client.get('/')
51 | assert rv.status_code == 200
52 | ```
53 |
54 | – use [a Flask method](http://flask.pocoo.org/docs/0.12/api/#useful-functions-and-classes), such as `url_for` –
55 |
56 | ```python
57 | from flask import url_for
58 |
59 | def test_builds_https(client):
60 |     # assumes an `index` endpoint and PREFERRED_URL_SCHEME = 'https'
61 |     assert url_for('index', _external=True).startswith('https')
59 | ```
60 |
61 | – or complete a Flask session transaction.
62 |
63 | ```python
64 | from flask import session
65 |
66 | def test_delete_from_session(client):
67 |     session['foo'] = 'bar'  # seed the session with a value to delete
68 |
69 |     del session['foo']
70 |     session.modified = True
71 |
72 |     assert 'foo' not in session
68 | ```
69 |
70 | ## Django
71 |
72 | [`pytest-django`](https://pytest-django.readthedocs.io/en/latest/), a plugin for pytest, simplifies the task of integrating pytest with Django apps.
73 |
74 | ### Setup
75 |
76 | First, install it.
77 |
78 | ```bash
79 | pip install pytest-django
80 | ```
81 |
82 | Then, add your freshly installed version of pytest-django to `requirements.txt`.
83 |
84 | Most likely, your app has local/production, general, or custom settings. For testing, some content from these files should be included in `test_config.py` in your tests directory. Set up `test_config.py`:
85 |
86 | **`tests/test_config.py`** (YMMV)
87 | ```python
88 | import os
89 |
90 |
91 | SECRET_KEY = 'some secret key'
92 |
93 | INSTALLED_APPS = [
94 | 'django.contrib.admin',
95 | 'django.contrib.auth',
96 | 'django.contrib.contenttypes',
97 | 'django.contrib.sessions',
98 | 'django.contrib.messages',
99 | 'django.contrib.staticfiles',
100 | ... YOUR APPS ...
101 | 'django.contrib.postgres',
102 | ]
103 |
104 | MIDDLEWARE = [
105 | 'django.middleware.security.SecurityMiddleware',
106 | 'django.contrib.sessions.middleware.SessionMiddleware',
107 | 'django.middleware.common.CommonMiddleware',
108 | 'django.middleware.csrf.CsrfViewMiddleware',
109 | 'django.contrib.auth.middleware.AuthenticationMiddleware',
110 | 'django.contrib.messages.middleware.MessageMiddleware',
111 | 'django.middleware.clickjacking.XFrameOptionsMiddleware',
112 | 'debug_toolbar.middleware.DebugToolbarMiddleware',
113 | ]
114 |
115 | ROOT_URLCONF = 'YOUR_PROJECT.urls'
116 |
117 | DATABASES = {
118 | 'default': {
119 | 'ENGINE': 'django.db.backends.postgresql',
120 | 'NAME': 'YOUR_DATABASE',
121 | 'USER': '',
122 | 'PASSWORD': '',
123 | 'HOST': 'localhost',
124 | 'PORT': '5432',
125 | }
126 | }
127 |
128 | TEMPLATES = [
129 | {
130 | 'BACKEND': 'django.template.backends.django.DjangoTemplates',
131 | 'DIRS': [],
132 | 'APP_DIRS': True,
133 | 'OPTIONS': {
134 | 'context_processors': [
135 | 'django.template.context_processors.debug',
136 | 'django.template.context_processors.request',
137 | 'django.contrib.auth.context_processors.auth',
138 | 'django.contrib.messages.context_processors.messages',
139 | ],
140 | },
141 | },
142 | ]
143 |
144 | STATIC_URL = '/static/'
145 |
146 | BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
147 |
148 | STATICFILES_DIRS = (
149 | os.path.join(BASE_DIR, "YOUR_PROJECT", "static"),
150 | )
151 | ```
152 |
153 | Then, tell `pytest-django` where to find it in [`setup.cfg`](/intro-to-python-testing.md#configure-pytest-and-coordinate-additional-utilities):
154 |
155 | **`setup.cfg`**
156 | ```
157 | [tool:pytest]
158 | DJANGO_SETTINGS_MODULE = tests.test_config
159 | ```
160 |
161 | ### Interacting with the database
162 |
163 | `pytest-django` handles the work of creating and tearing down your test database for you. (By default, it's called `test_` plus the name of the database you defined in `test_config.py`.)
164 |
165 | However, `pytest-django` also treats the database as lava: Your tests will fail if they try to interact with it. This is great if you agree that the database is lava. At DataMade, we aren't so sure.
166 |
167 | To grant a test access to your database, [use the `django_db` mark](https://pytest-django.readthedocs.io/en/latest/helpers.html#pytest-mark-django-db-transaction-false-request-database-access).
168 |
169 | ```python
170 | import pytest
171 |
172 | @pytest.mark.django_db
173 | def test_cat_detail():
174 |     ...  # a test with access to the database
175 | ```
176 |
177 | Now you are ready to write a test!
178 |
179 | Among [other very useful fixtures](http://pytest-django.readthedocs.io/en/latest/helpers.html), `pytest-django` includes a `client` fixture providing you access to the [Django test client](https://docs.djangoproject.com/en/dev/topics/testing/tools/#the-test-client), so you can GET and POST to routes to your heart's content. Let's use the `client` fixture to test the detail route for a `Cat` object.
180 |
181 | ```python
182 | import pytest
183 |
184 | from django.urls import reverse
185 |
186 | from my_app.models import Cat
187 |
188 |
189 | @pytest.mark.django_db
190 | def test_cat_detail(client):
191 | # Create a new cat
192 |     kitten = Cat.objects.create(name='Teresa')
193 |
194 | # Create and call the URL
195 | url = reverse('detail', kwargs={'pk': kitten.id})
196 | response = client.get(url)
197 |
198 | # Test the success of the response code
199 | assert response.status_code == 200
200 | ```
201 |
202 | ### Transactional context
203 |
204 | But wait! Doesn't creating a new `Cat` object without cleaning it up violate [our standard to return test context to its original state](/intermediate-python-testing.md#scope-or-how-to-avoid-confusing-dependencies-between-your-tests), every time? Nope! `pytest-django` runs all database operations in a transaction, then rolls them back at the end of the test.
205 |
206 | It is important to note that there are two distinct types of transactional context in the `pytest-django` world. The default **does not insert any data into the database**. Objects you create in the ORM will be accessible via the ORM, but if the code you are testing runs raw SQL against the database, your new objects will not be there.
207 |
208 | If the code you are testing queries the database directly, you can configure Django to push new data to the database with the `transaction` argument.
209 |
210 | ```python
211 | from django.db import connection
212 |
213 | @pytest.mark.django_db(transaction=True)
212 | def test_that_pushes_data_to_the_database():
213 | Cat.objects.create(name='Felix')
214 |
215 | with connection.cursor() as cursor:
216 | cursor.execute('''
217 | SELECT * FROM cats WHERE name = 'Felix'
218 |         ''')
219 |
220 |         assert cursor.fetchone() is not None
219 | ```
220 |
221 | If you're interested in the mechanics of Django's transactional test context, our very own Jean Cochrane [wrote an excellent blog post](https://jeancochrane.com/blog/django-test-transactions) that's well worth your time!
222 |
223 | #### Special case: Fixtures
224 |
225 | If you need to push data to the database in a fixture, note that `pytest` marks apply only to tests, not fixtures. Instead, include the `transactional_db` fixture in your fixture's arguments. (This fixture comes from `pytest-django`.)
226 |
227 | ```python
228 | @pytest.fixture
229 | def a_fixture(transactional_db):
230 |     # data created here is pushed to the database
231 |     return Cat.objects.create(name='Felix')
231 | ```
232 |
233 | ### Model object fixtures
234 |
235 | The pattern used above is great for one test, but creating the same model instance over and over becomes wearisome, fast, especially when your models have many fields to populate, and many of them don't need to change between tests. That's where fixtures come in.
236 |
237 | In the most basic approach, you could define a fixture that returns a model object.
238 |
239 | **`conftest.py`**
240 | ```python
241 | @pytest.fixture
242 | def cat(db):
243 |     # the `db` fixture, from `pytest-django`, grants database access
244 | return Cat.objects.create(name='Felix')
245 | ```
246 |
247 | However, if you need to change an attribute of your object, you have to do it in the body of your test, which isn't so efficient.
248 |
249 | **`test_cat.py`**
250 | ```python
251 | @pytest.mark.django_db
252 | def test_cat(cat):
253 | cat.name = 'Thomas'
254 | cat.save()
255 | ```
256 |
257 | If you have a standard set of test cases, you could always [parameterize your fixture](/intermediate-python-testing.md#parameterizing-fixtures). This means each test that includes the fixture will run for each set of parameters.
258 |
259 | **`conftest.py`**
260 | ```python
261 | colors = ['orange', 'black', 'calico']
262 |
263 | @pytest.fixture(params=colors)
264 | def cat(request, db):
266 | # Tests that include this fixture will run for a Cat of each color
267 | return Cat.objects.create(name='Felix', color=request.param)
268 | ```
269 |
270 | If your test cases _aren't_ standard, though, you might want to write a fixture that "accepts parameters" on a case-by-case basis. The pattern goes like this:
271 |
272 | **`conftest.py`**
273 | ```python
274 | @pytest.fixture
275 | def cat(db):
277 | class CatFactory():
278 | def build(self, **kwargs):
279 | cat_info = {
280 | 'name': 'Seymour',
281 | 'color': 'orange',
282 | 'favorite_food': 'tuna',
283 | }
284 |
285 | cat_info.update(kwargs)
286 |
287 | return Cat.objects.create(**cat_info)
288 |
289 | return CatFactory()
290 | ```
291 |
292 | First, we define a factory class, `CatFactory`, with a `build` method that accepts unspecified `kwargs`. The `build` method defines standard dummy attributes, `cat_info`, for the model we'd like to build. It then `update`s the dummy data with any `kwargs` passed to the `build` method, and uses them to create and return a `Cat` object.
293 |
294 | You use this brand of fixture like so:
295 |
296 | **`test_cat.py`**
297 | ```python
298 | def test_cat(cat):
299 | basic_cat = cat.build() # Cat Seymour, loves tuna
300 | custom_cat = cat.build(name='Darlene', favorite_food='chicken') # Cat Darlene, loves chicken
301 | ```
302 |
303 | This becomes even more useful when your models have foreign keys to other models. Rather than having to create an instance of the foreign key model, you can just include the foreign key model object fixture and call its build method.
304 |
305 | **`conftest.py`**
306 | ```python
307 | @pytest.fixture
308 | def owner(db):
310 | class OwnerFactory():
311 | def build(self, **kwargs):
312 | owner_info = {
313 | 'name': 'Hannah',
314 | 'age': 26,
315 | }
316 |
317 | owner_info.update(kwargs)
318 |
319 | return Owner.objects.create(**owner_info)
320 |
321 | return OwnerFactory()
322 |
323 | @pytest.fixture
324 | def cat(owner, db):
326 | class CatFactory():
327 | def build(self, **kwargs):
328 | cat_info = {
329 | 'name': 'Seymour',
330 | 'color': 'orange',
331 | 'favorite_food': 'tuna',
332 | 'owner': owner.build(),
333 | }
334 |
335 | cat_info.update(kwargs)
336 |
337 | return Cat.objects.create(**cat_info)
338 |
339 | return CatFactory()
340 | ```
341 |
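342 | In a test, the nested factories compose naturally. For example – continuing the hypothetical models above – to give a cat a custom owner:
343 |
344 | ```python
345 | def test_cat_with_custom_owner(cat, owner):
346 |     # Seymour, owned by Kevin rather than the default owner, Hannah
347 |     fancy_cat = cat.build(owner=owner.build(name='Kevin'))
348 |
349 |     assert fancy_cat.owner.name == 'Kevin'
350 | ```
351 |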
342 | ### Management commands
343 |
344 | Your Django apps may have management commands, which may aid your application in significant ways – from sending emails to importing new content into your database. Such commands require tests. The example below outlines how to set up a database, call a management command, and test the results.
345 |
346 | ```python
347 | import sys
348 |
349 | import pytest
350 |
351 | from django.core.management import call_command
352 |
353 | from my_app.models import Cat
354 |
355 | @pytest.mark.django_db
356 | def test_command():
357 |     call_command('import_data', stdout=sys.stdout)
358 |
359 |     # `import_data` adds cats to the database – check that those cats exist.
360 |     kittens = Cat.objects.all()
361 |
362 |     assert len(kittens) > 0
363 | ```
364 |
--------------------------------------------------------------------------------
/intermediate-python-testing.md:
--------------------------------------------------------------------------------
1 | # `pytest` 201
2 |
3 | * [Fixtures](#fixtures)
4 | * [Parameterized fixtures](#parameterizing-fixtures)
5 | * [Database fixtures](#database-fixtures)
6 | * [Scope, or How to avoid confusing dependencies between your tests](#scope-or-how-to-avoid-confusing-dependencies-between-your-tests)
7 | * [mock](#mock)
8 | * [requests-mock](#requests-mock)
9 |
10 | ## Fixtures
11 |
12 | Fixtures are [explained in the `pytest` docs](https://docs.pytest.org/en/latest/fixture.html) as "a fixed baseline upon which tests can reliably and repeatedly execute." More simply, fixtures are methods that establish context or generate values for the tests you’re about to run.
13 |
14 | As opposed to monolithic `setUp` and `tearDown` methods sometimes used in unit testing, fixtures allow you to create just enough context to run your tests. Smart fixture use makes your tests simultaneously faster and more thorough.
15 |
16 | Creating a fixture is a two-step process. First, define a method that returns something in `conftest.py`. Then, add the `@pytest.fixture` decorator. That’s it! Fixtures defined in `tests/conftest.py` are available to all tests, whether they’re in the root `tests/` directory or a topical subdirectory.
17 |
18 | To use your new fixture, pass the fixture name as an argument to your test. You can then access the value/s your fixture returns through the fixture name.
19 |
20 | For example, let’s say you want to do some math, but you want to make sure your computer is smart enough to do it. To avoid the arduous task of typing more numbers than necessary, let’s create a `numbers` fixture that returns two integers.
21 |
22 | **`conftest.py`**
23 |
24 | ```python
25 | import pytest
26 |
27 | @pytest.fixture
28 | def numbers():
29 | return 1, 5
30 | ```
31 |
32 | Next, let’s use our numbers fixture to test whether the `+` operator does what it purports to do.
33 |
34 | **`test_math.py`**
35 |
36 | ```python
37 | def test_addition(numbers):
38 | a, b = numbers
39 | assert a + b == sum([a, b])
40 | ```
41 |
42 | Finally, run your test.
43 |
44 | ```bash
45 | (gut) that-cat-over-there:schmesting Hannah$ pytest -v
46 |
47 | ========================================= test session starts =========================================
48 |
49 | platform darwin -- Python 3.5.2, pytest-3.2.1, py-1.4.34, pluggy-0.4.0 -- /Users/Hannah/.virtualenvs/gut/bin/python3.5
50 |
51 | cachedir: .cache
52 |
53 | rootdir: /Users/Hannah/hackz/datamade/schmesting, inifile:
54 |
55 | plugins: flask-0.10.0
56 |
57 | collected 1 item
58 |
59 | tests/test_math.py::test_addition PASSED
60 |
61 | ====================================== 1 passed in 0.01 seconds =======================================
62 | ```
63 |
64 | Success!
65 |
66 | **Note:** You may create additional `conftest.py` files with special fixtures in your topical subdirectories. Test modules within the subdirectory will then have access to both global and special fixtures. This is a great way to stay organized in complex testing situations.
67 |
68 | ## Parameterizing fixtures
69 |
70 | Behavior-driven development hinges on the question, "How should my code behave, given a certain context?" In our math example, we only test one context: summing two positive integers. What if one or both integers is negative? What if they’re floats? What if they’re not numbers at all?
71 |
72 | Lucky for us, the `pytest` framework can test multiple contexts with one fixture – testing sugar, known as parameterized fixtures. To parameterize your fixture, first create an iterable containing your test cases. Then, pass your iterable to the `params` keyword argument of the `fixture` decorator. Tests that include your fixture will now run once for every parameter.
73 |
74 | **Note:** the below example uses [`pytest.raises` as a context manager](https://docs.pytest.org/en/latest/assert.html#assertions-about-expected-exceptions) to test the `TypeError` exception.
75 |
76 | **`conftest.py`**
77 |
78 | ```python
79 |
80 | import pytest
81 |
82 | # Contains values for a, b, and the error we expect (or None)
83 | cases = [
84 | (1, 5, None), # two positive integers
85 | (-10, 3, None), # one positive integer, one negative integer
86 | (-29, -32, None), # two negative integers
87 | (0.45, 100, None), # one integer, one decimal
88 | (495.325, 99.3, None), # two decimals
89 | ('foo', 5, TypeError), # one string, one int
90 | ]
91 |
92 | @pytest.fixture(params=cases)
93 | def numbers(request):
94 | # Access the parameter through request.param
95 | return request.param
96 | ```
97 |
98 | **`test_math.py`**
99 |
100 | ```python
101 | import pytest
102 | def test_addition(numbers):
103 | a, b, err = numbers
104 | if err:
105 | with pytest.raises(err):
106 | assert a + b == sum([a, b])
107 | else:
108 | assert a + b == sum([a, b])
109 | ```
110 |
111 | ```bash
112 | (gut) that-cat-over-there:schmesting Hannah$ pytest -v
113 |
114 | ========================================= test session starts =========================================
115 |
116 | platform darwin -- Python 3.5.2, pytest-3.2.1, py-1.4.34, pluggy-0.4.0 -- /Users/Hannah/.virtualenvs/gut/bin/python3.5
117 |
118 | cachedir: .cache
119 |
120 | rootdir: /Users/Hannah/hackz/datamade/schmesting, inifile:
121 |
122 | plugins: flask-0.10.0
123 |
124 | collected 6 items
125 |
126 | tests/test_math.py::test_addition[numbers0] PASSED
127 |
128 | tests/test_math.py::test_addition[numbers1] PASSED
129 |
130 | tests/test_math.py::test_addition[numbers2] PASSED
131 |
132 | tests/test_math.py::test_addition[numbers3] PASSED
133 |
134 | tests/test_math.py::test_addition[numbers4] PASSED
135 |
136 | tests/test_math.py::test_addition[numbers5] PASSED
137 |
138 | ====================================== 6 passed in 0.02 seconds =======================================
139 | ```
140 |
141 | Success times 6!
142 |
143 | **Note:** You can write cumulative fixtures by defining one fixture, then defining another fixture that includes (accepts as an argument) the first one.
144 |
145 | ```python
146 | @pytest.fixture
147 | def foo():
148 |     return 'foo'
149 |
150 | @pytest.fixture
151 | def bar(foo):
152 | # returns "foo!!!"
153 |     return '{0}!!!'.format(foo)
154 | ```
155 |
156 | If two or more fixtures that make up your cumulative fixtures are parameterized, any tests that include them will run for all possible combinations of the parameters. For example, if `foo` is run for parameters A, B, and C, and `bar` is run for parameters 1, 2, and 3, tests that include `bar` will run for A1, A2, A3, B1, B2, B3, C1, C2, and C3 – nine times in all.
157 |
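158 | A minimal sketch of that combinatorial behavior:
159 |
160 | ```python
161 | import pytest
162 |
163 | @pytest.fixture(params=['A', 'B', 'C'])
164 | def foo(request):
165 |     return request.param
166 |
167 | @pytest.fixture(params=[1, 2, 3])
168 | def bar(request, foo):
169 |     return '{0}{1}'.format(foo, request.param)
170 |
171 | def test_combinations(bar):
172 |     # runs nine times: A1, A2, A3, B1, B2, B3, C1, C2, C3
173 |     assert len(bar) == 2
174 | ```
175 |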
158 | ## Database fixtures
159 |
160 | Many of our web applications include databases. When they do, it’s important to test your SQL or ORM queries to ensure you’re accessing – or updating – the information you expect.
161 |
162 | **If you are writing a Django app,** you get much of the following for free. Head over to [`pytest 301`](/framework-specific-patterns.md) to learn more. If you're new to `pytest`, [don't miss the introduction to fixture scope](#scope-or-how-to-avoid-confusing-dependencies-between-your-tests) below.
163 |
164 | **For roll-your-own database testing (i.e., you're using Flask),** we need to define database fixtures manually.
165 |
166 | First, define a test database connection string in `test_config.py`.
167 |
168 | ```python
169 | DB_OPTS = {
170 | host=’localhost’,
171 | database=’cooltestdatabase’,
172 | username=’postgres’,
173 | password=’’,
174 | port=5432,
175 | }
176 | ```
177 |
178 | `pytest_postgresql` is a `pytest` extension that provides convenience methods for creating and dropping your test database.
179 |
180 | To get started,
181 |
182 | ```bash
183 | pip install pytest_postgresql
184 | ```
185 |
186 | and add pytest_postgresql to your `requirements.txt` file.
187 |
188 | Then, import the convenience methods, `init_postgresql_database` and `drop_postgresql_database`, and use them to create a `database` fixture in your `conftest.py` file.
189 |
190 | **`conftest.py`**
191 |
192 | ```python
193 | import pytest
194 |
195 | from pytest_postgresql.factories import init_postgresql_database, drop_postgresql_database
194 | from .test_config import DB_OPTS
195 |
196 | @pytest.fixture(scope='session')
197 | def database(request):
198 | pg_host = DB_OPTS["host"]
199 | pg_port = DB_OPTS["port"]
200 | pg_user = DB_OPTS["username"]
201 | pg_db = DB_OPTS["database"]
202 |
203 | init_postgresql_database(pg_user, pg_host, pg_port, pg_db)
204 | ```
205 |
206 | ### Scope, or How to avoid confusing dependencies between your tests
207 |
208 | Notice the scope keyword argument in the `fixture` decorator above. Scope determines the lifespan of the context created by a fixture. In `pytest`, there are three options:
209 |
210 | * `session` – Fixture is run once for the entire test suite
211 | * `module` – Fixture is run once per test module
212 | * `function` – Fixture is run every time a test includes it
213 |
214 | Fixtures are function-scoped by default. You should use this default unless you have a compelling reason not to. For example, it would be silly to re-create a database for every test. It is appropriate to use the `session` scope for fixtures that establish context that you aren’t going to change over the course of your testing, such as creating a database, initializing an application, or inserting _immutable_ dummy data.
215 |
216 | **Be cautious!** Changes made to broadly scoped fixtures persist for all other tests in that session or module. This can lead to confusing circumstances where you are unsure whether your test is failing or the fixture context you expect was changed by a previous test.
217 |
218 | To diagnose these unintended dependencies, you can run your tests in a random order with [`pytest-randomly`](https://pypi.python.org/pypi/pytest-randomly). Simply:
219 |
220 | ```bash
221 | pip install pytest-randomly
222 | ```
223 |
224 | Then run your tests as normal and behold! They will be shuffled by default, and you will be kept honest.
225 |
226 | If you need to run your tests in sequential order, toggle random functionality with the `--randomly-dont-reorganize` flag. If you need to run your tests in the same random order, say for debugging a tricky failure, grab the `--randomly-seed=X` value from the top of your last run –
227 |
228 | ```
229 | ============================================================= test session starts =============================================================
230 | platform darwin -- Python 3.5.2, pytest-3.2.1, py-1.4.34, pluggy-0.4.0 -- /Users/Hannah/.virtualenvs/dedupe/bin/python3.5
231 | cachedir: .cache
232 | Using --randomly-seed=1510172112
233 | ```
234 |
235 | – and use it to run the tests again.
236 |
237 | ```bash
238 | pytest --randomly-seed=1510172112
239 | ```
240 |
241 | #### Finalizers
242 |
243 | If you define a fixture that creates a table or inserts data that you intend to alter in your test, **use the `function` scope** and write a finalizer.
244 |
245 | A finalizer is a method, defined within a fixture and decorated with `@request.addfinalizer`, that is run when that fixture falls out of scope. In effect, finalizers should undo your fixture. If you insert data, remove the data; if you create a table, delete the table; and so on. Let's add a finalizer to our `database` fixture.
246 |
247 | **`conftest.py`**
248 |
249 | ```python
250 | @pytest.fixture(scope='session')
251 | def database(request):
252 |
253 | # define fixture
254 |
255 | @request.addfinalizer
256 | def drop_database():
257 | # Last arg is postgres version
258 | drop_postgresql_database(pg_user, pg_host, pg_port, pg_db, 9.6)
259 | ```
260 |
261 | If a more broadly scoped fixture is unavoidable, **clean up any changes** made to the fixture context at the end of every test that uses it.
262 |
263 | Even better, reconsider whether tests need to alter the database at all. If you are testing a method that accepts the result of a query, do you need to create a table, populate it with dummy data, and query it just so you can run the test? Probably not! Insert the query result into a fixture, and use the fixture to run your method directly.
264 |
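265 | As a sketch – assuming a hypothetical `count_adoptable` method that operates on rows returned by a query:
266 |
267 | ```python
268 | import pytest
269 |
270 | @pytest.fixture
271 | def query_result():
272 |     # canned rows, shaped like the query output: (name, is_adoptable)
273 |     return [('Felix', True), ('Garfield', False)]
274 |
275 | def test_count_adoptable(query_result):
276 |     # no table, no database – just the method under test
277 |     assert count_adoptable(query_result) == 1
278 | ```
279 |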
265 | Working with more than one layer of state-dependent function calls? There's probably a way around it! Read on for an introduction to testing with `mock`.
266 |
267 | ## mock
268 |
269 | ### What is mock?
270 |
271 | `mock` is a `unittest` submodule that allows you to remove external dependencies from your test suite by coercing the methods that rely on them to return state _you_ control.
272 |
273 | At DataMade, we like [`pytest-mock`](https://github.com/pytest-dev/pytest-mock/), a `pytest` extension that exposes the `mock` API to your tests with the `mocker` fixture.
274 |
275 | Let's say you have a method that calls a method that queries your database and returns a result, then performs some operation on the returned result.
276 |
277 | **`foo.py`**
278 | ```python
279 | def get_result():
280 | # query the database and return result
281 |
282 | def do_a_thing():
283 | results = get_result()
284 |
285 | for result in results:
286 | # do a thing
287 | ```
288 |
289 | With `mock`, you can tell `get_result` to return a pre-determined list of things, instead of querying a database as written. This allows you to bypass the getting of the result (e.g., you don't need to create a database with a table to query) so you can focus on testing whether your operation does what you expect.
290 |
291 | **`test_foo.py`**
292 | ```python
293 | def test_do_a_thing(mocker):
294 | mocker.patch('foo.get_result', return_value=[1, 2, 3])
295 |
296 | do_a_thing()
297 |
298 | # does things on [1, 2, 3]
299 |
300 | # test things were done as expected
301 | ```
302 |
303 | Tantalizing, right?
304 |
305 | ### Getting started
306 |
307 | #### `patch()`
308 |
309 | [`patch()`](https://docs.python.org/3/library/unittest.mock.html#patch) is the main method that allows you to replace a dependency with a mock, which you can read more about in the [mock documentation](https://docs.python.org/3/library/unittest.mock.html). There are two main ways to use this method – as a decorator or as a context manager.
310 |
311 | As a decorator, `@patch()` can be used on a testing class or a single test function. The mock that it creates will live for the duration of the class or function. Once the test (or all the tests in the class, as the case may be) is complete, the mock is cleaned up and any other tests will use the real dependency instead of the mock. For example, as a class decorator:
312 |
313 | ```python
314 | @patch('path.to.patched_dependency')
315 | class SomeClass:
316 | def first_test(self, patched_dependency):
317 | # do the things in the first test using the mock
318 | pass
319 |
320 | def second_test(self, patched_dependency):
321 | # do the things in the second test using the mock
322 | pass
323 | ```
324 |
325 | Or as a method decorator:
326 | ```python
327 |
328 | class SomeClass:
329 | @patch('path.to.patched_dependency')
330 | def first_test(self, patched_dependency):
331 | # do the things in the first test using the mock
332 | pass
333 | ```
334 |
335 | As a context manager, `patch()` allows your mock to be scoped to just part of a function, like this:
336 | ```python
337 | def first_test(self):
338 |     with patch('path.to.patched_dependency') as patched_dependency:
339 |         patched_dependency.return_value = "this is what I expect"
340 |         # do the things in the first test using the mock
341 |         pass
343 | ```
344 |
345 | #### Useful arguments
346 |
347 | Part of a mock object's magic is that it does not necessarily behave like the object it's patching. For example, any attribute you call on your mock pops into existence by virtue of being called, even if it doesn't exist on the real dependency. While this can be helpful at times, mock objects can be more useful if they throw errors, return values, and otherwise behave like the objects, functions, or other dependencies they temporarily replace.
348 |
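349 | You can see this for yourself in a REPL:
350 |
351 | ```python
352 | from unittest.mock import MagicMock
353 |
354 | m = MagicMock()
355 |
356 | # `whatever` doesn't exist anywhere, but no AttributeError is raised –
357 | # asking for it simply returns another MagicMock
358 | m.whatever.does_not_exist()
359 | ```
360 |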
349 | You can configure mocks to more closely mimic the behavior of the dependency you want to replace by passing a few simple arguments to the `patch()` method.
350 |
351 | `spec`, `autospec`, and `spec_set` are arguments you can pass to `patch()` that allow the created mock to have the same attributes as the original dependency. Check out the [docs on `patch()`](https://docs.python.org/3/library/unittest.mock.html#patch) for more info about these arguments.
352 | - `spec` only goes so far as to copy the attributes of the mocked object itself.
353 | - `autospec` copies the attributes of the mocked object’s attributes as well.
354 | - `spec_set` makes sure you can’t set attributes that don’t already exist. That is, it confines the attributes your mock can have to the attributes your mocked object already has.
355 |
356 | For example, say we have an instance of a class, `class_instance`, that has an attribute `real_attribute`, and we are trying to patch `class_instance`. We could make a mock that also has an attribute `real_attribute` (by using `spec` or `autospec`), but does not have an attribute `fake_attribute` (by using `spec_set`), like this:
357 |
358 | ``` python
359 | def first_test(self):
360 |     with patch('path.to.class_instance', spec=True, spec_set=True) as mock_instance:
361 |         # mock_instance has `real_attribute`; setting `fake_attribute` raises AttributeError
362 |         pass
361 | ```
362 |
363 | The mock also has a couple of other useful attributes:
364 | - `return_value`, which gives your mock's attribute a value. Since mock objects do not necessarily mimic the original dependency, this can be very helpful.
365 | - `side_effect`, which allows you to make an attribute raise an error or, if you set it to a callable like a function or class, return the result of calling that callable.
366 |
367 | ### Lessons learned
368 |
369 | Here are a few of our own hard-won lessons from early mock use.
370 |
371 | #### Mock methods where they're used, not defined.
372 |
373 | Returning to our example, let's say our code were instead organized like so:
374 |
375 | **`utils.py`**
376 | ```python
377 | def get_result():
378 | # query the database and return result
379 | ```
380 |
381 | **`tasks.py`**
382 | ```python
383 | from utils import get_result
384 |
385 | def do_a_thing():
386 | results = get_result()
387 |
388 | for result in results:
389 | # do a thing
390 | ```
391 |
392 | If we want to mock `get_result` in a test, we must patch it in the `tasks` module, where it's used, not where it's defined:
393 |
394 | **`test_tasks.py`**
395 | ```python
396 | def test_do_a_thing(mocker):
397 | mocker.patch('tasks.get_result', return_value=[1, 2, 3])
398 |
399 | do_a_thing()
400 |
401 | # does things on [1, 2, 3]
402 |
403 | # test things were done as expected
404 | ```
405 |
406 | #### Mocking classes is unconventional, but not impossible.
407 |
408 | Per [the `mock` documentation](https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch):
409 |
410 | > Patching a class replaces the class with a MagicMock instance. If the class is instantiated in the code under test then it will be the return_value of the mock that will be used.
411 |
412 | This means you first need to create a `MagicMock` instance "spec'ed" to the class you are mocking, with any methods you need to override (e.g., to return a value or raise an exception) overridden.
413 |
414 | ```python
415 | from unittest.mock import MagicMock
416 |
417 | mocked_class = MagicMock(spec=CLASS_YOU_ARE_MOCKING)
416 |
417 | mocked_class.this_method.side_effect = AttributeError
418 | mocked_class.that_method.return_value = 'nyan nyan nyan'
419 | ```
420 |
421 | **Note:** Spec'ing means your mock object shares attributes and methods with your class. If you try to access an attribute or call a method incorrectly, i.e., without positional arguments, on a spec'ed mock object, it will raise an exception. This is in contrast to an unspec'ed mock object, which will let you access any attribute or call any function you want without complaint. Spec'ing ensures your tests align with the API in your code base and stay in line as it changes (because your mock objects will break if they fall out of date).
422 |
423 | Then, you need to patch the class you'd like to mock, and set its `return_value` to the `MagicMock` we just made.
424 |
425 | ```python
426 | with mock.patch('path.to.CLASS_YOU_ARE_MOCKING') as mock_handle:
427 |     mock_handle.return_value = mocked_class
428 |     # run the code under test, which instantiates the mocked class, here
428 | ```
429 |
430 | ### Next steps
431 |
432 | You can also use `mock` to raise exceptions to test error handling, return canned website responses without hitting a live endpoint (as we do [in Metro](https://github.com/datamade/la-metro-councilmatic/blob/master/tests/test_events.py#L117)), or simply turn off state-altering parts of your code that aren't relevant to the test at hand. Finally, `mock` keeps track of whether and how mocked methods are called, so you can test how your code is used (called _n_ times, or with this or that argument), without necessarily having to run it.
433 |
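434 | Returning to the earlier `tasks` example, a sketch of that call tracking:
435 |
436 | ```python
437 | def test_do_a_thing_calls_get_result(mocker):
438 |     mocked = mocker.patch('tasks.get_result', return_value=[1, 2, 3])
439 |
440 |     do_a_thing()
441 |
442 |     # `get_result` was called exactly once, with no arguments
443 |     mocked.assert_called_once_with()
444 | ```
445 |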
434 | For more on how to use `mock`, see [the quickstart](https://docs.python.org/3/library/unittest.mock.html#quick-guide) in the `mock` documentation, as well as [this excellent tutorial](https://www.toptal.com/python/an-introduction-to-mocking-in-python).
435 |
436 | ### requests-mock
437 |
438 | Sometimes, your app sends requests to external URIs. And sometimes, these requests either have very little to do with the needs of your tests or fully interfere with them. And all the times, sending requests to external URIs violates the principle of [test isolation](https://www.obeythetestinggoat.com/book/chapter_purist_unit_tests.html). Thankfully, [`requests-mock`](https://requests-mock.readthedocs.io/en/latest/overview.html) leaps into action!
439 |
440 | `requests-mock` sets up [a transport adapter](http://docs.python-requests.org/en/latest/user/advanced/#transport-adapters) (a mechanism which defines how to handle HTTP requests). With a `requests-mock` adapter, requests to the specified URIs are automatically patched, giving you control of the data they return. How to use it? `requests-mock` comes with [multiple patterns for instantiating the `requests_mock.Mocker` class](https://requests-mock.readthedocs.io/en/latest/mocker.html#activation). For instance, use it as a context manager:
441 |
442 | ```python
443 | import requests
444 | import requests_mock
445 |
446 | def test_get_resp():
447 | with requests_mock.Mocker() as m:
448 | m.get("http://propane.org", text="that boy ain't right")
449 |         rsp = requests.get("http://propane.org")
450 |
451 | assert rsp.text == "that boy ain't right"
452 | ```
453 |
454 | Your app might hit variations of the same URI, such as different endpoints of an API. `requests-mock` saves you the tedium of mocking each path. You can define an adapter that matches multiple paths, [e.g., with Regex](https://requests-mock.readthedocs.io/en/latest/matching.html#regular-expressions). Let's return to the example above, but build out some of its functionality.
455 |
456 | **`utils.py`**
457 | ```python
458 | def get_result():
459 | # query the database and return result
460 |
461 | def scrape_an_endpoint(result_id):
462 | # get data from https://api.com/v1/endpoint/{result_id}
463 |
464 | def scrape_another_endpoint(result_id):
465 | # get data from https://api.com/v1/another_endpoint/{result_id}
466 | ```
467 |
468 | **`tasks.py`**
469 | ```python
470 | from utils import get_result, scrape_an_endpoint, scrape_another_endpoint
471 |
472 | def do_a_thing():
473 | results = get_result()
474 |
475 | for result in results:
476 | # get data and do a thing
477 | data = scrape_an_endpoint(result.id)
478 |         more_data = scrape_another_endpoint(result.id)
479 | ```
480 |
481 | We can mock `get_result` in a test, as described above. However, `do_a_thing` now makes actual calls to an external, possibly unstable API. The data those calls return could interfere with the predictability of `test_do_a_thing`. We have two options: (1) mock the results of every call to `scrape_an_endpoint` and `scrape_another_endpoint`, or (2) mock the requests to `https://api.com/v1`. The former can be tedious, particularly if you need to mock loads of utility functions. The latter option, happily, requires minimal effort with `requests-mock`:
482 |
483 | **`test_tasks.py`**
484 | ```python
485 | import re
486 |
487 | import pytest
488 | import requests_mock
487 |
488 | def test_do_a_thing(mocker):
489 | with requests_mock.Mocker() as m:
490 |         # here, we tell the requests-mock adapter to match
491 | # any URL with "api.com/v1," i.e., different endpoints,
492 | # https and http, query params, etc.
493 | matcher = re.compile('api.com/v1')
494 | m.get(matcher, json={}, status_code=200)
495 |
496 | mocker.patch('tasks.get_result', return_value=[1, 2, 3])
497 |
498 | do_a_thing()
499 |
500 | # does things on [1, 2, 3]
501 |
502 | # test things were done as expected
503 | ```
504 |
505 | Need more? Check out [the tests for `scrapers-us-municipal`](https://github.com/datamade/scrapers-us-municipal/blob/master/tests/lametro), which make healthy use of `requests-mock`.
506 |
--------------------------------------------------------------------------------
/intro-to-javascript-testing.md:
--------------------------------------------------------------------------------
1 | # Intro to JavaScript Testing
2 |
3 | * [The right style](#the-right-style)
4 | * [Test your syntax with jshint](#jshint)
5 | * [Test your logic with jasmine](#jasmine)
6 | * [Friendly reminders](#friendly-reminders)
7 |
8 | ## The right style
9 |
10 | JavaScript, as a language itself, does not have too many opinions. Functional, reliable, error-free JS comes in a variety of colors and shapes: in fact, the number of ways to [declare a function](https://www.bryanbraun.com/2014/11/27/every-possible-way-to-define-a-javascript-function/) reaches double digits. The flexibility of JS can create headaches, inspire pure rage, and ruin an afternoon.
11 |
12 | Narrow your options! As you write code and tests-for-your-code, check your JS against a style guide. Consistent style makes code easier to test and debug, while also minimizing cross-browser errors – a considerable pain point in JS.
13 |
14 | At DataMade, we recommend [our fork of the Node.js style guide](https://github.com/datamade/node-style-guide), a simple overview of best JS practices identified by [Felix Geisendörfer](http://felixge.de/). The [Crockford conventions](http://javascript.crockford.com/code.html) nicely complement the Node.js style guide: in it, [Douglas Crockford](https://en.wikipedia.org/wiki/Douglas_Crockford) gives a no-nonsense discussion of how to write "publication quality" JavaScript.
15 |
16 | ## jshint
17 |
18 | An impeccable style guide, however, does not prevent human error. Every coder has at least one horror story about an hours-long hunt for a missing semicolon that caused the (sometimes silent) crash of an entire program.
19 |
20 | [`jshint`](http://jshint.com/about/) can be the hero in these stories. `jshint` analyzes JavaScript line-by-line and reports typos and common bug-causing mistakes.
21 |
22 | Easily integrate `jshint` into your workflow using one of three strategies:
23 |
24 | * Spotcheck code! Copy-and-paste select lines of potentially error-raising JS in the [`jshint` online console](http://jshint.com/).
25 | * Enable `jshint` as a constant, nagging reminder of JS best practices: add a JSHint plugin to your preferred code editor. At DataMade, many of us start on and continue using Sublime. For those Sublime users, install [JSHint Gutter](https://github.com/victorporof/Sublime-JSHint) through the Sublime package manager, and activate it by right clicking and selecting `JSHint --> Lint Code`. The results may be enlightening, but also exhausting: fix what needs fixing, and then clear the annotations with `CMD + esc`.
26 | * Install the `jshint` [command line interface](http://jshint.com/docs/cli/) and [the standard DataMade linting rules](https://github.com/datamade/node-style-guide/blob/master/.jshintrc), and integrate it with the regular running of your test suite. Run `npm install -g jshint` to install JSHint globally. Then, in terminal, tell JSHint to lint your files.
27 |
28 | The `jshint` CLI has several options. Most simply, the `jshint` command accepts a specific file as a parameter:
29 |
30 | ``` bash
31 | # Inspect this JS file!
32 | jshint my_unique_app/static/js/custom.js
33 |
34 | # Discouraging output...
35 | my_unique_app/static/js/custom.js: line 14, col 51, Missing semicolon.
36 | my_unique_app/static/js/custom.js: line 18, col 48, Missing semicolon.
37 | my_unique_app/static/js/custom.js: line 52, col 34, Missing semicolon.
38 |
39 | 3 errors
40 | ```
41 |
42 | Alternatively, JSHint can recursively discover all JS files in a directory and return a report: `jshint .` The results, in this case, may not be useful, since JSHint lints third-party libraries, too. For greater precision, add a configuration file in the root of your app, called `.jshintignore`. Here, tell JSHint to ignore, let's say, the directory where you store external libraries.
43 |
44 | ```
45 | my_unique_app/static/js/lib/*
46 | ```
47 |
48 | You can configure the rules JSHint uses to lint your code via a [`.jshintrc` file](http://jshint.com/docs/). This file contains a JSON object that activates JSHint options. DataMade has [a standard `.jshintrc` template](https://github.com/datamade/node-style-guide/blob/master/.jshintrc). Use it to initiate testing with JSHint, and [customize it](http://jshint.com/docs/options/) to fit the needs of your project.
49 |
50 | If you know a discrete portion of your code is not `jshint` compliant, but you want to leave your config intact for the rest, you can tell `jshint` to ignore a line or code block using the `ignore` directive, like so:
51 |
52 | ```javascript
53 | // Ignore a line.
54 | console.log('please leave me be!'); // jshint ignore:line
55 |
56 | // Ignore a code block.
57 | /* jshint ignore:start */
58 | function rulebreaker(i) {
59 | alert(i);
60 | }
61 | /* jshint ignore:end */
62 | ```
63 |
64 | Note that this maneuver should _not_ be used just to get the tests to pass, but rather when there is not a better alternative. For example, JSHint complains when you leave a `console.log` in your code, but there are legitimate use cases for logging to the console in production, as in Councilmatic, where [we log the OCD ID of the entity being viewed](https://github.com/datamade/django-councilmatic/blob/5cc8ae50e8b0cf305afae563dd8deb29731204e7/councilmatic_core/templates/councilmatic_core/person.html#L232).
65 |
66 | The JSHint CLI allows for automated testing with Travis. Include in your `.travis.yml` file instructions for installing `jshint` and directive(s) for running the linter:
67 |
68 | ```
69 | ...
70 |
71 | install:
72 | - pip install --upgrade pip
73 | - pip install --upgrade -r requirements.txt
74 | - npm install -g jshint
75 |
76 | ...
77 |
78 | script:
79 | - pytest tests
80 | - jshint my_unique_app/static/js/custom.js
81 |
82 | ...
83 | ```
84 |
85 | ## jasmine
86 |
87 | The right style and perfect syntax make for clean, readable, debuggable JavaScript. Yet, testing JS functionality requires something distinct. For this, we recommend [Jasmine](https://jasmine.github.io/2.0/introduction.html), a testing framework for behavior-driven development.
88 |
89 | ### Python integration
90 |
91 | Jasmine [plays nicely with Django and Flask apps](https://jasmine.github.io/setup/python.html). In the virtualenv of your app, install Jasmine and initialize a project:
92 |
93 | ```
94 | # Install
95 | pip install jasmine
96 |
97 | # Initialize a project
98 | jasmine-install
99 | ```
100 |
101 | `jasmine-install` creates a `spec` directory, where all testing specs and the configuration file (i.e., `jasmine.yml`) live. Before writing and running tests, tell Jasmine where to find relevant files by specifying, most importantly, the `src_files` and `src_dir` attributes:
102 |
103 | ```
104 | ...
105 |
106 | src_files:
107 | - js/*
108 |
109 | ...
110 |
111 | src_dir: 'my_app/static'
112 |
113 | ...
114 | ```
115 |
116 | Then, in terminal, run:
117 |
118 | ```bash
119 | jasmine
120 | ```
121 |
122 | Visit `http://127.0.0.1:8888`, and view your test results.
123 |
124 | Of course, without any tests, you should see relatively sparse output, something along these lines: "finished in 0.008s No specs found". Let's change that by writing a test! Open a JS file, and create a simple function – but not within `$(document).ready`. Why? `$(document).ready` [hides the functions inside a closure](http://bittersweetryan.github.io/jasmine-presentation/#slide-17).
125 |
126 | ```javascript
127 | function makeTheTruth() {
128 | return true;
129 | }
130 | ```
131 |
132 | Then, make a spec file:
133 |
134 | ```bash
135 | touch spec/javascripts/custom_spec.js
136 | ```
137 |
138 | Open the newly created spec file, and add a test:
139 |
140 | ```javascript
141 | describe("Test custom.js", function() {
142 | it("returns true", function () {
143 | var result = makeTheTruth();
144 | expect(result).toBe(true);
145 | });
146 | });
147 | ```
148 |
149 | Reload `http://127.0.0.1:8888`, and the browser should show the number of tests and the number of failures: "finished in 0.007s 1 spec, 0 failures".
150 |
151 | ### Resources to consider for advanced integrated testing
152 |
153 | At DataMade, we are actively trying to improve our testing protocols. We hope to consider more robust ways of integrating Jasmine with our Python apps, as suggested by these sources:
154 |
155 | * [django-jasmine](https://github.com/jakeharding/django-jasmine)
156 | * [django.js for Jasmine views](http://djangojs.readthedocs.io/en/latest/test.html)
157 |
158 |
--------------------------------------------------------------------------------
/intro-to-python-testing.md:
--------------------------------------------------------------------------------
1 | # `pytest` 101
2 |
3 | * [Directory structure](#directory-structure)
4 | * [Configure `pytest` and coordinate additional utilities](#configure-pytest-and-coordinate-additional-utilities)
5 | * [Assert `True` == `True`](#assert-true--true)
6 | * [Commands to run the tests](#commands-to-run-the-tests)
7 | * [A note on filepaths](#a-note-on-filepaths)
8 |
9 | ## Directory structure
10 |
11 | DataMade uses the `pytest` framework for testing Python. `pytest` auto-discovers all test modules by recursing into test directories and searching for `test_*.py` and `*_test.py` files. This mode of test discovery allows for some flexibility in directory-structure design: the tests can reside within nested app directories or in a stand-alone directory at the root of the main repo. At DataMade, we prefer the latter structure. Here’s how to get started.
12 |
13 | To install pytest, run:
14 |
15 | ```bash
16 | pip install pytest
17 | ```
18 |
19 | Create a `tests` directory at the root of your main directory:
20 |
21 | ```bash
22 | ├── a_unique_app/
23 | ├── another_unique_app/
24 | ├── tests/
25 | └── ...
26 | ```
27 |
28 | Within the tests directory, add the following files:
29 |
30 | * **`__init__.py`**
31 |
32 |     An empty init file to transform your tests directory into a tests module, so relative imports (e.g., `from .test_config import ...`) work correctly.
33 |
34 | * **`test_config.py`**
35 |
36 |     A configuration file that contains essential settings, e.g., INSTALLED_APPS, DATABASES, etc. (Careful! Do not put any secret data in here.) Read more about what goes in this file in the [Django set-up section](/framework-specific-patterns.md#setup).
37 |
38 | * **`conftest.py`**
39 |
40 |     A file that defines fixtures, i.e., instances of objects or reusable behaviors (e.g., [the LargeLots conftest file](https://github.com/datamade/large-lots/blob/master/tests/conftest.py) has fixtures that create instances of the Lot, ApplicationStep, and Address classes). A minimal sketch of a fixture appears after the directory tree below.
41 |
42 | * **`test_*.py`**
43 |
44 |     A file with tests (as many as your application needs). Note: if you have multiple applications that require testing, these files can go inside app-specific directories within the `tests` directory, as the example below shows.
45 |
46 | ```bash
47 | └── tests/
48 | ├── conftest.py
49 | ├── test_config.py
50 | ├── a_unique_app/
51 | │ └── test_commands.py
52 | └── another_unique_app/
53 | └── test_form_submission.py
54 | ```
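55 |
56 | As promised above, here is a minimal sketch of what a fixture in `conftest.py` might look like. (`Lot`, its import path, and its fields are hypothetical stand-ins – substitute the models and objects your own application defines.)
57 |
58 | ```python
59 | import pytest
60 |
61 | from a_unique_app.models import Lot  # hypothetical model import
62 |
63 |
64 | @pytest.fixture
65 | def lot():
66 |     # Any test that names `lot` as an argument receives this instance.
67 |     return Lot(address='123 Main St', pin='1234567890')
68 | ```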
55 |
56 | ## Configure `pytest` and coordinate additional utilities
57 |
58 | If you don't already have one, add a `setup.cfg` file to the root of your directory. This file provides a space for specifying configuration options and enabling integration of tools (testing and otherwise) – particularly helpful with flake8. (If you are new to testing and unfamiliar with flake8, then skip ahead to the section on [asserting `True == True`](#assert-true--true). If not, then read on!)
59 |
60 | Many DataMade projects use flake8 to enforce consistent, standard style patterns in Python. Yet, remembering to run both the pytest and flake8 test suites before pushing a branch to GitHub requires a certain degree of vigilance. A `setup.cfg` file helps reduce the number of things to remember.
61 |
62 | At the top of your `setup.cfg` file, add a `[tool:pytest]` section: options defined here apply whenever you run `pytest`, just as they would in `pytest.ini` (i.e., the initialization file for `pytest`).
63 |
64 | Then, add parameters for testing setup. The below example, taken from [Dedupe service](https://github.com/datamade/dedupe-service/blob/master/setup.cfg), includes precise options for flake8 + pytest integration.
65 |
66 | ```cfg
67 | [tool:pytest]
68 |
69 | # Run flake8 checks every time you execute `pytest`
70 | addopts = --flake8 -v -p no:warnings
71 |
72 | # Tell pytest to avoid specified patterns when recursing for test discovery
73 | norecursedirs=tools .env alembic .git
74 |
75 | # Provide parameters for flake8
76 | flake8-max-line-length=160
77 | flake8-select=E F W C90
78 | flake8-ignore=E121 E123 E126 E226 E24 E704 W503 W504
79 | ```
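80 |
81 | Note that the `--flake8` flag and the `flake8-*` options above come from the [pytest-flake8](https://github.com/tholo/pytest-flake8) plugin, so make sure it is installed alongside pytest itself (`pip install pytest-flake8`).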
80 |
81 | ## Assert `True == True`
82 |
83 | Your application proudly contains the framework for productive, gainful testing practices. Well done! Now, it’s time to write a simple test.
84 |
85 | [N.B. The above example directory assumes two generic apps: a_unique_app and another_unique_app. Likely, your site has apps of a different color – and, indeed, you may not need individual app folders in `tests` at all.]
86 |
87 | Go to the tests directory (again, at the root of your main repository). Either within an app-specific test directory of your choice or at the root of `tests`, create a test file, say, `test_my_app.py`. (Caution! The file name must match `test_*.py` or `*_test.py`.)
88 |
89 | At the top, add `import pytest`. Then, create your first test. (Watch out! The function name should start with `test`.)
90 |
91 | ```python
92 | import pytest
93 |
94 | def test_the_truth():
95 | assert True == True
96 | ```
97 |
98 | The `assert` statement appears ubiquitously in python tests: it evaluates the expression that follows it and, if the result is falsy, raises an `AssertionError`, causing the test to fail. For contrast, a hypothetical test that fails might look like this:
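99 |
100 | ```python
101 | def test_the_lie():
102 |     # 1 + 1 == 3 evaluates to False, so `assert` raises an AssertionError,
103 |     # and pytest reports this test as a failure.
104 |     assert 1 + 1 == 3
105 | ```
106 |
107 | Our simple `test_the_truth`, on the other hand, clearly passes – but let's run it to know for sure. The next section explains how to execute tests.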
99 |
100 | ## Commands to run the tests
101 |
102 | Executing the `pytest` framework is easy. It takes one command. Go to the root of your directory, and run:
103 |
104 | ```bash
105 | pytest
106 | ```
107 |
108 | Pytest discovers all test modules, executes them, and prints the results to the terminal.
109 |
110 | You can home in on particular tests, too. As suggested above, you should have a `tests` directory with individual files (i.e., test modules) or app-specific subdirectories. Point pytest at a particular module or directory, and it will only execute those tests:
111 |
112 | ```bash
113 | # Run all tests in a directory
114 | pytest tests/a_unique_app/
115 | ```
116 |
117 | ```bash
118 | # Run all tests in a module
119 | pytest tests/test_my_app.py
120 | ```
121 |
122 | ```bash
123 | # Run a particular test
124 | pytest tests/test_my_app.py::test_the_truth
125 | ```
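126 |
127 | You can also select tests by keyword with the `-k` flag, which runs every test whose name matches the given expression:
128 |
129 | ```bash
130 | # Run all tests with "truth" in their names
131 | pytest -k truth
132 | ```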
126 |
127 | You may have print statements in your tests: these can help you debug, better understand your codebase, and refactor. By default, pytest captures this output; to turn capturing off and let printed output appear in the terminal, add the `-s` flag:
128 |
129 | ```bash
130 | pytest -s
131 | ```
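132 |
133 | For instance, a (hypothetical) test like this one prints its intermediate value only when capturing is off:
134 |
135 | ```python
136 | def test_the_truth_verbosely():
137 |     result = True
138 |     # Visible in the terminal only when run with `pytest -s`.
139 |     print('result is:', result)
140 |     assert result
141 | ```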
132 |
133 | Want to see more? Less? You can increase verbosity with the `-v` flag: this tells pytest to print a line for every test, e.g., a brilliant green PASSED next to each successful test function. Alternatively, you can decrease verbosity with the `-q` or "quiet" flag.
134 |
135 | ```bash
136 | # Increase verbosity
137 | pytest -v
138 |
139 | # Decrease verbosity
140 | pytest -q
141 | ```
142 |
143 | Tests fail. Running an entire test suite after a failure can be time-consuming and tedious. Tell pytest to stop after the first failure with the `-x` flag:
144 |
145 | ```bash
146 | pytest -x
147 | ```
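148 |
149 | Relatedly, once you've addressed the failure, the `--lf` ("last failed") flag reruns only the tests that failed on the previous run:
150 |
151 | ```bash
152 | pytest --lf
153 | ```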
148 |
149 | You can debug a failing test with the [Python debugger](https://docs.python.org/3.6/library/pdb.html) (`pdb`), which transforms your terminal into a sandbox where you can play with code. Tell pytest to open a pdb shell at the point of failure with the `--pdb` flag (note: two hyphens needed):
150 |
151 | ```bash
152 | pytest --pdb
153 | ```
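154 |
155 | You can also drop into the debugger at an arbitrary line – failing or not – by calling `pdb.set_trace()` yourself. (Here, `compute_something` is a hypothetical stand-in for your own code.)
156 |
157 | ```python
158 | import pdb
159 |
160 |
161 | def test_something_tricky():
162 |     value = compute_something()  # hypothetical function under test
163 |     pdb.set_trace()  # pytest pauses here and hands you a pdb prompt
164 |     assert value
165 | ```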
154 |
155 | Did pytest not discover your tests? Check the names of your modules and test functions.
156 |
157 | * Test modules should be labeled as `test_*.py` or `*_test.py`.
158 | * Test functions should start with `test_`.
159 |
160 | ### A note on filepaths
161 |
162 | Relative filepaths may work in your local development environment, but they aren't so friendly to Travis or our deployment environment. Save yourself the failed build: if your application includes filepaths, build them as absolute filepaths using `os`.
163 |
164 | ```python
165 | import os
166 |
167 |
168 | # Get the name of the directory your file lives in.
169 | file_directory = os.path.dirname(__file__)
170 |
171 | # Get the absolute path of the directory your file lives in.
172 | absolute_file_directory = os.path.abspath(file_directory)
173 | ```
174 |
175 | You can use this absolute path to build out other filepaths as needed using `os.path.join`. Note that your filepath **will be relative to the file you build it in**, i.e., if you are in `project_dir/some_module/script.py`, then `absolute_file_directory` will be `/path/to/project_dir/some_module`.
176 |
177 | ```python
178 | # Build absolute path to data in the same directory as the script.
179 | local_data = os.path.join(absolute_file_directory, 'local_data.csv')
180 |
181 | # Build absolute path to data in a directory adjacent to (e.g., up one level) the script directory.
182 | remote_data = os.path.join(absolute_file_directory, '..', 'another_directory', 'remote_data.json')
183 | ```
184 |
185 | If you're building filepaths in a testing context, you can [include a fixture](intermediate-python-testing.md#fixtures) to return your project directory in `tests/conftest.py`, like this:
186 |
187 | ```python
188 | import os
189 |
190 | import pytest
191 |
192 |
193 | @pytest.fixture
194 | def project_directory():
195 |     test_directory = os.path.abspath(os.path.dirname(__file__))
196 |     # Tests live in `tests/`, so the project root is one level up.
197 |     return os.path.join(test_directory, '..')
192 | ```
193 |
194 | Then, as before, use that path to build filepaths in your tests.
195 |
196 | ```python
197 | def test_some_file(project_directory):
198 |     some_file = os.path.join(project_directory, 'some_file.txt')
199 |     assert os.path.exists(some_file)  # e.g., check that the (hypothetical) file exists
199 | ```
200 |
--------------------------------------------------------------------------------