├── .gitignore ├── README.md ├── conftest.py ├── other_code ├── __init__.py └── services.py ├── requirements.txt ├── tests ├── 00_empty_test.py ├── 01_basic_test.py ├── 02_special_assertions_test.py ├── 03_simple_fixture_test.py ├── 04_fixture_returns_test.py ├── 05_yield_fixture_test.py ├── 06_request_test.py ├── 07_request_finalizer_test.py ├── 08_params_test.py ├── 09_params-ception_test.py ├── 10_advanced_params-ception_test.py ├── 11_mark_test.py ├── 12_special_marks.py ├── 13_mark_parametrization.py ├── 14_class_based_test.py ├── 15_advanced_class_test.py ├── 16_scoped_and_meta_fixtures_test.py ├── 17_marked_meta_fixtures.py ├── 18_the_mocker_fixture.py ├── 19_re_usable_mock_test.py ├── __init__.py └── other_stuff.py ├── tox.ini └── tutorials ├── 00_empty_test.md ├── 01_basic_test.md ├── 02_special_assertions.md ├── 03_reviewing_the_basics.md ├── 04_intro_to_fixtures.md ├── 05_fixture_return_values.md ├── 06_yield_fixtures.md ├── 07_request_fixtures.md ├── 08_request_finalizers.md ├── 09_intro_to_parameters.md ├── 10_parameter-ception.md ├── 11_advanced_parameter-ception.md ├── 12_reviewing_fixtures.md ├── 13_intro_to_test_marking.md └── 14_mark_based_parameters.md /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | *.py[cod] 3 | .pytest_cache/ 4 | .cache/ 5 | .wheelhouses/ 6 | .tox/ 7 | .vscode/ 8 | Pipfile 9 | Pipfile.lock 10 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # intro-to-pytest 2 | An introduction to PyTest with lots of simple, hackable examples (currently Python 2.7 / 3.6+ compatible). 
3 | 4 | These examples are intended to be self-explanatory to a Python developer, with minimal setup - In addition to Python 2.7 or 3.6+, you'll also need `pytest` and the `pytest-mock` plugin installed to use all these examples, which you can install by running: 5 | 6 | ``` 7 | pip install -r requirements.txt 8 | ``` 9 | 10 | In this folder (ideally, inside a virtual environment, to keep this from affecting your local Python libraries). 11 | 12 | Once you've got all the requirements in place, you should be able to simply run 13 | 14 | ``` 15 | pytest 16 | ``` 17 | 18 | In this folder, and see 109 items being collected, and 109 tests passing, across all of the example files, in less than a second. 19 | 20 | (PyTest will list the names of each test module file that it found, and then a period for each test case that passed, or other symbols for tests that failed, were skipped, etc.) 21 | 22 | But if you're seeing all that, congratulations! You're ready to get started. 23 | 24 | The recommended approach is to read each example file, then run it directly with pytest, with the `v` flag (so that each Test Case is listed "verbosely", by name) and the `s` flag, so that we can see all the standard output (prints) from the Tests, which will help explain how each example is working; PyTest normally captures and hides this output, except for tests that are currently failing. (In the examples below, we'll shorten these arguments to `-vs`.) 25 | 26 | Each example test was intended to be self-explanatory, but I have begun adding short tutorial guides to explain more of the context, suggest experiments and hacks you can attempt on the examples, and to provide recaps and reviews for each major section. 
The tutorial track starts with: 27 | 28 | [Tutorial Zero: An Empty Test](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/00_empty_test.md) 29 | 30 | Not all of the examples have an accompanying tutorial (yet), but were written to be self-explanatory, and should at least include basic comments to explain the feature being demonstrated. 31 | 32 | If you have any feedback, questions, or PyTest features you'd like to see covered, please let me know on Pluralsight Slack as [@david.sturgis](https://pluralsight.slack.com/team/U036DTQQ1), or via email at [david-sturgis@pluralsight.com](mailto:david-sturgis@pluralsight.com), or via [GitHub Issues](https://github.com/pluralsight/intro-to-pytest/issues) (or a PR, now that I have PR notifications turned on!). 33 | -------------------------------------------------------------------------------- /conftest.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | from pytest import fixture 3 | 4 | 5 | @fixture 6 | def global_fixture(): 7 | print("\n(Doing global fixture setup stuff!)") 8 | 9 | 10 | def pytest_configure(config): 11 | config.addinivalue_line( 12 | "markers", "db: Example marker for tagging Database related tests" 13 | ) 14 | config.addinivalue_line( 15 | "markers", "slow: Example marker for tagging extremely slow tests" 16 | ) 17 | -------------------------------------------------------------------------------- /other_code/__init__.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | -------------------------------------------------------------------------------- /other_code/services.py: -------------------------------------------------------------------------------- 1 | import time 2 | from collections import namedtuple 3 | 4 | 5 | class ExpensiveClass(object): 6 | """ 7 | A fake Class that takes a long time to fully initialize 8 | """ 9 | 10 | 
def __init__(self): 11 | print("(Initializing ExpensiveClass instance...)") 12 | time.sleep(0.2) 13 | print("(ExpensiveClass instance complete!)") 14 | 15 | 16 | FakeRow = namedtuple("FakeRow", ("id", "name", "value")) 17 | 18 | 19 | def db_service(query_parameters): 20 | """ 21 | A fake DB service that takes a remarkably long time to yield results 22 | """ 23 | print("(Doing expensive database stuff!)") 24 | 25 | time.sleep(5.0) 26 | 27 | data = [FakeRow(0, "Foo", 19.95), FakeRow(1, "Bar", 1.99), FakeRow(2, "Baz", 9.99)] 28 | 29 | print("(Done doing expensive database stuff)") 30 | return data 31 | 32 | 33 | def count_service(query_parameters): 34 | print("count_service: Performing a query (and counting the results)...") 35 | 36 | data = db_service(query_parameters) 37 | 38 | count = len(data) 39 | 40 | print("Found {} result(s)!".format(count)) 41 | return count 42 | 43 | 44 | DATA_SET_A = { 45 | "Foo": "Bar", 46 | "Baz": [5, 7, 11], 47 | "Qux": {"A": "Boston", "B": "Python", "C": "TDD"}, 48 | } 49 | 50 | DATA_SET_B = DATA_SET_A 51 | 52 | DATA_SET_C = { 53 | "Foo": "Bar", 54 | "Baz": [3, 5, 7], 55 | "Qux": {"A": "Boston", "B": "Python", "C": "TDD"}, 56 | } 57 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | -i https://pypi.org/simple 2 | atomicwrites==1.4.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3' 3 | attrs==20.3.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3' 4 | colorama==0.4.4; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4' 5 | docopt==0.6.2 6 | more-itertools==8.7.0; python_version > '2.7' 7 | pluggy==0.13.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3' 8 | py==1.10.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3' 9 | pytest-mock==1.10.3 10 | pytest-watch==4.2.0 11 | pytest==4.4.0 12 | 
six==1.15.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2' 13 | watchdog==2.0.2; python_version >= '3.6' 14 | -------------------------------------------------------------------------------- /tests/00_empty_test.py: -------------------------------------------------------------------------------- 1 | def test_empty(): 2 | """ 3 | PyTest tests are callables whose names start with "test" 4 | (by default) 5 | 6 | It looks for them in modules whose name starts with "test_" or ends with "_test" 7 | (by default) 8 | """ 9 | pass 10 | 11 | 12 | def empty_test(): 13 | """ 14 | My name doesn't start with "test", so I won't get run. 15 | (by default ;-) 16 | """ 17 | pass 18 | -------------------------------------------------------------------------------- /tests/01_basic_test.py: -------------------------------------------------------------------------------- 1 | from other_code.services import DATA_SET_A, DATA_SET_B, DATA_SET_C 2 | 3 | 4 | def test_example(): 5 | """ 6 | But really, test cases should be callables containing assertions: 7 | """ 8 | print("\nRunning test_example...") 9 | assert DATA_SET_A == DATA_SET_B 10 | -------------------------------------------------------------------------------- /tests/02_special_assertions_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | def test_div_zero_exception(): 5 | """ 6 | pytest.raises can assert that exceptions are raised (catching them) 7 | """ 8 | with pytest.raises(ZeroDivisionError): 9 | x = 1 / 0 10 | 11 | 12 | def test_keyerror_details(): 13 | """ 14 | The raised exception can be referenced, and further inspected (or asserted) 15 | """ 16 | my_map = {"foo": "bar"} 17 | 18 | with pytest.raises(KeyError) as ke: 19 | baz = my_map["baz"] 20 | 21 | # Our KeyError should reference the missing key, "baz" 22 | assert "baz" in str(ke) 23 | 24 | 25 | def test_approximate_matches(): 26 | """ 27 | pytest.approx can be used to assert 
"approximate" numerical equality 28 | (compare to "assertAlmostEqual" in unittest.TestCase) 29 | """ 30 | assert 0.1 + 0.2 == pytest.approx(0.3) 31 | -------------------------------------------------------------------------------- /tests/03_simple_fixture_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | def test_with_local_fixture(local_fixture): 5 | """ 6 | Fixtures can be invoked simply by having a positional arg 7 | with the same name as a fixture: 8 | """ 9 | print("Running test_with_local_fixture...") 10 | assert True 11 | 12 | 13 | @pytest.fixture 14 | def local_fixture(): 15 | """ 16 | Fixtures are callables decorated with @fixture 17 | """ 18 | print("\n(Doing Local Fixture setup stuff!)") 19 | 20 | 21 | def test_with_global_fixture(global_fixture): 22 | """ 23 | Fixtures can also be shared across test files (see conftest.py) 24 | """ 25 | print("Running test_with_global_fixture...") 26 | -------------------------------------------------------------------------------- /tests/04_fixture_returns_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | def test_with_data_fixture(one_fixture): 5 | """ 6 | PyTest finds the fixture whose name matches the argument, 7 | calls it, and passes that return value into our test case: 8 | """ 9 | print("\nRunning test_with_data_fixture: {}".format(one_fixture)) 10 | assert one_fixture == 1 11 | 12 | 13 | @pytest.fixture 14 | def one_fixture(): 15 | """ 16 | Beyond just "doing stuff", fixtures can return data, which 17 | PyTest will pass to the test cases that refer to it... 
18 | """ 19 | print("\n(Returning 1 from data_fixture)") 20 | return 1 21 | -------------------------------------------------------------------------------- /tests/05_yield_fixture_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | def test_with_yield_fixture(yield_fixture): 5 | print("\n Running test_with_yield_fixture: {}".format(yield_fixture)) 6 | assert "foo" in yield_fixture 7 | 8 | 9 | @pytest.fixture 10 | def yield_fixture(): 11 | """ 12 | Fixtures can yield their data 13 | (additional code will run after the test) 14 | """ 15 | print("\n\n(Initializing yield_fixture)") 16 | x = {"foo": "bar"} 17 | 18 | # Remember, unlike generators, fixtures should only yield once (if at all) 19 | yield x 20 | 21 | print("\n(Cleaning up yield_fixture)") 22 | del x 23 | -------------------------------------------------------------------------------- /tests/06_request_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | def test_with_introspection(introspective_fixture): 5 | print("\nRunning test_with_introspection...") 6 | assert True 7 | 8 | 9 | @pytest.fixture 10 | def introspective_fixture(request): 11 | """ 12 | The request fixture allows introspection into the 13 | "requesting" test case 14 | """ 15 | print("\n\nintrospective_fixture:") 16 | print("...Called at {}-level scope".format(request.scope)) 17 | print(" ...In the {} module".format(request.module)) 18 | print(" ...On the {} node".format(request.node)) 19 | -------------------------------------------------------------------------------- /tests/07_request_finalizer_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | def test_with_safe_cleanup_fixture(safe_fixture): 5 | print("\nRunning test_with_safe_cleanup_fixture...") 6 | assert True 7 | 8 | 9 | @pytest.fixture 10 | def safe_fixture(request): 11 | """ 12 | 
The request can also be used to apply post-test callbacks 13 | (these will run even if the Fixture itself fails!) 14 | """ 15 | print("\n(Begin setting up safe_fixture)") 16 | request.addfinalizer(safe_cleanup) 17 | risky_function() 18 | 19 | 20 | def safe_cleanup(): 21 | print("\n(Cleaning up after safe_fixture!)") 22 | 23 | 24 | def risky_function(): 25 | # # Uncomment to simulate a failure during Fixture setup! 26 | # raise Exception("Whoops, I guess that risky function didn't work...") 27 | print(" (Risky Function: Totally worth it!)") 28 | -------------------------------------------------------------------------------- /tests/08_params_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | def test_parameterization(letter): 5 | print("\n Running test_parameterization with {}".format(letter)) 6 | 7 | 8 | def test_modes(mode): 9 | print("\n Running test_modes with {}".format(mode)) 10 | 11 | 12 | @pytest.fixture(params=["a", "b", "c", "d", "e"]) 13 | def letter(request): 14 | """ 15 | Fixtures with parameters will run once per param 16 | (You can access the current param via the request fixture) 17 | """ 18 | yield request.param 19 | 20 | 21 | @pytest.fixture(params=[1, 2, 3], ids=['foo', 'bar', 'baz']) 22 | def mode(request): 23 | """ 24 | Fixtures with parameters will run once per param 25 | (You can access the current param via the request fixture) 26 | """ 27 | yield request.param 28 | -------------------------------------------------------------------------------- /tests/09_params-ception_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | @pytest.fixture(params=["a", "b", "c", "d"]) 5 | def letters_fixture(request): 6 | """ 7 | Fixtures can cause tests to be run multiple times (once per parameter) 8 | """ 9 | yield request.param 10 | 11 | 12 | @pytest.fixture(params=[1, 2, 3, 4]) 13 | def numbers_fixture(request): 14 | """ 15 
| Fixtures can invoke each other (producing cartesian products of parameters) 16 | """ 17 | yield request.param 18 | 19 | 20 | def test_fixtureception(letters_fixture, numbers_fixture): 21 | """ 22 | Print out our combined fixture "product" 23 | """ 24 | coordinate = letters_fixture + str(numbers_fixture) 25 | 26 | print('\nRunning test_fixtureception with "{}"'.format(coordinate)) 27 | -------------------------------------------------------------------------------- /tests/10_advanced_params-ception_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | @pytest.fixture(params=[1, 2, 3, 4]) 5 | def numbers_fixture(request): 6 | """ 7 | Fixtures can cause tests to be run multiple times (once per parameter) 8 | """ 9 | yield request.param 10 | 11 | 12 | @pytest.fixture(params=["a", "b", "c", "d"]) 13 | def coordinates_fixture(request, numbers_fixture): 14 | """ 15 | Fixtures can invoke each other (producing cartesian products of params) 16 | """ 17 | coordinate = request.param + str(numbers_fixture) 18 | yield coordinate 19 | # # Uncomment for fun 80s board game reference (and fixture filtering) 20 | # if coordinate == 'b2': 21 | # print("(Don't sink my Battleship!)") 22 | # pytest.skip() 23 | 24 | 25 | def test_advanced_fixtureception(coordinates_fixture): 26 | print( 27 | '\nRunning test_advanced_fixtureception with "{}"'.format(coordinates_fixture) 28 | ) 29 | -------------------------------------------------------------------------------- /tests/11_mark_test.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | @pytest.mark.db 5 | def test_fake_query(): 6 | """ 7 | pytest.mark can be used to "tag" tests for later reference 8 | """ 9 | assert True 10 | 11 | 12 | @pytest.mark.slow 13 | def test_fake_stats_function(): 14 | assert True 15 | 16 | 17 | @pytest.mark.db 18 | @pytest.mark.slow 19 | def test_fake_multi_join_query(): 20 | """ 21 | Test 
cases can have multiple marks assigned 22 | """ 23 | assert True 24 | 25 | 26 | @pytest.mark.db 27 | def asserty_callable_thing(): 28 | """ 29 | PyTest still only runs "tests", not just any callable with a mark 30 | """ 31 | print("This isn't even a test! And it fails!") 32 | assert False 33 | 34 | 35 | """ 36 | Tags can be used to target (or omit) tests in the runner: 37 | 38 | # Run all three tests in this module (verbosely) 39 | pytest -v 11_mark_test.py 40 | 41 | # Run one specific test by Node name: 42 | pytest -v 11_mark_test.py::test_fake_query 43 | 44 | # Run all tests with "query" in their names 45 | pytest -v -k query 46 | 47 | # Run all tests with "stats" or "join" in their names 48 | pytest -v -k "stats or join" 49 | 50 | # Run all tests marked with "db" 51 | pytest -v -m db 52 | 53 | # Run all tests marked with "db", but not with "slow" 54 | pytest -v -m "db and not slow" 55 | """ 56 | -------------------------------------------------------------------------------- /tests/12_special_marks.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | dev_s3_credentials = None 4 | 5 | 6 | @pytest.mark.skip 7 | def test_broken_feature(): 8 | # Always skipped! 9 | assert False 10 | 11 | 12 | @pytest.mark.skipif(not dev_s3_credentials, reason="S3 creds not found!") 13 | def test_s3_api(): 14 | # Skipped if a certain condition is met 15 | assert True 16 | 17 | 18 | @pytest.mark.xfail 19 | def test_where_failure_is_acceptable(): 20 | # Allows failed assertions (returns "XPASS" if there are no failures) 21 | assert True 22 | 23 | 24 | @pytest.mark.xfail 25 | def test_where_failure_is_accepted(): 26 | # Allows failed assertions (returns "xfail" on failure) 27 | assert False 28 | 29 | 30 | @pytest.mark.xfail(strict=True) 31 | def test_where_failure_is_mandatory(): 32 | # Requires failed assertions! (returns "xfail" on failure; FAILs on pass!) 
33 | assert True 34 | 35 | 36 | # # Uncomment to skip everything in the module 37 | # pytest.skip("This whole Module is problematic at best!", allow_module_level=True) 38 | -------------------------------------------------------------------------------- /tests/13_mark_parametrization.py: -------------------------------------------------------------------------------- 1 | import pytest 2 | 3 | 4 | @pytest.mark.parametrize("number", [1, 2, 3, 4, 5]) 5 | def test_numbers(number): 6 | """ 7 | mark can be used to apply "inline" parameterization, without a fixture 8 | """ 9 | print("\nRunning test_numbers with {}".format(number)) 10 | 11 | 12 | @pytest.mark.parametrize("x, y", [(1, 1), (1, 2), (2, 2)]) 13 | def test_dimensions(x, y): 14 | """ 15 | mark.parametrize can even unpack tuples into named parameters 16 | """ 17 | print("\nRunning test_coordinates with {}x{}".format(x, y)) 18 | 19 | @pytest.mark.parametrize("mode", [1, 2, 3], ids=['foo', 'bar', 'baz']) 20 | def test_modes(mode): 21 | """ 22 | The `ids` kwarg can be used to rename the parameters 23 | """ 24 | print("\nRunning test_modes with {}".format(mode)) 25 | -------------------------------------------------------------------------------- /tests/14_class_based_test.py: -------------------------------------------------------------------------------- 1 | class TestSimpleClass(object): 2 | """ 3 | Classes can still be used to organize collections of test cases, with 4 | each test being a Method on the Class, rather than a standalone function. 
5 | """ 6 | 7 | x = 1 8 | y = 2 9 | 10 | def regular_method(self): 11 | print("\n(This is a regular, non-test-case method.)") 12 | 13 | def test_two_checking_method(self): 14 | print("\nRunning TestSimpleClass.test_two_checking_method") 15 | assert self.x != 2 16 | assert self.y == 2 17 | -------------------------------------------------------------------------------- /tests/15_advanced_class_test.py: -------------------------------------------------------------------------------- 1 | from pytest import fixture, mark 2 | 3 | 4 | @fixture 5 | def class_fixture(): 6 | print("\n (class_fixture)") 7 | 8 | 9 | @fixture 10 | def bonus_fixture(): 11 | print("\n (bonus_fixture)") 12 | 13 | 14 | @mark.usefixtures("class_fixture") 15 | class TestIntermediateClass(object): 16 | @fixture(autouse=True) 17 | def method_fixture(self): 18 | print("\n(autouse method_fixture)") 19 | 20 | def test1(self): 21 | print("\n Running TestIntermediateClass.test1") 22 | assert True 23 | 24 | def test2(self, bonus_fixture): 25 | print("\n Running TestIntermediateClass.test2") 26 | assert True 27 | -------------------------------------------------------------------------------- /tests/16_scoped_and_meta_fixtures_test.py: -------------------------------------------------------------------------------- 1 | from pytest import fixture, mark 2 | from other_code.services import ExpensiveClass 3 | 4 | 5 | @fixture(scope="module", autouse=True) 6 | def scoped_fixture(): 7 | """ 8 | Scoping affects how often fixtures are (re)initialized 9 | """ 10 | print("\n(Begin Module-scoped fixture)") 11 | yield ExpensiveClass() 12 | print("\n(End Module-scoped fixture)") 13 | 14 | 15 | @mark.parametrize("x", range(1, 51)) 16 | def test_scoped_fixtures(x): 17 | """ 18 | A (hopefully fast!) test, to be run with fifty different parameters... 
19 | """ 20 | print("\n Running test_scoped_fixture") 21 | -------------------------------------------------------------------------------- /tests/17_marked_meta_fixtures.py: -------------------------------------------------------------------------------- 1 | from pytest import fixture, mark 2 | 3 | 4 | @fixture(scope="module") 5 | def meta_fixture(): 6 | print("\n*** begin meta_fixture ***") 7 | yield 8 | print("\n*** end meta_fixture ***") 9 | 10 | 11 | # Apply this fixture to everything in this module! 12 | pytestmark = mark.usefixtures("meta_fixture") 13 | 14 | 15 | def test_with_meta_fixtures_a(): 16 | print("\n Running test_with_meta_fixtures_a") 17 | 18 | 19 | def test_with_meta_fixtures_b(): 20 | print("\n Running test_with_meta_fixtures_b") 21 | 22 | 23 | # How could we tell meta_fixture to only run once, "around" our tests? 24 | # (See 16_scoped_and_meta_fixtures_test.py for a hint...) 25 | -------------------------------------------------------------------------------- /tests/18_the_mocker_fixture.py: -------------------------------------------------------------------------------- 1 | from other_code.services import count_service 2 | 3 | 4 | def test_simple_mocking(mocker): 5 | """ 6 | pytest-mock provides a fixture for easy, self-cleaning mocking 7 | """ 8 | mock_db_service = mocker.patch("other_code.services.db_service", autospec=True) 9 | 10 | mock_data = [(0, "fake row", 0.0)] 11 | 12 | mock_db_service.return_value = mock_data 13 | 14 | print("\n(Calling count_service with the DB mocked out...)") 15 | 16 | c = count_service("foo") 17 | 18 | mock_db_service.assert_called_with("foo") 19 | 20 | assert c == 1 21 | -------------------------------------------------------------------------------- /tests/19_re_usable_mock_test.py: -------------------------------------------------------------------------------- 1 | from other_code.services import count_service 2 | from pytest import fixture, raises 3 | 4 | 5 | @fixture 6 | def re_usable_db_mocker(mocker): 7 | 
""" 8 | Fixtures can invoke mocker to yield "re-usable" mocks 9 | """ 10 | mock_db_service = mocker.patch("other_code.services.db_service", autospec=True) 11 | mock_db_service.return_value = [(0, "fake row", 0.0)] 12 | return mock_db_service 13 | 14 | 15 | def test_re_usable_mocker(re_usable_db_mocker): 16 | c = count_service("foo") 17 | re_usable_db_mocker.assert_called_with("foo") 18 | assert c == 1 19 | 20 | 21 | def test_mocker_with_exception(re_usable_db_mocker): 22 | re_usable_db_mocker.side_effect = Exception("Oh noes!") 23 | 24 | with raises(Exception): 25 | count_service("foo") 26 | -------------------------------------------------------------------------------- /tests/__init__.py: -------------------------------------------------------------------------------- 1 | from __future__ import print_function 2 | -------------------------------------------------------------------------------- /tests/other_stuff.py: -------------------------------------------------------------------------------- 1 | def test_in_non_test_module(): 2 | """ 3 | PyTest will recognize this function as a test... 4 | But will not collect tests from this file (by default) 5 | """ 6 | print("\nRunning test_in_non_test_module...") 7 | -------------------------------------------------------------------------------- /tox.ini: -------------------------------------------------------------------------------- 1 | # tox (https://tox.readthedocs.io/) is a tool for running tests 2 | # in multiple virtualenvs. This configuration file will run the 3 | # test suite on all supported python versions. To use it, "pip install tox" 4 | # and then run "tox" from this directory. 
5 | 6 | [tox] 7 | envlist = py27, py36 8 | skipsdist = True 9 | 10 | [testenv] 11 | deps = 12 | pytest 13 | pytest-mock 14 | commands = 15 | pytest 16 | -------------------------------------------------------------------------------- /tutorials/00_empty_test.md: -------------------------------------------------------------------------------- 1 | ## 0: An Empty Test 2 | 3 | The first test is pretty boring: It is a module with "test" in the name, containing a callable (in this case, a plain old function) which also has "test" in the name, that doesn't really do anything. 4 | 5 | [tests/00_empty_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/00_empty_test.py) 6 | 7 | ``` 8 | pytest -vs tests/00_empty_test.py 9 | ``` 10 | 11 | This is about as minimal as a PyTest test can get - It doesn't assert anything. It doesn't really do anything interesting at all! But since it also doesn't raise any Exceptions, it results in a passing test. 12 | 13 | Among other things, this demonstrates that we can use PyTest tests to simply "exercise" our code, even if we don't assert anything specific about the behavior (beyond it not being "broken"), in the sense that it does not raise any unhandled Exceptions or Errors. 14 | 15 | This is also an example of how PyTest decides what is and is not a test: By default, it looks for callables (such as functions or methods) whose names begin with "test". And earlier, when we ran it without any arguments, it searched for tests in all the modules (python files) whose name started with "test_" or ended with "_test", by default. 16 | 17 | If you take a look into the file, you'll see that PyTest _did_ run the `test_empty` function, since its name starts with "test", but it chose not to run the `empty_test` function, since it contains "test", but does not start with it. 
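As a rough sketch of the idea (this is a toy model, not PyTest's actual collector!), the default rule for function names boils down to a simple prefix check:

```python
def is_collected(name):
    # PyTest's default: collect callables whose names *start* with "test"
    return name.startswith("test")

# "test_empty" starts with "test", so it gets collected;
# "empty_test" merely contains "test", so it does not.
names = ["test_empty", "empty_test"]
collected = [name for name in names if is_collected(name)]
print(collected)  # ['test_empty']
```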
18 | 19 | All of these "test discovery" behaviors can be changed, if you want, but I recommend at least starting with PyTest's defaults, as they tend to be pretty reasonable. 20 | 21 | While it's a start, this test doesn't really prove much - let's fix that! 22 | 23 | ## Up Next: 24 | 25 | [Basic Tests and Assertions](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/01_basic_test.md) 26 | -------------------------------------------------------------------------------- /tutorials/01_basic_test.md: -------------------------------------------------------------------------------- 1 | ## 1: A Basic Test 2 | 3 | Let's make a proper test that actually asserts something: 4 | 5 | [tests/01_basic_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/01_basic_test.py) 6 | 7 | ``` 8 | pytest -vs tests/01_basic_test.py 9 | ``` 10 | 11 | PyTest doesn't come with a ton of fancy assertion methods, because it's doing a lot of work behind the scenes to make Python's humble `assert` statement more informative. 12 | 13 | For example, try changing `DATA_SET_B` to `DATA_SET_C` in the assertion to make this test fail, and run it again - And without any of the flags to list tests verbosely or print their output: 14 | 15 | ``` 16 | pytest tests/01_basic_test.py 17 | ``` 18 | 19 | Instead of just raising "AssertionError", PyTest will show you the line where the failure occurred, in context with the rest of your code, and even unpack the contents of the two variables for you, to indicate the difference. 20 | 21 | In fact, it shows you the most relevant part of the diff by default - You can run the command with `-v` to see more of the difference between the two objects, or `-vv` to see all the available information that PyTest has about the failure. Nifty!
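To appreciate how much work PyTest is doing here, compare what plain Python tells you when the same kind of comparison fails outside of a test runner (using trimmed-down stand-ins for the real data sets):

```python
DATA_SET_A = {"Foo": "Bar", "Baz": [5, 7, 11]}
DATA_SET_C = {"Foo": "Bar", "Baz": [3, 5, 7]}

try:
    assert DATA_SET_A == DATA_SET_C
except AssertionError as error:
    plain_message = str(error)

# A bare assert raises AssertionError with no message at all -
# PyTest is the one reconstructing that helpful "diff" for you.
print(repr(plain_message))  # ''
```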
22 | 23 | ## Up Next: 24 | 25 | [Special Assertions](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/02_special_assertions.md) 26 | -------------------------------------------------------------------------------- /tutorials/02_special_assertions.md: -------------------------------------------------------------------------------- 1 | ## 2: Special Assertions 2 | 3 | Not everything can be expressed as a simple assertion - but fear not, PyTest provides: 4 | 5 | [tests/02_special_assertions_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/02_special_assertions_test.py) 6 | 7 | ``` 8 | pytest tests/02_special_assertions_test.py 9 | ``` 10 | Two of these tests raise exceptions on purpose - we can use the `pytest.raises` context manager to assert that they happen (and to handle the exception, so it doesn't show up as a failure). For example, if you change line 9 to `x = 1/1`, PyTest will now fail the test, since the expected Exception didn't happen. (And it will explain this in detail in the console!) 11 | 12 | In `test_keyerror_details`, we also assign the exception to a variable using `as`, so that we can refer to it after the `pytest.raises` block - we can inspect it in more detail, or even `assert` that it has qualities we're expecting. Very helpful when you want to test for specific exception-raising behavior! 13 | 14 | Finally, in `test_approximate_matches`, we use `pytest.approx` to help assert that our two values are "approximately" equal, even if it's not exact due to fun with floating point math. (We can also adjust how "close" we want the match to be before it fails the test - For more details, check out the [pytest.approx documentation](https://docs.pytest.org/en/latest/reference.html#pytest-approx).)
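If you're curious why `approx` is needed at all, the underlying tolerance check can be sketched in plain Python - note that this `roughly_equal` helper is just an illustration of the concept, and much simpler than what `pytest.approx` actually does:

```python
def roughly_equal(a, b, rel=1e-6, abs_tol=1e-12):
    # Treat the values as "equal" if they differ by no more than a
    # small relative (or absolute) tolerance
    return abs(a - b) <= max(rel * abs(b), abs_tol)

print(0.1 + 0.2 == 0.3)               # False - exact equality fails!
print(roughly_equal(0.1 + 0.2, 0.3))  # True - "approximately" equal
```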
15 | 16 | ### Up Next: 17 | 18 | [Review of the Basics](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/03_reviewing_the_basics.md) 19 | -------------------------------------------------------------------------------- /tutorials/03_reviewing_the_basics.md: -------------------------------------------------------------------------------- 1 | ### 3: Reviewing the Basics 2 | 3 | * PyTest cases can be as simple as a function whose name starts with "test", in a file whose name starts with "test". 4 | 5 | * (PyTest will also find and run xUnit-style tests created using the standard `unittest` module, allowing you to start using PyTest alongside existing, legacy tests.) 6 | 7 | * The test-finding behavior has reasonable defaults, but is [extremely configurable!](https://docs.pytest.org/en/latest/goodpractices.html#conventions-for-python-test-discovery) 8 | 9 | * A PyTest case will pass if: 10 | * Its assertions are True 11 | * (Or it doesn't have any assertions!) 12 | * And if it doesn't raise any unhandled Exceptions. 13 | 14 | * (PyTest can be used to "exercise" code, and will report errors, even without any actual assertions!) 15 | 16 | * PyTest uses the basic Python `assert` keyword, but will introspect into your code and "unpack" useful info about why the assertion failed. 17 | 18 | * (If your PyTest case calls other code that makes assertions, they will be honored as well, in the sense that any failed assertion resulting from your test will cause the test to be reported as "failed".) 19 | 20 | * However, assertions that aren't local (e.g. not located inside your test function) won't be "unpacked" and explained in detail. If your tests call other code that performs assertions, you should make those "external" assertions as clear as possible: Try to limit each assert to one specific check, and provide an error message as a second argument, so that the failure is easier to understand. 
21 | 22 | * For example, if you wanted to assert that x is greater than zero, and divisible by 2, in a function that is called by one of your test cases (but is not itself a Python test case function!) consider something like: 23 | 24 | ``` 25 | assert (x > 0), "X should be > 0, but is {}".format(x) 26 | assert not (x % 2), "X should be divisible by 2, but is {}".format(x) 27 | ``` 28 | 29 | (But if possible, do all your assertions inside test cases, so that PyTest can document their failure reasons and context for you!) 30 | 31 | * PyTest provides features for "expecting" Exceptions, and matching approximately similar values, similar to [unittest.TestCase](https://docs.python.org/2/library/unittest.html#basic-example): 32 | 33 | * [pytest.raises](https://docs.pytest.org/en/latest/reference.html#pytest-raises) 34 | 35 | * [pytest.approx](https://docs.pytest.org/en/latest/reference.html#pytest-approx) 36 | 37 | ### Up Next: 38 | 39 | [Intro to Fixtures](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/04_intro_to_fixtures.md) 40 | -------------------------------------------------------------------------------- /tutorials/04_intro_to_fixtures.md: -------------------------------------------------------------------------------- 1 | ## 4: Fixtures 2 | 3 | Fixtures are a core part of what makes PyTest really powerful - They can fill the same role as `setUp()` and `tearDown()` methods in the old xUnit style `unittest.TestCase` tests, but can also go far beyond that. And you don't even need to create Classes to use them! 4 | 5 | We create our `simple_fixture` simply by defining a function with the `@pytest.fixture` decorator - This example just prints some text, but you could imagine it doing something more interesting, like setting up test data, or initializing objects to be tested... 6 | 7 | Then we make another test, but this time we give it a single argument whose name matches the name of our `simple_fixture`, above.
8 | 9 | PyTest is responsible for "calling" our test functions, and deciding if they were successful, but what will it do if a test function has a named argument? 10 | 11 | [tests/03_simple_fixture_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/03_simple_fixture_test.py) 12 | 13 | ``` 14 | pytest -vs tests/03_simple_fixture_test.py 15 | ``` 16 | 17 | Now, you might be asking, "What the heck just happened?" 18 | 19 | The short answer is "dependency injection", but the longer answer is that, when PyTest calls our test functions, it's also attempting to "fill in" their named arguments using `fixtures` with matching names. And as we can see in the detailed output, it is essentially calling our fixture function first, and then our test. 20 | 21 | Another way to express this is that PyTest test case arguments indicate "dependencies" on fixtures, which PyTest will prepare in advance. And it is calling the fixture function multiple times - By default, it calls the fixture function once for each test case that depends on it. (This behavior is configurable as well! But we'll get to that later.) 22 | 23 | (You might be wondering what happens if you add an argument whose name doesn't correspond to a Fixture: The answer is "nothing good". For example, try changing the argument name to `not_a_fixture` on one of the tests, and run them again...) 24 | 25 | So far, our fixture hasn't done much for us: Let's change that.
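The basic arrangement can be sketched like this (a minimal, hypothetical version of the pattern - see the linked file for the real thing):

```python
import pytest

@pytest.fixture
def simple_fixture():
    # imagine this doing something more interesting,
    # like setting up test data...
    print("(simple_fixture is being run!)")

def test_with_fixture(simple_fixture):
    # PyTest sees the argument name, finds the matching fixture,
    # and runs it before calling this test
    print("(the test is being run!)")
    assert True
```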
26 | 27 | ### Up Next: 28 | 29 | [Fixture Return Values](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/05_fixture_return_values.md) -------------------------------------------------------------------------------- /tutorials/05_fixture_return_values.md: -------------------------------------------------------------------------------- 1 | ## 5: Fixture Returns 2 | 3 | Beyond simply printing a message, a fixture can also return data, just like a regular function: 4 | 5 | [tests/04_fixture_returns_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/04_fixture_returns_test.py) 6 | 7 | ``` 8 | pytest -vs tests/04_fixture_returns_test.py 9 | ``` 10 | 11 | The interesting part is that when PyTest runs our test, it not only runs the fixture function first, it also captures the output (in this case, the return value of `one_fixture`), and passes it into our test function as the `one_fixture` argument! 12 | 13 | So we can make assertions about what our fixture is returning, or use it in any other way we'd like during our test. (And by default, PyTest runs our fixtures for each test that depends on them, so we are guaranteed that each test is getting a "fresh" copy of whatever it is that our fixture returns: This doesn't matter much for fixtures that return static data, but imagine a fixture that returns a mutable data structure that gets altered during a test!) 14 | 15 | This helps take care of test case "setUp" scenarios, but what about "tearDown"? (If you aren't familiar with xUnit, the "setUp" method is run before each test, and the "tearDown" method is called afterwards, and is typically used to clean up after a test.)
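As a sketch (hypothetical test name; the real file follows the same shape):

```python
import pytest

@pytest.fixture
def one_fixture():
    # whatever we return here is what the test receives as its argument
    return 1

def test_fixture_returns(one_fixture):
    # "one_fixture" is now the fixture's return value, not the function itself
    print("one_fixture returned:", one_fixture)
    assert one_fixture == 1
```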
16 | 17 | ### Up Next: 18 | 19 | [Yield Fixtures](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/06_yield_fixtures.md) -------------------------------------------------------------------------------- /tutorials/06_yield_fixtures.md: -------------------------------------------------------------------------------- 1 | ## 6: Yield Fixtures 2 | 3 | Here's a more complicated fixture that uses the `yield` keyword - You may be more accustomed to seeing it used in generator functions, which are typically called repeatedly (i.e. iterated) to deliver their values. 4 | 5 | If this seems confusing (or if you aren't familiar with `yield`), don't worry: This is a little different, but the important thing to understand is that `yield` is a lot like `return`, except for one interesting difference... 6 | 7 | [tests/05_yield_fixture_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/05_yield_fixture_test.py) 8 | 9 | ``` 10 | pytest -vs tests/05_yield_fixture_test.py 11 | ``` 12 | 13 | Like last time, our fixture ran before the test case that depended on it... Up until the point that we called `yield`. Then our test was run, receiving the "yielded" value as an argument... And then, _after_ the test finished, our fixture picked up where it left off, and ran the rest of the code (after the `yield` call). 14 | 15 | This allows us to do both pre-test and post-test actions, with a minimum of code! But there are a few things to keep in mind: 16 | 17 | * Unlike typical generators, our yield fixtures should never yield more than once. (And PyTest enforces this - try adding a second yield and see what happens: Spoiler Alert! As with many of our hypothetical questions, the result is an unusable test).
18 | 19 | * (If this is messing with your own personal concepts of generators, try not to read too much into it - Fixtures _can_ be generators, and PyTest will use them accordingly, but it expects them to yield exactly once: It will advance the generator once before the test case, and once more (running the "cleanup" code after the `yield`) after the test case completes.) 20 | 21 | * There is a corner case to be aware of here: If something goes wrong _inside_ our fixture, such that an unhandled exception is thrown before we call `yield`, we'll never get to the post-yield code... Kind of understandable, if you think about it! 22 | 23 | * This may not be the end of the world - it also means we won't actually run the test cases that depend on our broken fixture, so perhaps the post-test cleanup won't be as vital. 24 | 25 | * (This doesn't totally kill our test run, either - The tests that depend on the broken fixture will fail during "setup", but PyTest will carry on with the rest of the run.) 26 | 27 | * (If this seems like it might be problematic, depending on what the fixture was trying to do before it failed, don't worry: There are some more thorough cleanup options, which we'll discuss later on.) 28 | 29 | ### Up Next: 30 | 31 | [Request Fixtures](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/07_request_fixtures.md) -------------------------------------------------------------------------------- /tutorials/07_request_fixtures.md: -------------------------------------------------------------------------------- 1 | ## 7: The "request" fixture 2 | 3 | Fixtures are very powerful, not only because PyTest can run them automatically, but because they can be "aware" of the context in which they're being used! 4 | 5 | (And also, as we're about to see, Fixtures can depend on other Fixtures, allowing for some really interesting behavior...)
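For a taste of that dependency idea (a hypothetical sketch): a fixture can depend on another fixture the same way a test case does, simply by naming it as an argument:

```python
import pytest

@pytest.fixture
def base_fixture():
    # some shared starting state
    return {"name": "base"}

@pytest.fixture
def derived_fixture(base_fixture):
    # this fixture receives base_fixture's return value, and builds on it
    base_fixture["extended"] = True
    return base_fixture

def test_derived(derived_fixture):
    assert derived_fixture["extended"] is True
```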
6 | 7 | In this example, we'll write a fixture which leverages the built-in `request` fixture (aka a "Plugin", a standard fixture that is globally available to all PyTest tests) to learn more about how it's being called: 8 | 9 | [tests/06_request_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/06_request_test.py) 10 | 11 | ``` 12 | pytest -vs tests/06_request_test.py 13 | ``` 14 | 15 | Among other things, our fixture can tell that it's being invoked at function-level scope (e.g. it is being referenced directly by a test case function), it knows which "node" it's currently running on (in a dependency tree sense: It knows which test case is calling it), and it knows which Module it's being run in, which in this case is the `06_request_test.py` file. 16 | 17 | In addition to providing context, the `request` fixture can also be used to influence PyTest's behavior as it runs our tests... 18 | 19 | ### Up Next: 20 | 21 | [Request "finalizer" Callbacks](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/08_request_finalizers.md) -------------------------------------------------------------------------------- /tutorials/08_request_finalizers.md: -------------------------------------------------------------------------------- 1 | 2 | ## 8: Request "finalizer" Callbacks 3 | 4 | Sometimes we want to run a "cleanup" function after testing is complete: We've already covered a very easy way to do this using `yield` [inside a fixture](), but noted that it's not the safest option, if something goes wrong inside our fixture... 5 | 6 | Fortunately, PyTest has a `request` plugin (a built-in global fixture) that, among other things, can be used to add a "finalizer", a function which is guaranteed to be called after the fixture (and the test(s) that depend on it) are run... 
Even in the worst case scenario, where our fixture itself fails, and raises an unhandled exception: 7 | 8 | [tests/07_request_finalizer_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/07_request_finalizer_test.py) 9 | 10 | ``` 11 | pytest -vs tests/07_request_finalizer_test.py 12 | ``` 13 | 14 | As usual, we can see that our fixture runs first (including a "risky" function call), followed by our test case, and finally our safe_cleanup function. One advantage of this approach is that we can re-use a shared cleanup function, but the main benefit is that even if our fixture fails to initialize, our finalizer "cleanup" function still gets run! 15 | 16 | To really see the finalizer in action, uncomment line 11 in `07_request_finalizer_test.py` (i.e. the commented-out "raise Exception" call), and re-run the test using the command above. 17 | 18 | That "risky" function didn't work out - it derailed our fixture, and our test case never even ran! But despite all that, our `safe_cleanup` function still got called. 19 | 20 | And in a real test, with a fixture that sets up something complicated or expensive (and might fail _after_ it has made some kind of a mess), guaranteed cleanup could be a really important distinction! 21 | 22 | ### Up Next: 23 | 24 | [Intro to Parameters](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/09_intro_to_parameters.md) -------------------------------------------------------------------------------- /tutorials/09_intro_to_parameters.md: -------------------------------------------------------------------------------- 1 | ## 9: Intro to Parameters 2 | 3 | When we decorate a callable as a Fixture, we can also give it some additional properties, like parameters, allowing us to do parameterized testing - And the `request` plugin (built-in fixture) we've covered previously will come in handy here as well. 4 | 5 | In testing, we use parameterization to refactor and "automate" similar tests.
Especially in unit testing, you may find yourself in a situation where you want to run the same code, with the same set of assertions (essentially, the same "test"), with a number of different inputs and expected outputs. 6 | 7 | It's possible to simply include those inputs and outputs (a.k.a. parameters) in our test case... But at the expense of making that test more complicated, and harder to understand when it fails: We'll see a single test case passing or failing, regardless of how many of those cases were valid. And it may not be clear which set of parameters was the problem, without digging into the code, turning on more debugging, etc... 8 | 9 | So let's look at a better approach: 10 | 11 | [tests/08_params_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/08_params_test.py) 12 | 13 | ``` 14 | pytest -vs tests/08_params_test.py 15 | ``` 16 | 17 | We only have one test case here, with one fixture, but that fixture includes five parameters, "a" through "e". Because our test case depends on a parameterized fixture, PyTest will run it repeatedly, once for each parameter, and it treats each of those runs as a distinct "test" that can pass or fail independently: We can clearly see how many of those parameters passed or failed, and it even labels those tests with both the test case name and the parameter being used. 18 | 19 | (This is an interesting philosophical point: When we saw PyTest referring to "nodes" earlier, they seemed to correspond to our test functions... But it's more accurate to say that our test functions are merely "specifications" or "requests" that tell PyTest what to do, and the resulting nodes are the _real_ Tests.) 20 | 21 | PyTest will run our test cases (and their fixture) once per parameter: In our fixture, we're using the `request` plugin to access the current parameter value, as `request.param`, and in this example we're simply yielding that value.
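In sketch form (hypothetical test name; the five-letter `params` list matches the one described above):

```python
import pytest

@pytest.fixture(params=["a", "b", "c", "d", "e"])
def letters_fixture(request):
    # request.param holds the parameter value for the current run
    yield request.param

def test_with_a_parameterized_fixture(letters_fixture):
    # PyTest runs this once per parameter: five separate tests
    print("Running with:", letters_fixture)
    assert letters_fixture.isalpha()
```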
22 | 23 | And so our single test case is called five times, once for each parameter value, with that value being passed in as the named argument corresponding to `letters_fixture`. 24 | 25 | It doesn't have to be this direct - Our fixture might use the parameter to customize an object, then yield that object to our test. (Or even yield a tuple of values that are derived from the parameter). 26 | 27 | (There is also a second parameterized fixture, `mode`, which uses a second keyword argument, `ids`, which allows the names of each parameter label to be overridden. For example, the parameters we need are 1, 2, and 3, but we would prefer to see them labeled as "foo", "bar", and "baz" on the individual tests.) 28 | 29 | And this behavior gets really interesting (and powerful) when we consider that fixtures can depend on other fixtures... 30 | 31 | ### Up Next: 32 | 33 | [Parameter-ception!](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/10_parameter-ception.md) -------------------------------------------------------------------------------- /tutorials/10_parameter-ception.md: -------------------------------------------------------------------------------- 1 | ## 10: Parameter-ception! 2 | 3 | Python includes an amazing set of Iteration Tools, including functions that make it simple to generate all possible combinations and permutations of a set of data - We're about to see an interesting example of this kind of behavior, using multiple parameterized fixtures. 
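(For reference, the standard library's `itertools.product` function computes exactly this kind of "every combination" result:)

```python
from itertools import product

colors = ["red", "green"]
sizes = ["S", "M", "L"]

# every (color, size) pairing: 2 x 3 = 6 combinations
combos = list(product(colors, sizes))
print(combos)
# [('red', 'S'), ('red', 'M'), ('red', 'L'),
#  ('green', 'S'), ('green', 'M'), ('green', 'L')]
```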
4 | 5 | It's a lot easier to demonstrate than explain, so let's start with that: Here's another single test case, which depends on two fixtures - And it's worth noting that each of those fixtures has its own set of parameters: 6 | 7 | [tests/09_params-ception_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/09_params-ception_test.py) 8 | 9 | ``` 10 | pytest -vs tests/09_params-ception_test.py 11 | ``` 12 | 13 | How did two sets of 4 parameters turn into 16 tests? The short answer is that we're experiencing the [Cartesian Product](https://en.wikipedia.org/wiki/Cartesian_product) of our fixture parameters. 14 | 15 | But the less set-theory-intensive answer is that our test case depends on `letters_fixture`, which causes PyTest to produce a test for each letter parameter... And it also depends on `numbers_fixture`, which in turn wants to repeat each test with each of its own number parameters. 16 | 17 | This is evident from the order in which the tests are run, and (thanks to PyTest!) from the labels of those tests: We can see that our test is being run first with our `letters_fixture`, and each of its parameters (starting with "a"), and those runs are being further "multiplied" by the `numbers_fixture`, which is ensuring that its own tests are being repeated for each of its own parameters (starting with "1"). 18 | 19 | As a result, our single test case function gets run as a total of sixteen tests, once for each combination of the four numbers and four letters (4 x 4 = 16). 20 | 21 | While we _could_ just make a single fixture that yielded each combination as a parameter ('a1', 'a2', 'a3', etc.), maintaining them as separate fixtures reduces the footprint of those parameters in our code, leaving it somewhat easier to read (in the sense that two lists of four takes up half the space of a full list of sixteen).
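The arrangement looks roughly like this (the fixture bodies are hypothetical, but the names match the ones discussed above):

```python
import pytest

@pytest.fixture(params=["a", "b", "c", "d"])
def letters_fixture(request):
    yield request.param

@pytest.fixture(params=[1, 2, 3, 4])
def numbers_fixture(request):
    yield request.param

def test_fixtureception(letters_fixture, numbers_fixture):
    # one test per (letter, number) combination: 4 x 4 = 16 in total
    coordinate = letters_fixture + str(numbers_fixture)
    print("Testing coordinate:", coordinate)
    assert len(coordinate) == 2
```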
22 | 23 | But these individual fixtures could also be reused and composed across different tests, allowing for a lot more flexibility, especially if the letters or numbers ever needed to be referenced on their own. And imagine if you needed a fixture that tested 234 unique combinations of letters and digits, and later decided to drop all the sets with vowels, or all the sets containing the number 3 - Wouldn't it be cleaner and easier to operate on two smaller subsets (the list of 26 letters, and the list of 9 digits) that combine to produce that data? 24 | 25 | But there's an even more elegant way to solve that particular problem, continuing to take advantage of the fact that fixtures can, in turn, depend on other fixtures... 26 | 27 | ### Up Next: 28 | 29 | [Advanced Parameter-ception!](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/11_advanced_parameter-ception.md) 30 | -------------------------------------------------------------------------------- /tutorials/11_advanced_parameter-ception.md: -------------------------------------------------------------------------------- 1 | ## 11: Advanced Parameter-ception! 2 | 3 | Let's try that again, but with our test case depending on only one fixture (which, in turn, depends on a second fixture): 4 | 5 | [tests/10_advanced_params-ception_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/10_advanced_params-ception_test.py) 6 | 7 | 8 | ``` 9 | pytest -vs tests/10_advanced_params-ception_test.py 10 | ``` 11 | 12 | The end result is... almost identical, even though the approach is different. 13 | 14 | Since our parameterized `coordinate_fixture` depends on another parameterized fixture, `numbers_fixture`, we still get the Cartesian Product of both sets of parameters, even though the test case itself only depends on one of them.
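In sketch form (hypothetical bodies, reusing the fixture names from above), the nesting looks like:

```python
import pytest

@pytest.fixture(params=[1, 2, 3, 4])
def numbers_fixture(request):
    yield request.param

@pytest.fixture(params=["a", "b", "c", "d"])
def coordinate_fixture(request, numbers_fixture):
    # parameterized itself, AND dependent on another parameterized fixture:
    # PyTest still produces every combination of the two parameter sets
    yield request.param + str(numbers_fixture)

def test_advanced_fixtureception(coordinate_fixture):
    # only one named dependency, but this still runs 16 times
    print("Testing coordinate:", coordinate_fixture)
    assert len(coordinate_fixture) == 2
```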
15 | 16 | And this relationship is still reflected in the names PyTest assigns to the tests being run: the letter from the "inner" fixture appears first, followed by the digit from the "outer" fixture it depends on. 17 | 18 | This can be a deceptively simple but powerful feature - You can essentially create "higher order fixtures" that take each other as dependencies (and arguments), using extra layers of fixtures to further customize behavior, all without touching the test case itself. 19 | 20 | For example, try uncommenting the commented section of code (lines 19 through 22) to enable a clever piece of filtering logic using the `pytest.skip` function, and run the test again... 21 | 22 | Now the `coordinate_fixture` applies some extra logic about which parameter combinations should be used, without affecting `numbers_fixture`, or the test case. 23 | 24 | This also demonstrates that PyTest responds to `skip` at any time - even if it's called inside of a fixture, before we've even gotten into a test case, allowing us to avoid any undesirable combinations. This is another hint at how PyTest works, internally: Our test cases are merely specifications for the tests that PyTest will be running, and we can conditionally skip a test at any point before it completes (thus passing or failing). 25 | 26 | In this example, we've added our filtering logic to one of our parameterized fixtures... But we could further abstract this into a `letters_fixture` and `numbers_fixture` which yield parameters, and a third, more purpose-specific `coordinates_fixture` that depends on those, adds the filtering logic, and has no parameters of its own, with the test case depending only on it. If we expect to use our two parameterized fixtures separately, that might be an even better way of organizing them.
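The skip-based filtering can be sketched like this (the condition below is invented for illustration - the actual logic in the file may differ):

```python
import pytest

@pytest.fixture(params=[1, 2, 3])
def numbers_fixture(request):
    yield request.param

@pytest.fixture(params=["a", "b", "c"])
def coordinate_fixture(request, numbers_fixture):
    # pytest.skip works even here, inside a fixture, before any test starts -
    # this (hypothetical) condition filters out one unwanted combination
    if request.param == "a" and numbers_fixture == 1:
        pytest.skip("skipping the 'a1' coordinate")
    yield request.param + str(numbers_fixture)

def test_filtered_coordinates(coordinate_fixture):
    assert coordinate_fixture != "a1"
```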
27 | 28 | Finally, this can also serve as an example of how fixture dependencies are not entirely unlike an `import` - If you add the `numbers_fixture` as an argument for (and dependency of) `test_advanced_fixtureception`, what do you expect might go wrong? 29 | 30 | While this seems like it could be problematic - The test case now depends on `numbers_fixture` twice, both directly with a named argument, and indirectly through `coordinate_fixture` - PyTest is surprisingly cool about it. 31 | 32 | You might expect this to result in `numbers_fixture` being invoked twice, doubling our resulting tests, or even multiplying them by another copy of our extra parameter... But PyTest recognizes that both dependencies refer to the same fixture, and that fixture (by default) is run once per test case, and so we get the same results as if our `numbers_fixture` was only referred to once. 33 | 34 | ### Up Next: 35 | 36 | [Reviewing Fixtures](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/12_reviewing_fixtures.md) 37 | -------------------------------------------------------------------------------- /tutorials/12_reviewing_fixtures.md: -------------------------------------------------------------------------------- 1 | ## 12: Reviewing Fixtures 2 | 3 | * Fixtures are PyTest's mechanism for controlling the "context" around your test cases 4 | 5 | * PyTest comes with a number of [built in fixtures](https://docs.pytest.org/en/latest/reference.html#fixtures), though you will likely want to create your own. 6 | 7 | * Fixtures are typically functions with the @pytest.fixture decorator: 8 | 9 | * Fixtures can `return` a value, just like a normal function, or `yield` a value like a generator (though only one yield is allowed!), and any post-yield code will be run after the test case has passed or failed.
10 | 11 | * By default, a Fixture will be called (and potentially return or yield a value) once per Test Case it is associated with, though this can be reconfigured in a number of different ways (covered later, though see parameterization below...) 12 | 13 | * (If a fixture raises an unhandled exception, or otherwise fails, the test case won't be run. In the case of a `yield` fixture, this also means that the post-yield code won't be run, either.) 14 | 15 | * Fixtures can be defined "locally", in the same module as the test cases that use them, or "globally" in a `conftest.py` file. (In the event of multiple implementations of a given fixture name, PyTest will prefer the "most local" one, e.g. the fixture located closest to the test case in your file structure.) 16 | 17 | * Test cases can have a "dependency" on a fixture: 18 | 19 | * By having a keyword argument whose name matches the fixture 20 | 21 | * (Test cases will fail if their named arguments don't correspond to valid fixtures) 22 | 23 | * Fixtures with parameters (`params`) can cause multiple tests to be created out of a given test case - These "parameterized" tests can pass or fail independently, and are named after the parameters. 24 | 25 | * (It's worth noting that the test instances themselves are the "nodes" in the PyTest testing graph, not the test case functions that you have been writing - This distinction starts to become more apparent with parameters!) 26 | 27 | * Fixtures can get "very meta": 28 | 29 | * Fixtures can depend on each other... 30 | 31 | * Fixture dependencies are similar to `import`s, in that even if A depends on B and C, and B depends on C, then C will still only be imported (or in this case, instantiated) once for A. 32 | 33 | * If a test case depends on multiple fixtures that have parameters, the test case will be called with the full cartesian product of all the parameters (e.g. every combination of all the fixture parameters combined). 
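The "global" sharing via `conftest.py` mentioned above can be sketched as follows (hypothetical fixture; both files shown in one block for brevity):

```python
# conftest.py - fixtures defined here are automatically visible to all
# test modules in this directory (and its subdirectories), no import needed
import pytest

@pytest.fixture
def shared_fixture():
    return {"environment": "test"}

# some_test.py - uses the fixture simply by naming it as an argument
def test_shared_fixture(shared_fixture):
    assert shared_fixture["environment"] == "test"
```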
34 | 35 | Fixtures are very complicated, but ultimately very powerful - we've really only scratched the surface here, but hopefully this gives you an overview of what they can do for your tests. 36 | 37 | For now, let's move on to another powerful concept... 38 | 39 | ### Up Next: 40 | 41 | [Introduction to Test Marking](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/13_intro_to_test_marking.md) -------------------------------------------------------------------------------- /tutorials/13_intro_to_test_marking.md: -------------------------------------------------------------------------------- 1 | ## 13: Introduction to Test Marking 2 | 3 | PyTest includes a "mark" decorator, which can be used to tag tests and other objects for later reference (and for a more localized type of parameterization, though we'll get to that later). 4 | 5 | Here are some tests with marks already applied: 6 | 7 | [tests/11_mark_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/11_mark_test.py) 8 | 9 | ``` 10 | pytest -vs tests/11_mark_test.py 11 | ``` 12 | 13 | We ran three tests... Note that even though we marked `asserty_callable_thing` as if it were a test, PyTest still didn't actually run it - `mark` tags are only processed on callables that PyTest recognizes as test cases (and `asserty_callable_thing`'s name does not start with the word "test"). 14 | 15 | Admittedly, this code isn't all that interesting on its own. But the real value of `mark` is best demonstrated within the `pytest` test runner itself: 16 | 17 | We can tell PyTest to run a specific named test (a.k.a. "node") by name, by appending it to our module path with a "::" separator. For example: 19 | 20 | ``` 21 | pytest -v tests/11_mark_test.py::test_fake_query 22 | ``` 23 | 24 | (PyTest only collected and ran the named `test_fake_query` case, instead of all the available test cases in the file.)
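(For reference, applying marks is just a matter of stacking decorators - a hypothetical sketch, not the actual file contents:)

```python
import pytest

@pytest.mark.db
def test_fake_query():
    assert True

# marks can be stacked: this test matches both "db" and "slow"
@pytest.mark.db
@pytest.mark.slow
def test_fake_stats_query():
    assert True
```

(Note: recent versions of PyTest will warn about unregistered marks - custom marks like `db` and `slow` can be declared under the `markers` setting in a `pytest.ini` or `tox.ini` file.)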
24 | 25 | (You might be wondering: What if we explicitly ordered PyTest to run `asserty_callable_thing` with the node syntax? Even though it isn't something that PyTest normally recognizes as a test?) 26 | 27 | ``` 28 | pytest -v tests/11_mark_test.py::asserty_callable_thing 29 | ``` 30 | 31 | (No luck there - This is another example of how PyTest uses "node" to refer to the specific tests that it actually runs - Since `asserty_callable_thing` isn't recognized as a test case, no "nodes" are created for it, and even the explicit node syntax won't find anything with that name.) 32 | 33 | We can also do partial matches on node name, for example, running all tests with "query" in the name, using the `-k` operator: 34 | 35 | ``` 36 | pytest -v -k query 37 | ``` 38 | 39 | (PyTest only matches two of our three test cases, based on name.) 40 | 41 | Or we could use a simple `-k` expression to run all tests with "stats" or "join" in their names: 42 | 43 | ``` 44 | pytest -v -k "stats or join" 45 | ``` 46 | 47 | Or, and this is where `mark` comes in, we can use `-m` to run only the tests marked with the "db" tag: 48 | 49 | ``` 50 | pytest -v -m db 51 | ``` 52 | 53 | Or a `-m` expression to target tests marked with "db", but *not* also with the "slow" tag: 54 | 55 | ``` 56 | pytest -v -m "db and not slow" 57 | ``` 58 | 59 | While the `mark` decorator can be used to simply "tag" test cases for easier selection in the test runner, it has some more esoteric uses as well... 60 | 61 | ### Up Next: 62 | 63 | [Mark-based Parameters](https://github.com/pluralsight/intro-to-pytest/blob/master/tutorials/14_mark_based_parameters.md) 64 | -------------------------------------------------------------------------------- /tutorials/14_mark_based_parameters.md: -------------------------------------------------------------------------------- 1 | (I am still writing more detailed tutorials for the newer examples, but feel free to check them out anyway:) 2 | 3 | . 4 | 5 | . 6 | 7 | . 
8 | 9 | ## 14: Mark-based Parameters 10 | 11 | [tests/13_mark_parametrization.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/13_mark_parametrization.py) 12 | 13 | ``` 14 | pytest -vs tests/13_mark_parametrization.py 15 | ``` 16 | 17 | ## 15: Reviewing Marks 18 | 19 | ... 20 | 21 | ## 16: PyTesting with Classes 22 | 23 | [tests/14_class_based_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/14_class_based_test.py) 24 | 25 | ``` 26 | pytest -vs tests/14_class_based_test.py 27 | ``` 28 | 29 | ## 17: Advanced Class usage 30 | 31 | [tests/15_advanced_class_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/15_advanced_class_test.py) 32 | 33 | ``` 34 | pytest -vs tests/15_advanced_class_test.py 35 | ``` 36 | 37 | ## 18: Reviewing Class Tests 38 | 39 | ... 40 | 41 | ## 19: Fixture Scoping 42 | 43 | [tests/16_scoped_and_meta_fixtures_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/16_scoped_and_meta_fixtures_test.py) 44 | 45 | ``` 46 | pytest -vs tests/16_scoped_and_meta_fixtures_test.py 47 | ``` 48 | 49 | ## 20: Marked Meta-Fixtures 50 | 51 | [tests/17_marked_meta_fixtures.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/17_marked_meta_fixtures.py) 52 | 53 | ``` 54 | pytest -vs tests/17_marked_meta_fixtures.py 55 | ``` 56 | 57 | ## 21: The Mocker Fixture 58 | 59 | [tests/18_the_mocker_fixture.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/18_the_mocker_fixture.py) 60 | 61 | ``` 62 | pytest -vs tests/18_the_mocker_fixture.py 63 | ``` 64 | 65 | ## 22: Re-Usable mock fixtures 66 | 67 | [tests/19_re_usable_mock_test.py](https://github.com/pluralsight/intro-to-pytest/blob/master/tests/19_re_usable_mock_test.py) 68 | 69 | ``` 70 | pytest -vs tests/19_re_usable_mock_test.py 71 | ``` 72 | --------------------------------------------------------------------------------