Given no arguments, pytest looks at your current directory and all subdirectories for test files and runs the test code it finds
If you give pytest a filename, a directory name, or a list of those, it looks there instead of the current directory
Each directory listed on the command line is recursively traversed to look for test code
The part of pytest execution where pytest goes off and finds which tests to run is called test discovery
Naming conventions to keep your test code discoverable:
test files: test_<something>.py or <something>_test.py
test methods and functions: test_<something>
test classes: Test<Something>
There are ways to alter these discovery rules
see: Configuration
Running only one test:
pytest <directory>/<file.py>::<test_name>
e.g., pytest tasks/test_four.py::test_asdict
Using Options
Some useful pytest command line options
--collect-only
shows you which tests will be run with the given options and configuration
e.g., when used in conjunction with -k
-k
use an expression to find what test functions to run
e.g., pytest -k "asdict or defaults"
run the test_asdict() and test_defaults() tests
-m MARKEXPR
markers allow you to mark a subset of your test functions so that they can be run together
e.g., pytest -m run_these_please
will run tests with the @pytest.mark.run_these_please marker (see the sketch after this list)
further examples:
-m "mark1 and mark2"
-m "mark1 and not mark2"
-m "mark1 or mark2"
-x, --exitfirst
stop the entire test session immediately when a test fails
useful when debugging a problem
-s and --capture=method
-s turns off output capture, so anything written to stdout actually appears in the terminal while the tests are running
shortcut for --capture=no
example: without and with -s:
def test_fail():
    a = 1
    print("--- a is " + str(a))
    assert a == 2
$ pytest
====================== test session starts =======================
...
collected 1 item

tests/test_s_option.py F                                    [100%]

============================ FAILURES ============================
___________________________ test_fail ____________________________

    def test_fail():
        a = 1
        print("--- a is " + str(a))
>       assert a == 2
E       assert 1 == 2

tests/test_s_option.py:4: AssertionError
---------------------- Captured stdout call ----------------------
--- a is 1
==================== short test summary info =====================
FAILED tests/test_s_option.py::test_fail - assert 1 == 2
======================= 1 failed in 0.02s ========================
$ pytest -s
====================== test session starts =======================
...
collected 1 item

tests/test_s_option.py --- a is 1
F

============================ FAILURES ============================
___________________________ test_fail ____________________________

    def test_fail():
        a = 1
        print("--- a is " + str(a))
>       assert a == 2
E       assert 1 == 2

tests/test_s_option.py:4: AssertionError
==================== short test summary info =====================
FAILED tests/test_s_option.py::test_fail - assert 1 == 2
======================= 1 failed in 0.02s ========================
-lf, --last-failed
when one or more tests fails, run just the failing tests
helpful for debugging
-ff, --failed-first
run all tests but run the last failures first
-v, --verbose
report more information than without it
-q, --quiet
opposite of -v/--verbose
decrease the information reported
-l, --showlocals
local variables and their values are displayed with tracebacks for failing tests
--tb=style
modify the way tracebacks for failures are output
useful styles:
short: prints just the assert line and the E evaluated line with no context
line: keep the failure to one line
no: remove the traceback entirely
--durations=N
report the N slowest tests/setups/teardowns after the test run
--durations=0 reports everything in order of slowest to fastest
All of the tests are kept in tests and separate from the package source files in src
not a requirement of pytest, but a best practice
Functional (func) and unit (unit) tests are separated into their own directories
allows you to easily run a subset of tests
functional tests should only break if we are intentionally changing functionality of the system
unit tests could break during a refactoring or an implementation change
Two types of __init__.py files
under the src/ directory
src/tasks/__init__.py tells Python that the directory is a package
acts as the main interface to the package when someone uses import tasks
contains code to import specific functions from api.py
cli.py and our test files can access package functionality like tasks.add() instead of having to do tasks.api.add()
under tests/
tests/func/__init__.py and tests/unit/__init__.py files are empty
tell pytest to go up one directory to look for the root of the test directory and to look for the pytest.ini file
pytest.ini
optional
contains project-wide pytest configuration; there should be at most one of these in your project
directives that change the behaviour of pytest
e.g., a list of options that will always be used
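For example, a minimal pytest.ini might collect the options you always want (the exact options here are illustrative):

[pytest]
addopts = -rsxX -l --tb=short --strict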
conftest.py
optional
considered by pytest as a "local plugin" and can contain hook functions and fixtures
hook functions
a way to insert code into part of the pytest execution process to alter how pytest works
fixtures
setup and teardown functions that run before and after test functions
can be used to represent resources and data used by the tests
hook functions and fixtures that are used by tests in multiple subdirectories should be contained in tests/conftest.py
can have multiple conftest.py files
e.g., you can have one at tests and one for each subdirectory under tests
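A minimal sketch of what a tests/conftest.py can hold (the names here are illustrative, not from the book):

import pytest

# a fixture shared by every test file below this directory
@pytest.fixture()
def sample_task_data():
    """Provide a small piece of test data to any test that asks for it."""
    return {"summary": "do something", "owner": "brian"}

# a hook function: register a marker at startup so --strict won't reject it
def pytest_configure(config):
    config.addinivalue_line("markers", "smoke: quick subset of tests")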
Installing a Package Locally
The best way to allow the tests to be able to import tasks or from tasks import something is to install tasks locally using pip
possible because there's a setup.py file present to direct pip
Install tasks either by running pip install . or pip install -e . from the tasks_proj directory
or you can run pip install -e tasks_proj from one directory up
-e, --editable <path/url>: Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url
lets you modify the source code while tasks is installed
Run tests:
$ cd /path/to/code/ch2/tasks_proj/tests/unit
$ pytest test_task.py
Using assert Statements
The normal Python assert statement is your primary tool to communicate test failure
The following is a list of a few of the assert forms and assert helper functions:
pytest            | unittest
----------------- | ---------------------
assert something  | assertTrue(something)
assert a == b     | assertEqual(a, b)
assert a <= b     | assertLessEqual(a, b)
You can use assert <expression> with any expression
if the expression would evaluate to False if converted to a bool, the test would fail
pytest includes a feature called assert rewriting that intercepts assert calls and replaces them with something that can tell you more about why your assertions failed
$ pytest test_task_fail.py
============================= test session starts ==============================
platform linux -- Python 3.6.9, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /.../ch2/tasks_proj/tests, inifile: pytest.ini
collected 2 items

test_task_fail.py FF                                                     [100%]

=================================== FAILURES ===================================
______________________________ test_task_equality ______________________________

    def test_task_equality():
        """Different tasks should not be equal."""
        t1 = Task('sit there', 'brian')
        t2 = Task('do something', 'okken')
>       assert t1 == t2
E       AssertionError: assert Task(summary=...alse, id=None) == Task(summary=...alse, id=None)
E         At index 0 diff: 'sit there' != 'do something'
E         Use -v to get the full diff

test_task_fail.py:9: AssertionError
______________________________ test_dict_equality ______________________________
...
To verify that code raises an exception (here, passing the wrong type to intentionally cause a TypeError), use with pytest.raises(<expected exception>):
def test_add_raises():
    """add() should raise an exception with wrong type param."""
    with pytest.raises(TypeError):
        tasks.add(task="not a Task object")
Check the parameters to the exception
you can check to make sure the exception message is correct by adding as excinfo
def test_start_tasks_db_raises():
    """Make sure unsupported db raises an exception."""
    with pytest.raises(ValueError) as excinfo:
        tasks.start_tasks_db("some/great/path", "mysql")
    exception_msg = excinfo.value.args[0]
    assert exception_msg == "db_type must be a 'tiny' or 'mongo'"
Skipping Tests
The skip and skipif markers let you skip tests you don't expect to pass
@pytest.mark.skipif(
    tasks.__version__ < "0.2.0",
    reason="not supported until version 0.2.0",
)
def test_unique_id_1():
    """s (skipped): skipped because we're currently at version 0.1.0"""
    ...
$ pytest -r s test_unique_id_3.py
============================= test session starts ==============================
...
collected 2 items

test_unique_id_3.py s.                                                   [100%]

=========================== short test summary info ============================
SKIPPED [1] func/test_unique_id_3.py:8: not supported until version 0.2.0
========================= 1 passed, 1 skipped in 0.31s =========================
reason
not required in skip, but it is required in skipif
show skip reason in test output with pytest -r s
-r chars show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed, (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-warnings), 'N' can be used to reset the list. (default: 'fE').
Marking Tests as Expecting to Fail
With the xfail marker, we are telling pytest to run a test function, but that we expect it to fail
@pytest.mark.xfail(
    tasks.__version__ < "0.2.0",
    reason="not supported until version 0.2.0",
)
def test_unique_id_1():
    """x (xfail): expected to fail"""
    ...

@pytest.mark.xfail()
def test_unique_id_is_a_duck():
    """x (xfail): expected to fail"""
    ...

@pytest.mark.xfail()
def test_unique_id_not_a_duck():
    """X (xpass): expected to fail but passed"""
    ...
$ pytest -r sxX test_unique_id_4.py
============================= test session starts ==============================
...
collected 4 items

test_unique_id_4.py xxX.                                                 [100%]

=========================== short test summary info ============================
XFAIL test_unique_id_4.py::test_unique_id_1
  not supported until version 0.2.0
XFAIL test_unique_id_4.py::test_unique_id_is_a_duck
XPASS test_unique_id_4.py::test_unique_id_not_a_duck
=================== 1 passed, 2 xfailed, 1 xpassed in 0.43s ====================
You can configure pytest to report the tests that pass but were marked with xfail to be reported as FAIL
done in a pytest.ini file:
[pytest]
xfail_strict = true
Running a Subset of Tests
A Single Directory
To run all the tests from one directory, use the directory as a parameter to pytest
class TestUpdate:
    """Test expected exceptions with tasks.update()."""

    def test_bad_id(self):
        """A non-int id should raise an exception."""
        ...

    def test_bad_task(self):
        """A non-Task task should raise an exception."""
        ...
To run just this class, add ::, then the class name to the file parameter
e.g., pytest tests/func/test_api_exceptions.py::TestUpdate
Fixtures are functions that are run by pytest before (and sometimes after) the actual test functions
the mechanism pytest provides to allow the separation of "getting ready for" and "cleaning up after" code from your test functions
You can use fixtures to
get a data set for the tests to work on
get a system into a known state before running a test
get data ready for multiple tests
@pytest.fixture() decorator is used to tell pytest that a function is a fixture
When you include the fixture name in the parameter list of a test function, pytest knows to run it before running the test
@pytest.fixture()
def some_data():
    """Return answer to ultimate question."""
    return 42

def test_some_data(some_data):
    """Use fixture return value in a test."""
    assert some_data == 42
test_some_data() has the name of the fixture, some_data, as a parameter
pytest will look for a fixture with this name in the module of the test
or in conftest.py files if it doesn't find it in this file
Sharing Fixtures Through conftest.py
To share fixtures among multiple test files
use a conftest.py file somewhere centrally located for all of the tests
for the Tasks project: tasks_proj/tests/conftest.py
You can put fixtures in individual test files
to only be used by tests in that file
You can have other conftest.py files in subdirectories of the top tests directory
will be available to tests in that directory and subdirectories
Although conftest.py is a Python module, it should not be imported by test files
gets read by pytest, and is considered a local plugin
Using Fixtures for Setup and Teardown
pytest includes a fixture called tmpdir that we can use for testing and don't have to worry about cleaning up
@pytest.fixture()
def tasks_db(tmpdir):
    """Connect to db before tests, disconnect after."""
    # Setup : start db
    tasks.start_tasks_db(str(tmpdir), "tiny")
    yield  # this is where the testing happens
    # Teardown : stop db
    tasks.stop_tasks_db()
If there is a yield in the function
fixture execution stops there
passes control to the tests
picks up on the next line after the tests are done
Code after the yield is guaranteed to run regardless of what happens during the tests
We're not returning any data with the yield in this fixture, but you can
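For example, a fixture can hand a resource to the test by yielding it; a generic sketch (make_connection() is hypothetical):

@pytest.fixture()
def db_connection():
    conn = make_connection()  # hypothetical setup code
    yield conn                # the test receives conn as the fixture value
    conn.close()              # teardown still runs after the test finishes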
$ pytest --setup-show func/test_add.py -k valid_id
============================= test session starts ==============================
...
collected 3 items / 2 deselected / 1 selected

func/test_add.py
SETUP    S tmp_path_factory
        SETUP    F tmp_path (fixtures used: tmp_path_factory)
        SETUP    F tmpdir (fixtures used: tmp_path)
        SETUP    F tasks_db (fixtures used: tmpdir)
        func/test_add.py::test_add_returns_valid_id (fixtures used: request, tasks_db, tmp_path, tmp_path_factory, tmpdir).
        TEARDOWN F tasks_db
        TEARDOWN F tmpdir
        TEARDOWN F tmp_path
TEARDOWN S tmp_path_factory
The F and S in front of the fixture names indicate scope
F for function scope
S for session scope
Using Fixtures for Test Data
Fixtures are a great place to store data to use for testing
you can return anything
When an exception occurs in a fixture:
$ pytest test_fixtures.py::test_other_data
============================= test session starts ==============================
...
collected 1 item

test_fixtures.py E                                                       [100%]

==================================== ERRORS ====================================
______________________ ERROR at setup of test_other_data _______________________

    @pytest.fixture()
    def some_other_data():
        """Raise an exception from fixture."""
>       return 1 / 0
E       ZeroDivisionError: division by zero

test_fixtures.py:20: ZeroDivisionError
=========================== short test summary info ============================
ERROR test_fixtures.py::test_other_data - ZeroDivisionError: division by zero
=============================== 1 error in 0.02s ===============================
@pytest.fixture()
def tasks_db(tmpdir):
    """Connect to db before tests, disconnect after."""
    # Setup : start db
    tasks.start_tasks_db(str(tmpdir), "tiny")
    yield  # this is where the testing happens
    # Teardown : stop db
    tasks.stop_tasks_db()

@pytest.fixture()
def tasks_just_a_few():
    """All summaries and owners are unique."""
    return (
        Task("Write some code", "Brian", True),
        Task("Code review Brian's code", "Katie", False),
        Task("Fix what Brian did", "Michelle", False),
    )

@pytest.fixture()
def db_with_3_tasks(tasks_db, tasks_just_a_few):
    """Connected db with 3 tasks, all unique."""
    for t in tasks_just_a_few:
        tasks.add(t)

def test_add_increases_count(db_with_3_tasks):
    """Test tasks.add() effect on tasks.count()."""
    # GIVEN a db with 3 tasks
    # WHEN another task is added
    tasks.add(Task("throw a party"))
    # THEN the count increases by 1
    assert tasks.count() == 4
$ pytest --setup-show func/test_add.py::test_add_increases_count
============================= test session starts ==============================
...
collected 1 item

func/test_add.py
SETUP    S tmp_path_factory
        SETUP    F tmp_path (fixtures used: tmp_path_factory)
        SETUP    F tmpdir (fixtures used: tmp_path)
        SETUP    F tasks_db (fixtures used: tmpdir)
        SETUP    F tasks_just_a_few
        SETUP    F db_with_3_tasks (fixtures used: tasks_db, tasks_just_a_few)
        func/test_add.py::test_add_increases_count (fixtures used: db_with_3_tasks, request, tasks_db, tasks_just_a_few, tmp_path, tmp_path_factory, tmpdir).
        TEARDOWN F db_with_3_tasks
        TEARDOWN F tasks_just_a_few
        TEARDOWN F tasks_db
        TEARDOWN F tmpdir
        TEARDOWN F tmp_path
TEARDOWN S tmp_path_factory
Specifying Fixture Scope
Fixtures include an optional parameter called scope, which controls how often a fixture gets set up and torn down
scope="function"
run once per test function
default scope used when no scope parameter is specified
scope="class"
run once per test class, regardless of how many test methods are in the class
scope="module"
run once per module, regardless of how many test functions or methods or other fixtures in the module use it
scope="session"
run once per session
all test methods and functions using a fixture of session scope share one setup and teardown call
The scope is set at the definition of a fixture, and not at the place where it's called
test functions that use a fixture don't control how often a fixture is set up and torn down
Fixtures can only depend on other fixtures of their same scope or wider
@pytest.fixture(scope="session")deftasks_just_a_few():
"""All summaries and owners are unique."""return (
Task("Write some code", "Brian", True),
Task("Code review Brian's code", "Katie", False),
Task("Fix what Brian did", "Michelle", False),
)
@pytest.fixture(scope="session")deftasks_db_session(tmpdir_factory):
"""Connect to db before tests, disconnect after."""temp_dir=tmpdir_factory.mktemp("temp")
tasks.start_tasks_db(str(temp_dir), "tiny")
yieldtasks.stop_tasks_db()
@pytest.fixture()deftasks_db(tasks_db_session):
"""An empty tasks db."""tasks.delete_all()
@pytest.fixture()defdb_with_3_tasks(tasks_db, tasks_just_a_few):
"""Connected db with 3 tasks, all unique."""fortintasks_just_a_few:
tasks.add(t)
deftest_add_increases_count(db_with_3_tasks):
"""Test tasks.add() affect on tasks.count()."""# GIVEN a db with 3 tasks# WHEN another task is addedtasks.add(Task("throw a party"))
# THEN the count increases by 1asserttasks.count() ==4
$ pytest --setup-show func/test_add.py::test_add_increases_count
============================= test session starts ==============================
...
collected 1 item

func/test_add.py
SETUP    S tasks_just_a_few
SETUP    S tmp_path_factory
SETUP    S tmpdir_factory (fixtures used: tmp_path_factory)
SETUP    S tasks_db_session (fixtures used: tmpdir_factory)
        SETUP    F tasks_db (fixtures used: tasks_db_session)
        SETUP    F db_with_3_tasks (fixtures used: tasks_db, tasks_just_a_few)
        func/test_add.py::test_add_increases_count (fixtures used: db_with_3_tasks, request, tasks_db, tasks_db_session, tasks_just_a_few, tmp_path_factory, tmpdir_factory).
        TEARDOWN F db_with_3_tasks
        TEARDOWN F tasks_db
TEARDOWN S tasks_db_session
TEARDOWN S tmpdir_factory
TEARDOWN S tmp_path_factory
TEARDOWN S tasks_just_a_few
Specifying Fixtures with usefixtures
You can also mark a test or a class with @pytest.mark.usefixtures('fixture1', 'fixture2')
takes the names of the fixtures to use, as a comma-separated list of strings
A test using a fixture due to usefixtures cannot use the fixture's return value
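A sketch of marking a whole class, assuming the tasks_db fixture described in these notes (it runs for every test method, but no return value is visible):

import pytest
import tasks  # the package under test in these notes

@pytest.mark.usefixtures("tasks_db")
class TestCount:
    def test_starts_empty(self):
        # tasks_db ran and left an empty db, but we can't see its return value
        assert tasks.count() == 0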
pytest allows you to rename fixtures with a name parameter to @pytest.fixture()
@pytest.fixture(name="lue")defultimate_answer_to_life_the_universe_and_everything():
"""Return ultimate answer."""return42deftest_everything(lue):
"""Use the shorter name."""assertlue==42
$ pytest --setup-show test_rename_fixture.py
============================= test session starts ==============================
...
collected 1 item

test_rename_fixture.py
        SETUP    F lue
        test_rename_fixture.py::test_everything (fixtures used: lue).
        TEARDOWN F lue

============================== 1 passed in 0.00s ===============================
Use the --fixtures pytest option to find out where lue is defined
lists all the fixtures available for the test, including ones that have been renamed
--fixtures, --funcargs show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
$ pytest --fixtures test_rename_fixture.py
============================= test session starts ==============================
...
collected 1 item

cache
    Return a cache object that can persist state between testing sessions.
    ...
...
------------------ fixtures defined from test_rename_fixture -------------------
lue
    Return ultimate answer.

============================ no tests ran in 0.00s =============================
request is a built-in fixture that represents the calling state of the fixture
has a field param that is filled in with one element from the list assigned to params in @pytest.fixture(params=tasks_to_try)
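The a_task fixture these notes refer to, reconstructed to match the setup output below (equivalent() is the book's helper that compares all Task fields except id; assumes pytest, tasks, and Task are imported):

tasks_to_try = (
    Task("sleep", done=True),
    Task("wake", "brian"),
    Task("breathe", "BRIAN", True),
    Task("exercise", "BrIaN", False),
)

@pytest.fixture(params=tasks_to_try)
def a_task(request):
    """Using no ids."""
    return request.param

def test_add_a(tasks_db, a_task):
    """Using a_task fixture (no ids)."""
    task_id = tasks.add(a_task)
    t_from_db = tasks.get(task_id)
    assert equivalent(t_from_db, a_task)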
$ pytest --setup-show test_add_variety2.py::test_add_a
============================= test session starts ==============================
...
collected 4 items

test_add_variety2.py
SETUP    S tmp_path_factory
SETUP    S tmpdir_factory (fixtures used: tmp_path_factory)
SETUP    S tasks_db_session (fixtures used: tmpdir_factory)
        SETUP    F tasks_db (fixtures used: tasks_db_session)
        SETUP    F a_task[Task(summary='sleep', owner=None, done=True, id=None)]
        func/test_add_variety2.py::test_add_a[a_task0] (fixtures used: a_task, request, tasks_db, tasks_db_session, tmp_path_factory, tmpdir_factory).
        TEARDOWN F a_task[Task(summary='sleep', owner=None, done=True, id=None)]
        TEARDOWN F tasks_db
        SETUP    F tasks_db (fixtures used: tasks_db_session)
        SETUP    F a_task[Task(summary='wake', owner='brian', done=False, id=None)]
        func/test_add_variety2.py::test_add_a[a_task1] (fixtures used: a_task, request, tasks_db, tasks_db_session, tmp_path_factory, tmpdir_factory).
        TEARDOWN F a_task[Task(summary='wake', owner='brian', done=False, id=None)]
        TEARDOWN F tasks_db
        SETUP    F tasks_db (fixtures used: tasks_db_session)
        SETUP    F a_task[Task(summary='breathe', owner='BRIAN', done=True, id=None)]
        func/test_add_variety2.py::test_add_a[a_task2] (fixtures used: a_task, request, tasks_db, tasks_db_session, tmp_path_factory, tmpdir_factory).
        TEARDOWN F a_task[Task(summary='breathe', owner='BRIAN', done=True, id=None)]
        TEARDOWN F tasks_db
        SETUP    F tasks_db (fixtures used: tasks_db_session)
        SETUP    F a_task[Task(summary='exercise', owner='BrIaN', done=False, id=None)]
        func/test_add_variety2.py::test_add_a[a_task3] (fixtures used: a_task, request, tasks_db, tasks_db_session, tmp_path_factory, tmpdir_factory).
        TEARDOWN F a_task[Task(summary='exercise', owner='BrIaN', done=False, id=None)]
        TEARDOWN F tasks_db
TEARDOWN S tasks_db_session
TEARDOWN S tmpdir_factory
TEARDOWN S tmp_path_factory

============================== 4 passed in 0.67s ===============================
ids: list of string ids each corresponding to the params so that they are part of the test id
task_ids = [
    "Task({},{},{})".format(t.summary, t.owner, t.done)
    for t in tasks_to_try
]

@pytest.fixture(params=tasks_to_try, ids=task_ids)
def b_task(request):
    """Using a list of ids."""
    return request.param
We can also set the ids parameter to a function that provides the identifiers
def id_func(fixture_value):
    """A function for generating ids."""
    t = fixture_value
    return "Task({},{},{})".format(t.summary, t.owner, t.done)

@pytest.fixture(params=tasks_to_try, ids=id_func)
def c_task(request):
    """Using a function (id_func) to generate ids."""
    return request.param

def test_add_c(tasks_db, c_task):
    """Use fixture with generated ids."""
    task_id = tasks.add(c_task)
    t_from_db = tasks.get(task_id)
    assert equivalent(t_from_db, c_task)
Since the parametrization is a list of Task objects, id_func() is called with one Task at a time, and the namedtuple accessors (t.summary, t.owner, t.done) are used to build the identifier for each one
4. Builtin Fixtures
Using tmpdir and tmpdir_factory
The tmpdir and tmpdir_factory builtin fixtures are used to create a temporary file system directory before your test runs, and remove the directory when your test is finished
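A small sketch of tmpdir in use (it behaves like a py.path.local object; join(), write(), and read() are part of that API):

def test_tmpdir(tmpdir):
    a_file = tmpdir.join("something.txt")  # a path inside the temp directory
    a_file.write("contents may settle during shipping")
    assert a_file.read() == "contents may settle during shipping"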
Using pytestconfig
With the pytestconfig builtin fixture, you can examine how pytest was run: the command-line arguments and options, configuration files, plugins, and the directory from which pytest was launched
Shortcut to request.config
Sometimes referred to in the pytest documentation as "the pytest config object"
To add a custom command-line option and read its value from within a test (see the sketch after this list):
read the value of command-line options directly from pytestconfig
add the option and have pytest parse it using a hook function
should be done via plugins or in the conftest.py file at the top of your project directory structure
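A sketch of the conftest.py hook and the test that would produce the runs below, following the book's example:

def pytest_addoption(parser):
    """Turn on a couple of custom options (a hook function, so it lives in conftest.py)."""
    parser.addoption("--myopt", action="store_true", help="some boolean option")
    parser.addoption("--foo", action="store", default="bar", help="foo: bar or baz")

def test_option(pytestconfig):
    print('"foo" set to:', pytestconfig.getoption("foo"))
    print('"myopt" set to:', pytestconfig.getoption("myopt"))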
$ pytest -s -q --myopt --foo baz test_config.py::test_option
"foo" set to: baz
"myopt" set to: True
.
1 passed in 0.07s
$ pytest -s -q --myopt --foo baz test_config.py::test_pytestconfig
args            : ['test_config.py::test_pytestconfig']
inifile         : None
invocation_dir  : /.../ch4/pytestconfig
rootdir         : /.../ch4/pytestconfig
-k EXPRESSION   :
-v, --verbose   : -1
-q, --quiet     : 1
-l, --showlocals: False
--tb=style      : auto
.
1 passed in 0.00s
Using cache
Sometimes passing information from one test session to the next can be quite useful
with the cache builtin fixture
cache is used for the --last-failed and --failed-first builtin functionality
$ pytest --cache-clear cache/test_pass_fail.py
============================= test session starts ==============================
...
collected 2 items

cache/test_pass_fail.py .F                                               [100%]

=================================== FAILURES ===================================
_______________________________ test_this_fails ________________________________

    def test_this_fails():
>       assert 1 == 2
E       assert 1 == 2

cache/test_pass_fail.py:6: AssertionError
=========================== short test summary info ============================
FAILED cache/test_pass_fail.py::test_this_fails - assert 1 == 2
========================= 1 failed, 1 passed in 0.02s ==========================
$ pytest --cache-show
============================= test session starts ==============================
...
cachedir: /.../ch4/.pytest_cache
----------------------------- cache values for '*' -----------------------------
cache/lastfailed contains:
  {'cache/test_pass_fail.py::test_this_fails': True}
cache/nodeids contains:
  ['cache/test_pass_fail.py::test_this_passes',
   'cache/test_pass_fail.py::test_this_fails']
cache/stepwise contains:
  []

============================ no tests ran in 0.00s =============================
The interface for the cache fixture:
cache.get(key, default)
cache.set(key, value)
By convention, key names start with the name of your application or plugin, followed by a /, and continuing to separate sections of the key name with /s
the value you store can be anything that is convertible to json
Example: a fixture that records how long tests take, saves the times, and on the next run, reports an error on tests that take longer than, say, twice as long as last time
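A condensed sketch of that fixture, close to the book's version (the 2x threshold is arbitrary):

import datetime
import pytest

@pytest.fixture(autouse=True)
def check_duration(request, cache):
    # node ids can contain colons; keys become cache filenames, so make them safe
    key = "duration/" + request.node.nodeid.replace(":", "_")
    start_time = datetime.datetime.now()
    yield
    this_duration = (datetime.datetime.now() - start_time).total_seconds()
    last_duration = cache.get(key, None)
    cache.set(key, this_duration)
    if last_duration is not None:
        assert this_duration <= last_duration * 2, "test duration over 2x last duration"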
Using capsys
The capsys builtin fixture lets you read output captured during a test via capsys.readouterr()
Use with capsys.disabled() to temporarily let output get past the capture mechanism
Using monkeypatch
A "monkey patch" is a dynamic modification of a class or module during runtime
a convenient way to take over part of the runtime environment of the code under test and replace either input dependencies or output dependencies with objects or functions that are more convenient for testing
The monkeypatch fixture provides the following functions:
setattr(target, name, value=<notset>, raising=True): Set an attribute
delattr(target, name=<notset>, raising=True): Delete an attribute
setitem(dic, name, value): Set a dictionary entry
delitem(dic, name, raising=True): Delete a dictionary entry
setenv(name, value, prepend=None): Set an environmental variable
delenv(name, raising=True): Delete an environmental variable
syspath_prepend(path): Prepend path to sys.path, which is Python's list of import locations
puts your new path at the head of the line for module import directories
one use would be to replace a system-wide module or package with a stub version, and the code under test will find the stub version first
chdir(path)
changes the current working directory during the test
useful for testing command-line scripts and other utilities that depend on the current working directory: set up a temporary directory with whatever contents make sense for your script, then chdir into it
You can also use the monkeypatch fixture functions in conjunction with unittest.mock to temporarily replace attributes with mock objects
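A sketch of monkeypatch pointing code at a temporary directory (read_config() is hypothetical code under test, not from the book):

import os

def read_config():
    """Hypothetical code under test: reads a file under the user's home directory."""
    with open(os.path.expanduser("~/.myapp/config")) as f:
        return f.read()

def test_read_config(tmp_path, monkeypatch):
    (tmp_path / ".myapp").mkdir()
    (tmp_path / ".myapp" / "config").write_text("debug=true")
    # redirect expanduser so "~" resolves to the temp directory for this test only
    monkeypatch.setattr(os.path, "expanduser",
                        lambda p: p.replace("~", str(tmp_path)))
    assert read_config() == "debug=true"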
Using doctest_namespace
The doctest module is part of the standard Python library and allows you to put little code examples inside docstrings for a function and test them to make sure they work
You can have pytest look for and run doctest tests within your Python code by using the --doctest-modules flag
With the doctest_namespace fixture, you can build autouse fixtures to add symbols to the namespace pytest uses while running doctest tests
commonly used to add module imports into the namespace
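For example (assuming the project's doctests use numpy as np; this pattern comes from the pytest docs):

import numpy
import pytest

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    """Make np available to every doctest without an explicit import."""
    doctest_namespace["np"] = numpy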
Using recwarn
The recwarn builtin fixture acts like a list of warnings; each warning in the list has a category, message, filename, and lineno defined
The warnings are collected from the beginning of the test
if that is inconvenient because the portion of the test where you care about warnings is near the end, use recwarn.clear() to empty the list just before that chunk of the test
pytest can also check for warnings with pytest.warns()
recwarn and the pytest.warns() context manager provide similar functionality, so the decision of which to use is purely a matter of taste
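A sketch of the pytest.warns() form (lame_function() is a hypothetical function that emits a DeprecationWarning):

import warnings
import pytest

def lame_function():
    warnings.warn("Please stop using this", DeprecationWarning)

def test_lame_function():
    with pytest.warns(DeprecationWarning) as warning_list:
        lame_function()
    assert len(warning_list) == 1
    assert "Please stop using this" in str(warning_list[0].message)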
5. Plugins
The pytest code base is structured with customization and extension in mind, and there are hooks available to allow modifications and improvements through plugins
Frequently, changes you only intended to use on one project will become useful enough to share and grow into a plugin
therefore, we'll start by adding functionality to a conftest.py file, then, after we get things working in conftest.py, we'll move the code to a package
$ pytest --nice --tb=no func/test_api_exceptions.py::TestAdd
============================= test session starts ==============================
...
Thanks for running the tests.
...
collected 2 items

func/test_api_exceptions.py .O                                           [100%]

=============================== warnings summary ===============================
...
=========================== short test summary info ============================
OPPORTUNITY for improvement func/test_api_exceptions.py::TestAdd::test_done_not_bool
=================== 1 failed, 1 passed, 3 warnings in 0.26s ====================

$ pytest --nice -v --tb=no func/test_api_exceptions.py::TestAdd
============================= test session starts ==============================
...
Thanks for running the tests.
...
collected 2 items

func/test_api_exceptions.py::TestAdd::test_missing_summary PASSED       [ 50%]
func/test_api_exceptions.py::TestAdd::test_done_not_bool OPPORTUNITY for improvement [100%]

=============================== warnings summary ===============================
...
=========================== short test summary info ============================
OPPORTUNITY for improvement func/test_api_exceptions.py::TestAdd::test_done_not_bool
=================== 1 failed, 1 passed, 3 warnings in 0.25s ====================
6. Configuration
Registering markers: register your markers in pytest.ini; then, if you use the --strict command-line option, any misspelled or unregistered markers show up as an error
[pytest]
markers =
    smoke: Run the smoke test functions for tasks project
    get: Run the test functions that test tasks.get()
$ pytest --markers
@pytest.mark.smoke: Run the smoke test functions for tasks project
@pytest.mark.get: Run the test functions that test tasks.get()
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
...
Requiring a Minimum pytest Version
The minversion setting enables you to specify a minimum pytest version you expect for your tests
approx(), for testing floating-point numbers, was introduced in pytest 3.0
[pytest]
minversion = 3.0
Stopping pytest from Looking in the Wrong Places
Test discovery traverses many directories recursively
there are some directories you don't want pytest looking in
The default setting for norecursedirs is .* build dist CVS _darcs {arch} and *.egg
you can add venv and src
norecursedirs = .* venv src *.egg dist build
Specifying Test Directory Locations
Opposite to norecursedirs, testpaths tells pytest where to look
a list of directories relative to the root directory
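For example, to limit discovery to the tests directory:

[pytest]
testpaths = tests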
Setting xfail_strict = true causes tests marked with @pytest.mark.xfail that don't fail to be reported as an error
Avoiding Filename Collisions
If you have empty __init__.py files in all of your test subdirectories, you can have the same test filename show up in multiple directories
7. Using pytest with Other Tools
pdb: Debugging Test Failures
pytest options available to help speed up debugging test failures:
--tb=[auto/long/short/line/native/no]: Controls the traceback style
-v / --verbose: Displays all the test names, passing or failing
-l / --showlocals: Displays local variables alongside the stacktrace
-lf / --last-failed: Runs just the tests that failed last
-x / --exitfirst: Stops the test session at the first failure
--pdb: Starts an interactive debugging session at the point of failure
Commands that you can use when you are at the (Pdb) prompt:
p/print expr: Prints the value of expr
pp expr: Pretty prints the value of expr
l/list: Lists the point of failure and five lines of code above and below
l/list begin,end: Lists specific line numbers
a/args: Prints the arguments of the current function with their values (helpful when in a test helper function)
u/up: Moves up one level in the stack trace
d/down: Moves down one level in the stack trace
q/quit: Quits the debugging session
other navigation commands like step and next aren't that useful since we are sitting right at an assert statement
you can also just type variable names and get the values
$ pytest -x --pdb ch2/tasks_proj/tests
============================= test session starts ==============================
...
collected 56 items

ch2/tasks_proj/tests/func/test_add.py ..                                 [  3%]
ch2/tasks_proj/tests/func/test_add_variety.py .......................... [ 50%]
......                                                                   [ 60%]
ch2/tasks_proj/tests/func/test_api_exceptions.py .......                 [ 73%]
ch2/tasks_proj/tests/func/test_unique_id_1.py F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

    def test_unique_id():
        """Calling unique_id() twice should return different numbers."""
        id_1 = tasks.unique_id()
        id_2 = tasks.unique_id()
>       assert id_1 != id_2
E       assert 1 != 1

ch2/tasks_proj/tests/func/test_unique_id_1.py:11: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> PDB post_mortem (IO-capturing turned off) >>>>>>>>>>>>>>>>>>>>
> /path/to/ch2/tasks_proj/tests/func/test_unique_id_1.py(11)test_unique_id()
-> assert id_1 != id_2
(Pdb) p tasks.unique_id()
1
(Pdb) id_1
1
(Pdb) id_2
1
(Pdb) l
  6
  7     def test_unique_id():
  8         """Calling unique_id() twice should return different numbers."""
  9         id_1 = tasks.unique_id()
 10         id_2 = tasks.unique_id()
 11  ->     assert id_1 != id_2
 12
 13
 14     @pytest.fixture(autouse=True)
 15     def initialized_tasks_db(tmpdir):
 16         """Connect to db before testing, disconnect after."""
(Pdb) u
> /path/to/venv/lib/python3.6/site-packages/_pytest/python.py(184)pytest_pyfunc_call()
-> result = testfunction(**testargs)
(Pdb) a
pyfuncitem = <Function test_unique_id>
(Pdb) d
> /path/to/ch2/tasks_proj/tests/func/test_unique_id_1.py(11)test_unique_id()
-> assert id_1 != id_2
(Pdb) q

=============================== warnings summary ===============================
...
=========================== short test summary info ============================
FAILED ch2/tasks_proj/tests/func/test_unique_id_1.py::test_unique_id - assert...
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!!!!!!!!!!!! _pytest.outcomes.Exit: Quitting debugger !!!!!!!!!!!!!!!!!!!!!!!!
============= 1 failed, 41 passed, 4 warnings in 60.26s (0:01:00) ==============
Coverage.py: Determining How Much Code Is Tested
Coverage.py is the preferred Python tool for measuring code coverage
Installing the pytest-cov plugin pulls in coverage.py, since coverage is one of its dependencies
pip install pytest-cov
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
coverage reporting with distributed testing support:
  --cov=[SOURCE]        Path or package name to measure during execution (multi-allowed).
                        Use --cov= to not do any source filtering and record everything.
  --cov-report=TYPE     Type of report to generate: term, term-missing, annotate, html,
                        xml (multi-allowed). term, term-missing may be followed by
                        ":skip-covered". annotate, html and xml may be followed by ":DEST"
                        where DEST specifies the output location. Use --cov-report= to not
                        generate any output.
  --cov-config=PATH     Config file for coverage. Default: .coveragerc
  --no-cov-on-fail      Do not report coverage if test run fails. Default: False
  --no-cov              Disable coverage report completely (useful for debuggers).
                        Default: False
  --cov-fail-under=MIN  Fail if the total coverage is less than MIN.
  --cov-append          Do not delete coverage but append to current. Default: False
  --cov-branch          Enable branch coverage.
  --cov-context=CONTEXT Dynamic contexts to use. "test" for now.
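A typical invocation, assuming the src layout from earlier (term-missing adds the line numbers that aren't covered):

$ pytest --cov=src --cov-report=term-missing tests/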
mock: Swapping Out Part of the System
The MagicMock class is a flexible subclass of unittest.mock.Mock with reasonable default behavior and the ability to specify a return value
The Mock and MagicMock classes (and others) are used to mimic the interface of other code with introspection methods built in to allow you to ask them how they were called
@contextmanager
def stub_tasks_db():
    yield

def test_list_no_args(mocker):
    # Replace the _tasks_db() context manager with our stub that does nothing.
    mocker.patch.object(tasks.cli, "_tasks_db", new=stub_tasks_db)
    # Replace any calls to tasks.list_tasks() from within tasks.cli with a
    # default MagicMock object with a return value of an empty list.
    mocker.patch.object(tasks.cli.tasks, "list_tasks", return_value=[])
    # Use the Click CliRunner to do the same thing as calling `tasks list` on
    # the command line.
    runner = CliRunner()
    runner.invoke(tasks.cli.tasks_cli, ["list"])
    # Use the mock object to make sure the API call was called correctly.
    # assert_called_once_with() is part of unittest.mock.Mock objects.
    tasks.cli.tasks.list_tasks.assert_called_once_with(None)
pytest-repeat: Run Tests More Than Once
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
custom options:
  --count=COUNT         Number of times to repeat each test
  --repeat-scope={function,class,module,session}
                        Scope for repeating tests
pytest-xdist: Run Tests in Parallel
If your tests do not need access to a shared resource, you can speed up test sessions by running multiple tests in parallel with the pytest-xdist plugin
You can specify multiple processors and run many tests in parallel
You can even push off tests onto other machines and use more than one computer
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
distributed and subprocess testing:
  -n numprocesses, --numprocesses=numprocesses
                        shortcut for '--dist=load --tx=NUM*popen', you can use 'auto' here
                        for auto detection CPUs number on host system and it will be 0
                        when used with --pdb
  --maxprocesses=maxprocesses
                        limit the maximum number of workers to process the tests when
                        using --numprocesses=auto
  --max-worker-restart=MAXWORKERRESTART, --max-slave-restart=MAXWORKERRESTART
                        maximum number of workers that can be restarted when crashed (set
                        to zero to disable this feature) '--max-slave-restart' option is
                        deprecated and will be removed in a future release
  --dist=distmode       set mode for distributing tests to exec environments.
                        each: send each test to all available environments.
                        load: load balance by sending any pending test to any available
                        environment.
                        loadscope: load balance by sending pending groups of tests in the
                        same scope to any available environment.
                        loadfile: load balance by sending test grouped by file to any
                        available environment.
                        (default) no: run tests inprocess, don't distribute.
  --tx=xspec            add a test execution environment. some examples:
                        --tx popen//python=python2.5
                        --tx socket=192.168.1.102:8888
                        --tx [email protected]//chdir=testcache
  -d                    load-balance tests. shortcut for '--dist=load'
  --rsyncdir=DIR        add directory for rsyncing to remote tx nodes.
  --rsyncignore=GLOB    add expression for ignores when rsyncing to remote tx nodes.
  --boxed               backward compatibility alias for pytest-forked --forked
  --testrunuid=TESTRUNUID
                        provide an identifier shared amongst all workers as the value of
                        the 'testrun_uid' fixture; if not provided, 'testrun_uid' is
                        filled with a new unique string on every test run.
  -f, --looponfail      run tests in subprocess, wait for modified files and re-run
                        failing test set until all pass.
pytest-timeout: Put Time Limits on Your Tests
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
Interrupt test run and dump stacks of all threads after a test times out:
  --timeout=TIMEOUT     Timeout in seconds before dumping the stacks. Default is 0 which
                        means no timeout.
  --timeout_method={signal,thread}
                        Deprecated, use --timeout-method
  --timeout-method={signal,thread}
                        Timeout mechanism to use. 'signal' uses SIGALRM if available,
                        'thread' uses a timer thread. The default is to use 'signal' and
                        fall back to 'thread'.
$ pytest --timeout=0.5 -x test_parallel.py
============================= test session starts ==============================
...
plugins: timeout-1.3.4, xdist-1.32.0, forked-1.1.3, repeat-0.8.0, cov-2.8.1, mock-3.1.0
timeout: 0.5s
timeout method: signal
timeout func_only: False
collected 10 items

test_parallel.py F

=================================== FAILURES ===================================
______________________________ test_something[0] _______________________________

x = 0

    @pytest.mark.parametrize("x", list(range(10)))
    def test_something(x):
>       time.sleep(1)
E       Failed: Timeout >0.5s

test_parallel.py:7: Failed
=========================== short test summary info ============================
FAILED test_parallel.py::test_something[0] - Failed: Timeout >0.5s
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============================== 1 failed in 0.60s ===============================
Plugins That Alter or Enhance Output
These plugins don't change how tests are run, but they do change the output you see
pytest-instafail: See Details of Failures and Errors as They Happen
If your test suite takes quite a bit of time, you may want to see the tracebacks as they happen, rather than wait until the end
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
reporting:
  ...
  --instafail           show failures and errors instantly as they occur (disabled by
                        default).
$ pytest --instafail --timeout=0.5 --tb=line --maxfail=2 test_parallel.py
============================= test session starts ==============================
...
plugins: cov-2.8.1, mock-3.1.0, instafail-0.4.1.post0
timeout: 0.5s
timeout method: signal
timeout func_only: False
collected 10 items

test_parallel.py F
/path/to/appendices/xdist/test_parallel.py:7: Failed: Timeout >0.5s

test_parallel.py F
/path/to/appendices/xdist/test_parallel.py:7: Failed: Timeout >0.5s

=========================== short test summary info ============================
FAILED test_parallel.py::test_something[0] - Failed: Timeout >0.5s
FAILED test_parallel.py::test_something[1] - Failed: Timeout >0.5s
!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 2 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!
============================== 2 failed in 1.04s ===============================
pytest-sugar: Instafail + Colors + Progress Bar
Lets you see status not just as characters, but also in color
Also shows failure and error tracebacks during execution, and has a cool progress bar to the right of the shell
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
reporting:
  ...
  --old-summary         Show tests that failed instead of one-line tracebacks
  --force-sugar         Force pytest-sugar output even when not in real terminal
pytest-html: Generate HTML Reports for Test Sessions
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
reporting:
  ...
  --html=path           create html report file at given path.
  --self-contained-html create a self-contained html file containing all necessary
                        styles, scripts, and images - this means that the report may not
                        render or function where CSP restrictions are in place (see
                        https://developer.mozilla.org/docs/Web/Security/CSP)
  --css=path            append given css file content to report style file.
$ pytest --html=report.html
============================= test session starts ==============================
...
plugins: metadata-1.9.0, timeout-1.3.4, cov-2.8.1, mock-3.1.0, html-2.1.1
collected 6 items

test_outcomes.py .FxXsE                                                  [100%]

==================================== ERRORS ====================================
_________________________ ERROR at setup of test_error _________________________

    @pytest.fixture()
    def flaky_fixture():
>       assert 1 == 2
E       assert 1 == 2

test_outcomes.py:29: AssertionError
=================================== FAILURES ===================================
__________________________________ test_fail ___________________________________

    def test_fail():
>       assert 1 == 2
E       assert 1 == 2

test_outcomes.py:9: AssertionError
- generated html file: file:///path/to/appendices/outcomes/report.html -
=========================== short test summary info ============================
FAILED test_outcomes.py::test_fail - assert 1 == 2
ERROR test_outcomes.py::test_error - assert 1 == 2
==== 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.16s =====
Plugins for Static Analysis
pytest-pycodestyle, pytest-pep8: Comply with Python's Style Guide
Use the pytest-pycodestyle plugin to run pycodestyle on code in your project, including test code, with the --pycodestyle flag
pytest-flake8: Check for Style Plus Linting
With the pytest-flake8 plugin, you can run all of your source code and test code through flake8 and get a failure if something isn't right
checks for PEP 8, as well as for logic errors
use the --flake8 option to run flake8 during a pytest session
You can extend flake8 with plugins that offer even more checks, such as flake8-docstrings
Plugins for Web Development
pytest-selenium: Test with a Web Browser
The pytest-selenium plugin lets you drive a web browser from pytest through Selenium's Python bindings
With it, you can
launch a web browser and use it to open URLs
exercise web applications
fill out forms
programmatically control the browser to test a web site or web application
pytest-django: Test Django Applications
By default, the builtin testing support in Django is based on unittest
The pytest-django plugin allows you to use pytest instead of unittest
includes helper functions and fixtures to speed up test implementation
pytest-flask: Test Flask Applications
The pytest-flask plugin provides a handful of fixtures to assist in testing Flask applications
A4. Packaging and Distributing Python Projects
Creating an Installable Module
For a simple one-module project, the minimal configuration is small
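The book's minimal setup.py for a one-module project looks roughly like this:

from setuptools import setup

setup(
    name="some_module_name",
    py_modules=["some_module"],
)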
$ python
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from some_module import some_func
>>> some_func()
42
>>>
$ python
Python 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from some_package import some_func
>>> some_func()
42
>>>
You can add a tests directory at the same level as src to hold the tests