pytest-subtests's Introduction

pytest-subtests

unittest subTest() support and subtests fixture.

This pytest plugin was generated with Cookiecutter along with @hackebrot's cookiecutter-pytest-plugin template.

Features

  • Adds support for TestCase.subTest.
  • New subtests fixture, providing similar functionality for pure pytest tests.

Installation

You can install pytest-subtests via pip from PyPI:

$ pip install pytest-subtests

Usage

unittest subTest() example

import unittest


class T(unittest.TestCase):
    def test_foo(self):
        for i in range(5):
            with self.subTest("custom message", i=i):
                self.assertEqual(i % 2, 0)


if __name__ == "__main__":
    unittest.main()

Output

λ pytest .tmp\test-unit-subtest.py
======================== test session starts ========================
...
collected 1 item

.tmp\test-unit-subtest.py FF.                                  [100%]

============================= FAILURES ==============================
_________________ T.test_foo [custom message] (i=1) _________________

self = <test-unit-subtest.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest('custom message', i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

.tmp\test-unit-subtest.py:9: AssertionError
_________________ T.test_foo [custom message] (i=3) _________________

self = <test-unit-subtest.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest('custom message', i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

.tmp\test-unit-subtest.py:9: AssertionError
================ 2 failed, 1 passed in 0.07 seconds =================

subtests fixture example

def test(subtests):
    for i in range(5):
        with subtests.test(msg="custom message", i=i):
            assert i % 2 == 0

Output

λ pytest .tmp\test-subtest.py
======================== test session starts ========================
...
collected 1 item

.tmp\test-subtest.py .F.F..                                    [100%]

============================= FAILURES ==============================
____________________ test [custom message] (i=1) ____________________

    def test(subtests):
        for i in range(5):
            with subtests.test(msg='custom message', i=i):
>               assert i % 2 == 0
E               assert (1 % 2) == 0

.tmp\test-subtest.py:4: AssertionError
____________________ test [custom message] (i=3) ____________________

    def test(subtests):
        for i in range(5):
            with subtests.test(msg='custom message', i=i):
>               assert i % 2 == 0
E               assert (3 % 2) == 0

.tmp\test-subtest.py:4: AssertionError
================ 2 failed, 1 passed in 0.07 seconds =================

Contributing

Contributions are very welcome. Tests can be run with tox:

tox -e py37

License

Distributed under the terms of the MIT license, "pytest-subtests" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

pytest-subtests's People

Contributors

alex, bluetech, bmispelon, bswck, dannysepler, dependabot[bot], edgarrmondragon, github-actions[bot], jacalata, jamesbraza, kalekundert, mauvilsa, maxnikulin, maybe-sybr, nicoddemus, pre-commit-ci[bot], reaperhulk, rhoban13, ronnypfannschmidt, saulshanabrook, vtbassmatt, webknjaz

pytest-subtests's Issues

No logs?

Thanks for the plugin, subtests are properly named now. However...

After installing the plugin, the whole section of the output showing the logs is gone (i.e. the "Captured stdout call" and "Captured log call" sections that appear at the bottom without this plugin).

It's very hard to debug failing tests without logs, making this plugin currently unusable for me.

Question: subtests and pytest-xdist

I have started using subtests and I'm loving it. Would it be possible (and feasible) to make subtests work with xdist in order to parallelize test execution?

Question on asynchronous (concurrent) testing, hiding subtests' parent function

I just asked a question on Stack Overflow about ways to run tests concurrently using pytest; it was suggested to use pytest-subtests, so I tried it and it seems to work for a simple example.

I just wanted to have your thoughts on this strategy (limitations/interest/ideas).

import pytest
import sys
import asyncio
import inspect
import re
import time


pytestmark = pytest.mark.asyncio
io_test_pattern = re.compile("io_.*")


async def tests(subtests):

    def find_io_tests(subtests, ignored_names):
        functions = inspect.getmembers(sys.modules[__name__], inspect.isfunction)
        for (f_name, function) in functions:
            if f_name in ignored_names:
                continue
            if re.search(io_test_pattern, f_name):
                yield run(subtests, f_name, function)

    async def run(subtests, test_name, test_function):
        with subtests.test(msg=test_name):
            await test_function()

    self_name = inspect.currentframe().f_code.co_name
    return await asyncio.gather(*find_io_tests(subtests, {self_name}))


async def io_test_1():
    await assert_sleep_duration_ok(1)

async def io_test_2():
    await assert_sleep_duration_ok(2)

async def io_test_3():
    await assert_sleep_duration_ok(3)

async def io_test_4():
    await assert_sleep_duration_ok(4, fail=True)

MAX_ERROR = 0.1

async def assert_sleep_duration_ok(duration, fail=False):
    start = time.time()
    await asyncio.sleep(duration)
    actual_duration = time.time() - start
    assert abs(actual_duration - duration) < MAX_ERROR
    assert not fail

The output summary of this is a bit unclear for my use case but from my understanding this is related to #9 and pytest-dev/pytest#5047. Right?

============================= test session starts =============================
platform darwin -- Python 3.7.0, pytest-4.6.2, py-1.8.0, pluggy-0.12.0
cachedir: .pytest_cache
rootdir: /Users/cglacet/test/async-tests
plugins: asyncio-0.10.0, trio-0.5.2, subtests-0.2.1
collected 2 items

asyncio_test.py::tests PASSED                                           [ 50%]
asyncio_test.py::tests PASSED                                           [ 50%]
asyncio_test.py::tests PASSED                                           [ 50%]
asyncio_test.py::tests FAILED                                           [ 50%]
asyncio_test.py::tests PASSED                                           [ 50%]

================================== FAILURES ===================================

It would make more sense for me to have something that completely hides the existence of the tests function and only shows something like:

asyncio_test.py::io_test_1 PASSED                                       [ 25%]
asyncio_test.py::io_test_2 PASSED                                       [ 50%]
asyncio_test.py::io_test_3 PASSED                                       [ 75%]
trio_test.py::io_test_4 FAILED                                          [100%]

pytest.xfail reported as hard failure

I was testing out some of the error reporting and I noticed that apparently pytest.xfail() is reported as a hard failure when called within a subtest:

With this in test.py:

import pytest

def test_xfail(subtests):
    with subtests.test():
        pytest.xfail()

I get:

$ pytest -v test.py
...
test.py::test_a_bunch_of_stuff FAILED    [100%]
test.py::test_a_bunch_of_stuff PASSED    [100%]

============ FAILURES =========
____ test_xfail (<subtest>) ___

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f1539aafee0>,
                    suspend_capture_ctx=<bound method CaptureManager.gl...ed'
                    _in_suspended=False> _capture_fixture=None>>,
                    request=<SubRequest 'subtests' for
                    <Function test_a_bunch_of_stuff>>)

    def test_a_bunch_of_stuff(subtests):
        with subtests.test():
>           pytest.xfail()
E           _pytest.outcomes.XFailed: <XFailed instance>

test.py:27: XFailed
========= short test summary info =========
FAILED test.py::test_a_bunch_of_stuff - _pytest.outcomes.XFailed: <XFailed instance>
====== 1 failed, 1 passed in 0.03s ========

Without the subtest, the overall test is properly reported as XFAIL.
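
For comparison, a minimal sketch of the behaviour being referred to (test name hypothetical): the same call made outside a subtest is reported as XFAIL rather than FAILED.

import pytest


def test_xfail_without_subtest():
    # Outside a subtest, pytest reports the whole test as XFAIL, not FAILED.
    pytest.xfail("expected failure")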

self.skipTest does not produce the same results as unittest does

Given this test case:

from unittest import TestCase, main

class T(TestCase):

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
                if i % 2 == 0:
                    self.skipTest('even number')


if __name__ == '__main__':
    main()

Running with python:

λ python .tmp\test-ut-skip.py
sss
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK (skipped=3)

Running with pytest:

 λ pytest .tmp\test-ut-skip.py -q
ss                                                             [100%]
2 skipped in 0.01 seconds

The problem is that TestCaseFunction.addSkip will append the exception info to a list, but pytest_runtest_makereport will only pop one item off the list. We need to find a way to hook into TestCase.subTest to issue pytest_runtest_logreport from within the with statement.
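
A rough sketch of the kind of interception that last sentence suggests, assuming a wrapper around TestCase.subTest (all names hypothetical; this is not the plugin's actual implementation):

import contextlib
import unittest

_original_subtest = unittest.TestCase.subTest


@contextlib.contextmanager
def observed_subtest(self, *args, **params):
    # Delegate to the real subTest so unittest's bookkeeping still happens.
    with _original_subtest(self, *args, **params):
        try:
            yield
        except unittest.SkipTest as exc:
            # A plugin hooking in here could emit one report per skipped
            # subtest instead of letting the skips pile up on the parent test.
            print(f"subtest skipped: {exc}")
            raise  # let unittest record the skip as usual


unittest.TestCase.subTest = observed_subtest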

Only first iteration of subtest has logs in pycharm

I am running this code in pycharm:

import logging
import random

logger = logging.getLogger(__name__)


class TestSub:
    def test_has_nested(self, subtests):
        for i in range(5):
            with subtests.test(f"Text - {i}"):
                logger.info(f"This is log for iteration: {i}")
                assert random.choice([True, False])

Expected result: every one of the 5 subtests has logs in the "Run" panel (screenshot attached).

Actual result: only the first subtest has logs; the rest are empty (screenshot attached).

Test regressions due to output changes in Python 3.11

With Python 3.11.0b1:

$ tox -e py311
GLOB sdist-make: /tmp/pytest-subtests/setup.py
py311 create: /tmp/pytest-subtests/.tox/py311
py311 installdeps: pytest-xdist>=1.28
py311 inst: /tmp/pytest-subtests/.tox/.tmp/package/1/pytest-subtests-0.7.1.dev11+g7991ee2.zip
py311 installed: attrs==21.4.0,execnet==1.9.0,iniconfig==1.1.1,packaging==21.3,pluggy==1.0.0,py==1.11.0,pyparsing==3.0.9,pytest==7.1.2,pytest-forked==1.4.0,pytest-subtests @ file:///tmp/pytest-subtests/.tox/.tmp/package/1/pytest-subtests-0.7.1.dev11%2Bg7991ee2.zip,pytest-xdist==2.5.0,tomli==2.0.1
py311 run-test-pre: PYTHONHASHSEED='3046512211'
py311 run-test: commands[0] | pytest tests
========================================================= test session starts =========================================================
platform linux -- Python 3.11.0b1, pytest-7.1.2, pluggy-1.0.0
cachedir: .tox/py311/.pytest_cache
rootdir: /tmp/pytest-subtests
plugins: subtests-0.7.1.dev11+g7991ee2, xdist-2.5.0, forked-1.4.0
collected 26 items                                                                                                                    

tests/test_subtests.py ........F..F...xxxxx......                                                                               [100%]

============================================================== FAILURES ===============================================================
__________________________________________ TestSubTest.test_simple_terminal_normal[unittest] __________________________________________

self = <test_subtests.TestSubTest object at 0x7f0a4e7d3d90>
simple_script = local('/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2/test_simple_terminal_normal.py')
testdir = <Testdir local('/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2')>, runner = 'unittest'

    @pytest.mark.parametrize("runner", ["unittest", "pytest-normal", "pytest-xdist"])
    def test_simple_terminal_normal(self, simple_script, testdir, runner):
    
        if runner == "unittest":
            result = testdir.run(sys.executable, simple_script)
>           result.stderr.fnmatch_lines(
                [
                    "FAIL: test_foo (__main__.T) [custom] (i=1)",
                    "AssertionError: 1 != 0",
                    "FAIL: test_foo (__main__.T) [custom] (i=3)",
                    "AssertionError: 1 != 0",
                    "Ran 1 test in *",
                    "FAILED (failures=2)",
                ]
            )
E           Failed: nomatch: 'FAIL: test_foo (__main__.T) [custom] (i=1)'
E               and: 'FF'
E               and: '======================================================================'
E               and: 'FAIL: test_foo (__main__.T.test_foo) [custom] (i=1)'
E               and: '----------------------------------------------------------------------'
E               and: 'Traceback (most recent call last):'
E               and: '  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2/test_simple_terminal_normal.py", line 8, in test_foo'
E               and: '    self.assertEqual(i % 2, 0)'
E               and: '    ^^^^^^^^^^^^^^^^^^^^^^^^^^'
E               and: 'AssertionError: 1 != 0'
E               and: ''
E               and: '======================================================================'
E               and: 'FAIL: test_foo (__main__.T.test_foo) [custom] (i=3)'
E               and: '----------------------------------------------------------------------'
E               and: 'Traceback (most recent call last):'
E               and: '  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2/test_simple_terminal_normal.py", line 8, in test_foo'
E               and: '    self.assertEqual(i % 2, 0)'
E               and: '    ^^^^^^^^^^^^^^^^^^^^^^^^^^'
E               and: 'AssertionError: 1 != 0'
E               and: ''
E               and: '----------------------------------------------------------------------'
E               and: 'Ran 1 test in 0.001s'
E               and: ''
E               and: 'FAILED (failures=2)'
E           remains unmatched: 'FAIL: test_foo (__main__.T) [custom] (i=1)'

/tmp/pytest-subtests/tests/test_subtests.py:142: Failed
-------------------------------------------------------- Captured stdout call ---------------------------------------------------------
running: /tmp/pytest-subtests/.tox/py311/bin/python /tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2/test_simple_terminal_normal.py
     in: /tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2
-------------------------------------------------------- Captured stderr call ---------------------------------------------------------
FF
======================================================================
FAIL: test_foo (__main__.T.test_foo) [custom] (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2/test_simple_terminal_normal.py", line 8, in test_foo
    self.assertEqual(i % 2, 0)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 != 0

======================================================================
FAIL: test_foo (__main__.T.test_foo) [custom] (i=3)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_normal2/test_simple_terminal_normal.py", line 8, in test_foo
    self.assertEqual(i % 2, 0)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 != 0

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=2)
_________________________________________ TestSubTest.test_simple_terminal_verbose[unittest] __________________________________________

self = <test_subtests.TestSubTest object at 0x7f0a4e7debd0>
simple_script = local('/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2/test_simple_terminal_verbose.py')
testdir = <Testdir local('/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2')>, runner = 'unittest'

    @pytest.mark.parametrize("runner", ["unittest", "pytest-normal", "pytest-xdist"])
    def test_simple_terminal_verbose(self, simple_script, testdir, runner):
    
        if runner == "unittest":
            result = testdir.run(sys.executable, simple_script, "-v")
>           result.stderr.fnmatch_lines(
                [
                    "test_foo (__main__.T) ... ",
                    "FAIL: test_foo (__main__.T) [custom] (i=1)",
                    "AssertionError: 1 != 0",
                    "FAIL: test_foo (__main__.T) [custom] (i=3)",
                    "AssertionError: 1 != 0",
                    "Ran 1 test in *",
                    "FAILED (failures=2)",
                ]
            )
E           Failed: nomatch: 'test_foo (__main__.T) ... '
E               and: 'test_foo (__main__.T.test_foo) ... '
E               and: '  test_foo (__main__.T.test_foo) [custom] (i=1) ... FAIL'
E               and: '  test_foo (__main__.T.test_foo) [custom] (i=3) ... FAIL'
E               and: ''
E               and: '======================================================================'
E               and: 'FAIL: test_foo (__main__.T.test_foo) [custom] (i=1)'
E               and: '----------------------------------------------------------------------'
E               and: 'Traceback (most recent call last):'
E               and: '  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2/test_simple_terminal_verbose.py", line 8, in test_foo'
E               and: '    self.assertEqual(i % 2, 0)'
E               and: '    ^^^^^^^^^^^^^^^^^^^^^^^^^^'
E               and: 'AssertionError: 1 != 0'
E               and: ''
E               and: '======================================================================'
E               and: 'FAIL: test_foo (__main__.T.test_foo) [custom] (i=3)'
E               and: '----------------------------------------------------------------------'
E               and: 'Traceback (most recent call last):'
E               and: '  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2/test_simple_terminal_verbose.py", line 8, in test_foo'
E               and: '    self.assertEqual(i % 2, 0)'
E               and: '    ^^^^^^^^^^^^^^^^^^^^^^^^^^'
E               and: 'AssertionError: 1 != 0'
E               and: ''
E               and: '----------------------------------------------------------------------'
E               and: 'Ran 1 test in 0.001s'
E               and: ''
E               and: 'FAILED (failures=2)'
E           remains unmatched: 'test_foo (__main__.T) ... '

/tmp/pytest-subtests/tests/test_subtests.py:176: Failed
-------------------------------------------------------- Captured stdout call ---------------------------------------------------------
running: /tmp/pytest-subtests/.tox/py311/bin/python /tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2/test_simple_terminal_verbose.py -v
     in: /tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2
-------------------------------------------------------- Captured stderr call ---------------------------------------------------------
test_foo (__main__.T.test_foo) ... 
  test_foo (__main__.T.test_foo) [custom] (i=1) ... FAIL
  test_foo (__main__.T.test_foo) [custom] (i=3) ... FAIL

======================================================================
FAIL: test_foo (__main__.T.test_foo) [custom] (i=1)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2/test_simple_terminal_verbose.py", line 8, in test_foo
    self.assertEqual(i % 2, 0)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 != 0

======================================================================
FAIL: test_foo (__main__.T.test_foo) [custom] (i=3)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/pytest-of-mgorny/pytest-0/test_simple_terminal_verbose2/test_simple_terminal_verbose.py", line 8, in test_foo
    self.assertEqual(i % 2, 0)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 != 0

----------------------------------------------------------------------
Ran 1 test in 0.001s

FAILED (failures=2)
======================================================= short test summary info =======================================================
FAILED tests/test_subtests.py::TestSubTest::test_simple_terminal_normal[unittest] - Failed: nomatch: 'FAIL: test_foo (__main__.T) [c...
FAILED tests/test_subtests.py::TestSubTest::test_simple_terminal_verbose[unittest] - Failed: nomatch: 'test_foo (__main__.T) ... '
=============================================== 2 failed, 19 passed, 5 xfailed in 5.12s ===============================================
ERROR: InvocationError for command /tmp/pytest-subtests/.tox/py311/bin/pytest tests (exited with code 1)
_______________________________________________________________ summary _______________________________________________________________
ERROR:   py311: commands failed

[feature request] Global subtest function

Currently, when using subtests, one has to propagate the subtests object into each helper function, adding boilerplate:

def test_xyz(subtests):
  assert_fn(x, y, subtests)  # Propagate subtest to all sub functions


def assert_fn(x, y, subtests):
  with subtests.test('a'):
    with subtests.test('x'):
      assert_child_fn(x, subtests)  # Additional function with subtests
    with subtests.test('y'):
      assert_child_fn(x, subtests)

It would be nice if there was instead a global subtest function which could be called directly. The above code could be rewritten as

@pytest.mark.usefixture('subtest')
def test_xyz():
  assert_fn(x, y)  # No more subtest argument


def assert_fn(x, y):
  with pytest.subtest('a'):
    with pytest.subtest('x'):
      assert_child_fn(x)
    with pytest.subtest('y'):
      assert_child_fn(x)

Of course, calling pytest.subtest would raise an error if executed in a test which does not use the subtest fixture, but this would make it easier to write modular test functions.

If pytest.subtest is not possible, being able to import subtests directly would be great too.
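
As a stop-gap, something similar can be approximated in user code today. A sketch using a ContextVar (the global_subtests fixture and the subtest helper are hypothetical names, not part of the plugin):

import contextvars

import pytest

_current_subtests = contextvars.ContextVar("current_subtests")


@pytest.fixture
def global_subtests(subtests):
    # Stash the fixture so helpers can reach it without an extra parameter.
    token = _current_subtests.set(subtests)
    yield subtests
    _current_subtests.reset(token)


def subtest(msg=None, **kwargs):
    # Raises LookupError if no test with the global_subtests fixture is active.
    return _current_subtests.get().test(msg=msg, **kwargs)


def assert_fn(x, y):
    with subtest("x"):
        assert x == 1
    with subtest("y"):
        assert y == 2


def test_xyz(global_subtests):
    assert_fn(1, 2)  # no subtests argument threaded through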

Reports of nested test

Currently:

def test_abc(subtests):
    with subtests.test("a"):
        with subtests.test("b"):
            assert 1 == 2

Is reported as:

___________test_abc [b] ___________

It would be nice if it was reported as:

___________ test_abc [a/b] ___________

This is especially useful when calling the same function from multiple subtests: the same <name> may be passed to subtests.test(<name>) in several places, but in different scopes.
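
Until something like that exists, a user-side workaround sketch that builds the combined label by hand (named_subtest is a hypothetical helper):

import contextlib


@contextlib.contextmanager
def named_subtest(subtests, stack, name):
    # Keep a stack of names so nested subtests get an "a/b"-style label.
    stack.append(name)
    try:
        with subtests.test(msg="/".join(stack)):
            yield
    finally:
        stack.pop()


def test_abc(subtests):
    stack = []
    with named_subtest(subtests, stack, "a"):
        with named_subtest(subtests, stack, "b"):
            assert 1 == 2  # reported with the label [a/b]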

Question: Hook for subtest start

Hi,
I'm a maintainer for pyfakefs and currently trying to make it play nice with pytest-subtests (see this issue).
As pyfakefs patches the filesystem, I have to make sure that it is not active during reporting, so I currently suspend patching in pytest_runtest_logreport and resume it (provided the fixture is still active) in pytest_runtest_call.
This does not work with subtests: pytest_runtest_logreport is called before each subtest (which suspends patching), but pytest_runtest_call is not, and I failed to find a hook to switch patching back on before the next subtest. Is there a reliable way (a hook or otherwise) to get notified before a subtest is run?

Thanks!

pathlib.Path are not shown

Because pytest doesn't support subtests natively, I have to use this plugin. I'm not sure if I'm doing something wrong?

In short: when a pathlib.Path object is used as the argument to unittest's subTest(), its string representation isn't used. I just see (<subtest>) in the output for each subtest.

This is a snippet from one of my unittest test cases:

expect_folder = pathlib.Path.cwd() / 'Beverly'
expect = [
    expect_folder / '_Elke.pickle',
    expect_folder / '_Foo.pickle',
    expect_folder / 'Bar.pickle',
    expect_folder / 'Wurst.pickle',
]
for fp in expect:
    with self.subTest(fp):
        self.assertTrue(fp.exists())

The output for each subtest is then FolderModeFS.test_build_container (<subtest>).

When I wrap fp in str() like this

with self.subTest(str(fp)):

The output looks like this

FolderModeFS.test_build_container [/Beverly/Wurst.pickle]

Parametrize subtest

Is it in any way possible to parametrize a subtest with @pytest.mark.parametrize? I'm dynamically creating subtests for different templates that I need to check, but it would be nice if I could insert values into those templates via parametrize. (Of course, I can also just pass them in as normal values, but pytest's formatting with parametrize is nicer and easier to understand and debug.)
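
The straightforward direction already works: parametrize the test function and create the subtests inside it. A small sketch with made-up template values:

import pytest


@pytest.mark.parametrize("template", ["value={}", "item-{}"])
def test_templates(template, subtests):
    # Each parametrized value becomes its own test item; the subtests
    # are created dynamically inside it.
    for value in (1, 2, 3):
        with subtests.test(msg=template.format(value), value=value):
            rendered = template.format(value)
            assert str(value) in rendered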

subtests don't seem to work with --last-failed

I usually run my tests with --lf and have just now added subtests.

Failed subtests don't seem to be registered as "last failed". This causes all tests to be rerun even if some subtests failed.
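
A minimal reproduction sketch (file and test names hypothetical):

# test_lf.py
def test_with_failing_subtest(subtests):
    with subtests.test(msg="always fails"):
        assert False  # only the subtest fails; the parent test still passes


def test_unrelated():
    assert True

# First run:   pytest test_lf.py       -> the subtest failure is reported
# Second run:  pytest test_lf.py --lf  -> both tests run again, per the report above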

Keep track of passed tests

It would be nice if it was possible to keep track of the tests that were passed or that all subtests either passed or not.

For example:

import pytest
from pytest_subtests import SubTests

def test_something(subtests: SubTests):
    with subtests.test(msg="some subtest"):
        assert False

    with subtests.test(msg="some subtest"):
        assert True

    # Suggested additional code
    assert subtests.passed, "Some subtests failed"

This would let me explicitly fail a test based on failed subtests. Sometimes a test is made up of just subtests, and it is misleading for the test to be reported as PASSED while its subtests failed; it didn't actually pass.
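
Until the plugin exposes something like subtests.passed, one workaround sketch is to count failures manually (note the re-raise, so each subtest is still reported as failed on its own):

def test_something(subtests):
    failures = 0
    for i in range(5):
        with subtests.test(msg="some subtest", i=i):
            try:
                assert i % 2 == 0
            except AssertionError:
                failures += 1
                raise  # still let the subtest report the failure

    # Explicitly fail the parent test if any subtest failed.
    assert failures == 0, f"{failures} subtests failed"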

Capture not working for unittest.TestCase subclasses

Not getting the Captured stdout call logs

Test:

# test_subtest.py
import unittest


class PytestSubtestTest(unittest.TestCase):
    def test_subtest_stdout(self):
        for i in range(2):
            with self.subTest(i=i):
                print(f"{i}: Print something to stdout")
                self.assertEqual(0, 1)

Output:

$ pytest -o log_cli=true test_subtest.py

================================================================================ test session starts ================================================================================
platform darwin -- Python 3.7.7, pytest-5.4.3, py-1.8.1, pluggy-0.13.1
rootdir: /Users/w.son/Projects/git.soma.salesforce.com/TESTS/presto-fit-tests/src/test/python
plugins: subtests-0.3.1
collected 1 item

test_subtest.py::PytestSubtestTest::test_subtest_stdout
test_subtest.py::PytestSubtestTest::test_subtest_stdout PASSED                                                                                                                [100%]

===================================================================================== FAILURES ======================================================================================
____________________________________________________________________ PytestSubtestTest.test_subtest_stdout (i=0) ____________________________________________________________________

self = <test_subtest.PytestSubtestTest testMethod=test_subtest_stdout>

    def test_subtest_stdout(self):
        for i in range(2):
            with self.subTest(i=i):
                print(f"{i}: Print something to stdout")
>               self.assertEqual(0, 1)
E               AssertionError: 0 != 1

test_subtest.py:9: AssertionError
____________________________________________________________________ PytestSubtestTest.test_subtest_stdout (i=1) ____________________________________________________________________

self = <test_subtest.PytestSubtestTest testMethod=test_subtest_stdout>

    def test_subtest_stdout(self):
        for i in range(2):
            with self.subTest(i=i):
                print(f"{i}: Print something to stdout")
>               self.assertEqual(0, 1)
E               AssertionError: 0 != 1

test_subtest.py:9: AssertionError
============================================================================== short test summary info ==============================================================================
FAILED test_subtest.py::PytestSubtestTest::test_subtest_stdout - AssertionError: 0 != 1
FAILED test_subtest.py::PytestSubtestTest::test_subtest_stdout - AssertionError: 0 != 1
============================================================================ 2 failed, 1 passed in 0.11s ============================================================================

Originally posted by @dongwoo1005 in #18 (comment)

Captured output not displayed for functions from the outer test

Normally when a test fails, the captured output from the whole function is printed in the results, but with subtests, pytest only outputs the captured logs and output from the specific subtest that failed. This makes some sense, but I think it would make more sense to also show the captured output from the parts of the test not under a subtest. For example:

def my_function():
    print("This function is critical!")
    return 3, 4

def test_with_subtests(subtests):
    a, b = my_function()

    with subtests.test(a):
        assert a == 5

    with subtests.test(b):
        assert b == 6

This doesn't print anything, but the logs from my_function may be important, since the results of that function are used in the subtests.

I think it would make sense to display output from anything in the enclosing scope of a given subtest, so for example:

def test_with_subtests(subtests):
    print(1)
    for i in range(5):
        with subtests.test(i):
            print(f"Subtest: {i}")
            assert i < 4

    with subtests.test("Nested"):
        print("Nested")
        with subtests.test("Failing"):
            print("Fail")
            assert False

        with subtests.test("Succeeding"):
           print("Success")
           assert True

I would want this to show

1
4
Nested
Fail

Or if the output is not combined at the test level, then for subtest 4, I'd want:

1
4

And for subtest "Nested/Failing" I'd want:

1
Nested
Fail

This is somewhat related to #11 — I think tests should probably be considered failures if any subtests nested under them fail, and this is an example of one reason it makes sense to think of things that way.

pytest -x doesn't break on first failing subtest

I have noticed that when running pytest -x (i.e. stop after the first failure) on a test with failing subtests, the entire test will run - the test won't abort after the first failing subtest. For example, with a test like this:

import pytest

def test_with_subtests(subtests):
    for i in range(10):
        with subtests.test(msg="Message", i=i):
            assert i == 0

The short test summary for pytest -x shows 9 failures instead of 1:

$ pytest -x
(tmp) [/tmp/tmp.1B06JKxL9V]$ pytest -x tests.py
================================================== test session starts ===================================================
platform linux -- Python 3.8.1, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /tmp/tmp.1B06JKxL9V
plugins: subtests-0.3.0
collected 1 item                                                                                                         

tests.py .FFFFFFFFF.                                                                                               [100%]

< skipping failures >
================================================ short test summary info =================================================
FAILED tests.py::test_with_subtests - assert 1 == 0
FAILED tests.py::test_with_subtests - assert 2 == 0
FAILED tests.py::test_with_subtests - assert 3 == 0
FAILED tests.py::test_with_subtests - assert 4 == 0
FAILED tests.py::test_with_subtests - assert 5 == 0
FAILED tests.py::test_with_subtests - assert 6 == 0
FAILED tests.py::test_with_subtests - assert 7 == 0
FAILED tests.py::test_with_subtests - assert 8 == 0
FAILED tests.py::test_with_subtests - assert 9 == 0
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 9 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================== 9 failed, 1 passed in 0.08s ===============================================

I've been using pytest-subtests quite extensively with the reference implementation for PEP 615, and some tests have dozens of subtests (e.g. this one, where the subtest is basically used for parametrization). I'd like to be able to quickly stop the tests and get fast feedback when something is broken, but instead I get dozens of failure messages.

I assume there's a related and possibly trickier question of whether, when you set --maxfail to some number other than 1, you are counting subtests or top-level tests, but I only ever use -x to mean "stop immediately on the first failure", and this defeats my intuition for that use case.

XML output lacks `pass`ed subtests info

(I recommend copy-pasting the XML content from this issue into files and then opening those in a web browser; the tag nesting will be much more apparent.)

Take this code:

import pytest

@pytest.mark.parametrize('n', [0,2,4,0,3,6,0,5,10])
class TestClass:
    def test_func(self, n):
        print(n)
        self.run_single(n)

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()

        assert n%2 == 0, 'n is odd'

When run with pytest pytest-subtest.py --junitxml=out-subtest.xml, the XML file it produces is the following:

<?xml version="1.0" encoding="utf-8"?><testsuites><testsuite name="pytest" errors="0" failures="2" skipped="2" tests="9" time="0.041" timestamp="2023-02-22T13:10:30.982095" hostname="stefano-XPS"><testcase classname="pytest-regular.TestClass" name="test_func[00]" time="0.001" /><testcase classname="pytest-regular.TestClass" name="test_func[2]" time="0.000"><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-regular.py:11: Skipped</skipped></testcase><testcase classname="pytest-regular.TestClass" name="test_func[4]" time="0.000" /><testcase classname="pytest-regular.TestClass" name="test_func[01]" time="0.000" /><testcase classname="pytest-regular.TestClass" name="test_func[3]" time="0.001"><failure message="AssertionError: n is odd&#10;assert (3 % 2) == 0">self = &lt;pytest-regular.TestClass object at 0x7f7770985300&gt;, n = 3

    def test_func(self, n):
        print(n)
&gt;       self.run_single(n)

pytest-regular.py:7: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = &lt;pytest-regular.TestClass object at 0x7f7770985300&gt;, n = 3

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
    
&gt;       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (3 % 2) == 0

pytest-regular.py:13: AssertionError</failure></testcase><testcase classname="pytest-regular.TestClass" name="test_func[6]" time="0.000" /><testcase classname="pytest-regular.TestClass" name="test_func[02]" time="0.000" /><testcase classname="pytest-regular.TestClass" name="test_func[5]" time="0.001"><failure message="AssertionError: n is odd&#10;assert (5 % 2) == 0">self = &lt;pytest-regular.TestClass object at 0x7f77709854b0&gt;, n = 5

    def test_func(self, n):
        print(n)
&gt;       self.run_single(n)

pytest-regular.py:7: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = &lt;pytest-regular.TestClass object at 0x7f77709854b0&gt;, n = 5

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
    
&gt;       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (5 % 2) == 0

pytest-regular.py:13: AssertionError</failure></testcase><testcase classname="pytest-regular.TestClass" name="test_func[10]" time="0.000"><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-regular.py:11: Skipped</skipped></testcase></testsuite></testsuites>

Screenshot from 2023-02-22 13-41-56

I tweaked that code to run the exact same test cases, but split in 3 tests of 3 subtests each:

import pytest

@pytest.mark.parametrize('start', [2,3,5])
class TestClass:
    def test_func(self, subtests, start):
        print(start)
        for multiplier in range(3):
            with subtests.test():
                n = start*multiplier
                self.run_single(n)

    def run_single(self, n):
        if n == 6 or n == 10:
            pytest.skip()

        assert n%2 == 0, 'n is odd'

the resulting XML of which is:

<?xml version="1.0" encoding="utf-8"?><testsuites><testsuite name="pytest" errors="0" failures="2" skipped="2" tests="12" time="0.041" timestamp="2023-02-22T13:10:24.166299" hostname="stefano-XPS"><testcase classname="pytest-subtest.TestClass" name="test_func[2]" time="0.007"><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-subtest.py:15: Skipped</skipped></testcase><testcase classname="pytest-subtest.TestClass" name="test_func[3]" time="0.017"><failure message="AssertionError: n is odd&#10;assert (3 % 2) == 0">self = &lt;pytest-subtest.TestClass object at 0x7f8123d1e530&gt;
subtests = SubTests(ihook=&lt;_pytest.config.compat.PathAwareHookProxy object at 0x7f8124cf4190&gt;, suspend_capture_ctx=&lt;bound method ...te='started' _in_suspended=False&gt; _capture_fixture=None&gt;&gt;, request=&lt;SubRequest 'subtests' for &lt;Function test_func[3]&gt;&gt;)
start = 3

    def test_func(self, subtests, start):
        print(start)
        for multiplier in range(3):
            with subtests.test():
                n = start*multiplier
                print(n)
&gt;               self.run_single(n)

pytest-subtest.py:11: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = &lt;pytest-subtest.TestClass object at 0x7f8123d1e530&gt;, n = 3

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
    
&gt;       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (3 % 2) == 0

pytest-subtest.py:17: AssertionError</failure></testcase><testcase classname="pytest-subtest.TestClass" name="test_func[5]" time="0.004"><failure message="AssertionError: n is odd&#10;assert (5 % 2) == 0">self = &lt;pytest-subtest.TestClass object at 0x7f8123d1e410&gt;
subtests = SubTests(ihook=&lt;_pytest.config.compat.PathAwareHookProxy object at 0x7f8124cf4190&gt;, suspend_capture_ctx=&lt;bound method ...te='started' _in_suspended=False&gt; _capture_fixture=None&gt;&gt;, request=&lt;SubRequest 'subtests' for &lt;Function test_func[5]&gt;&gt;)
start = 5

    def test_func(self, subtests, start):
        print(start)
        for multiplier in range(3):
            with subtests.test():
                n = start*multiplier
                print(n)
&gt;               self.run_single(n)

pytest-subtest.py:11: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = &lt;pytest-subtest.TestClass object at 0x7f8123d1e410&gt;, n = 5

    def run_single(self, n):
        if n == 2 or n == 10:
            pytest.skip()
    
&gt;       assert n%2 == 0, 'n is odd'
E       AssertionError: n is odd
E       assert (5 % 2) == 0

pytest-subtest.py:17: AssertionError</failure><skipped type="pytest.skip" message="Skipped">/home/stefano/git/neo4j-experiments/pytest-subtest.py:15: Skipped</skipped></testcase></testsuite></testsuites>

Screenshot from 2023-02-22 13-41-44

In the subtest version, the XML lacks any information about passed subtests. There is info about failures/skips as nested tags inside a testcase, but while the non-subtests version lists every test in its own testcase tag, the subtests version only lists tests and subtests with a special status. This can throw off CI tools that count testcase tags. We have a few tens of tests that each spawn hundreds of subtests (in a scenario that makes sense, unlike the contrived example here), and

  • we get a full test fail/skip if one subtest fails/is skipped
  • we get a single pass if all subtests pass in one test.

If we run 3 tests with 3 subtests each, and 2 subtests are skipped and one fails, my expectation would be the CI to report 1 failure, 2 skips, 7 pass (or 9 if we also consider the tests, I don't care). Instead, depending a bit on where the tests fail/skip, I can now get 1 failure, 2 skips, 0 pass.

Is there scope for improving on this?

pytest -v does not show the 'msg' contents in the report for subtests

For the following test function, code and test details unimportant:

def test_apifail_inventory_no_object_invs(subtests):
    """Confirm no-objects inventories don't import."""
    inv = soi.Inventory()

    with subtests.test(msg="plain"):
        with pytest.raises(TypeError):
            soi.Inventory(inv.data_file())

    with subtests.test(msg="zlib"):
        with pytest.raises((TypeError, ValueError)):
            soi.Inventory(soi.compress(inv.data_file()))

    d = {"project": "test", "version": "0.0", "count": 0}
    with subtests.test(msg="json"):
        with pytest.raises(ValueError):
            soi.Inventory(d)

The following output is obtained; the contents of msg are not included in the line of the report for each subtest (marked with arrows):

>pytest -vk no_object
================================================= test session starts =================================================
platform win32 -- Python 3.6.3, pytest-4.4.0, py-1.8.0, pluggy-0.9.0 -- c:\temp\git\sphobjinv\env\scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Temp\git\sphobjinv, inifile: tox.ini
plugins: timeout-1.3.3, subtests-0.1.0, ordering-0.6, cov-2.6.1
collected 173 items / 172 deselected / 1 selected

tests/test_api_fail.py::test_apifail_inventory_no_object_invs PASSED     [100%]  <---
tests/test_api_fail.py::test_apifail_inventory_no_object_invs PASSED     [100%]  <---
tests/test_api_fail.py::test_apifail_inventory_no_object_invs PASSED     [100%]  <---
tests/test_api_fail.py::test_apifail_inventory_no_object_invs PASSED                                             [100%]

====================================== 1 passed, 172 deselected in 0.30 seconds =======================================

This is problematic when a subtest fails and something like --tb=line is used. After making a breaking edit to one of the subtests and rerunning:

>pytest -vk no_object --tb=line
================================================= test session starts =================================================
platform win32 -- Python 3.6.3, pytest-4.4.0, py-1.8.0, pluggy-0.9.0 -- c:\temp\git\sphobjinv\env\scripts\python.exe
cachedir: .pytest_cache
rootdir: C:\Temp\git\sphobjinv, inifile: tox.ini
plugins: timeout-1.3.3, subtests-0.1.0, ordering-0.6, cov-2.6.1
collected 173 items / 172 deselected / 1 selected

tests/test_api_fail.py::test_apifail_inventory_no_object_invs FAILED     [100%]    <===
tests/test_api_fail.py::test_apifail_inventory_no_object_invs PASSED     [100%]
tests/test_api_fail.py::test_apifail_inventory_no_object_invs PASSED     [100%]
tests/test_api_fail.py::test_apifail_inventory_no_object_invs PASSED                                             [100%]

====================================================== FAILURES =======================================================
c:\temp\git\sphobjinv\src\sphobjinv\inventory.py:626: TypeError: Invalid Inventory source type
================================= 1 failed, 1 passed, 172 deselected in 0.34 seconds ==================================

It would be nice to have the msg reported with each subtest, so as to avoid having to rerun the test with expanded context to identify offending case(s).

Support coverage.py dynamic contexts when coverage is being collected

coverage.py supports a notion of "dynamic contexts".

The most common reason for these is answering "what tests cover a particular line" in the coverage output -- but right now that information doesn't descend into subtests, so you just see one single context reported when looking at output.

E.g. here's a screenshot from a project where I'm running like 200 subtests within a single test (called test_referencing_suite), only a few of which are going to really cover this line, even though it gets reported as "1 ctx":

Screenshot 2023-05-24 at 14 17 44

I think the way to do this is by including a coverage plugin which is aware of when/how subtests are entered (and to conditionally call into it if coverage is being collected) though I haven't looked closely to be honest, figured I'd file this and see if there's any appetite first for even including this here.
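
For the record, the core of the idea is small. A hedged sketch of switching coverage's dynamic context when a subtest starts (coverage.Coverage.current() and switch_context() are public coverage.py 5+ APIs, but the hook point and label format here are made up, not the plugin's behaviour):

import coverage


def enter_subtest_context(label):
    # If coverage is running, record subsequent lines under a per-subtest
    # dynamic context such as "subtest:test_referencing_suite[case-3]".
    cov = coverage.Coverage.current()
    if cov is not None:
        cov.switch_context(f"subtest:{label}")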

Fixture to make all assertions into their own subtest

Currently the usage requires a context manager, like this:

def test(subtests):
    for i in range(5):
        with subtests.test(msg="custom message", i=i):
            assert i % 2 == 0

Often I want to have a subtest for each assert statement so that I can see all of the failures individually. So, for example:

def test(assert_subtests):
    for i in range(5):        
        assert i % 2 == 0

I'm not sure what possible implementations would look like. Would this require a major reimplementation of the assertion rewriting?
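
Short of assertion rewriting, a lighter-weight sketch is a helper that wraps a single check in its own subtest (check is a hypothetical name; the trade-off is losing pytest's assertion introspection on the expression):

def check(subtests, condition, msg=None):
    # One subtest per call, so each failing check is reported individually.
    with subtests.test(msg=msg):
        assert condition


def test(subtests):
    for i in range(5):
        check(subtests, i % 2 == 0, msg=f"i={i}")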

Captured stdout for a subtest is not displayed properly

Captured stdout for a subtest is not displayed when the --capture option value is fd or sys; it is displayed as expected when the value is tee-sys.
This also prevents the pytest-html plugin from displaying it in a report. (pytest-dev/pytest-html#750)

Minimum code to reproduce:

def test_something(subtests):
    print("main test")

    with subtests.test("subtest"):
        print("sub test")

Logs:

  • --capture=fd (or sys)
$ pytest -rA 
============================================================ test session starts =============================================================
platform darwin -- Python 3.9.18, pytest-7.4.2, pluggy-1.3.0
rootdir: /Users/yugo/Desktop/test
plugins: subtests-0.11.0
collected 1 item                                                                                                                             

test_something.py ,.                                                                                                                   [100%]

=================================================================== PASSES ===================================================================
_______________________________________________________________ test_something _______________________________________________________________
------------------------------------------------------------ Captured stdout call ------------------------------------------------------------
main test
========================================================== short test summary info ===========================================================
PASSED test_something.py::test_something
==================================================== 1 passed, 1 subtests passed in 0.02s ====================================================
  • --capture=tee-sys
$ pytest -rA --capture=tee-sys
============================================================ test session starts =============================================================
platform darwin -- Python 3.9.18, pytest-7.4.2, pluggy-1.3.0
rootdir: /Users/yugo/Desktop/test
plugins: subtests-0.11.0
collected 1 item                                                                                                                             

test_something.py main test
sub test
,.                                                                                                                   [100%]

=================================================================== PASSES ===================================================================
_______________________________________________________________ test_something _______________________________________________________________
------------------------------------------------------------ Captured stdout call ------------------------------------------------------------
main test
sub test
========================================================== short test summary info ===========================================================
PASSED test_something.py::test_something
==================================================== 1 passed, 1 subtests passed in 0.02s ====================================================
$ pip freeze
attrs==23.1.0
exceptiongroup==1.1.3
iniconfig==2.0.0
packaging==23.2
pluggy==1.3.0
pytest==7.4.2
pytest-subtests==0.11.0
tomli==2.0.1

Show count of subtests in summary

As always, thanks for the work on this extremely useful plugin!

It would be useful if pytest-subtests could provide a count of the number of subtests. Either by incrementing the pytest test count, or augmenting it with a new section. Something like 2760 passed (53,383 subtests), 3253 skipped.

Doesn't show the name with nested subtests

When subTest is used in a nested way, the label in the output is only the "deepest" label.

Example:

import unittest


class T(unittest.TestCase):
    def test_foo(self):
        with self.subTest("Positive Tests"):
          for i in range(5):
            with self.subTest("custom message", i=i):
                self.assertEqual(i % 2, 0)


if __name__ == "__main__":
    unittest.main()

Unclear test failure reporting

Currently, when many subtests are failing, the output looks like:

============================= test session starts ==============================
platform linux -- Python 3.9.12, pytest-6.2.3, py-1.10.0, pluggy-0.9.0 -- None
cachedir: ...
rootdir: ...
collecting ... collected 72 items

dtypes_test.py::test_dtype[np-item0] PASSED                              [  1%]
dtypes_test.py::test_dtype[np-item0] FAILED                              [  1%]
dtypes_test.py::test_dtype[np-item0] FAILED                              [  1%]
dtypes_test.py::test_dtype[np-item0] FAILED                              [  1%]
...

This makes it hard to tell which subtests passed and which did not. It would be nice if the subtest name were displayed, like:

dtypes_test.py::test_dtype[np-item0] [np.int32] PASSED                              [  1%]
dtypes_test.py::test_dtype[np-item0] [np.int64] FAILED                              [  1%]
dtypes_test.py::test_dtype[np-item0] [np.float32] FAILED                              [  1%]
dtypes_test.py::test_dtype[np-item0] [np.float64] FAILED                              [  1%]
...

Subtest message not displayed when installed using pip

I am running the subtests example from the documentation:

def test_foo(subtests):
    for i in range(5):
        with subtests.test(msg="custom", i=i):
            assert i % 2 == 0

I am running the tests using

pytest tests/test_tmp.py -v

When I install using pip

pip install pytest-subtests

I get the following output

tests/test_tmp.py::test_foo SUBPASS                                          [100%]
tests/test_tmp.py::test_foo SUBFAIL                                          [100%]
tests/test_tmp.py::test_foo SUBPASS                                          [100%]
tests/test_tmp.py::test_foo SUBFAIL                                          [100%]
tests/test_tmp.py::test_foo SUBPASS                                          [100%]
tests/test_tmp.py::test_foo PASSED                                           [100%]

However, if I install it directly from github using

pip install git+https://github.com/pytest-dev/pytest-subtests

I get the expected output

tests/test_tmp.py::test_foo [custom] (i=0) SUBPASS                           [100%]
tests/test_tmp.py::test_foo [custom] (i=1) SUBFAIL                           [100%]
tests/test_tmp.py::test_foo [custom] (i=2) SUBPASS                           [100%]
tests/test_tmp.py::test_foo [custom] (i=3) SUBFAIL                           [100%]
tests/test_tmp.py::test_foo [custom] (i=4) SUBPASS                           [100%]
tests/test_tmp.py::test_foo PASSED                                           [100%]

pytest `--last-failed` does not seem to remember failed subtests

I'm experiencing an issue where it appears that --last-failed cannot remember a failed test if the failure occurs in a subtest.

As a result, instead of only re-running the failed test --last-failed will re-run all tests.

Additionally, no tests will run at all if --last-failed is used with --last-failed-no-failures=none.

Here is my test:

def test_fails(subtests):
    for i in range(0, 2):
        with subtests.test(msg="error", i=i):
            assert i == 1

def test_passes():
    assert True

Result with pytest -q

Looks good so far. This is expected output.

============================ FAILURES ============================
____________________ test_fails [error] (i=0) ____________________

    def test_fails(subtests):
        for i in range(0, 2):
            with subtests.test(msg="error", i=i):
>               assert i == 1
E               assert 0 == 1

test_subtest.py:4: AssertionError
==================== short test summary info =====================
FAILED test_subtest.py::test_fails - assert 0 == 1
1 failed, 2 passed, 2 warnings in 0.21s

Result with pytest -q --lf

It's re-running all tests including test_passes which didn't fail. This is unexpected.

============================ FAILURES ============================
____________________ test_fails [error] (i=0) ____________________

    def test_fails(subtests):
        for i in range(0, 2):
            with subtests.test(msg="error", i=i):
>               assert i == 1
E               assert 0 == 1

test_subtest.py:4: AssertionError
==================== short test summary info =====================
FAILED test_subtest.py::test_fails - assert 0 == 1
1 failed, 2 passed, 2 warnings in 0.25s

Result with pytest -q --lf --lfnf=none

This is the worst case because I know something failed and it didn't re-run anything!

2 deselected in 0.01s

`pytest --sw` doesn't remember failures when using `subtests` with `@parametrize`

Here's a short example:

# test_foo.py
import pytest

@pytest.mark.parametrize(
        'xs', [
            [True],
            [False],
        ],
)
def test_foo(xs, subtests):
    for x in xs:
        with subtests.test(x=x):
            assert x

When I run pytest --sw the first time, it runs both tests (as expected). As a side note, though, it does seem to miscount the tests: the summary at the end claims that 1 test failed and 2 passed, while the verbose output at the beginning claims that 1 failed and 3 passed (each parameter seems to be tested twice).

============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.2.5, py-1.9.0, pluggy-0.13.1 -- /home/kale/.pyenv/versions/3.8.2/bin/python3.8
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/kale/sandbox/pytest_subtest/.hypothesis/examples')
rootdir: /home/kale/sandbox/pytest_subtest
plugins: forked-1.1.3, anyio-3.1.0, xonsh-0.9.27, typeguard-2.12.1, unordered-0.4.1, subtests-0.5.0, cov-2.8.1, xdist-1.32.0, hypothesis-5.8.3, mock-2.0.0, profiling-1.7.0
collected 2 items                                                              
stepwise: no previously failed tests, not skipping.

test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs1] FAILED                                        [100%]
test_foo.py::test_foo[xs1] PASSED                                        [100%]

=================================== FAILURES ===================================
___________________________ test_foo[xs1] (x=False) ____________________________

xs = [False]
subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7fce6eaa8580>, suspend_capture_ctx=<bound method CaptureManager.gl...e='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_foo[xs1]>>)

    @pytest.mark.parametrize(
            'xs', [
                [True],
                [False],
            ],
    )
    def test_foo(xs, subtests):
        for x in xs:
            with subtests.test(x=x):
>               assert x
E               assert False

test_foo.py:14: AssertionError
=============================== warnings summary ===============================
test_foo.py::test_foo[xs0]
test_foo.py::test_foo[xs1]
  /home/kale/.pyenv/versions/3.8.2/lib/python3.8/site-packages/pytest_subtests.py:143: PytestDeprecationWarning: A private pytest class or function was used.
    fixture = CaptureFixture(FDCapture, self.request)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED test_foo.py::test_foo[xs1] - assert False
!!!!!!!! Interrupted: Test failed, continuing from this test next run. !!!!!!!!!
=================== 1 failed, 2 passed, 2 warnings in 0.64s ====================

When I run pytest --sw for a second time, I expect it to skip the first parameter, which passed the first time. Instead, it reruns both parameters:

$ pytest --sw -v
============================= test session starts ==============================
platform linux -- Python 3.8.2, pytest-6.2.5, py-1.9.0, pluggy-0.13.1 -- /home/kale/.pyenv/versions/3.8.2/bin/python3.8
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/kale/sandbox/pytest_subtest/.hypothesis/examples')
rootdir: /home/kale/sandbox/pytest_subtest
plugins: forked-1.1.3, anyio-3.1.0, xonsh-0.9.27, typeguard-2.12.1, unordered-0.4.1, subtests-0.5.0, cov-2.8.1, xdist-1.32.0, hypothesis-5.8.3, mock-2.0.0, profiling-1.7.0
collected 2 items                                                              
stepwise: no previously failed tests, not skipping.

test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs0] PASSED                                        [ 50%]
test_foo.py::test_foo[xs1] FAILED                                        [100%]
test_foo.py::test_foo[xs1] PASSED                                        [100%]

=================================== FAILURES ===================================
___________________________ test_foo[xs1] (x=False) ____________________________

xs = [False]
subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7ffb9ee0a580>, suspend_capture_ctx=<bound method CaptureManager.gl...e='started' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_foo[xs1]>>)

    @pytest.mark.parametrize(
            'xs', [
                [True],
                [False],
            ],
    )
    def test_foo(xs, subtests):
        for x in xs:
            with subtests.test(x=x):
>               assert x
E               assert False

test_foo.py:14: AssertionError
=============================== warnings summary ===============================
test_foo.py::test_foo[xs0]
test_foo.py::test_foo[xs1]
  /home/kale/.pyenv/versions/3.8.2/lib/python3.8/site-packages/pytest_subtests.py:143: PytestDeprecationWarning: A private pytest class or function was used.
    fixture = CaptureFixture(FDCapture, self.request)

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED test_foo.py::test_foo[xs1] - assert False
!!!!!!!! Interrupted: Test failed, continuing from this test next run. !!!!!!!!!
=================== 1 failed, 2 passed, 2 warnings in 0.64s ====================

If I had to guess what was going on, I'd say that pytest thinks the test as a whole is passing even though some of the subtests are failing.
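
One way to check that guess (my own debugging sketch, not part of the report): add a tiny hook to conftest.py and run with -s to see the outcome of every report pytest receives. If the subtest reports are failed while the function's own call report is passed, stepwise has no failing item to remember.

# conftest.py -- assumes only the standard pytest_runtest_logreport hook;
# it prints each report's node ID, phase, and outcome.
def pytest_runtest_logreport(report):
    print(f"{report.nodeid!r:<40} when={report.when!r:<10} outcome={report.outcome!r}")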

subtests.test cannot be used inside a generator

Imagine I have a similar test-case setup for multiple tests and want to use subtests:

import pytest


@pytest.fixture
def create_test_cases(subtests):
    def fn(n):
        for i in range(n):
            with subtests.test(msg="custom message", i=i):
                yield i

    return fn


def test_foo(create_test_cases):
    for i in create_test_cases(5):
        assert i % 2 == 0


def test_bar(create_test_cases):
    for i in create_test_cases(5):
        assert i % 3 == 0

This gives the following output:

main.py ,F,F                                                       [100%]

================================ FAILURES =================================
________________________________ test_foo _________________________________
Traceback (most recent call last):
  File "/home/user/main.py", line 24, in test_foo
    assert i % 2 == 0
AssertionError: assert (1 % 2) == 0
________________________________ test_bar _________________________________
Traceback (most recent call last):
  File "/home/user/main.py", line 29, in test_bar
    assert i % 3 == 0
AssertionError: assert (1 % 3) == 0
========================= short test summary info =========================
FAILED main.py::test_foo - assert (1 % 2) == 0
FAILED main.py::test_bar - assert (1 % 3) == 0
============================ 2 failed in 0.03s ============================

As you can see, although the subtests.test context manager is in place, the execution stops after the first failure. Since it is a FAILED instead of a SUBFAILED, one also misses out on the extra information the sub failure would print.

Digging into the code, the problem is

try:
    yield
except (Exception, OutcomeException):
    exc_info = ExceptionInfo.from_current()

not handling the GeneratorExit.
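
To see why that matters, here is a standalone illustration (not plugin code): GeneratorExit derives from BaseException rather than Exception, so an except (Exception, ...) clause inside a context manager never sees it when a suspended generator is closed.

from contextlib import contextmanager


@contextmanager
def recording():
    try:
        yield
    except Exception:  # GeneratorExit is a BaseException, so it skips this clause
        print("failure recorded")


def cases():
    with recording():
        yield 0  # the generator is suspended here when the caller stops iterating


g = cases()
next(g)
g.close()  # raises GeneratorExit at the yield; "failure recorded" is never printed
print(issubclass(GeneratorExit, Exception))  # False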

verbose output confusing, as it shows test passed

The example given in the README is a bit odd when run with the -v flag.

(subtest) $ pytest -v test_sub_pytest.py  --tb=no
================= test session starts =================
platform darwin -- Python 3.8.0a1, pytest-4.4.0, py-1.8.0, pluggy-0.9.0 -- 
plugins: subtests-0.1.0
collected 1 item                                      

test_sub_pytest.py::test 
test_sub_pytest.py::test PASSED                 [100%]

========= 2 failed, 1 passed in 0.05 seconds ==========

The double ::test is a bit odd, as is the PASSED.
And with 1 test that has 5 subtests, where does the 2 failed, 1 passed come from?
Specifically, the 1 passed?

Confusion: Missing subtest message in 4 failed subtests but 1 test passed

I have two problems here, and I'm not sure whether I misunderstand the purpose of pytest-subtests or whether there really is a bug.

I have a unittest test like this:

expect_folder = pathlib.Path.cwd() / 'Beverly'
self.assertTrue(expect_folder.exists())

expect = [
    expect_folder / '_Elke.pickle',
    expect_folder / '_Foo.pickle',
    expect_folder / 'Bar.pickle',
    expect_folder / 'Wurst.pickle',
]
for fp in expect:
    with self.subTest(str(fp)):
        self.assertTrue(fp.exists())

As you can see, assertTrue() is called five times.

The pytest output:

===================================================================================== short test summary info ==
SUBFAIL tests/test_bandas_datacontainer.py::FolderModeFS::test_build_container - AssertionError: False is not true
SUBFAIL tests/test_bandas_datacontainer.py::FolderModeFS::test_build_container - AssertionError: False is not true
SUBFAIL tests/test_bandas_datacontainer.py::FolderModeFS::test_build_container - AssertionError: False is not true
SUBFAIL tests/test_bandas_datacontainer.py::FolderModeFS::test_build_container - AssertionError: False is not true
=================================================================================== 4 failed, 1 passed in 1.97s ==

Problem 1: passed but not passed

The message 4 failed, 1 passed might technically be correct, but it is wrong from the user's perspective. There is one test, defined by the def test_...() method. It doesn't matter how often self.assert...() is called in there; one failure means the whole test fails. Saying 1 passed is a bug.

Problem 2: subtest message missing

I do use subTest (self.subTest(str(fp))) to better see which part of the test failed, but the output just tells me four times AssertionError: False is not true, without any detail about which subtest is involved. I would expect something like AssertionError: False is not true (/Beverly/Foo.pickle) as output.
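
One workaround that comes to mind (my suggestion, not from the issue): pass the path as the assertion message as well, since unittest appends msg to the failure text even when the subtest label is not shown.

for fp in expect:
    with self.subTest(str(fp)):
        # msg= is standard unittest API; the failure then reads
        # "AssertionError: False is not true : <path>"
        self.assertTrue(fp.exists(), msg=str(fp))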

Integrate into the core

Opening this to debate whether it is time to integrate this plugin into the core.

Pros to integrate it into the core:

  • Adds support for a standard unittest feature.
  • A different (arguably simpler) way to execute parametrized tests via the subtests fixture.
  • Some features (#6, for example) are hard to implement as a plugin without further changes to the core.

Cons:

  • Yet another maintenance burden for the core.

Opinions?

cc @bluetech @RonnyPfannschmidt @asottile @The-Compiler

pytest-subtests deprecation warning

pytest-subtests uses CaptureFixture in pytest_subtests.py, which causes the following warning (at least with the latest pytest):

PytestDeprecationWarning: A private pytest class or function was used.

Request: usage without context manager

Is it possible to have subtests work without a context manager? Something like subtests.inline_test:

def test_several_cases(subtests) -> None:
    subtests.inline_test(msg="test 1")
    assert 0 % 2 == 0

    # Upon the next inline_test call,
    # the first test case is considered over
    subtests.inline_test(msg="test 2")
    assert 1 % 2 == 1

Why consider this? I find my tests losing a lot of horizontal space to indentation. Here's an example:

  • 8 chars: test class and then test method def
  • 4 chars: patches
  • 4 chars: pytest.raises
  • Now we arrive at the test behavior(s), already 16+ chars indented

Adding with subtests.test, one loses another 4 characters. black then steps in to wrap at 88 characters, and I end up with code blocks that are unnecessarily vertically long.

Is there any appetite to support something that doesn't require context management?


Workarounds:

  1. Calling __enter__ and __exit__ manually (see the sketch below)
  2. Inlining subtests.test with patching
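
A rough sketch of workaround 1 (my own, and not a supported API): drive the context manager returned by subtests.test by hand, which flattens the indentation at the cost of some boilerplate. It assumes subtests.test() returns an ordinary context manager.

import sys


def test_several_cases(subtests) -> None:
    case = subtests.test(msg="test 1")
    case.__enter__()
    try:
        assert 0 % 2 == 0
    except BaseException:
        # hand the active exception to __exit__, mirroring what "with" does
        case.__exit__(*sys.exc_info())
    else:
        case.__exit__(None, None, None)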

pytest==6.0.0 breaks pytest-subtests

Content of test_subtests.py

def test_subtests(subtests):
    with subtests.test("foo"):
        assert True

Run with pytest==5.4.3

================================ test session starts =================================
platform linux -- Python 3.6.9, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /home/user
plugins: subtests-0.3.1
collected 1 item                                                                     

test_subtests.py ..                                                            [100%]

================================= 1 passed in 0.01s ==================================

Run with pytest==6.0.0

platform linux -- Python 3.6.9, pytest-6.0.0, py-1.9.0, pluggy-0.13.1
rootdir: /home/user
plugins: subtests-0.3.1
collected 1 item                                                                     

test_subtests.py F                                                             [100%]

====================================== FAILURES ======================================
___________________________________ test_subtests ____________________________________

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f6abd329cf8>, suspend_capture_ctx=<bound method CaptureManager.gl...'suspended' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_subtests>>)

    def test_subtests(subtests):
        with subtests.test("foo"):
>           assert True

test_subtests.py:3: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/lib/python3.6/contextlib.py:88: in __exit__
    next(self.gen)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f6abd329cf8>, suspend_capture_ctx=<bound method CaptureManager.gl...'suspended' _in_suspended=False> _capture_fixture=None>>, request=<SubRequest 'subtests' for <Function test_subtests>>)
msg = 'foo', kwargs = {}, start = 6178.172653449, exc_info = None
captured = Captured(out='', err=''), stop = 6178.172874861

    @contextmanager
    def test(self, msg=None, **kwargs):
        start = monotonic()
        exc_info = None
    
        with self._capturing_output() as captured:
            try:
                yield
            except (Exception, OutcomeException):
                exc_info = ExceptionInfo.from_current()
    
        stop = monotonic()
    
>       call_info = CallInfo(None, exc_info, start, stop, when="call")
E       TypeError: __init__() missing 1 required positional argument: 'duration'

.venv/lib/python3.6/site-packages/pytest_subtests.py:161: TypeError
============================== short test summary info ===============================
FAILED test_subtests.py::test_subtests - TypeError: __init__() missing 1 required p...
================================= 1 failed in 0.03s ==================================
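
The traceback points at the cause: pytest 6.0 made duration a required argument of CallInfo. A version-tolerant call could look like the sketch below (my guess at a compatibility shim for the pytest 5/6 transition shown here, not the fix the plugin actually shipped).

import inspect

from _pytest.runner import CallInfo


def make_call_info(exc_info, start, stop):
    # Pass duration only when the installed pytest accepts it
    # (pytest 6.0 added the field; pytest 5.x does not have it).
    kwargs = {"when": "call"}
    if "duration" in inspect.signature(CallInfo.__init__).parameters:
        kwargs["duration"] = stop - start
    return CallInfo(None, exc_info, start, stop, **kwargs)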

Some tests are failing

[fab@1234 repos]$ git clone git@github.com:fabaff/pytest-subtests.git
Cloning into 'pytest-subtests'...
remote: Enumerating objects: 25, done.
remote: Counting objects: 100% (25/25), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 166 (delta 5), reused 12 (delta 3), pack-reused 141
Receiving objects: 100% (166/166), 39.34 KiB | 592.00 KiB/s, done.
Resolving deltas: 100% (74/74), done.
[fab@1234 repos]$ cd pytest-subtests/
[fab@1234 pytest-subtests]$ python3 -m venv .
[fab@1234 pytest-subtests]$ source bin/activate
(pytest-subtests) [fab@1234 pytest-subtests]$ python3 setup.py develop
running develop
running egg_info
creating pytest_subtests.egg-info
writing pytest_subtests.egg-info/PKG-INFO
writing dependency_links to pytest_subtests.egg-info/dependency_links.txt
writing entry points to pytest_subtests.egg-info/entry_points.txt
writing requirements to pytest_subtests.egg-info/requires.txt
writing top-level names to pytest_subtests.egg-info/top_level.txt
writing manifest file 'pytest_subtests.egg-info/SOURCES.txt'
writing manifest file 'pytest_subtests.egg-info/SOURCES.txt'
running build_ext
Creating /home/fab/Documents/repos/pytest-subtests/lib/python3.7/site-packages/pytest-subtests.egg-link (link to .)
Adding pytest-subtests 0.3.1.dev1+gc5442e3 to easy-install.pth file

Installed /home/fab/Documents/repos/pytest-subtests
Processing dependencies for pytest-subtests==0.3.1.dev1+gc5442e3
[...]
Finished processing dependencies for pytest-subtests==0.3.1.dev1+gc5442e3
(pytest-subtests) [fab@1234 pytest-subtests]$ pytest-3 -v tests
/usr/lib/python3.7/site-packages/trio/_core/_multierror.py:450: RuntimeWarning: You seem to already have a custom sys.excepthook handler installed. I'll skip installing trio's custom handler, but this means MultiErrors will not show full tracebacks.
  category=RuntimeWarning
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /home/fab/Documents/repos/pytest-subtests
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0
collecting ... 

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestFixture.test_simple_terminal_normal[normal] ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestFixture object at 0x7f12c26c0fd0>, simple_script = None, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal0')>
mode = 'normal'

    def test_simple_terminal_normal(self, simple_script, testdir, mode):
        if mode == "normal":
            result = testdir.runpytest()
            expected_lines = ["collected 1 item"]
        else:
            pytest.importorskip("xdist")
            result = testdir.runpytest("-n1")
            expected_lines = ["gw0 [1]"]
    
        expected_lines += [
            "* test_foo [[]custom[]] (i=1) *",
            "* test_foo [[]custom[]] (i=3) *",
            "* 2 failed, 1 passed in *",
        ]
>       result.stdout.fnmatch_lines(expected_lines)
E       Failed: nomatch: 'collected 1 item'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal0'
E           and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E           and: ''
E           and: '―――――――――――――――――――――――――― ERROR at setup of test_foo ――――――――――――――――――――――――――'
E           and: 'file /tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal0/test_simple_terminal_normal.py, line 1'
E           and: '  def test_foo(subtests):'
E           and: "E       fixture 'subtests' not found"
E           and: '>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave'
E           and: ">       use 'pytest --fixtures [testpath]' for help on them."
E           and: ''
E           and: '/tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal0/test_simple_terminal_normal.py:1'
E           and: '\r                                                                 \x1b[32m100% \x1b[0m\x1b[40m██████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.08s):'
E           and: '\x1b[31m       1 error\x1b[0m'
E           and: ''
E       remains unmatched: 'collected 1 item'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:37: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal0
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


―――――――――――――――――――――――――― ERROR at setup of test_foo ――――――――――――――――――――――――――
file /tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal0/test_simple_terminal_normal.py, line 1
  def test_foo(subtests):
E       fixture 'subtests' not found
>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal0/test_simple_terminal_normal.py:1
                                                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.08s):
       1 error

 tests/test_subtests.py::TestFixture.test_simple_terminal_normal[normal] ⨯                                                                                           5% ▌         
 tests/test_subtests.py::TestFixture.test_simple_terminal_normal[xdist] s                                                                                           11% █▏        

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestFixture.test_simple_terminal_verbose[normal] ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestFixture object at 0x7f12c1a71090>, simple_script = None, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose0')>
mode = 'normal'

    def test_simple_terminal_verbose(self, simple_script, testdir, mode):
        if mode == "normal":
            result = testdir.runpytest("-v")
            expected_lines = [
                "*collected 1 item",
                "test_simple_terminal_verbose.py::test_foo PASSED *100%*",
                "test_simple_terminal_verbose.py::test_foo FAILED *100%*",
                "test_simple_terminal_verbose.py::test_foo PASSED *100%*",
                "test_simple_terminal_verbose.py::test_foo FAILED *100%*",
                "test_simple_terminal_verbose.py::test_foo PASSED *100%*",
                "test_simple_terminal_verbose.py::test_foo PASSED *100%*",
            ]
        else:
            pytest.importorskip("xdist")
            result = testdir.runpytest("-n1", "-v")
            expected_lines = [
                "gw0 [1]",
                "*gw0*100%* test_simple_terminal_verbose.py::test_foo*",
                "*gw0*100%* test_simple_terminal_verbose.py::test_foo*",
                "*gw0*100%* test_simple_terminal_verbose.py::test_foo*",
                "*gw0*100%* test_simple_terminal_verbose.py::test_foo*",
                "*gw0*100%* test_simple_terminal_verbose.py::test_foo*",
                "*gw0*100%* test_simple_terminal_verbose.py::test_foo*",
            ]
    
        expected_lines += [
            "* test_foo [[]custom[]] (i=1) *",
            "* test_foo [[]custom[]] (i=3) *",
            "* 2 failed, 1 passed in *",
        ]
>       result.stdout.fnmatch_lines(expected_lines)
E       Failed: nomatch: '*collected 1 item'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: 'cachedir: .pytest_cache'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose0'
E           and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: 'collecting ... '
E           and: ''
E           and: '―――――――――――――――――――――――――― ERROR at setup of test_foo ――――――――――――――――――――――――――'
E           and: 'file /tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose0/test_simple_terminal_verbose.py, line 1'
E           and: '  def test_foo(subtests):'
E           and: "E       fixture 'subtests' not found"
E           and: '>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave'
E           and: ">       use 'pytest --fixtures [testpath]' for help on them."
E           and: ''
E           and: '/tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose0/test_simple_terminal_verbose.py:1'
E           and: '\r                                                                 \x1b[32m100% \x1b[0m\x1b[40m██████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.06s):'
E           and: '\x1b[31m       1 error\x1b[0m'
E           and: ''
E       remains unmatched: '*collected 1 item'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:69: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose0
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0
collecting ... 

―――――――――――――――――――――――――― ERROR at setup of test_foo ――――――――――――――――――――――――――
file /tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose0/test_simple_terminal_verbose.py, line 1
  def test_foo(subtests):
E       fixture 'subtests' not found
>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose0/test_simple_terminal_verbose.py:1
                                                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.06s):
       1 error

 tests/test_subtests.py::TestFixture.test_simple_terminal_verbose[normal] ⨯                                                                                         16% █▋        
 tests/test_subtests.py::TestFixture.test_simple_terminal_verbose[xdist] s                                                                                          21% ██▏       

――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestFixture.test_skip[normal] ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestFixture object at 0x7f12cdfb7990>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_skip0')>, mode = 'normal'

    def test_skip(self, testdir, mode):
        testdir.makepyfile(
            """
            import pytest
            def test_foo(subtests):
                for i in range(5):
                    with subtests.test(msg="custom", i=i):
                        if i % 2 == 0:
                            pytest.skip('even number')
        """
        )
        if mode == "normal":
            result = testdir.runpytest()
            expected_lines = ["collected 1 item"]
        else:
            pytest.importorskip("xdist")
            result = testdir.runpytest("-n1")
            expected_lines = ["gw0 [1]"]
        expected_lines += ["* 1 passed, 3 skipped in *"]
>       result.stdout.fnmatch_lines(expected_lines)
E       Failed: nomatch: 'collected 1 item'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_skip0'
E           and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E           and: ''
E           and: '―――――――――――――――――――――――――― ERROR at setup of test_foo ――――――――――――――――――――――――――'
E           and: 'file /tmp/pytest-of-fab/pytest-4/test_skip0/test_skip.py, line 2'
E           and: '  def test_foo(subtests):'
E           and: "E       fixture 'subtests' not found"
E           and: '>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave'
E           and: ">       use 'pytest --fixtures [testpath]' for help on them."
E           and: ''
E           and: '/tmp/pytest-of-fab/pytest-4/test_skip0/test_skip.py:2'
E           and: '\r                                                                 \x1b[32m100% \x1b[0m\x1b[40m██████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.06s):'
E           and: '\x1b[31m       1 error\x1b[0m'
E           and: ''
E       remains unmatched: 'collected 1 item'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:90: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_skip0
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


―――――――――――――――――――――――――― ERROR at setup of test_foo ――――――――――――――――――――――――――
file /tmp/pytest-of-fab/pytest-4/test_skip0/test_skip.py, line 2
  def test_foo(subtests):
E       fixture 'subtests' not found
>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-fab/pytest-4/test_skip0/test_skip.py:2
                                                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.06s):
       1 error

 tests/test_subtests.py::TestFixture.test_skip[normal] ⨯                                                                                                            26% ██▋       
 tests/test_subtests.py::TestFixture.test_skip[xdist] s                                                                                                             32% ███▎      
 tests/test_subtests.py::TestSubTest.test_simple_terminal_normal[unittest] ✓                                                                                        37% ███▊      

――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestSubTest.test_simple_terminal_normal[pytest-normal] ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestSubTest object at 0x7f12cdf6a750>, simple_script = local('/tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal3/test_simple_terminal_normal.py')
testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal3')>, runner = 'pytest-normal'

    @pytest.mark.parametrize("runner", ["unittest", "pytest-normal", "pytest-xdist"])
    def test_simple_terminal_normal(self, simple_script, testdir, runner):
    
        if runner == "unittest":
            result = testdir.run(sys.executable, simple_script)
            result.stderr.fnmatch_lines(
                [
                    "FAIL: test_foo (__main__.T) [custom] (i=1)",
                    "AssertionError: 1 != 0",
                    "FAIL: test_foo (__main__.T) [custom] (i=3)",
                    "AssertionError: 1 != 0",
                    "Ran 1 test in *",
                    "FAILED (failures=2)",
                ]
            )
        else:
            if runner == "pytest-normal":
                result = testdir.runpytest(simple_script)
                expected_lines = ["collected 1 item"]
            else:
                pytest.importorskip("xdist")
                result = testdir.runpytest(simple_script, "-n1")
                expected_lines = ["gw0 [1]"]
            result.stdout.fnmatch_lines(
                expected_lines
                + [
                    "* T.test_foo [[]custom[]] (i=1) *",
                    "E  * AssertionError: 1 != 0",
                    "* T.test_foo [[]custom[]] (i=3) *",
                    "E  * AssertionError: 1 != 0",
>                   "* 2 failed, 1 passed in *",
                ]
            )
E           Failed: nomatch: 'collected 1 item'
E               and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E               and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E               and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal3'
E               and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E               and: ''
E               and: ''
E               and: '―――――――――――――――――――――――――――――――――― T.test_foo ――――――――――――――――――――――――――――――――――'
E               and: ''
E               and: 'self = <test_simple_terminal_normal.T testMethod=test_foo>'
E               and: ''
E               and: '    def test_foo(self):'
E               and: '        for i in range(5):'
E               and: '            with self.subTest(msg="custom", i=i):'
E               and: '>               self.assertEqual(i % 2, 0)'
E               and: 'E               AssertionError: 1 != 0'
E               and: ''
E               and: 'test_simple_terminal_normal.py:8: AssertionError'
E               and: '\r'
E               and: '\r \x1b[36m\x1b[0mtest_simple_terminal_normal.py\x1b[0m \x1b[31m⨯\x1b[0m                                \x1b[31m100% \x1b[0m\x1b[40m\x1b[31m█\x1b[0m\x1b[40m\x1b[31m█████████\x1b[0m'
E               and: '===Flaky Test Report==='
E               and: ''
E               and: ''
E               and: '===End Flaky Test Report==='
E               and: ''
E               and: 'Results (0.08s):'
E               and: '\x1b[31m       1 failed\x1b[0m'
E               and: '         - \x1b[36m\x1b[0mtest_simple_terminal_normal.py\x1b[0m:5 \x1b[31mT.test_foo\x1b[0m'
E               and: ''
E           remains unmatched: 'collected 1 item'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:146: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_normal3
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


―――――――――――――――――――――――――――――――――― T.test_foo ――――――――――――――――――――――――――――――――――

self = <test_simple_terminal_normal.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_simple_terminal_normal.py:8: AssertionError

 test_simple_terminal_normal.py ⨯                                100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.08s):
       1 failed
         - test_simple_terminal_normal.py:5 T.test_foo

 tests/test_subtests.py::TestSubTest.test_simple_terminal_normal[pytest-normal] ⨯                                                                                   42% ████▎     
 tests/test_subtests.py::TestSubTest.test_simple_terminal_normal[pytest-xdist] s                                                                                    47% ████▊     
 tests/test_subtests.py::TestSubTest.test_simple_terminal_verbose[unittest] ✓                                                                                       53% █████▍    

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestSubTest.test_simple_terminal_verbose[pytest-normal] ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestSubTest object at 0x7f12cdefdad0>, simple_script = local('/tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose3/test_simple_terminal_verbose.py')
testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose3')>, runner = 'pytest-normal'

    @pytest.mark.parametrize("runner", ["unittest", "pytest-normal", "pytest-xdist"])
    def test_simple_terminal_verbose(self, simple_script, testdir, runner):
    
        if runner == "unittest":
            result = testdir.run(sys.executable, simple_script, "-v")
            result.stderr.fnmatch_lines(
                [
                    "test_foo (__main__.T) ... ",
                    "FAIL: test_foo (__main__.T) [custom] (i=1)",
                    "AssertionError: 1 != 0",
                    "FAIL: test_foo (__main__.T) [custom] (i=3)",
                    "AssertionError: 1 != 0",
                    "Ran 1 test in *",
                    "FAILED (failures=2)",
                ]
            )
        else:
            if runner == "pytest-normal":
                result = testdir.runpytest(simple_script, "-v")
                expected_lines = [
                    "*collected 1 item",
                    "test_simple_terminal_verbose.py::T::test_foo FAILED *100%*",
                    "test_simple_terminal_verbose.py::T::test_foo FAILED *100%*",
                    "test_simple_terminal_verbose.py::T::test_foo PASSED *100%*",
                ]
            else:
                pytest.importorskip("xdist")
                result = testdir.runpytest(simple_script, "-n1", "-v")
                expected_lines = [
                    "gw0 [1]",
                    "*gw0*100%* FAILED test_simple_terminal_verbose.py::T::test_foo*",
                    "*gw0*100%* FAILED test_simple_terminal_verbose.py::T::test_foo*",
                    "*gw0*100%* PASSED test_simple_terminal_verbose.py::T::test_foo*",
                ]
            result.stdout.fnmatch_lines(
                expected_lines
                + [
                    "* T.test_foo [[]custom[]] (i=1) *",
                    "E  * AssertionError: 1 != 0",
                    "* T.test_foo [[]custom[]] (i=3) *",
                    "E  * AssertionError: 1 != 0",
>                   "* 2 failed, 1 passed in *",
                ]
            )
E           Failed: nomatch: '*collected 1 item'
E               and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E               and: 'cachedir: .pytest_cache'
E               and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E               and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose3'
E               and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E               and: 'collecting ... '
E               and: ''
E               and: '―――――――――――――――――――――――――――――――――― T.test_foo ――――――――――――――――――――――――――――――――――'
E               and: ''
E               and: 'self = <test_simple_terminal_verbose.T testMethod=test_foo>'
E               and: ''
E               and: '    def test_foo(self):'
E               and: '        for i in range(5):'
E               and: '            with self.subTest(msg="custom", i=i):'
E               and: '>               self.assertEqual(i % 2, 0)'
E               and: 'E               AssertionError: 1 != 0'
E               and: ''
E               and: 'test_simple_terminal_verbose.py:8: AssertionError'
E               and: '\r'
E               and: '\r \x1b[36mtest_simple_terminal_verbose.py\x1b[0m::T.test_foo\x1b[0m \x1b[31m⨯\x1b[0m                   \x1b[31m100% \x1b[0m\x1b[40m\x1b[31m█\x1b[0m\x1b[40m\x1b[31m█████████\x1b[0m'
E               and: '===Flaky Test Report==='
E               and: ''
E               and: ''
E               and: '===End Flaky Test Report==='
E               and: ''
E               and: 'Results (0.08s):'
E               and: '\x1b[31m       1 failed\x1b[0m'
E               and: '         - \x1b[36m\x1b[0mtest_simple_terminal_verbose.py\x1b[0m:5 \x1b[31mT.test_foo\x1b[0m'
E               and: ''
E           remains unmatched: '*collected 1 item'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:191: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_simple_terminal_verbose3
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0
collecting ... 

―――――――――――――――――――――――――――――――――― T.test_foo ――――――――――――――――――――――――――――――――――

self = <test_simple_terminal_verbose.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_simple_terminal_verbose.py:8: AssertionError

 test_simple_terminal_verbose.py::T.test_foo ⨯                   100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.08s):
       1 failed
         - test_simple_terminal_verbose.py:5 T.test_foo

 tests/test_subtests.py::TestSubTest.test_simple_terminal_verbose[pytest-normal] ⨯                                                                                  58% █████▊    
 tests/test_subtests.py::TestSubTest.test_simple_terminal_verbose[pytest-xdist] s                                                                                   63% ██████▍   
 tests/test_subtests.py::TestSubTest.test_skip[unittest] ✓                                                                                                          68% ██████▉   
 tests/test_subtests.py::TestSubTest.test_skip[pytest-normal] x                                                                                                     74% ███████▍  
 tests/test_subtests.py::TestSubTest.test_skip[pytest-xdist] x                                                                                                      79% ███████▉  

――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_capturing ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f12c14c03d0>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_capturing0')>

    def test_capturing(self, testdir):
        self.create_file(testdir)
        result = testdir.runpytest()
        result.stdout.fnmatch_lines(
            [
                "*__ test (i='A') __*",
                "*Captured stdout call*",
                "hello stdout A",
                "*Captured stderr call*",
                "hello stderr A",
                "*__ test (i='B') __*",
                "*Captured stdout call*",
                "hello stdout B",
                "*Captured stderr call*",
                "hello stderr B",
                "*__ test __*",
                "*Captured stdout call*",
                "start test",
>               "end test",
            ]
        )
E       Failed: nomatch: "*__ test (i='A') __*"
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_capturing0'
E           and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E           and: ''
E           and: '―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――'
E           and: 'file /tmp/pytest-of-fab/pytest-4/test_capturing0/test_capturing.py, line 2'
E           and: '  def test(subtests):'
E           and: "E       fixture 'subtests' not found"
E           and: '>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave'
E           and: ">       use 'pytest --fixtures [testpath]' for help on them."
E           and: ''
E           and: '/tmp/pytest-of-fab/pytest-4/test_capturing0/test_capturing.py:2'
E           and: '\r                                                                 \x1b[32m100% \x1b[0m\x1b[40m██████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.06s):'
E           and: '\x1b[31m       1 error\x1b[0m'
E           and: ''
E       remains unmatched: "*__ test (i='A') __*"

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:266: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_capturing0
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――
file /tmp/pytest-of-fab/pytest-4/test_capturing0/test_capturing.py, line 2
  def test(subtests):
E       fixture 'subtests' not found
>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-fab/pytest-4/test_capturing0/test_capturing.py:2
                                                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.06s):
       1 error

 tests/test_subtests.py::TestCapture.test_capturing ⨯                                                                                                               84% ████████▌ 

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_no_capture ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f12c14d5810>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_no_capture0')>

    def test_no_capture(self, testdir):
        self.create_file(testdir)
        result = testdir.runpytest("-s")
        result.stdout.fnmatch_lines(
            [
                "start test",
                "hello stdout A",
                "Fhello stdout B",
                "Fend test",
                "*__ test (i='A') __*",
                "*__ test (i='B') __*",
>               "*__ test __*",
            ]
        )
E       Failed: nomatch: 'start test'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_no_capture0'
E           and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E           and: ''
E           and: '―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――'
E           and: 'file /tmp/pytest-of-fab/pytest-4/test_no_capture0/test_no_capture.py, line 2'
E           and: '  def test(subtests):'
E           and: "E       fixture 'subtests' not found"
E           and: '>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave'
E           and: ">       use 'pytest --fixtures [testpath]' for help on them."
E           and: ''
E           and: '/tmp/pytest-of-fab/pytest-4/test_no_capture0/test_no_capture.py:2'
E           and: '\r                                                                 \x1b[32m100% \x1b[0m\x1b[40m██████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.06s):'
E           and: '\x1b[31m       1 error\x1b[0m'
E           and: ''
E       remains unmatched: 'start test'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:281: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_no_capture0
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――
file /tmp/pytest-of-fab/pytest-4/test_no_capture0/test_no_capture.py, line 2
  def test(subtests):
E       fixture 'subtests' not found
>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-fab/pytest-4/test_no_capture0/test_no_capture.py:2
                                                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.06s):
       1 error

 tests/test_subtests.py::TestCapture.test_no_capture ⨯                                                                                                              89% ████████▉ 

――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_capture_with_fixture[capsys] ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f12c1372ad0>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_capture_with_fixture0')>, fixture = 'capsys'

    @pytest.mark.parametrize("fixture", ["capsys", "capfd"])
    def test_capture_with_fixture(self, testdir, fixture):
        testdir.makepyfile(
            r"""
            import sys
    
            def test(subtests, {fixture}):
                print('start test')
    
                with subtests.test(i='A'):
                    print("hello stdout A")
                    print("hello stderr A", file=sys.stderr)
    
                out, err = {fixture}.readouterr()
                assert out == 'start test\nhello stdout A\n'
                assert err == 'hello stderr A\n'
        """.format(
                fixture=fixture
            )
        )
        result = testdir.runpytest()
        result.stdout.fnmatch_lines(
>           ["*1 passed*",]
        )
E       Failed: nomatch: '*1 passed*'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture0'
E           and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E           and: ''
E           and: '―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――'
E           and: 'file /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture0/test_capture_with_fixture.py, line 3'
E           and: '  def test(subtests, capsys):'
E           and: "E       fixture 'subtests' not found"
E           and: '>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave'
E           and: ">       use 'pytest --fixtures [testpath]' for help on them."
E           and: ''
E           and: '/tmp/pytest-of-fab/pytest-4/test_capture_with_fixture0/test_capture_with_fixture.py:3'
E           and: '\r                                                                 \x1b[32m100% \x1b[0m\x1b[40m██████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.06s):'
E           and: '\x1b[31m       1 error\x1b[0m'
E           and: ''
E       remains unmatched: '*1 passed*'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:308: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture0
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――
file /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture0/test_capture_with_fixture.py, line 3
  def test(subtests, capsys):
E       fixture 'subtests' not found
>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-fab/pytest-4/test_capture_with_fixture0/test_capture_with_fixture.py:3
                                                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.06s):
       1 error

 tests/test_subtests.py::TestCapture.test_capture_with_fixture[capsys] ⨯                                                                                            95% █████████▌

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_capture_with_fixture[capfd] ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f12c1372210>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-4/test_capture_with_fixture1')>, fixture = 'capfd'

    @pytest.mark.parametrize("fixture", ["capsys", "capfd"])
    def test_capture_with_fixture(self, testdir, fixture):
        testdir.makepyfile(
            r"""
            import sys
    
            def test(subtests, {fixture}):
                print('start test')
    
                with subtests.test(i='A'):
                    print("hello stdout A")
                    print("hello stderr A", file=sys.stderr)
    
                out, err = {fixture}.readouterr()
                assert out == 'start test\nhello stdout A\n'
                assert err == 'hello stderr A\n'
        """.format(
                fixture=fixture
            )
        )
        result = testdir.runpytest()
        result.stdout.fnmatch_lines(
>           ["*1 passed*",]
        )
E       Failed: nomatch: '*1 passed*'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture1'
E           and: 'plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E           and: ''
E           and: '―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――'
E           and: 'file /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture1/test_capture_with_fixture.py, line 3'
E           and: '  def test(subtests, capfd):'
E           and: "E       fixture 'subtests' not found"
E           and: '>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave'
E           and: ">       use 'pytest --fixtures [testpath]' for help on them."
E           and: ''
E           and: '/tmp/pytest-of-fab/pytest-4/test_capture_with_fixture1/test_capture_with_fixture.py:3'
E           and: '\r                                                                 \x1b[32m100% \x1b[0m\x1b[40m██████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.04s):'
E           and: '\x1b[31m       1 error\x1b[0m'
E           and: ''
E       remains unmatched: '*1 passed*'

/home/fab/Documents/repos/pytest-subtests/tests/test_subtests.py:308: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/pytest-subtests/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture1
plugins: hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


―――――――――――――――――――――――――――― ERROR at setup of test ――――――――――――――――――――――――――――
file /tmp/pytest-of-fab/pytest-4/test_capture_with_fixture1/test_capture_with_fixture.py, line 3
  def test(subtests, capfd):
E       fixture 'subtests' not found
>       available fixtures: _dj_autoclear_mailbox, _django_clear_site_cache, _django_db_marker, _django_set_urlconf, _django_setup_unittest, _fail_for_invalid_template_variable, _live_server_helper, _template_string_if_invalid_marker, _vcr_marker, admin_client, admin_user, autojump_clock, betamax_parametrized_recorder, betamax_parametrized_session, betamax_recorder, betamax_session, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, client, cov, datafiles, db, django_assert_max_num_queries, django_assert_num_queries, django_db_blocker, django_db_createdb, django_db_keepdb, django_db_modify_db_settings, django_db_modify_db_settings_parallel_suffix, django_db_modify_db_settings_tox_suffix, django_db_modify_db_settings_xdist_suffix, django_db_reset_sequences, django_db_setup, django_db_use_migrations, django_mail_dnsname, django_mail_patch_dns, django_test_environment, django_user_model, django_username_field, doctest_namespace, event_loop, live_server, loop, mailoutbox, mock, mock_clock, mocker, monkeypatch, no_cover, nursery, patching, print_logs, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, requests_mock, rf, settings, smart_caplog, stdouts, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, tmpworkdir, transactional_db, unused_tcp_port, unused_tcp_port_factory, vcr, vcr_cassette, vcr_cassette_dir, vcr_cassette_name, vcr_config, weave
>       use 'pytest --fixtures [testpath]' for help on them.

/tmp/pytest-of-fab/pytest-4/test_capture_with_fixture1/test_capture_with_fixture.py:3
                                                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.04s):
       1 error

 tests/test_subtests.py::TestCapture.test_capture_with_fixture[capfd] ⨯                                                                                            100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (6.78s):
       3 passed
       9 failed
         - tests/test_subtests.py:23 TestFixture.test_simple_terminal_normal[normal]
         - tests/test_subtests.py:39 TestFixture.test_simple_terminal_verbose[normal]
         - tests/test_subtests.py:71 TestFixture.test_skip[normal]
         - tests/test_subtests.py:116 TestSubTest.test_simple_terminal_normal[pytest-normal]
         - tests/test_subtests.py:150 TestSubTest.test_simple_terminal_verbose[pytest-normal]
         - tests/test_subtests.py:248 TestCapture.test_capturing
         - tests/test_subtests.py:270 TestCapture.test_no_capture
         - tests/test_subtests.py:286 TestCapture.test_capture_with_fixture[capsys]
         - tests/test_subtests.py:286 TestCapture.test_capture_with_fixture[capfd]
       2 xfailed
       5 skipped

At least it seems that the `subtests` fixture is not found, i.e. the plugin is not being picked up in the environment running the inner pytest sessions.
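
To double-check that assumption (a minimal sketch, not part of the original report): if pytest-subtests is installed and registered in the environment that spawns the inner pytest runs, the `subtests` fixture is listed by `pytest --fixtures` and the tiny test below passes; if the plugin is not loaded, pytest errors at setup with "fixture 'subtests' not found", exactly as in the output above.

    # sanity_check_subtests.py -- hypothetical file name, for illustration only
    def test_subtests_fixture_is_available(subtests):
        # With pytest-subtests active, this trivially passes; without it,
        # pytest reports "fixture 'subtests' not found" during setup.
        with subtests.test(msg="plugin is active"):
            assert True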

The tests for 0.3.0 are also failing during the RPM build process for Fedora:

+ pytest-3.7 -v tests
/usr/lib/python3.7/site-packages/trio/_core/_multierror.py:450: RuntimeWarning: You seem to already have a custom sys.excepthook handler installed. I'll skip installing trio's custom handler, but this means MultiErrors will not show full tracebacks.
  category=RuntimeWarning
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')
rootdir: /home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0
collecting ... 

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestFixture.test_simple_terminal_normal[normal] ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestFixture object at 0x7f9f46c9c290>, simple_script = None, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-1/test_simple_terminal_normal0')>
mode = 'normal'

    def test_simple_terminal_normal(self, simple_script, testdir, mode):
        if mode == "normal":
            result = testdir.runpytest()
            expected_lines = ["collected 1 item"]
        else:
            pytest.importorskip("xdist")
            result = testdir.runpytest("-n1")
            expected_lines = ["gw0 [1]"]
    
        expected_lines += [
            "* test_foo [[]custom[]] (i=1) *",
            "* test_foo [[]custom[]] (i=3) *",
            "* 2 failed, 1 passed in *",
        ]
>       result.stdout.fnmatch_lines(expected_lines)
E       Failed: nomatch: 'collected 1 item'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-1/test_simple_terminal_normal0'
E           and: 'plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: '\r'
E           and: ''
E           and: ''
E           and: '――――――――――――――――――――――――――― test_foo [custom] (i=1) ――――――――――――――――――――――――――――'
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46ea42d0>, suspend_capture_ctx=<bound method CaptureManager.gl... _in_suspended='<UNSET>'> _current_item=<Function test_foo>>>, request=<SubRequest 'subtests' for <Function test_foo>>)"
E           and: ''
E           and: '    def test_foo(subtests):'
E           and: '        for i in range(5):'
E           and: '            with subtests.test(msg="custom", i=i):'
E           and: '>               assert i % 2 == 0'
E           and: 'E               assert (1 % 2) == 0'
E           and: ''
E           and: 'test_simple_terminal_normal.py:4: AssertionError'
E           and: '\r'
E           and: ''
E           and: ''
E           and: '――――――――――――――――――――――――――― test_foo [custom] (i=3) ――――――――――――――――――――――――――――'
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46ea42d0>, suspend_capture_ctx=<bound method CaptureManager.gl... _in_suspended='<UNSET>'> _current_item=<Function test_foo>>>, request=<SubRequest 'subtests' for <Function test_foo>>)"
E           and: ''
E           and: '    def test_foo(subtests):'
E           and: '        for i in range(5):'
E           and: '            with subtests.test(msg="custom", i=i):'
E           and: '>               assert i % 2 == 0'
E           and: 'E               assert (3 % 2) == 0'
E           and: ''
E           and: 'test_simple_terminal_normal.py:4: AssertionError'
E           and: '\r'
E           and: '\r \x1b[36m\x1b[0mtest_simple_terminal_normal.py\x1b[0m \x1b[31m⨯\x1b[0m\x1b[32m✓\x1b[0m\x1b[32m✓\x1b[0m                              \x1b[31m100% \x1b[0m\x1b[40m\x1b[31m█\x1b[0m\x1b[40m\x1b[31m█████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.13s):'
E           and: '\x1b[32m       4 passed\x1b[0m'
E           and: '\x1b[31m       2 failed\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_simple_terminal_normal.py\x1b[0m:1 \x1b[31mtest_foo\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_simple_terminal_normal.py\x1b[0m:1 \x1b[31mtest_foo\x1b[0m'
E           and: ''
E       remains unmatched: 'collected 1 item'

/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/tests/test_subtests.py:37: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-1/test_simple_terminal_normal0
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0



[...]
rootdir: /tmp/pytest-of-fab/pytest-1/test_simple_terminal_normal3
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0

―――――――――――――――――――――――――― T.test_foo [custom] (i=1) ―――――――――――――――――――――――――――

self = <test_simple_terminal_normal.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_simple_terminal_normal.py:8: AssertionError


―――――――――――――――――――――――――― T.test_foo [custom] (i=3) ―――――――――――――――――――――――――――

self = <test_simple_terminal_normal.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_simple_terminal_normal.py:8: AssertionError

 test_simple_terminal_normal.py ⨯✓                               100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.07s):
       1 passed
       2 failed
         - test_simple_terminal_normal.py:5 T.test_foo
         - test_simple_terminal_normal.py:5 T.test_foo

 tests/test_subtests.py::TestSubTest.test_simple_terminal_normal[pytest-normal] ⨯                                                                                   42% ████▎     
 tests/test_subtests.py::TestSubTest.test_simple_terminal_normal[pytest-xdist] s                                                                                    47% ████▊     
 tests/test_subtests.py::TestSubTest.test_simple_terminal_verbose[unittest] ✓                                                                                       53% █████▍    

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestSubTest.test_simple_terminal_verbose[pytest-normal] ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestSubTest object at 0x7f9f532ab3d0>, simple_script = local('/tmp/pytest-of-fab/pytest-1/test_simple_terminal_verbose3/test_simple_terminal_verbose.py')
testdir = <Testdir local('/tmp/pytest-of-fab/pytest-1/test_simple_terminal_verbose3')>, runner = 'pytest-normal'

    @pytest.mark.parametrize("runner", ["unittest", "pytest-normal", "pytest-xdist"])
    def test_simple_terminal_verbose(self, simple_script, testdir, runner):
    
        if runner == "unittest":
            result = testdir.run(sys.executable, simple_script, "-v")
            result.stderr.fnmatch_lines(
                [
                    "test_foo (__main__.T) ... ",
                    "FAIL: test_foo (__main__.T) [custom] (i=1)",
                    "AssertionError: 1 != 0",
                    "FAIL: test_foo (__main__.T) [custom] (i=3)",
                    "AssertionError: 1 != 0",
                    "Ran 1 test in *",
                    "FAILED (failures=2)",
                ]
            )
        else:
            if runner == "pytest-normal":
                result = testdir.runpytest(simple_script, "-v")
                expected_lines = [
                    "*collected 1 item",
                    "test_simple_terminal_verbose.py::T::test_foo FAILED *100%*",
                    "test_simple_terminal_verbose.py::T::test_foo FAILED *100%*",
                    "test_simple_terminal_verbose.py::T::test_foo PASSED *100%*",
                ]
            else:
                pytest.importorskip("xdist")
                result = testdir.runpytest(simple_script, "-n1", "-v")
                expected_lines = [
                    "gw0 [1]",
                    "*gw0*100%* FAILED test_simple_terminal_verbose.py::T::test_foo*",
                    "*gw0*100%* FAILED test_simple_terminal_verbose.py::T::test_foo*",
                    "*gw0*100%* PASSED test_simple_terminal_verbose.py::T::test_foo*",
                ]
            result.stdout.fnmatch_lines(
                expected_lines
                + [
                    "* T.test_foo [[]custom[]] (i=1) *",
                    "E  * AssertionError: 1 != 0",
                    "* T.test_foo [[]custom[]] (i=3) *",
                    "E  * AssertionError: 1 != 0",
>                   "* 2 failed, 1 passed in *",
                ]
            )
E           Failed: nomatch: '*collected 1 item'
E               and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E               and: 'cachedir: .pytest_cache'
E               and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')"
E               and: 'rootdir: /tmp/pytest-of-fab/pytest-1/test_simple_terminal_verbose3'
E               and: 'plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E               and: 'collecting ... '
E               and: '―――――――――――――――――――――――――― T.test_foo [custom] (i=1) ―――――――――――――――――――――――――――'
E               and: ''
E               and: 'self = <test_simple_terminal_verbose.T testMethod=test_foo>'
E               and: ''
E               and: '    def test_foo(self):'
E               and: '        for i in range(5):'
E               and: '            with self.subTest(msg="custom", i=i):'
E               and: '>               self.assertEqual(i % 2, 0)'
E               and: 'E               AssertionError: 1 != 0'
E               and: ''
E               and: 'test_simple_terminal_verbose.py:8: AssertionError'
E               and: '\r'
E               and: ''
E               and: '―――――――――――――――――――――――――― T.test_foo [custom] (i=3) ―――――――――――――――――――――――――――'
E               and: ''
E               and: 'self = <test_simple_terminal_verbose.T testMethod=test_foo>'
E               and: ''
E               and: '    def test_foo(self):'
E               and: '        for i in range(5):'
E               and: '            with self.subTest(msg="custom", i=i):'
E               and: '>               self.assertEqual(i % 2, 0)'
E               and: 'E               AssertionError: 1 != 0'
E               and: ''
E               and: 'test_simple_terminal_verbose.py:8: AssertionError'
E               and: '\r'
E               and: '\r \x1b[36mtest_simple_terminal_verbose.py\x1b[0m::T.test_foo\x1b[0m \x1b[31m⨯\x1b[0m\x1b[32m✓\x1b[0m                  \x1b[31m100% \x1b[0m\x1b[40m\x1b[31m█\x1b[0m\x1b[40m\x1b[31m█████████\x1b[0m'
E               and: '===Flaky Test Report==='
E               and: ''
E               and: ''
E               and: '===End Flaky Test Report==='
E               and: ''
E               and: 'Results (0.06s):'
E               and: '\x1b[32m       1 passed\x1b[0m'
E               and: '\x1b[31m       2 failed\x1b[0m'
E               and: '         - \x1b[36m\x1b[0mtest_simple_terminal_verbose.py\x1b[0m:5 \x1b[31mT.test_foo\x1b[0m'
E               and: '         - \x1b[36m\x1b[0mtest_simple_terminal_verbose.py\x1b[0m:5 \x1b[31mT.test_foo\x1b[0m'
E               and: ''
E           remains unmatched: '*collected 1 item'

/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/tests/test_subtests.py:191: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-1/test_simple_terminal_verbose3
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0
collecting ... 
―――――――――――――――――――――――――― T.test_foo [custom] (i=1) ―――――――――――――――――――――――――――

self = <test_simple_terminal_verbose.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_simple_terminal_verbose.py:8: AssertionError


―――――――――――――――――――――――――― T.test_foo [custom] (i=3) ―――――――――――――――――――――――――――

self = <test_simple_terminal_verbose.T testMethod=test_foo>

    def test_foo(self):
        for i in range(5):
            with self.subTest(msg="custom", i=i):
>               self.assertEqual(i % 2, 0)
E               AssertionError: 1 != 0

test_simple_terminal_verbose.py:8: AssertionError

 test_simple_terminal_verbose.py::T.test_foo ⨯✓                  100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.06s):
       1 passed
       2 failed
         - test_simple_terminal_verbose.py:5 T.test_foo
         - test_simple_terminal_verbose.py:5 T.test_foo

 tests/test_subtests.py::TestSubTest.test_simple_terminal_verbose[pytest-normal] ⨯                                                                                  58% █████▊    
 tests/test_subtests.py::TestSubTest.test_simple_terminal_verbose[pytest-xdist] s                                                                                   63% ██████▍   
 tests/test_subtests.py::TestSubTest.test_skip[unittest] ✓                                                                                                          68% ██████▉   
 tests/test_subtests.py::TestSubTest.test_skip[pytest-normal] x                                                                                                     74% ███████▍  
 tests/test_subtests.py::TestSubTest.test_skip[pytest-xdist] x                                                                                                      79% ███████▉  

――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_capturing ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f9f468f91d0>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-1/test_capturing0')>

    def test_capturing(self, testdir):
        self.create_file(testdir)
        result = testdir.runpytest()
        result.stdout.fnmatch_lines(
            [
                "*__ test (i='A') __*",
                "*Captured stdout call*",
                "hello stdout A",
                "*Captured stderr call*",
                "hello stderr A",
                "*__ test (i='B') __*",
                "*Captured stdout call*",
                "hello stdout B",
                "*Captured stderr call*",
                "hello stderr B",
                "*__ test __*",
                "*Captured stdout call*",
                "start test",
>               "end test",
            ]
        )
E       Failed: nomatch: "*__ test (i='A') __*"
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-1/test_capturing0'
E           and: 'plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E           and: ''
E           and: "――――――――――――――――――――――――――――――――― test (i='A') ―――――――――――――――――――――――――――――――――"
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f4694c550>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)"
E           and: ''
E           and: '    def test(subtests):'
E           and: '        print()'
E           and: "        print('start test')"
E           and: '    '
E           and: "        with subtests.test(i='A'):"
E           and: '            print("hello stdout A")'
E           and: '            print("hello stderr A", file=sys.stderr)'
E           and: '>           assert 0'
E           and: 'E           assert 0'
E           and: ''
E           and: 'test_capturing.py:9: AssertionError'
E           and: '----------------------------- Captured stdout call -----------------------------'
E           and: 'hello stdout A'
E           and: '----------------------------- Captured stderr call -----------------------------'
E           and: 'hello stderr A'
E           and: '\r'
E           and: ''
E           and: ''
E           and: "――――――――――――――――――――――――――――――――― test (i='B') ―――――――――――――――――――――――――――――――――"
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f4694c550>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)"
E           and: ''
E           and: '    def test(subtests):'
E           and: '        print()'
E           and: "        print('start test')"
E           and: '    '
E           and: "        with subtests.test(i='A'):"
E           and: '            print("hello stdout A")'
E           and: '            print("hello stderr A", file=sys.stderr)'
E           and: '            assert 0'
E           and: '    '
E           and: "        with subtests.test(i='B'):"
E           and: '            print("hello stdout B")'
E           and: '            print("hello stderr B", file=sys.stderr)'
E           and: '>           assert 0'
E           and: 'E           assert 0'
E           and: ''
E           and: 'test_capturing.py:14: AssertionError'
E           and: '----------------------------- Captured stdout call -----------------------------'
E           and: 'hello stdout B'
E           and: '----------------------------- Captured stderr call -----------------------------'
E           and: 'hello stderr B'
E           and: '\r'
E           and: ''
E           and: ''
E           and: '――――――――――――――――――――――――――――――――――――― test ―――――――――――――――――――――――――――――――――――――'
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f4694c550>, suspend_capture_ctx=<bound method CaptureManager.gl...spended' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)"
E           and: ''
E           and: '    def test(subtests):'
E           and: '        print()'
E           and: "        print('start test')"
E           and: '    '
E           and: "        with subtests.test(i='A'):"
E           and: '            print("hello stdout A")'
E           and: '            print("hello stderr A", file=sys.stderr)'
E           and: '            assert 0'
E           and: '    '
E           and: "        with subtests.test(i='B'):"
E           and: '            print("hello stdout B")'
E           and: '            print("hello stderr B", file=sys.stderr)'
E           and: '            assert 0'
E           and: '    '
E           and: "        print('end test')"
E           and: '>       assert 0'
E           and: 'E       assert 0'
E           and: ''
E           and: 'test_capturing.py:17: AssertionError'
E           and: '----------------------------- Captured stdout call -----------------------------'
E           and: ''
E           and: 'start test'
E           and: 'end test'
E           and: '\r'
E           and: '\r \x1b[36m\x1b[0mtest_capturing.py\x1b[0m \x1b[31m⨯\x1b[0m                                             \x1b[31m100% \x1b[0m\x1b[40m\x1b[31m█\x1b[0m\x1b[40m\x1b[31m█████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.10s):'
E           and: '\x1b[31m       3 failed\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_capturing.py\x1b[0m:2 \x1b[31mtest\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_capturing.py\x1b[0m:2 \x1b[31mtest\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_capturing.py\x1b[0m:2 \x1b[31mtest\x1b[0m'
E           and: ''
E       remains unmatched: "*__ test (i='A') __*"

/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/tests/test_subtests.py:266: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-1/test_capturing0
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0


――――――――――――――――――――――――――――――――― test (i='A') ―――――――――――――――――――――――――――――――――

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f4694c550>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)

    def test(subtests):
        print()
        print('start test')
    
        with subtests.test(i='A'):
            print("hello stdout A")
            print("hello stderr A", file=sys.stderr)
>           assert 0
E           assert 0

test_capturing.py:9: AssertionError
----------------------------- Captured stdout call -----------------------------
hello stdout A
----------------------------- Captured stderr call -----------------------------
hello stderr A



――――――――――――――――――――――――――――――――― test (i='B') ―――――――――――――――――――――――――――――――――

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f4694c550>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)

    def test(subtests):
        print()
        print('start test')
    
        with subtests.test(i='A'):
            print("hello stdout A")
            print("hello stderr A", file=sys.stderr)
            assert 0
    
        with subtests.test(i='B'):
            print("hello stdout B")
            print("hello stderr B", file=sys.stderr)
>           assert 0
E           assert 0

test_capturing.py:14: AssertionError
----------------------------- Captured stdout call -----------------------------
hello stdout B
----------------------------- Captured stderr call -----------------------------
hello stderr B



――――――――――――――――――――――――――――――――――――― test ―――――――――――――――――――――――――――――――――――――

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f4694c550>, suspend_capture_ctx=<bound method CaptureManager.gl...spended' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)

    def test(subtests):
        print()
        print('start test')
    
        with subtests.test(i='A'):
            print("hello stdout A")
            print("hello stderr A", file=sys.stderr)
            assert 0
    
        with subtests.test(i='B'):
            print("hello stdout B")
            print("hello stderr B", file=sys.stderr)
            assert 0
    
        print('end test')
>       assert 0
E       assert 0

test_capturing.py:17: AssertionError
----------------------------- Captured stdout call -----------------------------

start test
end test

 test_capturing.py ⨯                                             100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.10s):
       3 failed
         - test_capturing.py:2 test
         - test_capturing.py:2 test
         - test_capturing.py:2 test

 tests/test_subtests.py::TestCapture.test_capturing ⨯                                                                                                               84% ████████▌ 

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_no_capture ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f9f4691e690>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-1/test_no_capture0')>

    def test_no_capture(self, testdir):
        self.create_file(testdir)
        result = testdir.runpytest("-s")
        result.stdout.fnmatch_lines(
            [
                "start test",
                "hello stdout A",
                "Fhello stdout B",
                "Fend test",
                "*__ test (i='A') __*",
                "*__ test (i='B') __*",
>               "*__ test __*",
            ]
        )
E       Failed: nomatch: 'start test'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-1/test_no_capture0'
E           and: 'plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: ''
E       exact match: 'start test'
E       exact match: 'hello stdout A'
E       nomatch: 'Fhello stdout B'
E           and: ''
E           and: ''
E           and: "――――――――――――――――――――――――――――――――― test (i='A') ―――――――――――――――――――――――――――――――――"
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46838b90>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)"
E           and: ''
E           and: '    def test(subtests):'
E           and: '        print()'
E           and: "        print('start test')"
E           and: '    '
E           and: "        with subtests.test(i='A'):"
E           and: '            print("hello stdout A")'
E           and: '            print("hello stderr A", file=sys.stderr)'
E           and: '>           assert 0'
E           and: 'E           assert 0'
E           and: ''
E           and: 'test_no_capture.py:9: AssertionError'
E           and: '\r'
E           and: 'hello stdout B'
E           and: ''
E           and: ''
E           and: "――――――――――――――――――――――――――――――――― test (i='B') ―――――――――――――――――――――――――――――――――"
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46838b90>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)"
E           and: ''
E           and: '    def test(subtests):'
E           and: '        print()'
E           and: "        print('start test')"
E           and: '    '
E           and: "        with subtests.test(i='A'):"
E           and: '            print("hello stdout A")'
E           and: '            print("hello stderr A", file=sys.stderr)'
E           and: '            assert 0'
E           and: '    '
E           and: "        with subtests.test(i='B'):"
E           and: '            print("hello stdout B")'
E           and: '            print("hello stderr B", file=sys.stderr)'
E           and: '>           assert 0'
E           and: 'E           assert 0'
E           and: ''
E           and: 'test_no_capture.py:14: AssertionError'
E           and: '\r'
E           and: 'end test'
E           and: ''
E           and: ''
E           and: '――――――――――――――――――――――――――――――――――――― test ―――――――――――――――――――――――――――――――――――――'
E           and: ''
E           and: "subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46838b90>, suspend_capture_ctx=<bound method CaptureManager.gl...spended' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)"
E           and: ''
E           and: '    def test(subtests):'
E           and: '        print()'
E           and: "        print('start test')"
E           and: '    '
E           and: "        with subtests.test(i='A'):"
E           and: '            print("hello stdout A")'
E           and: '            print("hello stderr A", file=sys.stderr)'
E           and: '            assert 0'
E           and: '    '
E           and: "        with subtests.test(i='B'):"
E           and: '            print("hello stdout B")'
E           and: '            print("hello stderr B", file=sys.stderr)'
E           and: '            assert 0'
E           and: '    '
E           and: "        print('end test')"
E           and: '>       assert 0'
E           and: 'E       assert 0'
E           and: ''
E           and: 'test_no_capture.py:17: AssertionError'
E           and: '\r'
E           and: '\r \x1b[36m\x1b[0mtest_no_capture.py\x1b[0m \x1b[31m⨯\x1b[0m                                            \x1b[31m100% \x1b[0m\x1b[40m\x1b[31m█\x1b[0m\x1b[40m\x1b[31m█████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.06s):'
E           and: '\x1b[31m       3 failed\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_no_capture.py\x1b[0m:2 \x1b[31mtest\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_no_capture.py\x1b[0m:2 \x1b[31mtest\x1b[0m'
E           and: '         - \x1b[36m\x1b[0mtest_no_capture.py\x1b[0m:2 \x1b[31mtest\x1b[0m'
E           and: ''
E       remains unmatched: 'Fhello stdout B'

/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/tests/test_subtests.py:281: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-1/test_no_capture0
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0

start test
hello stdout A


――――――――――――――――――――――――――――――――― test (i='A') ―――――――――――――――――――――――――――――――――

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46838b90>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)

    def test(subtests):
        print()
        print('start test')
    
        with subtests.test(i='A'):
            print("hello stdout A")
            print("hello stderr A", file=sys.stderr)
>           assert 0
E           assert 0

test_no_capture.py:9: AssertionError

hello stdout B


――――――――――――――――――――――――――――――――― test (i='B') ―――――――――――――――――――――――――――――――――

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46838b90>, suspend_capture_ctx=<bound method CaptureManager.gl...resumed' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)

    def test(subtests):
        print()
        print('start test')
    
        with subtests.test(i='A'):
            print("hello stdout A")
            print("hello stderr A", file=sys.stderr)
            assert 0
    
        with subtests.test(i='B'):
            print("hello stdout B")
            print("hello stderr B", file=sys.stderr)
>           assert 0
E           assert 0

test_no_capture.py:14: AssertionError

end test


――――――――――――――――――――――――――――――――――――― test ―――――――――――――――――――――――――――――――――――――

subtests = SubTests(ihook=<pluggy.hooks._HookRelay object at 0x7f9f46838b90>, suspend_capture_ctx=<bound method CaptureManager.gl...spended' _in_suspended='<UNSET>'> _current_item=<Function test>>>, request=<SubRequest 'subtests' for <Function test>>)

    def test(subtests):
        print()
        print('start test')
    
        with subtests.test(i='A'):
            print("hello stdout A")
            print("hello stderr A", file=sys.stderr)
            assert 0
    
        with subtests.test(i='B'):
            print("hello stdout B")
            print("hello stderr B", file=sys.stderr)
            assert 0
    
        print('end test')
>       assert 0
E       assert 0

test_no_capture.py:17: AssertionError

 test_no_capture.py ⨯                                            100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.06s):
       3 failed
         - test_no_capture.py:2 test
         - test_no_capture.py:2 test
         - test_no_capture.py:2 test
------------------------------------------------------------------------------ Captured stderr call ------------------------------------------------------------------------------
hello stderr A
hello stderr B

 tests/test_subtests.py::TestCapture.test_no_capture ⨯                                                                                                              89% ████████▉ 

――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_capture_with_fixture[capsys] ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f9f4677c3d0>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-1/test_capture_with_fixture0')>, fixture = 'capsys'

    @pytest.mark.parametrize("fixture", ["capsys", "capfd"])
    def test_capture_with_fixture(self, testdir, fixture):
        testdir.makepyfile(
            r"""
            import sys
    
            def test(subtests, {fixture}):
                print('start test')
    
                with subtests.test(i='A'):
                    print("hello stdout A")
                    print("hello stderr A", file=sys.stderr)
    
                out, err = {fixture}.readouterr()
                assert out == 'start test\nhello stdout A\n'
                assert err == 'hello stderr A\n'
        """.format(
                fixture=fixture
            )
        )
        result = testdir.runpytest()
        result.stdout.fnmatch_lines(
>           ["*1 passed*",]
        )
E       Failed: nomatch: '*1 passed*'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-1/test_capture_with_fixture0'
E           and: 'plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: '\r'
E           and: '\r \x1b[36m\x1b[0mtest_capture_with_fixture.py\x1b[0m \x1b[32m✓\x1b[0m\x1b[32m✓\x1b[0m                                 \x1b[32m100% \x1b[0m\x1b[40m\x1b[32m█\x1b[0m\x1b[40m\x1b[32m█████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.05s):'
E           and: '\x1b[32m       2 passed\x1b[0m'
E           and: ''
E       remains unmatched: '*1 passed*'

/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/tests/test_subtests.py:308: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-1/test_capture_with_fixture0
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0

 test_capture_with_fixture.py ✓✓                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.05s):
       2 passed

 tests/test_subtests.py::TestCapture.test_capture_with_fixture[capsys] ⨯                                                                                            95% █████████▌

―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― TestCapture.test_capture_with_fixture[capfd] ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――

self = <test_subtests.TestCapture object at 0x7f9f46788610>, testdir = <Testdir local('/tmp/pytest-of-fab/pytest-1/test_capture_with_fixture1')>, fixture = 'capfd'

    @pytest.mark.parametrize("fixture", ["capsys", "capfd"])
    def test_capture_with_fixture(self, testdir, fixture):
        testdir.makepyfile(
            r"""
            import sys
    
            def test(subtests, {fixture}):
                print('start test')
    
                with subtests.test(i='A'):
                    print("hello stdout A")
                    print("hello stderr A", file=sys.stderr)
    
                out, err = {fixture}.readouterr()
                assert out == 'start test\nhello stdout A\n'
                assert err == 'hello stderr A\n'
        """.format(
                fixture=fixture
            )
        )
        result = testdir.runpytest()
        result.stdout.fnmatch_lines(
>           ["*1 passed*",]
        )
E       Failed: nomatch: '*1 passed*'
E           and: 'Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)'
E           and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')"
E           and: 'rootdir: /tmp/pytest-of-fab/pytest-1/test_capture_with_fixture1'
E           and: 'plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0'
E           and: '\r'
E           and: '\r \x1b[36m\x1b[0mtest_capture_with_fixture.py\x1b[0m \x1b[32m✓\x1b[0m\x1b[32m✓\x1b[0m                                 \x1b[32m100% \x1b[0m\x1b[40m\x1b[32m█\x1b[0m\x1b[40m\x1b[32m█████████\x1b[0m'
E           and: '===Flaky Test Report==='
E           and: ''
E           and: ''
E           and: '===End Flaky Test Report==='
E           and: ''
E           and: 'Results (0.05s):'
E           and: '\x1b[32m       2 passed\x1b[0m'
E           and: ''
E       remains unmatched: '*1 passed*'

/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/tests/test_subtests.py:308: Failed
------------------------------------------------------------------------------ Captured stdout call ------------------------------------------------------------------------------
Test session starts (platform: linux, Python 3.7.6, pytest 4.6.9, pytest-sugar 0.9.2)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/fab/Documents/repos/rpmbuild/BUILD/pytest-subtests-0.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-fab/pytest-1/test_capture_with_fixture1
plugins: subtests-0.3.0, hypothesis-4.23.8, requests-mock-1.7.0, case-1.5.3, sugar-0.9.2, betamax-0.8.1, asyncio-0.10.0, toolbox-0.4, timeout-1.3.3, cov-2.8.1, isort-0.3.1, forked-1.0.2, datafiles-2.0, vcr-1.0.2, aspectlib-1.4.2, mock-1.10.4, trio-0.5.2, flaky-3.5.3, django-3.7.0

 test_capture_with_fixture.py ✓✓                                 100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (0.05s):
       2 passed

 tests/test_subtests.py::TestCapture.test_capture_with_fixture[capfd] ⨯                                                                                            100% ██████████
===Flaky Test Report===


===End Flaky Test Report===

Results (5.99s):
       3 passed
       9 failed
         - tests/test_subtests.py:23 TestFixture.test_simple_terminal_normal[normal]
         - tests/test_subtests.py:39 TestFixture.test_simple_terminal_verbose[normal]
         - tests/test_subtests.py:71 TestFixture.test_skip[normal]
         - tests/test_subtests.py:116 TestSubTest.test_simple_terminal_normal[pytest-normal]
         - tests/test_subtests.py:150 TestSubTest.test_simple_terminal_verbose[pytest-normal]
         - tests/test_subtests.py:248 TestCapture.test_capturing
         - tests/test_subtests.py:270 TestCapture.test_no_capture
         - tests/test_subtests.py:286 TestCapture.test_capture_with_fixture[capsys]
         - tests/test_subtests.py:286 TestCapture.test_capture_with_fixture[capfd]
       2 xfailed
       5 skipped
error: Bad exit status from /var/tmp/rpm-tmp.U5gasx (%check)

Discussion about how subtests failures should be displayed

One more interesting test case. This file:

import unittest

class T(unittest.TestCase):
    def test_fail(self):
        with self.subTest():
            self.assertEqual(1, 2)

No passing subtests, just one failure, yet the run still shows a PASSED entry (and the bottom line counts 1 passed).
I would expect only FAILED, with the bottom line showing 1 failed.
Maybe pytest itself is calling pytest_runtest_logreport() at the end, causing an extra test to be counted.
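
A quick way to check that hypothesis (a minimal sketch, not something the plugin ships) is a conftest.py hook that prints every report pytest emits, so any extra call-phase report would become visible:

# conftest.py -- sketch: print every report pytest emits, to see whether
# an extra "passed" outcome is being produced for the call phase.
def pytest_runtest_logreport(report):
    # nodeid, when and outcome are standard TestReport attributes
    print(f"{report.nodeid} {report.when} -> {report.outcome}")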

pytest -v test_sub.py
=============================== test session starts ===============================
platform win32 -- Python 3.7.1, pytest-4.4.0, ...
plugins: subtests-0.2.0
collected 1 item

test_sub.py::T::test_fail FAILED                                         [100%]
test_sub.py::T::test_fail PASSED                                             [100%]

==================================== FAILURES =====================================
_____________________________ T.test_fail (<subtest>) _____________________________

self = <test_sub.T testMethod=test_fail>

    def test_fail(self):
        with self.subTest():
>          self.assertEqual(1, 2)
E          AssertionError: 1 != 2

test_sub.py:6: AssertionError
======================= 1 failed, 1 passed in 0.05 seconds ========================

Originally posted by @okken in #7 (comment)

cc @jurisbu @bskinn

htmlrunner report does not display passed subtests using unittest (selenium+python).

The htmlrunner report does not display passed subtests when using unittest (Selenium + Python).
Can anyone help me?

I have created a method that iterates over all links available on a webpage and validates each link's URL (current vs. expected) via an assertion. htmlrunner generates report entries for the links whose URLs do not match, but I want the report to include the passed links as well.

Thanks in advance.
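
For context, here is a rough sketch of the kind of loop being described (the driver setup, page URL and expected mapping are placeholders, and the Selenium 4 locator API is assumed); the passed subtests inside this loop are the ones the report should also show:

import unittest

from selenium import webdriver
from selenium.webdriver.common.by import By


class LinkTest(unittest.TestCase):
    def test_all_links(self):
        # Placeholder driver and page; replace with the real setup.
        driver = webdriver.Chrome()
        try:
            driver.get("https://example.com")
            expected = {"More information...": "https://www.iana.org/domains/example"}  # placeholder mapping
            for link in driver.find_elements(By.TAG_NAME, "a"):
                with self.subTest(link=link.text):
                    # Each link gets its own subtest result, passed or failed.
                    self.assertEqual(link.get_attribute("href"), expected.get(link.text))
        finally:
            driver.quit()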

Is there a way to see in the test report the assertion failure for each subtest?

I'm running 2 tests that have several failing subtests, and I can see all the failing asserts in the console output in Jenkins. However, when I look in the test report for each failed test, I see only the first assertion failure; for the subsequent ones I see something like this: ..........\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]

See the report for one of my tests:

Failed
WORK_DIR.cop_regression.tests.test_main_window.TestMainWindow.test_04_new_contact_by_main_window (from pytest)
Failing for the past 8 builds (Since Failed#20 )
Took 10 sec.
add description
Error Message
AssertionError: 1 not greater than or equal to 2 : Value 102 for field CustNo not present in the receiver address window. Check field selector
Stacktrace
C:\WORK_DIR\cop_regression\tests\test_base.py:296: in __soft_assert_greater_or_equal
    self.assertGreaterEqual(a, b, msg)
E   AssertionError: 1 not greater than or equal to 2 : Value 102 for field CustNo not present in the receiver address window. Check field selector
Standard Output
FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]
..\..\..\..\..\WORK_DIR\cop_regression\tests\test_main_window.py::TestMainWindow::test_04_new_contact_by_main_window FAILED [100%]

I've attached the full XML as well.

test_report.docx

The command to run the tests is: pytest --junitxml=test_main_window.xml -v C:\WORK_DIR\cop_regression\tests\test_main_window.py
Thanks for looking into this
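
For reference, a hedged sketch of how a soft-assert helper like the one in the stack trace could route each check through subTest, so that every failed check produces its own failure report rather than only the first one (the helper name mirrors the stack trace above; everything else is illustrative, and whether a given JUnit consumer then lists each entry separately depends on that consumer):

import unittest


class SoftAssertMixin(unittest.TestCase):
    def soft_assert_greater_or_equal(self, a, b, msg=None):
        # Running each check in its own subTest lets the test continue
        # and records every failure, not just the first assertion.
        with self.subTest(a=a, b=b):
            self.assertGreaterEqual(a, b, msg)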

Logging in subtests is not displayed unless the parent test fails

Logging in subtests is not displayed unless the parent test fails. Contrast this with stdout, which is displayed for just the failing subtest.

Minimal example:

import logging


def test_logging_in_subtests(subtests):
    logging.info("before")

    with subtests.test("sub1"):
        print("sub1 stdout")
        logging.info("sub1 logging")

    with subtests.test("sub2"):
        print("sub2 stdout")
        logging.info("sub2 logging")
        assert False

Running python3.8 -m pytest test_pytest.py --log-level=INFO, stdout is shown but the logs are not:

...
------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------
sub2 stdout
================================================================================================ short test summary info ================================================================================================
SUBFAIL test_pytest.py::test_logging_in_subtests - assert False

If instead the parent test fails, logs are output from both the failing and the succeeding subtests. For example:

import logging


def test_logging_in_subtests(subtests):
    logging.info("before")

    with subtests.test("sub1"):
        print("sub1 stdout")
        logging.info("sub1 logging")

    with subtests.test("sub2"):
        print("sub2 stdout")
        logging.info("sub2 logging")
        assert False

    assert False

$ python3.8 -m pytest test_pytest.py --log-level=INFO now produces

------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------
sub2 stdout
_______________________________________________________________________________________________ test_logging_in_subtests ________________________________________________________________________________________________
...<stack trace> ...
--------------------------------------------------------------------------------------------------- Captured log call ---------------------------------------------------------------------------------------------------
INFO     root:test_pytest.py:5 before
INFO     root:test_pytest.py:9 sub1 logging
INFO     root:test_pytest.py:13 sub2 logging
================================================================================================ short test summary info ================================================================================================
SUBFAIL test_pytest.py::test_logging_in_subtests - assert False
FAILED test_pytest.py::test_logging_in_subtests - assert False

Can/should logging be displayed with per-subtest granularity, similar to stdout?
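
Until logs are split per subtest by the plugin itself, one possible workaround (a sketch only, using pytest's stock caplog fixture rather than anything pytest-subtests provides; it does not change what the terminal report shows, it only lets the test assert on the records captured so far) is:

import logging


def test_logging_per_subtest(subtests, caplog):
    caplog.set_level(logging.INFO)

    with subtests.test("sub1"):
        logging.info("sub1 logging")
        # caplog.messages holds the formatted messages captured so far
        assert "sub1 logging" in caplog.messages

    caplog.clear()  # start the next subtest with a clean capture

    with subtests.test("sub2"):
        logging.info("sub2 logging")
        assert "sub2 logging" in caplog.messages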

Output shows as yellow with no warnings

Once a subtest has been run, the output changes from green to yellow for seemingly no reason. I spent a while trying to diagnose whether warnings were being raised and suppressed, or whether something was wrong with my usage of subtests, but eventually realised that this seems to just be normal behaviour.

(screenshot: test summary line rendered in yellow)

The reason is that _pytest.terminal.KNOWN_TYPES and _pytest.terminal._color_for_type don't know about the "subtests xxx" status keys, so they treat them as unknown and hence show them in yellow.

Simply adding the snippet below to the end of pytest_subtests.py makes the output show in the colours I'd expect. This may not be the best fix, given that it reaches into pytest internals, but it works fine.

import _pytest.terminal  # import the submodule explicitly so its attributes are guaranteed to exist

# Teach the terminal reporter about the extra "subtests ..." outcome keys.
_pytest.terminal.KNOWN_TYPES = _pytest.terminal.KNOWN_TYPES + tuple(
    f"subtests {outcome}" for outcome in ("passed", "failed", "skipped")
)

# Reuse the colour already assigned to the corresponding plain outcome.
_pytest.terminal._color_for_type.update({
    f"subtests {outcome}": _pytest.terminal._color_for_type[outcome]
    for outcome in ("passed", "failed", "skipped")
    if outcome in _pytest.terminal._color_for_type
})

(screenshot: test summary line rendered in the expected colours)

relevant requirements

pytest==7.2.2
pytest-cov==4.0.0
pytest-mock==3.10.0
pytest-asyncio==0.21.0
pytest-subtests==0.10.0
