pytest-dev / pytest-reportlog
Replacement for the --resultlog option, focused on simplicity and extensibility
License: MIT License
Is the format of the json-stream documented somewhere?
I've tried to follow the code and it appears to be a serialization of a https://docs.pytest.org/en/7.1.x/reference.html#_pytest.reports.TestReport object (or any of the other BaseReport subclasses). Is this correct? If so, could that be clarified in the documentation?
Also curious whether there is any ambition to document the serialized JSON form, or is the reference to the TestReport object enough?
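For reference, each line of the log can be parsed independently. A minimal consumer sketch, with field names taken from the example records shown elsewhere in this tracker (not from any official format documentation):

```python
import json

def read_reportlog(path):
    """Yield one dict per non-empty line of a --report-log file (JSON Lines)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

def call_outcomes(records):
    """Map nodeid -> outcome for the "call" phase of each TestReport entry."""
    return {
        r["nodeid"]: r["outcome"]
        for r in records
        if r.get("$report_type") == "TestReport" and r.get("when") == "call"
    }
```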
From reading through the documentation I didn't see support for printing the JSON data to stdout; it looks like writing to a file is currently the only option. Could we better support this via a CLI flag?
See the following request proposal as well: pytest-dev/pytest#9704 (comment)
I have been using this for a while to write out logs; today I added pytest-xdist and got this:
pytest . -n 2 --report-log=test-results/results.log --cov
============================= test session starts ==============================
platform linux -- Python 3.6.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
django: settings: webapp.settings (from ini)
rootdir: /home/circleci/project, inifile: pytest.ini
plugins: assume-2.2.1, pudb-0.7.0, celery-4.0.2, mock-3.0.0, xdist-1.31.0, cov-2.8.1, django-3.9.0, forked-1.1.3, reportlog-0.1.0
gw0 [712] / gw1 [712]
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/_pytest/main.py", line 191, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/_pytest/main.py", line 247, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/callers.py", line 203, in _multicall
INTERNALERROR> gen.send(outcome)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/xdist/dsession.py", line 112, in pytest_runtestloop
INTERNALERROR> self.loop_once()
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/xdist/dsession.py", line 135, in loop_once
INTERNALERROR> call(**kwargs)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/xdist/dsession.py", line 254, in worker_testreport
INTERNALERROR> self.config.hook.pytest_runtest_logreport(report=rep)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pytest_reportlog/plugin.py", line 61, in pytest_runtest_logreport
INTERNALERROR> self._write_json_data(data)
INTERNALERROR> File "/home/circleci/project/venv/lib/python3.6/site-packages/pytest_reportlog/plugin.py", line 46, in _write_json_data
INTERNALERROR> self._file.write(json.dumps(data) + "\n")
INTERNALERROR> File "/usr/local/lib/python3.6/json/__init__.py", line 231, in dumps
INTERNALERROR> return _default_encoder.encode(obj)
INTERNALERROR> File "/usr/local/lib/python3.6/json/encoder.py", line 199, in encode
INTERNALERROR> chunks = self.iterencode(o, _one_shot=True)
INTERNALERROR> File "/usr/local/lib/python3.6/json/encoder.py", line 257, in iterencode
INTERNALERROR> return _iterencode(o, 0)
INTERNALERROR> File "/usr/local/lib/python3.6/json/encoder.py", line 180, in default
INTERNALERROR> o.__class__.__name__)
INTERNALERROR> TypeError: Object of type 'WorkerController' is not JSON serializable
============================ no tests ran in 27.53s ============================
Exited with code exit status 3
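The traceback shows that the report data reaching json.dumps contains an object json cannot handle (xdist attaches a WorkerController to reports it forwards). A defensive workaround sketch for a consumer-side serializer, not the plugin's actual fix:

```python
import json

def dump_json_safe(data):
    """json.dumps with a fallback: any value json cannot serialize
    (such as the WorkerController xdist attaches to reports)
    is replaced by its repr() instead of raising TypeError."""
    return json.dumps(data, default=repr)
```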
I'm planning on consuming this in the pytest-html "next-gen" project.
I don't want pytest-html users to have to use the --report-log file, as the file is only going to be used internally.
What would be the best way of accomplishing this within the pytest framework?
Would you even be open to refactoring the plugin so that, when used as a library, it won't write to a file? This could be as easy as allowing the consumer to pass in an io.StringIO.
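A rough sketch of what such a library-facing API could look like (the class and method names here are hypothetical, not the plugin's actual interface): a writer that accepts any text file-like object, so an in-memory buffer works just like a file:

```python
import io
import json

class ReportLogWriter:
    """Hypothetical writer accepting any text file-like object."""
    def __init__(self, stream):
        self._stream = stream

    def write_record(self, data):
        self._stream.write(json.dumps(data) + "\n")

# A consumer such as pytest-html could pass a buffer instead of a real file:
buf = io.StringIO()
writer = ReportLogWriter(buf)
writer.write_record({"nodeid": "test_a.py::test_ok", "outcome": "passed"})
```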
Cheers
Hi,
Could each BaseReport entry written to the JSON log include timestamps for when each test started and finished? Right now it only contains a duration.
When analyzing test results, we sometimes need to correlate test runs with external data not known to pytest, for example environmental factors such as CPU and network load, which we store in other monitoring systems.
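Until such fields exist, an approximate workaround is to record the wall-clock time when each record is observed and derive the start from the duration already in the record. A sketch; the "started"/"finished" keys are invented here, not part of the current format:

```python
def add_timestamps(record, finished_at):
    """Return a copy of a serialized TestReport dict with approximate
    'started'/'finished' wall-clock timestamps (hypothetical keys),
    derived from the observation time and the existing 'duration'."""
    out = dict(record)
    out["finished"] = finished_at
    out["started"] = finished_at - record.get("duration", 0.0)
    return out
```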
Hi folks,
I think it would be nice to be able to use report_log as a fixture.
I have some cases in which I would like to add more information to the generated file.
For instance, one thing I think could be interesting:
def test_foo(report_log):
    report_log.update({"more_info": "bar"})
So that it would appear in the file as:
...
{"nodeid": "test_report_example.py::test_foo", "more_info": "bar", "location": ["test_report_example.py", 0, "test_ok"], "keywords": {"test_ok": 1, "pytest-reportlog": 1, "test_report_example.py": 1}, "outcome": "passed", "longrepr": null, "when": "setup", "user_properties": [], "sections": [], "duration": 0.0, "$report_type": "TestReport"}
...
What are your opinions about it?
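Worth noting: something close to this is already possible with pytest's built-in record_property fixture, whose (name, value) pairs end up in the "user_properties" field of the serialized TestReport:

```python
# The built-in record_property fixture stores (name, value) pairs on the
# test report, which pytest-reportlog serializes under "user_properties".
def test_foo(record_property):
    record_property("more_info", "bar")
```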
As per https://jsonlines.org/, the recommended file extension for the "JSON Lines" format is .jsonl.
So that e.g. pytest --report-log=$HOME/report.log does what you expect (i.e. it expands the $HOME).
Note: I have this fixed locally and can PR if it's desired
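Presumably the fix amounts to expanding the user-supplied path before opening it; the standard library covers both environment variables and `~`:

```python
import os

def expand_report_path(path):
    """Expand environment variables ($HOME, ${VAR}) and ~ in a path."""
    return os.path.expanduser(os.path.expandvars(path))
```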
This is not an issue, but rather a question/request: why not write out a JSON file containing a list of objects, rather than a log file with one dictionary per line?
Analogous to the junit XML option for the same thing. The rationale being that captured logs can blow up the report-log file size pretty significantly and are (I guess) only useful when you have a transiently failing test and want to compare logs between passes and failures.
Note: I have this fixed locally already and can PR if it's deemed worth including.
This file:
def test_user_properties_list(record_property):
record_property("hello", ["world", "mars"])
def test_user_properties_set(record_property):
record_property("hello", {"world", "mars"})
results in the following reports (pretty printed for readability):
{
"nodeid": "test_replog.py::test_user_properties_list",
"location": [
"test_replog.py",
0,
"test_user_properties_list"
],
"keywords": {
"test_replog.py": 1,
"test_user_properties_list": 1,
"autpy": 1
},
"outcome": "passed",
"longrepr": null,
"when": "call",
"user_properties": [
[
"hello",
[
"world",
"mars"
]
]
],
"sections": [],
"duration": 8.534699736628681e-05,
"$report_type": "TestReport"
}
{
"nodeid": "test_replog.py::test_user_properties_set",
"location": [
"test_replog.py",
3,
"test_user_properties_set"
],
"keywords": {
"test_replog.py": 1,
"test_user_properties_set": 1,
"autpy": 1
},
"outcome": "passed",
"longrepr": null,
"when": "call",
"user_properties": "[('hello', {'mars', 'world'})]",
"sections": [],
"duration": 0.00017495399515610188,
"$report_type": "TestReport"
}
Note the different storage of user_properties:
"user_properties": [
[
"hello",
[
"world",
"mars"
]
]
],
"user_properties": "[('hello', {'mars', 'world'})]",
i.e. if a user property value is unserializable, the entire user_properties value gets turned into a string, rather than only that specific property.
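A per-property fallback could look like this sketch (not the plugin's actual serialization code): try each value individually and only stringify the ones json cannot handle:

```python
import json

def serialize_user_properties(user_properties):
    """Serialize each (name, value) pair individually, falling back to
    str() only for values json cannot handle (e.g. sets)."""
    result = []
    for name, value in user_properties:
        try:
            json.dumps(value)
        except TypeError:
            value = str(value)
        result.append([name, value])
    return result
```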
Would it be possible to include warnings in the log, either attached to the test result or as a separate entry? I'm guessing that since there's pytest_warning_recorded, it would probably be a separate entry.
As an initial draft I'd probably implement this as:
def pytest_warning_recorded(self, warning_message, when, nodeid, location):
    # warning_message is a warnings.WarningMessage; vars() may still contain
    # values (e.g. the Warning instance itself) that need converting to str.
    extra_data = {"$report_type": "WarningReport", "when": when, "location": location, "nodeid": nodeid}
    data = vars(warning_message) | extra_data  # dict union requires Python 3.9+
    self._write_json_data(data)
where I'm not too sure about the value of $report_type, but I've been using that field to parse the JSON lines, so I'd very much like to have $report_type somewhere.
In any case, my knowledge of pytest internals is shallow at best, so I may very well be missing something (like, maybe _pytest.terminal.WarningReport would need to be moved to _pytest.reports and extended to not drop too much information?).
This is a nice-to-have extension, and the idea comes from the conventions at https://jsonlines.org/.
For large test suites this would likely reduce the size of the report files considerably.
Is there any tool for visualizing tests from the JSON format that is compatible with the pytest-reportlog output (e.g. as a static HTML page)? It would be nice to have one that could group test stages (setup, call, teardown) into a single block for better human readability.
I've found the --result-log option to be helpful for producing a quickly readable test-result file when looking for a given test's failure message during manual debugging. Would it be possible to add a feature to print the results in a more human-readable format, akin to what --result-log currently does?
I obviously could write a parser utility in Python easily enough, but then I'd have to instruct all my teammates to use it, whereas they all already know how to search a text file.
Just an idea.
Thanks!
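For what it's worth, the parser utility mentioned above can be quite small; a sketch that extracts one readable line per failed call phase (field names taken from this plugin's example output earlier in this page):

```python
import json

def summarize_failures(lines):
    """Return 'nodeid: outcome' strings for failed test call phases."""
    out = []
    for line in lines:
        if not line.strip():
            continue
        rec = json.loads(line)
        if (rec.get("$report_type") == "TestReport"
                and rec.get("when") == "call"
                and rec.get("outcome") == "failed"):
            out.append(f'{rec["nodeid"]}: {rec["outcome"]}')
    return out
```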