behave-contrib / behave-html-pretty-formatter
HTML Pretty formatter for Behave
License: GNU General Public License v3.0
xz compression can achieve a compression ratio of two orders of magnitude, which remains substantial even after base64 inflates the result by 33 %. So when embedding a large log file (say, larger than 1 MB), it would make a lot of sense to:
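A minimal sketch of that idea using Python's standard lzma and base64 modules (the function name is illustrative, not the formatter's actual API):

```python
import base64
import lzma

def compress_for_embed(raw: bytes) -> str:
    """xz-compress the payload, then base64-encode it for embedding in HTML."""
    compressed = lzma.compress(raw)  # xz container format by default
    return base64.b64encode(compressed).decode("ascii")
```

For repetitive log files, the compressed-and-encoded payload is typically far smaller than the original, even with the 33 % base64 overhead.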
Hi guys,
is it possible to choose the extension of the file downloaded via the "Download" button in the embed data?
I'm quite sure the answer is: no, at the moment.
One solution would be to add a check on the filename:
if the pattern ".something" is present in the filename, do nothing; if not, append .txt to it.
A basic implementation: in the function download_embed(id, filename):
if (tag === "span") {
    // If the filename has no extension, fall back to .txt
    if (filename.search(/\.\w+/) === -1) {
        filename += ".txt";
    }
    value = "data:text/plain," + encodeURIComponent(decodeHTMLEntities(child.innerHTML));
This should also be done for the other branches, but I'm not sure if this works for non-text files. Also, I'm not sure if this is the best workaround for this problem.
Thanks
Luca
Hi guys,
a colleague of mine found a bug that makes the formatter crash.
Line 301 shall be:
self.status = self._scenario.status.name
and not
self.status = self._scenario.status
Now it is possible to expand/collapse scenarios based on status, but one cannot collapse/expand an individual scenario.
Also, this should be indicated in CSS by a small arrow at the right of the scenario header.
opening an issue just as an FYI.
This is an attempt to have another option for the html formatter, html-pretty.
The formatter is already in a functioning state (the generation works dynamically), while the images are a static example.
I am not quite happy with it yet and the project is still a work in progress, but I pushed it here so that @fpokryvk and I can work on it and have history.
I hope you don't mind the name of the project, I tried to follow the naming convention you have here.
I will try to mimic the structure of existing projects here, please let me know if I am doing something wrong, or if I am not doing anything I should. If anything needs to be changed, let me know. Feedback is always appreciated.
Have a nice day.
Sorry, I want to ask a question rather than report an issue.
We are using behave-html-pretty-formatter in our tests, and it is producing the required nice report. For some testing, though, we have the retry option activated. If a test fails, it will attempt to perform the same test up to a maximum of three times. In that case, the report displays the pass/fail statuses of all three attempts.
E.g.
Scenario: user can read the email
1st attempt : Failed
2nd attempt : Failed
3rd attempt : Passed
Is it possible to include only the status of the most recent (last) attempt in the report? We are aware that this needs to be customised, but we're not sure where or how to do it. Many thanks if you could help.
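One generic way to post-process retried results, sketched here under the assumption that attempts can be collected as an ordered list of (scenario_name, status) tuples; this is not the formatter's actual data model:

```python
def last_attempt_statuses(attempts):
    """Given an ordered list of (scenario_name, status) tuples covering all
    retries, keep only the most recent status for each scenario."""
    latest = {}
    for name, status in attempts:
        latest[name] = status  # later attempts overwrite earlier ones
    return latest
```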
Hi,
is it possible to add the feature description in the report?
For example:
Feature: feature name
Here I test some requirements related to some functionality...
Luca
It would be good to show the summary only if all of the following conditions hold:
Also, the position of the date in the summary is unfortunate; maybe align it right?
Add header to multi-features report, with summary of features in files. Maybe take some inspiration from BehaveX report composition.
Formatting buttons like [Summary]
is not that nice and looks like it's from the 90s; use some nice CSS instead.
In the last update we introduced a duplication in the readme.
# Defines if the user is interested in what steps are not executed.
behave.formatter.html-pretty.show_unexecuted_steps = true
It does not break anything, but if people copy it from the readme they will get an error. Let's fix it in the next update.
Hi,
I have a particular use case in which I would like to add the formatter and outfile from the before_all method of environment.py. The reason is that, in a previous step, the path where the report will be stored is calculated.
It works correctly if I add it statically to behave.ini as follows:
format = pretty
         behave_html_pretty_formatter:PrettyHTMLFormatter
outfiles = -
           _output/integration_test_results.html
or if I invoke it from the command line:
behave ... -f behave_html_pretty_formatter:PrettyHTMLFormatter -o C:/Users/user/Desktop/test_results.html
but my intention is to add it from environment.py.
I have tried everything and I have not been able to get it to work.
from behave.formatter.base import StreamOpener

def before_all(context):
    ...
    log_path = os.path.join(OUTPUT_PATH, 'test_results.html')
    context.config.format.append('behave_html_pretty_formatter:PrettyHTMLFormatter')
    context.config.outfiles.append(log_path)
    context.config.outputs.append(StreamOpener(log_path))
It is weird, because when debugging the content of the context.config variable everything seems to be added correctly, but the report file is not generated.
I would like to ask if it would be possible.
Thanks.
This might be handy, if you want to send someone report with expanded "Error Message" or expanded some debug data of interested step (e.g. Journal or Screenshot).
Another side effect would be that if you update the report (rerun the test), then after a page reload it will remember the state and expand the embed that was expanded before (e.g. when investigating a journal, after F5 it will stay expanded). On the other hand, it might break if the embed index changes (e.g. a failing test becomes passing, so the embed IDs are different), but that is expected and should not be an issue.
This should be doable in JavaScript only.
Sometimes it might be desirable to embed some text with headers, or other text formats. Maybe we can add support for Markdown.
There is a use case for appending HTML logs to other HTML logs.
In such a case we might be better off creating unique IDs for the embeds instead of numbering them from zero; with the current numbering, clicking always expands/collapses the very first embed_0 instead of the one we clicked on.
Most likely this will use the uuid Python library.
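A minimal sketch of such ID generation with the standard uuid module (the embed_ prefix is just an assumption following the current naming):

```python
import uuid

def make_embed_id() -> str:
    """Generate an embed ID that stays unique even when several
    reports are concatenated into one HTML file."""
    return "embed_" + uuid.uuid4().hex
```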
Hi,
I want to propose an HTML report improvement when running multiple features.
When running multiple feature files, the report summary does not show the global results but only shows each scenario summary. For instance:
In this case, the global test results are:
15 features passed, 3 failed, 0 skipped
160 scenarios passed, 6 failed, 42 skipped
1299 steps passed, 3 failed, 375 skipped, 0 undefined, 22 untested
Took 115m9.222s
It would be nice if the global info would appear at the top of the report. This will show if all tests have passed without scrolling the entire report to analyze each feature scenario results.
On the other hand, as a secondary aspect regarding the global "took time": instead of being calculated as the sum of each scenario's time, it should be calculated from the start until the end of the testing session, because the preconditions and postconditions that run between scenarios are not being taken into account (for example, the report above shows 115m but the real duration is ~150min). I don't know if this is possible to set at the report level, or whether it will have to be calculated at runtime and overwrite the report data.
Thanks 😄
Hi,
I recently started using the formatter and love it. The question I have is about logging when not using the html formatter (using the IDE).
The way I do it right now is:
# context.html_formatter is True if I am using this formatter, else False;
# context.logger (using logging) is set in the before_all hook.
context.embed(mime_type="text/plain", data="<string>", caption="Text") if context.html_formatter else context.logger.info("<string>")
let me know if there is a more elegant way for this
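One slightly more elegant option is to wrap the branch in a small helper; this is a sketch only, and the helper name and the context attributes (html_formatter, logger) are the ones assumed above, not part of the formatter's API:

```python
def report(context, text, caption="Text"):
    """Embed into the HTML report when the formatter is active,
    otherwise fall back to the plain logger."""
    if getattr(context, "html_formatter", False):
        context.embed(mime_type="text/plain", data=text, caption=caption)
    else:
        context.logger.info(text)
```

Steps would then only ever call report(context, "<string>") regardless of which output is active.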
The following code is not optimal:
with a(href="#"...):
    span("Label")
It generates a span inside an a, which is sometimes harder to handle in CSS/JS. Better code in dominate:
a("Label", href="#"...)
It might be nice to group embeds together (e.g. having embed for application, which would carry on its own embeds for STDOUT, journal, config files etc.)
The xml validator seems to have issues with the generated page; try to look into it.
There is a use case where someone might want to load the page with an xml parser.
Hi,
The current behavior of the package displays the generated HTML report based on the user's browser settings, which can result in it being styled in dark mode if that's the user's preference. I would like to suggest the addition of a button, similar to a high contrast button, that allows users to easily toggle between light and dark modes.
Thanks.
It would be nicer to just switch the class of the body element via JavaScript and let CSS define the rest.
Hi guys,
according to #16, it is possible to download the embedded data. This is cool if you have files/images/videos; however, we simply embed text. Is it OK with you if I open a new branch in which I add the possibility to remove the download button (for example, adding an optional argument to the embed method)?
Luca
Hi,
The default behavior when opening the report HTML file is that all scenarios are shown expanded. I know there is a button to collapse them all, but I would like to ask or propose (in case it is not implemented) if we can modify the default behavior in some way.
In my case, I would like that when I open the report, by default everything appears collapsed.
In case there is no option to configure this behavior (as for example there is to show the summary), I would like to request it as an improvement.
Thanks 😄
The tests for Python 3.6 are failing. This is due to the setup of the CI runner on GitHub's Azure infrastructure. Python 3.6 is only available on older Ubuntu runners:
Version 3.6 was not found in the local cache
Error: Version 3.6 with arch x64 not found
The list of all available versions can be found here:
https://raw.githubusercontent.com/actions/python-versions/main/versions-manifest.json
I would suggest dropping version 3.6, unless it's important to you to support older (unsupported) Python versions. I'd also add Python 3.10 and 3.11 to the list of Pythons to run tests against.
I'll start with adapting the Tox and GHA pipeline setup for behave-html-formatter. Do you want to try to fix that yourself for this repository?
As per the documentation, I have
def before_all(context: Context):
    for formatter in context._runner.formatters:
        if formatter.name == "html-pretty":
            context.embed = formatter.embed
And
def after_scenario(context, scenario):
    image_location = "./pytest_results/"
    context.embed(mime_type="image/png", data=image_location, caption="Screenshot")
All my screenshots are generated inside the folder "./pytest_results". After running the scenarios the behave.html report is successfully generated, but no screenshots are embedded. If I give the path to a single png file, then the same screenshot is added to all the scenarios.
Executed cmd : behave -f html-pretty -o behave-report.html .\features\test1.feature
All the screenshots are generated as {context.scenario.name}_fail_screenshot.png
How can I embed multiple screenshots into behave-report.html? We have more than 500 scenarios and embedding them one by one seems not possible. If there is a better way, can I get an example?
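One possible approach, sketched below: build the per-scenario filename in after_scenario and embed the file's contents. The path pattern follows the naming described above, and passing raw bytes as data is an assumption about the formatter's embed API, so verify both against your setup:

```python
import os

def after_scenario(context, scenario):
    # Hypothetical path, following the naming pattern described above
    path = os.path.join("pytest_results", f"{scenario.name}_fail_screenshot.png")
    if os.path.exists(path):
        with open(path, "rb") as f:
            context.embed(mime_type="image/png", data=f.read(),
                          caption=f"Screenshot: {scenario.name}")
```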
feature-title box (start seems better to me)
Hi,
If I have to run multiple features and want to generate a single HTML report for all of the features, will it work using this command?
behave -f behave_html_pretty_formatter:PrettyHTMLFormatter -o <destination_path> <features folder path>
When I tried the above command, I got a 404 not found error when opening the HTML file in the browser.
Imagine we have a large journal embedded in the behave report. We want to download it and parse it. Currently, we have to manually copy and paste it from the report, which is quite uncomfortable. There might be other files too (like hostapd, wpa_supplicant, nmstate, et cetera).
Please provide a mechanism for downloading these files back from the HTML report.
We should not have images with the older and current design in the src/ directory. The src/ directory in this project is behave_html_pretty_formatter/.
Move them to examples/, design/, or a differently named directory.
This will require changing links in the README page.
To have some manual visual checks that everything is OK.
Hi,
it's been a while since the last time I opened an issue.
This is a proposal for a new feature. I'm not sure if this is practically possible and don't know the real effort required to add it, but I guess it is theoretically feasible.
We are testing an environment that shall be configured before every scenario starts; thus, we have some configuration background steps (that basically perform the setup and the cleanup of the environment). If something goes wrong in this background step, sys.exit() is invoked, because it means that the system itself is not configured in the right way and test procedures cannot be performed (every scenario would fail). In this case, the html report will not be completed, and thus it will be impossible to read.
Is there any chance to add an atexit.register(unexpected_exit) call in the formatter that completes the html report?
What do you think? I guess this is not a so common need, but this is the first time I face this case.
Thanks,
Luca
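The proposal above can be sketched with the standard atexit module; the class and method names here are illustrative, not the formatter's actual internals:

```python
import atexit

class ReportFinalizer:
    """Finish the HTML report exactly once, even if sys.exit() is called."""

    def __init__(self, write_report):
        self.write_report = write_report
        self.finished = False
        atexit.register(self.finalize)  # runs on normal exit and on sys.exit()

    def finalize(self):
        if not self.finished:
            self.finished = True
            self.write_report()
```

The guard flag matters because the formatter's normal close path and the atexit hook may both fire. Note that atexit handlers do not run on os._exit() or a hard crash.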
I have encountered quite a race condition when generating debug files from binaries: I managed to get null bytes into the data I wanted to embed.
This in effect failed with a null-bytes exception, which threw away the entire html log. That is overkill, as it was the last embed and there was nothing wrong with the log itself.
We could replace the invalid data on embedding instead of failing.
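A sketch of that replacement (the function name and the escaping choice are assumptions, not what the formatter does today):

```python
def sanitize_embed_text(data: str) -> str:
    """Replace NUL characters with a visible escape instead of letting
    a single bad embed invalidate the whole HTML report."""
    return data.replace("\x00", "\\x00")
```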