epics-extensions / wetest
Test automation utility for EPICS modules (from YAML configuration to PDF test reports)
License: Other
since this version number is expected in the scenario files to display the changelog
In the startup message ("Ctrl+D then ENTER to start..."), also add that one can pause using Ctrl+Z and resume using fg.
However, only show that if no GUI is started (otherwise this command freezes the GUI).
It could be nice to have a severity on test failures.
My first thought is that, like EPICS alarms, we could have "minor" and "major" alarms. The default would be "major" if the severity keyword is not provided.
- name: "my test with a low severity"
getter: "${P}myPv"
get_value: 1000 # ns
delta: 200 # ns
skip: "${skip_test0_pass}"
severity: minor
I would say that by default, if a minor alarm is raised and "on_failure" is set to "stop", then test execution should keep going. However, we should have a way to also stop on minor alarms.
To be discussed!
This is different from the delay parameter, which is the delay between caput and caget.
timeout is used to give up when the test takes too long (typically when a PV is not connected).
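For illustration, a minimal sketch of a test using both fields (the PV names are made up, and whether both fields are accepted at the test level is an assumption):

tests:
  - name: "readback reached after ramping"
    setter: "${P}VoltageCmd"
    getter: "${P}VoltageRb"
    set_value: 10
    get_value: 10
    delay: 2     # wait 2 s between the caput and the caget
    timeout: 30  # give up if the test takes more than 30 s (e.g. PV not connected)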
Using the batch features to monitor a lot of PVs at once (for PV connection status).
There is a compatible API to update existing pyepics code:
https://nsls-ii.github.io/caproto/pyepics-compat-client.html
Installation should be easier since it doesn't require libca.
Since a server is featured, it could be useful for a test mode (emulating tested PVs).
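For illustration, a sketch of what the compat layer could look like in use (assuming the module exposes a pyepics-style PV class, as the linked page describes; the PV name is made up):

from caproto.threading.pyepics_compat import PV

pv = PV("SOME:PV")    # hypothetical PV name
print(pv.get())       # same read API as pyepics
pv.put(5, wait=True)  # same write API as pyepics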
This old behaviour is not pertinent anymore, and can lead to macro substitutions unexpected by the user.
Maybe keep this behaviour available through a CLI option, but deactivate it by default.
Reasons:
Description of the case:
What I'm expecting:
Has to be really discreet, to NOT have too much info on the GUI.
Today one needs to look for the square-bracketed test duration to find retried tests.
The initial request was to sort tests by execution time, but other orderings may be interesting too.
I see it as adding a button at the bottom offering multiple choices (clicking the button would open a list of possible sorts).
Choosing a new sorting criterion would regenerate the tests display (forget and pack again?). Choosing the same criterion again would reverse the sort on this criterion.
Be careful: it should keep the collapsed/expanded status of each scenario, test and subtest.
The possible sorts I see are:
I got a warning about my macro CURRENT. However, weTest executes the script with CURRENT = 5.
Non-compulsory validation failed:
- Unknown macro "CURRENT" (2 occurences)
include:
  - - 'wetest/generic/specific/weTest_functional_basicPowerSupply.yaml'
    - CURRENT: 5
    - TEST_TITLE: "Set HV in nominal mode"
      P: "${P_EIS}:HVPS-1:"
      DELAY: ${DELAY}
      # config
      VOLTAGE: 20 # Volts
      CURRENT: ${CURRENT}
      DELAY_RAMP_UP: 1 # sec
      # check desired output (measured value)
      GET_DESIRED_OUTPUT: "IMes"
      DESIRED_OUTPUT: ${CURRENT}
      DESIRED_OUTPUT_MARGIN: 1 # %
Being able to run a range test according to the example below would be a huge plus, using a different getter and setter:
To solve this, one should probably import Tkinter only when the -G option is not set.
Better yet, if the import of Tkinter fails, the -G option should be set automatically.
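A sketch of that logic (the no_gui flag and the run_cli/run_gui entry points are illustrative names, not the actual WeTest code):

def main(no_gui):
    if not no_gui:
        try:
            # only import Tkinter when a GUI is actually wanted
            import Tkinter  # noqa: F401
        except ImportError:
            # no Tkinter available: behave as if -G was set
            no_gui = True
    if no_gui:
        run_cli()  # illustrative CLI entry point
    else:
        run_gui()  # illustrative GUI entry point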
Macros starting with _ should be ignored and raise a warning in the include line.
The use case: when you want to use the same value in several places in your file, you would use a macro, but doing so exposes this macro to be changed when the file is included in another file.
This would enable more flexibility than the default unittest.TextTestRunner.
The runner should provide:
The custom runner would be a cleaner way to send test statuses to the GUI and execution control to the processManager; could it deal with:
If not doable in the runner, could it be set in the TestCase?
See SelectableTestCase
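A sketch of the direction this could take, assuming statuses are pushed to the GUI through a queue (the class names and queue protocol are made up):

import unittest

class WeTestResult(unittest.TextTestResult):
    """Forward test statuses to the GUI through a queue."""
    def __init__(self, stream, descriptions, verbosity, status_queue=None):
        unittest.TextTestResult.__init__(self, stream, descriptions, verbosity)
        self.status_queue = status_queue

    def addSuccess(self, test):
        unittest.TextTestResult.addSuccess(self, test)
        if self.status_queue is not None:
            self.status_queue.put((test.id(), "success"))

    def addFailure(self, test, err):
        unittest.TextTestResult.addFailure(self, test, err)
        if self.status_queue is not None:
            self.status_queue.put((test.id(), "failure"))

class WeTestRunner(unittest.TextTestRunner):
    """TextTestRunner producing WeTestResult instances."""
    def __init__(self, status_queue=None, **kwargs):
        unittest.TextTestRunner.__init__(self, **kwargs)
        self.status_queue = status_queue

    def _makeResult(self):
        return WeTestResult(self.stream, self.descriptions,
                            self.verbosity, self.status_queue)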
With the replay feature we can open the previous report during a run.
It may not be obvious that this is the previous run.
At least the tooltip should be updated with the file date.
It would be quite easy to mark the report as deprecated when pressing Play as well.
finally often is not good enough. We need a restore block that takes a list of PV names, registers their values before the test and restores them at the end of the test.
Implementation suggestion:
This could be implemented as two tests: one to register before the other generated tests and one to restore after. The values could be stored in a multiprocessing queue to carry them from one test to another.
Note:
Maybe the finally statement should be executed after the restore action.
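A rough sketch of the suggestion using pyepics and a multiprocessing queue (function names are illustrative, not the actual WeTest API):

from multiprocessing import Queue
from epics import caget, caput

saved_values = Queue()

def register(pv_names):
    """First generated test: save the current value of each PV."""
    saved_values.put({pv: caget(pv) for pv in pv_names})

def restore():
    """Last generated test: put the saved values back."""
    for pv, value in saved_values.get().items():
        caput(pv, value, wait=True)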
When a test fails or errors, the corresponding traceback is written in the collapsable frame of a subtest.
It is written on a single line, but if the window is too narrow we can't see the end of the line, and it's not obvious that there is more text on the right.
It would be better if the text was wrapped onto other lines when it's too long, or at least to show that some text is not visible, using "..." or similar.
Also, align it to the left of the box.
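In Tkinter, the wrapping can be done by updating the label's wraplength whenever the widget is resized; a minimal sketch:

import Tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="very long traceback text ...",
                 justify=tk.LEFT, anchor="w")  # left-aligned in the box
label.pack(fill=tk.X, expand=True)

def wrap(event):
    # wrap the traceback text to the current widget width
    label.config(wraplength=event.width)

label.bind("<Configure>", wrap)
root.mainloop()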
When a file format version is not compatible, could you specify the file path in the CLI?
Using future imports, so that it stays compatible with Python 2?
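For instance, the usual compatibility header at the top of each module:

from __future__ import absolute_import, division, print_function, unicode_literals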
Add a text showing "checking PVs connection" in place of the Treeview, so that it does not look like everything is connected when there are a lot of PVs and it takes time to prepare the Tree.
Today's behaviour is to ignore a missing included file, which is dangerous in most cases.
Because test selection only applies to the next run, and not to the tests currently running.
This would allow working on the report more easily (without running the tests again) and would serve as a backup in case of report generation failure.
It could also be interesting to generate this file on the go rather than at the end of the test run, so that the report can be generated even in case of abort.
Work out how to add a naming "plugin" that would enable adding a new naming without changing the wetest code and without re-installing?
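One possible direction (a sketch, not an existing WeTest mechanism): discover Naming classes through setuptools entry points, so that a separate package can register a new naming without modifying or re-installing wetest itself.

# in the plugin package's setup.py (the group name "wetest.naming" is hypothetical):
# entry_points={"wetest.naming": ["mysite = mysite_naming:MySiteNaming"]}
import pkg_resources

def load_namings():
    """Collect Naming classes registered by installed plugin packages."""
    namings = {}
    for ep in pkg_resources.iter_entry_points("wetest.naming"):
        namings[ep.name] = ep.load()
    return namings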
Does it require pausing the tests, or letting them finish?
Today, in "commands", it is necessary to indicate a 'get_value' if you set a "getter", otherwise I get the warning:
tests:
  - name: "${name} - test 1 - pass"
    commands:
      # test mode
      - name: "${name} - set test mode"
        setter: "${P}GenTestModeCmd"
        getter: "${P}GenTestModeRb"
        set_value: "test"
        get_value: "test"
I would like wetest to use the set_value if no get_value is provided:
tests:
  - name: "${name} - test 1 - pass"
    commands:
      # test mode
      - name: "${name} - set test mode"
        setter: "${P}GenTestModeCmd"
        getter: "${P}GenTestModeRb"
        set_value: "test"
This could display the test titles associated with this PV.
Today a macro is substituted as a string, then changed into a float, int, bool, list or dict.
Sometimes we would like to use the value as a string instead; then we need to cheat by using the macro in a string with additional characters.
Sometimes a string is converted into a list (or a dict) because it contains special characters.
Apparently, in YAML it is possible to specify the macro type explicitly. The type should be stored along with the value, to convert the macro as desired by the user.
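Standard YAML tags could cover this; for instance, in an include line (the file path and macro name are illustrative):

include:
  - - 'my_generic_file.yaml'
    - SOFT_VERSION: !!str 1.2   # stays the string "1.2" instead of becoming a float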
┌─[vnadot][VM_allInOne_crossCompil][±][detached:master ✓][/iee/tops/vnadot/topSaraf/tools/WeTest]
└─➞ wetest --version
Installed WeTest is of version 1.2.0
┌─[vnadot][VM_allInOne_crossCompil][±][detached:master ✓][/iee/tops/vnadot/topSaraf/tools/WeTest]
└─➞ echo $?
1
Add new keywords (compatible with margin; they all need to be validated during the test, i.e. combined with AND):
- prec (decimal precision, such that 7 <=> equal within 10^-7)
- greater (measured value should be greater than or equal to the given value)
- lesser (measured value should be lesser than or equal to the given value)
- different (check not equal; can be used with greater and lesser for a check without equality)
- abs (check the absolute value of the measured value)
- mod (check the measured value is a multiple of the given value)
For now only add them for commands, but implement them in a way that allows integrating them into values and range later if need be.
delta and margin, the two "approximation" fields, are actually closely linked to the value field.
Whereas greater, lesser, different, abs and mod are independent of value and of the "approximation" fields.
Therefore I believe that the following logic should be applied to a test that has several of these fields defined:
(getvalue == value +/- max(margin*value, delta))
AND (getvalue >= greater)
AND (getvalue <= lesser)
AND (getvalue != different)
AND (abs(getvalue) == abs)
AND (getvalue % mod == 0)
Using "max" for margin and delta is required: for instance, when we want to check a value against 0, margin is not useful.
The prec field should be used for any "equality" check (value, different, abs and mod).
Finally, I would add that the different field could accept a single value or a list of values.
And the mod field could accept a single value or a pair: divider or (divider, remainder), where remainder is 0 by default.
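Putting the whole proposal together, a sketch of the combined check (pure illustration, not WeTest code; abs_ is used because abs would shadow the Python builtin):

def check(getvalue, value=None, margin=0, delta=0, prec=None,
          greater=None, lesser=None, different=None, abs_=None, mod=None):
    """Return True when every provided field holds (AND logic)."""
    def eq(a, b):
        # prec applies to every "equality" check
        return round(a - b, prec) == 0 if prec is not None else a == b
    ok = True
    if value is not None:
        tol = max(margin * abs(value), delta)
        # fall back to prec-based equality when no margin/delta is given
        ok = ok and (abs(getvalue - value) <= tol if tol else eq(getvalue, value))
    if greater is not None:
        ok = ok and getvalue >= greater
    if lesser is not None:
        ok = ok and getvalue <= lesser
    if different is not None:
        forbidden = different if isinstance(different, list) else [different]
        ok = ok and all(not eq(getvalue, v) for v in forbidden)
    if abs_ is not None:
        ok = ok and eq(abs(getvalue), abs_)
    if mod is not None:
        divider, remainder = mod if isinstance(mod, tuple) else (mod, 0)
        ok = ok and eq(getvalue % divider, remainder)
    return ok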
Is it possible to extract the scrollable feature from the Suite GUI class, by making a ScrollableFrame class?
If so, get the PVs Frame out of the Suite Frame too.
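The usual Tkinter pattern for this is a canvas wrapping an interior frame; a minimal sketch of such a class:

import Tkinter as tk

class ScrollableFrame(tk.Frame):
    """A frame with a vertical scrollbar; put child widgets in self.interior."""
    def __init__(self, parent, **kwargs):
        tk.Frame.__init__(self, parent, **kwargs)
        scrollbar = tk.Scrollbar(self, orient=tk.VERTICAL)
        scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
        self.canvas = tk.Canvas(self, yscrollcommand=scrollbar.set)
        self.canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
        scrollbar.config(command=self.canvas.yview)
        self.interior = tk.Frame(self.canvas)
        self.canvas.create_window((0, 0), window=self.interior, anchor="nw")
        # keep the scrollable area in sync with the interior's size
        self.interior.bind(
            "<Configure>",
            lambda e: self.canvas.config(scrollregion=self.canvas.bbox("all")))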
The execution time label is supposed to be of fixed width (5 characters min), but on some devices the size is variable (maybe due to a different default font).
Use a minimal-size label? The max length should not be fixed.
Through the usual wetest command without options, open the GUI with input widgets enabling to choose the different CLI options (test files, DB folder, naming, whether to autoplay...).
Also keep a cache file to remember the last options used (10 last files/folders, and the latest options).
The pause popup should show which test paused (id + name).
The finish popup should show the total duration.
WeTest is quite poor in documentation:
=> message is a verbose text detailing what the test is about. It is displayed as a tooltip in the GUI, and added to the report. It enables keeping test titles and command titles shorter.
=> in a unit tests scenario all the tests are executed randomly (subtests such as range iterations, values and commands are still in the defined order). In a functional tests scenario tests are executed in the order defined in the file. Also, in case of failure, a functional tests execution will pause, while a unit tests execution will just keep going (that's the default behavior, which can be overridden with the on_failure field).
=> ignore is a field usable on a test in a scenario file; it tells WeTest not to read this test (and therefore not display it in the GUI or the report). This is useful for unfinished tests or for generic files.
=> skip is a field usable on a test in a scenario file; it tells WeTest to read this test like the others (and therefore display it in the GUI and the report), but not to execute it, mainly to spare test time.
=> select is a feature of the GUI, enabling to skip and unskip tests from the GUI after the file has been parsed.
Tips & tricks:
We would have no guarantee of the testing order though (a test being a map).
Add an ordered keyword, which allows defining an ordered list mixing range, commands and values.
Add a command keyword to define a single command.
Behind the scenes I believe there is good re-factoring to do on this topic, allowing a maximum of common fields. Many fields from commands would actually be useful for range and values (the obvious ones being margin and delay).
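To make the idea concrete, a purely hypothetical sketch of what such a scenario could look like (this syntax does not exist in WeTest today):

tests:
  - name: "ordered subtests"
    setter: "${P}VoltageCmd"
    getter: "${P}VoltageRb"
    ordered:
      - command:
          name: "switch on"
          setter: "${P}OnCmd"
          set_value: 1
      - range:
          start: 0
          stop: 10
      - values: [0, 5, 0]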
Geom does not accept a negative or zero start value.
With start=0 WeTest crashes:
Traceback (most recent call last):
File "/home/fgohier/miniconda2/envs/WeTest/bin/wetest", line 11, in <module>
load_entry_point('wetest==0.4.4', 'console_scripts', 'wetest')()
File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/command_line.py", line 316, in main
suite, configs = generate_tests(scenarios=scenarios, macros_mgr=macros_mgr)
File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/command_line.py", line 182, in generate_tests
tests_gen = TestsGenerator(scenario)
File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/testing/generator.py", line 592, in __init__
self._create_tests_list()
File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/testing/generator.py", line 668, in _create_tests_list
value_list.update(numpy.geomspace(start, stop, geom, endpoint=include_stop))
File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/numpy-1.12.1-py2.7-linux-x86_64.egg/numpy/core/function_base.py", line 319, in geomspace
raise ValueError('Geometric sequence cannot include zero')
ValueError: Geometric sequence cannot include zero
With start=-1 WeTest runs a test with NaN values and logs the following error:
/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/numpy-1.12.1-py2.7-linux-x86_64.egg/numpy/core/function_base.py:346: RuntimeWarning: invalid value encountered in log10
log_start = _nx.log10(start)
When a test with a margin fails, we get a message saying "expected XXX +/- Y% but got ZZZ"; for big numbers we cannot work out the details of the failure.
Would it be possible to use the margin value (in delta form) to work out the number of decimals to show?
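For instance, a sketch deriving the decimal count from the delta's order of magnitude:

import math

def decimals_for(delta):
    """Number of decimals needed to make a difference of size delta visible."""
    if delta <= 0:
        return 0
    return max(0, int(math.ceil(-math.log10(delta))) + 1)

# e.g. delta=0.05 -> 3 decimals: "expected 1234.567 +/- 0.050 but got 1234.620"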
The best option seems to be accepting a list of image files through a CLI option.
This will also require a resize function to make sure the logos fit at the top of the report (first resize in height, then check that all the resized logos fit in width). The resizing should keep the logo proportions.
What about saving the last logo files used, in order to not have to set them all the time? Through a configuration file that could also be used for the Naming class implementation?
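Pillow's thumbnail already does a proportion-preserving (shrink-only) resize; a sketch of the two-pass fit described above (MAX_HEIGHT and MAX_WIDTH are made-up report dimensions):

from PIL import Image

MAX_HEIGHT = 80   # banner height in the report (illustrative)
MAX_WIDTH = 500   # usable page width (illustrative)

def fit_logos(paths):
    # first pass: cap every logo height, thumbnail() keeps proportions
    logos = []
    for path in paths:
        img = Image.open(path)
        img.thumbnail((img.size[0], MAX_HEIGHT))
        logos.append(img)
    # second pass: if the row is too wide, scale everything down again
    total_width = sum(img.size[0] for img in logos)
    if total_width > MAX_WIDTH:
        ratio = float(MAX_WIDTH) / total_width
        for img in logos:
            img.thumbnail((int(img.size[0] * ratio), int(img.size[1] * ratio)))
    return logos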