

wetest's Issues

Show the commands to pause in CLI mode

In the startup message (Ctrl+D then ENTER for start...), also mention that one can pause with Ctrl+Z and resume with fg.

However, only show this when no GUI is started (otherwise this command freezes the GUI).

Add test failure severity support

It would be nice to have a severity level on test failures.

My first thought is that, like the EPICS alarms, we could have "minor" and "major" alarms. The default would be "major" if the severity keyword is not provided.

- name:       "my test with a low severity"
  getter:     "${P}myPv"
  get_value:  1000  # ns
  delta:      200   # ns
  skip:       "${skip_test0_pass}"
  severity:   minor

I would say that by default, if a minor alarm is raised and "on_failure" is set to "stop", then test execution should keep going. However, we should also have a way to stop on minor alarms.

To be discussed !

need a `timeout` parameter

This is different from the delay parameter, which is the delay between caput and caget.

timeout would be used to give up when a test takes too long (typically when a PV is not connected).
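
A hypothetical snippet of what this could look like (the timeout field does not exist yet; the PV names are made up):

- name:       "check PV answers within 5 seconds"
  setter:     "${P}myCmd"
  getter:     "${P}myRb"
  set_value:  1
  get_value:  1
  delay:      0.5   # existing: delay between caput and caget
  timeout:    5     # proposed: give up when the PV does not answer in time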

If the IOC reboots, WeTest should retry the last stage test when the reason for the failure was "unable to connect to setter..."

Description of the case:

  • Every test stage was a success up to stage "400". Then the IOC rebooted during stage "400", so WeTest raised an alarm on this test stage: "unable to connect to setter...". Then the IOC restarted and the connection was regained. Finally, WeTest kept going with the other test stages (800.0 and so on) and it worked fine.

What I'm expecting:

  • if communication is regained, retry stage "400" when the reason for the failure was "unable to connect to setter..." (see the sketch below)

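A minimal sketch of the expected logic; run_stage and wait_for_connection are hypothetical helpers, not existing WeTest functions:

def run_stage_with_retry(stage, max_retries=3):
    """Re-run a stage when its failure was a connection loss."""
    for _ in range(max_retries + 1):
        result = run_stage(stage)           # hypothetical: run one test stage
        if result.ok or "unable to connect" not in result.reason:
            return result                   # success, or a genuine test failure
        wait_for_connection(stage.pvs)      # hypothetical: block until PVs reconnect
    return result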

[GUI] show number of retries

This has to be really discreet, so as NOT to put too much info on the GUI.

Today one needs to look for the square-bracketed test duration to find retried tests.

[GUI] sort tests by

The initial request was to sort tests by execution time, but other orderings may be interesting too.

I see it as adding a button at the bottom offering multiple choices (clicking the button would open a list of possible sorts).

Choosing a new sorting criterion would regenerate the tests display (forget and pack again?). Choosing the same criterion again would reverse the sort on that criterion.

Be careful: it should keep the collapsed/expanded status of each scenario, test and subtest.

The possible sorts I see are (see the sketch after this list):

  • file order (get that from the test ids)
  • execution order (get that by iterating over the suite)
  • execution duration, subtests only (no more scenarios and tests)
  • execution duration, by scenario, test, then subtests
  • test result, subtests only (no more scenarios and tests)
  • test result, by scenario, test, then subtests
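
A sketch of the corresponding sort keys; the subtest attribute names used here are hypothetical:

# Hypothetical subtest attributes: test_id, exec_index, duration, status.
SORT_KEYS = {
    "file order":      lambda st: st.test_id,
    "execution order": lambda st: st.exec_index,
    "duration":        lambda st: st.duration,
    "result":          lambda st: st.status,
}

def sorted_subtests(subtests, criterion, reverse=False):
    # Choosing the same criterion again would call this with reverse=True.
    return sorted(subtests, key=SORT_KEYS[criterion], reverse=reverse)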

warning "Unknown macro" in a list of list but working at the end

I got a warning about my macro CURRENT. However, WeTest executes the script with CURRENT = 5.

Non-compulsory validation failed:
 - Unknown macro "CURRENT" (2 occurences)
include:
    - - 'wetest/generic/specific/weTest_functional_basicPowerSupply.yaml'
      - CURRENT:        5
      - TEST_TITLE:     "Set HV in nominal mode"
        P:              "${P_EIS}:HVPS-1:"
        DELAY:          ${DELAY}
        # config
        VOLTAGE:          20 # Volts
        CURRENT:          ${CURRENT}
        DELAY_RAMP_UP:    1 # sec
        # check desired output (measured value)
        GET_DESIRED_OUTPUT:     "IMes"  
        DESIRED_OUTPUT:         ${CURRENT} 
        DESIRED_OUTPUT_MARGIN:  1 # %  

set_range & get_range functionalities

Being able to run a range test as in the example below would be a huge plus, using a different getter and setter:

  • name: "Check sampling frequency value range using internal clk"
    setter: "ClockDivider"
    getter: " SamplingFrequencyR "
    set_range: {start: 1, stop: 31, step: 1} (does not exist?)
    get_range: {start: 1, stop: 100, geom: 32} (does not exist?)
    finally:
    value: $(clock_divider_startup)

Use a custom TestRunner

This would enable more flexibility than the default unittest.TextTestRunner.
The runner should provide:

  • a progress-bar visible in CLI
  • write results on the fly to a text file (CSV, YAML?)
  • test duration

The custom runner would be a cleaner way to send test statuses to the GUI and execution control to the processManager. Could it also deal with:

  • retry
  • timeout
  • continue, pause, abort

If this is not doable in the runner, could it be set in the TestCase?
See SelectableTestCase
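
A minimal sketch of such a runner using only the standard unittest API (Python 3 syntax; the file name and CSV columns are arbitrary choices):

import csv
import time
import unittest

class StreamingResult(unittest.TextTestResult):
    """Write each test outcome to a CSV file as soon as it finishes."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._file = open("results.csv", "w", newline="")
        self._csv = csv.writer(self._file)
        self._csv.writerow(["test", "status", "duration_s"])

    def startTest(self, test):
        self._start = time.time()
        super().startTest(test)

    def _record(self, test, status):
        self._csv.writerow([str(test), status, round(time.time() - self._start, 3)])
        self._file.flush()  # written on the fly, so results survive an abort

    def addSuccess(self, test):
        self._record(test, "OK")
        super().addSuccess(test)

    def addFailure(self, test, err):
        self._record(test, "FAIL")
        super().addFailure(test, err)

runner = unittest.TextTestRunner(resultclass=StreamingResult, verbosity=2)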

Warn when the Report button aims at an older report.

With the replay feature we can open the previous report during a run.

It may not be obvious that this is the previous run's report.

At least the tooltip should be updated with the file date.

It would also be quite easy to mark the report as outdated when Play is pressed.

restore context feature

finally is often not good enough.

We need a restore block that takes a list of PV names, registers their values before the test, and restores them at the end of the test.
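
A hypothetical snippet (the restore keyword does not exist; the PV names are made up):

- name:       "test that changes the machine state"
  restore:    # proposed: snapshot these PVs before the test, write them back after
    - "${P}Voltage"
    - "${P}Current"
  setter:     "${P}Voltage"
  set_value:  42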

Implementation suggestion:

This could be implemented as two tests: one to register the values before the other generated tests, and one to restore them after. The values could be stored in a multiprocessing queue to carry them from one test to another.

Note:

Maybe the finally statement should be executed after the restore action.

Wrap traceback info

When a test fails or errors, the corresponding traceback is written in the collapsible frame of a subtest.

It is written on a single line, but if the window is too narrow we cannot see the end of the line, and it is not obvious that there is more text on the right.

It would be better if the text were wrapped onto several lines when it is too long, or at least showed that some text is not visible, using "..." or similar.

Also, align it to the left of the box.
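
A minimal Tkinter sketch of the wrapping behavior (assuming the GUI is Tkinter-based, as the pack/forget vocabulary elsewhere suggests; Python 3 module name):

import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="Traceback (most recent call last): ...",
                 justify="left", anchor="w")
label.pack(fill="x")
# Re-wrap the text whenever the containing widget is resized.
label.bind("<Configure>", lambda event: label.config(wraplength=event.width))
root.mainloop()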

[GUI] placeholder for PVs tree

Add a placeholder text showing "checking PVs connection" in place of the Treeview: when there are a lot of PVs it takes time to prepare the Tree, and meanwhile it looks like everything is already connected.

Export tests results to a temporary file that is then used to generate the report.

This would make it easier to work on the report (without running the tests again) and would serve as a backup in case of report generation failure.

It could also be interesting to generate this file on the fly rather than at the end of the test run, so that the report can be generated even in case of abort.

update logo

  • remove the dash in "We-Test" so as to get WeTest
  • use the yellow square for the "!" instead of the Python logo, which should go on the blue square

in "commands", use the set_value if no get_value provided

Today, in "commands", it is necessary to indicate a 'get_value' if you set a "getter", otherwise I get the warning:
image

tests:
  - name: "${name} - test 1 - pass"
    commands:
    # test mode
    - name:       "${name} - set test mode"
      setter:     "${P}GenTestModeCmd"
      getter:     "${P}GenTestModeRb"
      set_value:  "test"
      get_value:  "test"

I would like WeTest to use the set_value when no get_value is provided:

tests:
  - name: "${name} - test 1 - pass"
    commands:
    # test mode
    - name:       "${name} - set test mode"
      setter:     "${P}GenTestModeCmd"
      getter:     "${P}GenTestModeRb"
      set_value:  "test"

[macros] specify macro type

Today a macro is substituted as a string, then converted into a float, int, bool, list or dict.

Sometimes we would like to use the value as a string instead. Then we need to cheat by using the macro in a string with additional characters.

Sometimes a string is converted into a list (or a dict) because it contains special characters.

Apparently, in YAML it is possible to specify the type explicitly. The type should be stored along with the value, so the macro can be converted as the user intended.
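
YAML's standard tags could carry the type; a sketch of what a user could then write (whether the macros section accepts tags this way is an assumption):

macros:
  - VERSION: !!str 1.10   # keep the string "1.10" instead of the float 1.1
  - COUNT:   !!int "5"    # force an integer despite the quotes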

`wetest --version` command returns an error code (it breaks my CI)

┌─[vnadot][VM_allInOne_crossCompil][±][detached:master ✓][/iee/tops/vnadot/topSaraf/tools/WeTest]
└─➞ wetest --version
Installed WeTest is of version 1.2.0
┌─[vnadot][VM_allInOne_crossCompil][±][detached:master ✓][/iee/tops/vnadot/topSaraf/tools/WeTest]
└─➞ echo $?
1
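
For reference, argparse's built-in version action prints the message and exits with status 0; a sketch of the usual pattern:

import argparse

parser = argparse.ArgumentParser(prog="wetest")
parser.add_argument("--version", action="version",
                    version="Installed WeTest is of version 1.2.0")
parser.parse_args()  # `wetest --version` prints the message and exits with 0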

enable more mathematical comparisons than just margin and delta, such as: prec, greater, lesser, different

Add new keywords (compatible with margin; they all need to be validated during the test, i.e. combined with AND):

  • prec (decimal precision, such that 7 <=> equal within 10^-7)
  • greater (measured value should be greater than or equal to the given value)
  • lesser (measured value should be lesser than or equal to the given value)
  • different (check not equal; can be combined with greater and lesser for a check without equality)
  • abs (check the absolute value of the measured value)
  • mod (check the measured value is a multiple of the given value)

For now, only add them for commands, but implement them so they can be integrated into values and range later if need be.

delta and margin are two "approximation" fields that are actually closely linked to the value field.

Whereas greater, lesser, different, abs and mod are independent of value and of the "approximation" fields.

Therefore, I believe the following logic should be applied to a test that has several of these fields defined:

( getvalue == value +/- max(margin*value, delta) )
AND ( getvalue >= greater )
AND ( getvalue <= lesser )
AND ( getvalue != different )
AND ( abs(getvalue) == abs )
AND ( getvalue % mod == 0 )

Using "max" for margin and delta is required: for instance when we want to check a value against 0 then margin is not useful.

The prec field should be used for any "equality" check (value, different, abs and mod).

Finally, I would add that the different field could accept a single value or a list of values.

And the mod field could accept a single value or a pair: divider, or (divider, remainder), where remainder is 0 by default.
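
A hypothetical command using the proposed keywords (none of them exist yet; the PV name is made up):

- name:      "frequency stays in band and on the 10 Hz grid"
  getter:    "${P}FreqR"
  greater:   100   # proposed: measured value >= 100
  lesser:    200   # proposed: measured value <= 200
  different: 150   # proposed: measured value != 150
  mod:       10    # proposed: measured value % 10 == 0
  prec:      3     # proposed: equality checks within 10^-3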

Align execution time

The execution time label is supposed to have a fixed width (5 characters minimum), but on some devices the size varies (maybe due to a different default font).

Use a minimal-size label? The maximum length should not be fixed.

[GUI] Enable selection of test scenario and PV database from a GUI window.

Through the usual wetest command without options, open the GUI with input widgets allowing to choose the different CLI options (test files, DB folder, naming, whether to autoplay...).

Also keep a cache file to remember the last options used (the 10 last files/folders, and the latest options).

Missing a user manual

WeTest is quite poor in documentation:

  • I don't know what "message" means / does (tests/sequence/mapping/message and tests/sequence/mapping/commands/sequence/message)

=> message is a verbose text detailing what the test is about. It is displayed as a tooltip in the GUI and added to the report. It enables keeping test titles and command titles shorter.

  • What is the difference between unit tests + commands and functional tests?

=> in a unit tests scenario, all the tests are executed in random order (subtests such as range iterations, values and commands are still run in the defined order). In a functional tests scenario, tests are executed in the order defined in the file. Also, in case of failure, a functional tests execution will pause, while a unit tests execution will just keep going (that is the default behavior, which can be overridden with the on_failure field).

  • What is the difference between ignore, skip and select?

=> ignore is a field usable in a test in a scenario file; it tells WeTest not to read this test (and therefore not to display it in the GUI or the report). This is useful for unfinished tests or for generic files.
=> skip is a field usable in a test in a scenario file; it tells WeTest to read this test like the others (and therefore display it in the GUI and the report), but not to execute it, mainly to save test time.
=> select is a feature of the GUI, enabling to skip and unskip tests from the GUI after the file has been parsed.

  • tips & tricks:

    • generate a report without executing the tests by easily unselecting all the tests in the GUI
    • do a blocking conditional test by using retry: -1 and on_failure: continue to loop on a test until it succeeds (see the snippet below)
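
The blocking conditional trick as a snippet (PV name and value made up):

- name:       "wait for the IOC to report READY"
  getter:     "${P}StatusR"
  get_value:  "READY"
  retry:      -1          # retry forever
  on_failure: continue    # do not pause the run on failure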

Enable the use of range, values and commands in the same test.

We would have no guarantee of the testing order though (a test being a map).

Add an ordered keyword, which allows defining an ordered list of the range, commands and values blocks.

Add a command keyword to define a single command.

Behind the scenes, I believe there is worthwhile refactoring to do on this topic, allowing a maximum of common fields. Many fields from commands would actually be useful for range and values (the obvious ones being margin and delay).
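
A hypothetical test using the proposed ordered and command keywords (PV names made up):

- name:    "combined test"
  setter:  "${P}ValueCmd"
  getter:  "${P}ValueR"
  ordered: [command, range, values]   # proposed: explicit execution order
  command:                            # proposed: defines a single command
    name:      "initialise"
    set_value: "init"
  range:   {start: 0, stop: 10, step: 1}
  values:  [0, 5, 10]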

use of geom in range should validate start and stop values

Geom does not accept a negative or null start value.

With start=0 WeTest crashes:

Traceback (most recent call last):
  File "/home/fgohier/miniconda2/envs/WeTest/bin/wetest", line 11, in <module>
    load_entry_point('wetest==0.4.4', 'console_scripts', 'wetest')()
  File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/command_line.py", line 316, in main
    suite, configs = generate_tests(scenarios=scenarios, macros_mgr=macros_mgr)
  File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/command_line.py", line 182, in generate_tests
    tests_gen = TestsGenerator(scenario)
  File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/testing/generator.py", line 592, in __init__
    self._create_tests_list()
  File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/wetest-0.4.4-py2.7.egg/wetest/testing/generator.py", line 668, in _create_tests_list
    value_list.update(numpy.geomspace(start, stop, geom, endpoint=include_stop))
  File "/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/numpy-1.12.1-py2.7-linux-x86_64.egg/numpy/core/function_base.py", line 319, in geomspace
    raise ValueError('Geometric sequence cannot include zero')
ValueError: Geometric sequence cannot include zero

With start=-1 WeTest runs a test with NaN values and logs the following error:

/home/fgohier/miniconda2/envs/WeTest/lib/python2.7/site-packages/numpy-1.12.1-py2.7-linux-x86_64.egg/numpy/core/function_base.py:346: RuntimeWarning: invalid value encountered in log10
  log_start = _nx.log10(start)
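
A minimal validation sketch in front of the numpy.geomspace call shown in the traceback (the error message wording is an assumption):

import numpy

def geom_values(start, stop, geom, include_stop=True):
    # geomspace needs non-zero endpoints of the same sign: zero raises
    # ValueError, and mixed signs produce NaN values.
    if start == 0 or stop == 0 or (start < 0) != (stop < 0):
        raise ValueError(
            "geom range needs non-zero start and stop of the same sign, "
            "got start=%s stop=%s" % (start, stop))
    return numpy.geomspace(start, stop, geom, endpoint=include_stop)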

AssertionFailure message with margin not explicit enough

When a test with a margin fails, we get a message saying "expected XXX +/- Y% but got ZZZ"; for big numbers we cannot work out the details of the failure.

Would it be possible to use the margin value (in delta form) to work out the number of decimals to show?
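
One way to derive the number of decimals from the delta, as a sketch (the extra digit of slack is an arbitrary choice):

import math

def decimals_for(delta):
    """Show one digit more than the magnitude of the allowed delta."""
    if delta <= 0:
        return 6  # arbitrary fallback
    return max(0, 1 - int(math.floor(math.log10(delta))))

print(decimals_for(123456.78))  # big delta   -> 0 decimals
print(decimals_for(0.001))      # small delta -> 4 decimals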

make report logos easily usable for users

The best option seems to be accepting a list of image files through a CLI option.

This will also require a resize function to make sure the logos fit at the top of the report (first resize in height, then check that all the resized logos fit in width). The resizing should keep the logo proportions.

What about saving the last logo files used, so they do not have to be set every time? Through a configuration file that could also be used for the Naming class implementation?
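
A sketch of the proportional resize with Pillow (using Pillow here is an assumption; the target sizes are arbitrary):

from PIL import Image

def resize_logos(paths, max_height=80, max_total_width=600):
    logos = [Image.open(p) for p in paths]
    # First resize every logo to the target height, keeping proportions.
    logos = [im.resize((int(im.width * max_height / im.height), max_height))
             for im in logos]
    # Then shrink the whole row again if it does not fit in width.
    total_width = sum(im.width for im in logos)
    if total_width > max_total_width:
        scale = max_total_width / total_width
        logos = [im.resize((int(im.width * scale), int(im.height * scale)))
                 for im in logos]
    return logos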
