
overwatcher's Introduction

overwatcher

Ultra-lightweight automated testing framework for CLIs.

Design ideas:

  • KEEP IT SIMPLE!!!! The framework itself should be a single file, and each test lives in its own file. The framework does not need to know anything about the device; it just runs stuff and keeps an eye out for other stuff.
  • log everything important! The log file can look a bit intimidating, but after reading through it, you can understand what happened and why. This is why logging levels were not introduced for now: just dump everything in a file and use the search function :)
  • make sure the results are reproducible! This is the reason for introducing the versioning and keeping everything in just two files. It is easier to make sure that the tests work in the same way even after a while...or to force a review of the test if something major changes.

Current state:

  • tested on both serial connection (using ser2net) and telnet straight to the device. Depending on the test, the same test might run on both without any changes.
  • tests can be written as python classes or as YAML files
  • tests can run in a finite time or cycle forever (both on serial and telnet). There is a watchdog implementation which does not let the test freeze. In case of a timeout, some actions to recover the device can be attempted.
  • outcome is a single log file containing all the test information (including version, parameters and options) and the entire flow (including device output). The framework also returns a different code based on the test results, so it can be used in bash scripts.
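
Since the framework returns a different exit code based on the results, a wrapper can branch on it. A minimal Python sketch (the meaning of non-zero codes is an assumption here; overwatcher defines the actual values):

```python
import subprocess

# Sketch: treat exit code 0 as "test passed" (an assumption -- the
# actual code values are defined by overwatcher itself).
def run_test(cmd):
    """Run a test command and report pass/fail from its exit code."""
    result = subprocess.run(cmd)
    return result.returncode == 0
```

The same check works from a bash script via `$?`, which is the usage the bullet above has in mind.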

The future:

  • there will be no 'device-specific dictionary', as this can complicate things with the "reproducible" part. The current idea for solving this is to implement regular expression handling for markers and to write tests so that the device-specific parts are left out. The biggest problem can be on serial, but careful marker choice might solve this (hopefully). This will be seen in time.
  • add more randomness to tests. There is already an option to randomly run commands or to sleep a random amount of time, but this needs to be expanded. Who knows what a simple test might uncover :)

Anatomy of a test

The basic idea: the test defines a list of states (markers) and actions that have to be run. This list is walked element by element: when the next element is a state, overwatcher waits for the marker that describes that state; when the next element is an action/modifier, it just runs that action/modifier. If overwatcher is looking for one state and a different one is seen, the test fails.
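
This walk can be sketched in a few lines of Python (hypothetical data shapes; an illustration, not overwatcher's actual implementation):

```python
# Sketch of the flow walk: states are waited for, actions are run,
# and an unexpected state fails the test. Data shapes are hypothetical.
def walk_flow(flow, seen_markers):
    """`flow` is a list of ("state", name) or ("action", fn) tuples;
    `seen_markers` yields markers in the order the device emits them."""
    markers = iter(seen_markers)
    for kind, item in flow:
        if kind == "state":
            if next(markers, None) != item:
                return False      # a different state was seen -> fail
        else:
            item()                # action/modifier: just run it
    return True
```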

  1. TEST INFORMATION This "header" contains a full test description and is dumped in the test log, so more fields can easily be added. There are two mandatory fields: 'version' (which needs to be kept in reverse order - only the first entry is dumped in the log) and 'overwatcher revision required' (this is still WIP, but it should have the format in the example; if the framework revision does not match this field, there will be a warning when starting the test). The 'serial only' field can be added and will generate a warning when tests are run over telnet; if it is not present, it is assumed to be False.
  2. MARKERS These are text elements that overwatcher pays attention to. They can be used to trigger actions immediately when seen (example: see User -> send username) or to define the actual test flow. Be careful when choosing a marker, as the test fails if the marker found does not match the one expected during the test run. There are two exceptions to this rule: markers that have only MODIFIERS in their triggers, and prompts (see below). The first exception was introduced to be able to do some small tasks (e.g. count a string that appears from time to time). Prompts are consumed by running actions.
  3. PROMPTS These are strings that are expected after a command is sent to the device. Why? Because we might run into commands that take a while to run, and the test should not keep pushing stuff to the device while it is blocked. For now, this is not blocking: if the prompt is not seen for a while, overwatcher tries to send a CR (only on serial); if the prompt still does not appear, it tries to continue the test (the timeout will stop it anyway if the device is blocked).
  4. TRIGGERS Triggers are automatic actions that are run when a marker is seen. These actions can include sending device commands or setting modifiers. Please note that these triggers do not take into account the test flow...if a marker appears, they are just run. Also triggers do not wait for prompts, the elements are sent with a small delay. NOTE: triggers can contain modifiers. There are critical modifiers which are run even if triggers are disabled (see below).
  5. ACTIONS Actions are commands that will be run during the test flow. Unlike triggers, they are not automatic, they need to be added to the test flow below to be run. After each element of the list of actions is run, overwatcher waits for a prompt before sending the next one. NOTE: actions can contain modifiers.
  6. INITIAL CONFIGURATION This is a sequence identical to the test, but it is only run once when starting the test. It can be used to do some initial setup. The recommended way to start this is with a marker for a known state...the config blocks until it reaches that state (either via triggers or manually) and then runs the configuration actions from a known state. There is a watchdog in effect while doing the config...if it takes too long, the test fails. The timeout value is configurable.
  7. TEST This is a series of markers, actions and modifiers that are expected and run in the given order. The actual test can be single run (go through it and stop) or infinite (run forever). Take this into account and use the configuration sequence above for initial configuration. To further enhance the functionality you can use the modifiers below. The same watchdog is in effect while running the test. It is reset after passing to a new state. The timeout value is configurable.
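
The watchdog mentioned in points 6 and 7 can be pictured as a deadline that is reset on every state transition. A minimal sketch (hypothetical class, not overwatcher's code):

```python
import time

class Watchdog:
    """Deadline-based watchdog sketch: reset() is called on every
    state transition; expired() means the test should fail."""
    def __init__(self, timeout):
        self.timeout = timeout
        self.reset()

    def reset(self):
        # Restart the countdown, e.g. when a new state is reached.
        self.deadline = time.monotonic() + self.timeout

    def expired(self):
        return time.monotonic() > self.deadline
```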

Modifiers

These are a sort of "special actions" which control and change the test flow or run special actions (like counting stuff).

  • IGNORE_STATES - if a marker is seen, ignore the transition to that state. This is mostly used during reboots to handle a new login screen appearing. It also cancels any prompt waits in effect. On telnet it closes the socket.
  • WATCH_STATES - allow transitions to a found state. THIS IS A CRITICAL OPTION and it is set when found in a trigger, even if triggers are disabled.
  • TRIGGER_START - run triggers on markers again. THIS IS A CRITICAL OPTION and it is set when found in a trigger, even if triggers are disabled.
  • TRIGGER_STOP - do not run triggers on markers anymore.
  • SLEEP_RANDOM - sleep a random amount of time. The random interval is controlled by sleepMin and sleepMax (see below).
  • RANDOM_START - begins a block of randomly executed commands. Before sending each command to the device, a random draw is made; if it is true, the command is sent, otherwise it is discarded and the test moves on. This can be used to add some randomness to a test.
  • RANDOM_STOP - stop the random draw. All commands are sent to the device.
  • COUNT - Simply counts how many times a marker appears during the test. All counts are displayed each time one is incremented (this grows the log file, but makes infinite tests easier to handle). NOTE: there are two permanent counts in each test: the number of loops run (if it is infinite) and how many timeouts are left per loop.
  • NOPRWAIT - Following commands are sent to the device without waiting for a prompt. It only applies to the commands in the current action. When all the commands left in the action are executed, the prompt wait returns for the next actions.
  • NOTSTRICT - special modifier, needs to be the first in a trigger for that state. It causes overwatcher to ignore that state in a test; if the state is seen, but not expected, the test continues and does not exit. This is not affected by the strictStates option. Use with caution, as ignoring this can lead to false positives, but it is useful in tests that need to run a long time, or for using other modifiers with some states.
  • LOCAL - All commands after this modifier are run on the local PC. When the command set is finished, it automatically reverts to running commands on the device. No special handling is required and the modifier can be used anywhere in a command, but there is no way to disable this.
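
The RANDOM_START/RANDOM_STOP behaviour can be illustrated with a small sketch (hypothetical helper; the 50% draw probability is an assumption, not necessarily what overwatcher uses):

```python
import random

def filter_commands(commands, random_mode, rng=random.random):
    """Return the commands that would actually be sent to the device.
    While random mode is active, each command survives only if a
    draw comes up under the (assumed) 0.5 threshold."""
    if not random_mode:
        return list(commands)     # RANDOM_STOP: everything is sent
    return [cmd for cmd in commands if rng() < 0.5]
```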

Configurable test options

These are just parameters that control the inner workings of the test:

  • sleep_min and sleep_max: interval in which SLEEP_RANDOM generates values
  • sleep_sockWait: on telnet, how much to wait before trying to re-open the socket
  • infiniteTest: run the test in a loop. When the final state/action/option is reached, starts from the first one again. The configuration is not run again.
  • timeout: how long to wait when looking for a state. NOTE: this is not influenced by the prompt or by running commands.
  • test_max_timeouts - how many timeouts can occur per test loop
  • strictStates: when this is set to FALSE, overwatcher ignores the order in which the states come in a test, so if a state comes when it is not expected, the test will not fail but continue executing. This is useful for long-running tests as it prevents unwanted stops. For tests that need a pass/fail, this should be left at the default - TRUE.
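
How `timeout` and `strictStates` interact when waiting for a state can be sketched with a queue of incoming states (hypothetical helper, not overwatcher's implementation):

```python
import queue

def wait_for_state(states, expected, timeout, strict_states=True):
    """Wait for `expected` on the `states` queue.
    Returns "ok", "timeout", "wrong" (strict: the test fails) or
    "ignored" (non-strict: the test continues)."""
    try:
        seen = states.get(timeout=timeout)
    except queue.Empty:
        return "timeout"
    if seen == expected:
        return "ok"
    return "wrong" if strict_states else "ignored"
```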

overwatcher's People

Contributors

  • axcxl

overwatcher's Issues

Prepare some automated tests

Why? Now overwatcher supports telnet and serial, and should work with fast and slow devices with various line-ending styles. Testing starts to look time-consuming.

It is not that complicated, just need a few devices with serial/telnet connections. Or maybe look into a VM/emulator?

The scripts already written can be used as a base. Long-running scripts are usually the best since they can catch a lot of weird situations. Maybe add a loop limit? Or just terminate the test after a while...if it is still running, it is ok.

The entire thing can be run in a bash script including verification (using the return codes).

Overwatcher works only with newer Ser2net

Using ser2net 2.9, default config for port - does not work.

Using 3.5 with the following config works OK:
3000:telnet:600:/dev/ttyUSB0:115200 banner remctl telnet_brk_on_sync -chardelay max-connections=3

This needs to be investigated.

Version everything

How to make sure updates do not silently break stuff: add a revision option to overwatcher and a revision parameter in each test. If the parameter does not match, warn and do not start the test without user confirmation (this lightly forces a review and an update of the test).

NOTE: the revision will not be related to the actual version of the framework! It will only be incremented when major changes are made to the behavior which can break old tests. Updating stuff transparently increases the version of the framework and keeps the revision unchanged! (example: refactoring the options implementation keeps the same options, so the tests do not care....removing options from triggers breaks tests)
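
The intended startup check can be sketched as follows (hypothetical names; the real logic would live in overwatcher's startup code):

```python
def should_start(framework_rev, required_rev, user_confirms):
    """Matching revisions start straight away; a mismatch warns and
    requires explicit confirmation (the `user_confirms` callback
    stands in for the interactive prompt)."""
    if framework_rev == required_rev:
        return True
    return user_confirms()
```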

Add custom recovery for timeouts?

This is useful especially for loooong tests:

  • add a custom recovery in each test?
    -> problem with this is that it makes the script non-portable

  • or maybe a custom recovery script?
    -> I know I said that adding a new file means versioning problems, but setups are different and different recovery methods can be used (reboot device via serial, power off device via another one, try SNMP, etc). This is personalized and not versioned in any way, just convenience.

Refactor options

All the options and the init part need a small refactoring. Might solve the startup issue also.

Ability to execute tests in a batch

For now each test was executed by itself.

Need to find a way to run batch tests (like: run all tests in this folder and stop on first error/do not stop on any error).

New modifier needed: NO_WAITPROMPT?

After testing a Beaglebone Black with Linux, found the following situation: when issuing the reboot, there is no prompt after and no extra stuff.

The problem is that the reboot action is waiting for a prompt. On serial, we can just add a text after as a prompt, or set the ignore_states before, but if we want to use the test over telnet, then it might have problems.

This needs some further investigation.

Sporadic startup problems

Sometimes, when starting the test, I see:

```
<~/workspace/GIT_REPOs/overwatcher>[master][LAST:2018-09-30 23:07:28][ST:0]$ ./overwatcher.py --port 3001 ContinousClear.yaml
2018-09-30 23:09:11.767326 +++>
/ / / / STARTED CONFIG!/ / / /

2018-09-30 23:09:11.767474 +++> Looking for: hios_ena
2018-09-30 23:09:11.767616 +++> SENT

Exception in thread Thread-4:
Traceback (most recent call last):
  File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "./overwatcher.py", line 346, in thread_StateWatcher
    for marker in self.statewatcher_markers:
AttributeError: 'Overwatcher' object has no attribute 'statewatcher_markers'
```

Configuration part does not work well

In theory the configuration part should wait for a given state, run stuff and stop.

Problems found:
  • the information it displays is completely wrong: it shows that it is looking for the first state of the actual test
  • triggers seem to behave erratically (for example, it sends the credentials multiple times even after login, etc)

Ability to link to other programs/scripts

Good for more complex testing.

Examples

  • sync with packet generator tool
  • sync with protocol implementation
  • etc

Implementation ideas:

  • local server that sends events to whoever is listening
  • ??
  • not sure if it should be bidirectional

Pass parameters to a test

Useful for batch execution (if every test is run by hand, you can modify the setup_test function to receive user input, for example).
