ltest's People

Contributors

cinova, dokkarr, john-goff, jschairb, oubiwann, rvirding, skovsgaard, yurrriq

ltest's Issues

Add support for running tests in documentation

One of the classic problems with putting code in documentation is that it, sooner or later, begins to bit rot. If code in documentation can be run as part of a test suite (e.g., prior to commits or as a gate in a continuous integration platform), then this can be checked (and hopefully prevented).

The Python programming language pioneered this with doctests. Though sometimes hailed as a great abuse of testing (which it can be), when used appropriately (e.g., ensuring that code in documentation continues to be checked as working) it can be a great boon to software projects which strive to provide good example usage of APIs, etc., in documentation.

I suspect this will actually comprise a sizable number of other tickets. The following tasks will likely need to be completed to make this a reality:

  • Parse Markdown files for code blocks tagged with "cl" or "lisp"
    • possibly add doctest metadata and/or support for doctest directives in the first line of the code block (a rough sketch follows this list)
  • Parse commented code in LFE modules - this will definitely need some sort of metadata marker indicating that the code to follow should be parsed by ltest's doctest mechanism
  • Parse docstrings in LFE functions - I have doubts about this one; one can only put so much code in a docstring before it becomes awkward to read (much more to parse and execute as a test ...)
  • Execute parsed code, line-by-line
  • Ensure that executed code returns the same result as provided in the parsed code example/documentation
  • Add testing infrastructure
    • Add a behaviour for "doctest"
    • Add a (defdoctest ...) macro
    • Add supporting helper functions for this macro which:
      • Add support for running all code in provided Markdown files (code that is marked as "doctest", that is)
      • Add support for running all code in the code comments of provided (or discovered) files
      • Add support for running all code in the docstrings of provided (or discovered) files (again, not sure about this one)
  • Update test runner to include these types of test "suites"
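
As a rough illustration of the Markdown side of this (the ";; doctest" directive, the REPL-style prompt, and the expected-value-on-the-next-line convention are all assumptions here, not a settled format), a runnable documentation block might look like:

```lisp
;; doctest
lfe> (lists:sum '(1 2 3))
6
lfe> (lists:seq 1 3)
(1 2 3)
```

The doctest machinery would evaluate each prompt line and compare the result against the literal that follows it.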

Provide an ltest executable

This would provide a convenience for the following:

  • compiling and running all tests
  • compiling and running just one type of tests (e.g., just integration tests)
  • compiling and running just one module
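
These invocations are purely hypothetical (no such executable exists yet; the names and arguments are illustrative only), but the convenience might look something like:

$ ltest                        # compile and run all tests
$ ltest integration            # compile and run only the integration tests
$ ltest unit my-module-tests   # compile and run a single module's unit tests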

ltest-runner: formatting

  • bracket OKs
  • make OKs lower-case
  • space OKs with dots
  • make suite opening and closing heading bold green
  • make test type divider/heading blue
  • display the message "There were no tests of this type to run." when there are no tests of a given type
  • make that message yellow
  • trim dividers/headings when they are too long
  • add an extra character when dividers/headings are too short
  • don't indent test functions so far -- just use 2 or 4 spaces
  • prepend module names with "module: "
  • make module names bold green
  • display stats after all test types have run
  • display individual section times (e.g., unit, integration, system)
  • display section total cumulative time
  • display section total passed/failed counts
  • display section total skipped counts
  • if any tests pass, in the final stats report display the "Passed" string in bold green
  • if any tests fail, in the final stats report display the "Failed" string in red
  • if any tests are skipped, in the final stats report display the "Skipped" string in blue
  • leave off the trailing _test in the module name

Feature idea: add test dirs as argument to ltest-runner:all

I'd like to be able to configure where ltest-runner:all/0 searches for test beams. At the moment, I don't think it can be changed from .eunit/.

One way to implement this would be to add ltest-runner:all/1 where its argument is an options proplist (or dict?):

(defun all () (all '()))

(defun all (opts)
  ;; NB: proplists:get_value takes the key first, then the list, then a default.
  (let ((dirs (proplists:get_value 'dirs opts '(".eunit"))))
    ...))

;; called as, for example:
(all '(#(dirs ("_build/test/dir" ".eunit"))))

Other ltest-runner functions would also have to change, though.

It is a beautiful runner :)

Error in ltest or test-runner

Note: Also created this ticket in the test runner project: lfe-rebar3/rebar3_lfe_ltest#2

When running the unit tests for the kanin project (see https://github.com/lfex/kanin/blob/master/test/unit-kanin-uri-tests.lfe), an error arises (unspecified ... it looks like an undef of some sort). When run with rebar3's own eunit command, this error does not arise.

$ rebar3 as test lfe test -t unit
================================ ltest =================================

------------------------------ Unit Tests ------------------------------

module: unit-kanin-chan-tests
  function_checks ................................................... [ok]
  export_count ...................................................... [ok]
  time: 24ms

module: unit-kanin-conn-tests
  function_checks ................................................... [ok]
  export_count ...................................................... [ok]
  time: 23ms

module: unit-kanin-uri-tests
  function_checks ................................................... [ok]
  export_count ...................................................... [ok]
  parse_net ....................................................... [fail]

      Assertion failure:
      undef

  parse_direct .................................................... [fail]

      Assertion failure:
      undef

  time: 92ms

summary:
  Tests: 8  Passed: 6  Skipped: 0  Failed: 2 Erred: 0
  Total time: 139ms


========================================================================
$ rebar3 as test eunit
======================== EUnit ========================
file "kanin.app"
  application 'kanin'
    module 'kanin-chan'
    module 'kanin-conn'
    module 'kanin-uri'
    module 'kanin-util'
    module 'kanin'
    module 'unit-kanin-chan-tests'
      unit-kanin-chan-tests: function_checks_test...ok
      unit-kanin-chan-tests: export_count_test...[0.011 s] ok
      [done in 0.017 s]
    module 'unit-kanin-conn-tests'
      unit-kanin-conn-tests: function_checks_test...ok
      unit-kanin-conn-tests: export_count_test...ok
      [done in 0.006 s]
    module 'unit-kanin-uri-tests'
      unit-kanin-uri-tests: function_checks_test...ok
      unit-kanin-uri-tests: export_count_test...ok
      unit-kanin-uri-tests: parse_net_test...[0.004 s] ok
      unit-kanin-uri-tests: parse_direct_test...ok
      [done in 0.016 s]
    [done in 0.208 s]
  [done in 0.250 s]
=======================================================
  All 8 tests passed.

ltest Breaks in Erlang 17.4

If you're running LFE and ltest on Erlang 17.4, you will find that the unit tests no longer load and run. I haven't isolated the exact culprit, but I'm 99% certain this is due to the EUnit fixes that were added to the latest release of Erlang.

I suspect that something along the lines of changing the test macros to use underscores in module names will fix this, but I haven't tried it yet.
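
As a very rough sketch of that idea (the function name is made up for illustration; this is not ltest's actual code), the test macros could normalise module names before handing them to EUnit:

;; Hypothetical helper: turn a hyphenated LFE test-module name into the
;; underscored form, e.g. 'unit-my-test-lib-tests -> 'unit_my_test_lib_tests.
(defun ->eunit-name (module)
  (erlang:list_to_atom
    (re:replace (erlang:atom_to_list module) "-" "_"
                '(global #(return list)))))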

Successful use in Erlang 15, 16, and 17.3:

$ make check-unit
Removing EUnit test files ...
removed ./.eunit/*.beam
Cleaning ebin dir ...
Compiling only project code ...
==> my-test-lib (compile)
Compiled src/my-test-lib-util.lfe
Compiled src/my-test-lib.lfe
Removing old tests ...
rm -rf ./.eunit
mkdir: created directory ./.eunit
Compiling tests ...
Successfully compiled test modules.


------------------
Running unit tests ...
------------------

======================== EUnit ========================
module 'unit-my-test-lib-tests'
  my-adder ............................. [0.002 s] [ok]
  my-sum ......................................... [ok]
  my-sum-zeros ................................... [ok]
  Total module test time: 0.012 s
=======================================================
  All 3 tests passed.

Unsuccessful use in Erlang 17.4:

$ make check-unit
Removing EUnit test files ...
removed ./.eunit/*.beam
Cleaning ebin dir ...
Compiling only project code ...
==> my-test-lib (compile)
Compiled src/my-test-lib-util.lfe
Compiled src/my-test-lib.lfe
Removing old tests ...
rm -rf ./.eunit
mkdir: created directory ./.eunit
Compiling tests ...
Successfully compiled test modules.


------------------
Running unit tests ...
------------------

======================== EUnit ========================
undefined
=ERROR REPORT==== 3-Jan-2015::14:04:40 ===
Loading of /tmp/my-test-lib/.eunit/unit-my-test-lib-tests.beam failed: badfile
*** test module not found ***
**'.eunit/unit-my-test-lib-tests'
=ERROR REPORT==== 3-Jan-2015::14:04:40 ===
beam/beam-load.c(1250): Error loading module '.eunit/unit-my-test-lib-tests':
  module name in object code is unit-my-test-lib-tests
=======================================================
  Failed: 0.  Skipped: 0.  Passed: 0.
One or more tests were cancelled.
make: *** [check-unit-only] Error 127

Not all EUnit states are managed in ltest listener

Sadly, EUnit states are undocumented; the source code has to be examined when attempting to create a 100% EUnit-compatible test listener.

The following files need to be compared carefully:

  • eunit's eunit_listener module (in OTP)
  • ltest's ltest_listener module

with the latter getting updates for the missing (or commented-out) bits. Note that this set of missing functionality dates from the earlier days of ltest (formerly lunit, formerly lfeunit).

Tasks:

  • Handle cancelled tests
  • Add start/2 (not really state management, but we can hit this while we're bringing the rest of ltest_listener into accord with eunit_listener)
  • In handle_begin add another clause like for the undefined description, but for an empty string
  • In handle_end add another clause like for the undefined description, but for an empty string
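
A minimal sketch of the empty-description idea (the helper name is an assumption, and it presumes the per-test data is a proplist as eunit_listener's callbacks receive): normalise an empty description to 'undefined up front so the existing clause can cover both cases.

(defun normalise-desc (data)
  ;; Treat an empty description exactly like an undefined one.
  (case (proplists:get_value 'desc data)
    ("" (lists:keystore 'desc 1 data (tuple 'desc 'undefined)))
    (_ data)))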

Update runner to pass test type option to test state record

Currently, system, integration, and unit tests call ltest-runner:run/1 and then ltest-runner:run-beams/1, but they don't pass the test type (e.g., system, integration, unit). This set of functions needs to be updated to pass ltest-type as an option so it can be recorded in the test record when the test state is initialized (in ltest-listener:init/1).
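
A minimal sketch of the plumbing (only ltest-listener and the ltest-type option name come from this ticket; everything else here is an assumption), using EUnit's standard report/listener option to hand the type through to ltest-listener:init/1:

(defun run-type (type modules)
  ;; e.g. (run-type 'unit '(unit-kanin-uri-tests))
  (eunit:test modules
              (list (tuple 'no_tty 'true)
                    (tuple 'report
                           (tuple 'ltest-listener
                                  (list (tuple 'ltest-type type)))))))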

Overhaul codebase for modern rebar3

With the latest changes to the rebar.config, the ltest project has removed the conditions under which cyclic dependencies arise, in particular with the latest LFE plugin for rebar3.

In the process, though, a cursory examination of the project code revealed that much of the work various utility functions are doing has its history in solutions that predate a mature rebar3, back in the days of lfetool.

Tasks:

  • Does rebar3 provide utility functions for extracting metadata from beam files? (see the beam_lib sketch after this list)
  • Should we look at getting the same beam info via a different approach?
    • Can we instead get a list of all modules for a project? (not reliable)
    • What about getting all beams in an app's ebin dir (rebar_app_info:ebin_dir)?
  • Should we continue using the behaviours as, essentially, marker interfaces?
  • Should we instead use a naming convention on file names?
  • Remove all build tool/rebar specific code (move to rebar3 plugin)
    • create a default/constant for rebar3 dir, but don't use it in the beam-finder code
    • instead, offer it as a convenience for running project tests from a REPL
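
On the first question: plain OTP beam_lib can already pull attributes (including the behaviour markers) out of a compiled beam, so no rebar3-specific utility is strictly required. A minimal sketch, with an assumed function name:

(defun beam-behaviours (beam-file)
  ;; Return the behaviour attributes recorded in a compiled .beam file.
  (let (((tuple 'ok (tuple _module (list (tuple 'attributes attrs))))
         (beam_lib:chunks beam-file '(attributes))))
    (proplists:get_value 'behaviour attrs '())))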

Add macros for _assert*

I'm using these in xlfe right now, but it'd be nice to bake something into ltest.

(defmacro is-equal* (x y)
  `(_assertEqual ,x ,y))

(defmacro is-error* (x y)
  `(_assertError ,x ,y))

TODO

  • Update is-{,not-}exception{,*}
  • Update is-{,not-}error{,*}
  • Update is-{,not-}exit{,*}
  • Update is-{,not-}throw{,*}

Add convenient unary clauses that pass _ as the first argument to the binary versions.
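
A sketch of what such clauses could look like (illustrative only, not ltest's actual code), using a multi-clause defmacro that matches on the argument list:

(defmacro is-error*
  ;; unary form: match any error class
  ((list y) `(_assertError _ ,y))
  ;; binary form: match a specific error class
  ((list x y) `(_assertError ,x ,y)))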

Cyclic dependency upon clj

In order to test lfex/clj one needs to specify ltest as a dependency, but currently ltest depends on clj. Is there a way to skip this check? I'm using rebar3.

Move skip-test functions from lfetool v2 into lunit

There is some functionality recently added to lfetool that really belongs here in lunit. The following commits are related to this work:

In particular, the code from lfe-deprecated/lfetool@46e35eb that wasn't moved to lutil in lfe-deprecated/lfetool@23a1ed6 (and lfex/lutil@5776c9b) needs to be moved to lunit for this ticket. It is specific to lunit and can be called from other projects (especially lfetool's test runner) to get lunit skip-test info.

Create assertion error records

There are places in ltest that have to do some very ugly destructuring.

This is due to Erlang having some pretty ugly (and probably quite old) error terms for its assertion macros.

Let's create some records (just the ones that are necessary) to at least keep our code clean.
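
For illustration, the errors raised by EUnit's assert macros carry a proplist with keys like module, line, expression, expected, and value (the exact tags vary across OTP releases). A record with those fields, sketched here with an assumed name, would let the listener pattern-match cleanly instead of destructuring nested tuples by hand:

(defrecord assert-data
  module
  line
  expression
  expected
  value)

LFE's defrecord also generates the usual make-assert-data constructor and assert-data-<field> accessors.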

These are going to need LOTS OF TESTS.

System tests in lutil are currently failing

This popped up in the lutil project, but it looks like it's because of ltest. It might be a problem with how beams are looked up -- I believe the code was originally designed to use the _build/default directory, but tests are now being run with the test profile (and thus the _build/test directory).

Admittedly the system tests are pretty useless right now, but hey, they should still pass.
