
nextest-rs / nextest


A next-generation test runner for Rust.

Home Page: https://nexte.st

License: Apache License 2.0

Languages: Rust 98.77%, Awk 0.04%, Shell 0.30%, Handlebars 0.07%, CSS 0.81%, Batchfile 0.01%
Topics: rust, testing, flaky-tests, cargo-plugin, cargo-subcommand, junit, nextest

nextest's People

Contributors

arxanas, bestgopher, bmwill, dependabot-preview[bot], dependabot[bot], epage, github-actions[bot], guiguiprim, inanna-malick, jake-shadle, metajack, nextest-bot, nobodyxu, novedevo, pmsanford, poopsicles, ralfjung, renovate-bot, rexhoffman, saethlin, senekor, skyzh, sourcefrog, steveej, sunshowers, tabokie, taiki-e, tomasol, tripleight, ymgyt


nextest's Issues

loading shared libraries error

I used tcmalloc in one of my projects, but when I tried to use nextest, it complained that it cannot open a shared object file:

/home/chien/Developments/Rust/lock-free/target/debug/deps/lock_free-502d561706e2470e: error while loading shared libraries: libtcmalloc.so.4: cannot open shared object file: No such file or directory
Error:
   0: error building test list
   1: running ''/home/chien/Developments/Rust/lock-free/target/debug/deps/lock_free-502d561706e2470e --list --format terse'' failed
   2: command ["/home/chien/Developments/Rust/lock-free/target/debug/deps/lock_free-502d561706e2470e", "--list", "--format", "terse"] exited with code 127

Backtrace omitted.
Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.

But it works fine when I use cargo test.
Here is my environment information:

rustc  ver: 1.60.0-nightly
rustup ver: 1.24.3
cargo  ver: 1.60.0-nightly
os: Debian GNU/Linux 11 (bullseye)
kernel: 5.10.0-11-amd64

Feature Suggestion: Dynamic tests

Hi there, first off: this is a really cool project!

I'd like to suggest a feature that JUnit 5 has introduced, and which I personally use heavily: Dynamic tests

Essentially, I'd like to build the list of test cases at runtime. This is extremely useful for table-driven tests, and the flexibility of defining tests at runtime allows developers to write more interesting test inputs.
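
To make the idea concrete, here is a minimal sketch of what I mean, using libtest-mimic's Arguments/Trial/run API (the exact signatures are my assumption and may differ between versions). The test list is built from a data table at runtime, and the binary would be declared with harness = false:

use libtest_mimic::{Arguments, Trial};

fn main() {
    let args = Arguments::from_args();

    // Build the list of test cases at runtime from a data table.
    let cases: Vec<(String, i64, i64)> = (1..=3)
        .map(|i| (format!("double_{i}"), i, i * 2))
        .collect();

    let trials: Vec<Trial> = cases
        .into_iter()
        .map(|(name, input, expected)| {
            Trial::test(name, move || {
                if input * 2 == expected {
                    Ok(())
                } else {
                    Err(format!("{input} * 2 != {expected}").into())
                }
            })
        })
        .collect();

    libtest_mimic::run(&args, trials).exit();
}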

Handle tests that should be run within the same process

Currently, nextest runs every test in its own process. Should running tests within the same process be supported?

  • How should test binaries be marked this way? package.metadata is one solution.
  • How does concurrency control work? Rust's libtest doesn't support the jobserver protocol, so it's hard to communicate with it. One option is to run them serially at the end, once all the regular tests are run.
  • How can test timing, stdout/stderr for individual tests, and other results be obtained? libtest's --format json is nightly-only. Maybe nextest can just give up on reporting individual statuses for these tests.

Compatibility with coverage tools?

Is there any way to use this in conjunction with coverage tools? Maybe it would be better to check whether the coverage tool has support for this rather than the other way around, but if someone here has gotten nextest working with a coverage tool, I'd love to hear about it.

use `CARGO_TERM_COLOR` for term color?

#[clap(
    long,
    arg_enum,
    default_value_t,
    hide_possible_values = true,
    global = true,
    value_name = "WHEN",
    env = "NEXTEST_COLOR"
)]
pub(crate) color: Color,

As nextest is a cargo subcommand, wouldn't it be better to follow cargo's convention? I therefore suggest using the CARGO_TERM_COLOR environment variable instead of NEXTEST_COLOR for this config.
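
For illustration only (clap's env attribute reads a single variable, so this is just a manual sketch of the fallback I have in mind, not nextest's actual code):

fn color_when() -> String {
    // Prefer NEXTEST_COLOR if set, fall back to cargo's CARGO_TERM_COLOR,
    // and default to "auto" when neither is present.
    std::env::var("NEXTEST_COLOR")
        .or_else(|_| std::env::var("CARGO_TERM_COLOR"))
        .unwrap_or_else(|_| String::from("auto"))
}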

incremental test run

Maybe I missed something, but I cannot find any mention of this.
It would be nice to get from cargo the list of crates that would be rebuilt
by cargo build, and then run tests only for the affected crates in the workspace.

Fail tests that leak pipes

It is possible to write a test that ends up leaking a pipe (e.g. creates a process but doesn't terminate it). Currently, the test runner hangs on encountering such a test.

Instead, we should figure out a way to:

  • detect such a situation (a small amount of raciness between waiting on process exit and checking that the handles are closed is fine)
  • mark such a test as failure with a LEAK message or similar
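
A rough sketch of one possible detection approach (an illustration only, not a committed design): after the child exits, a pipe only reaches EOF once every handle to it is closed, so a read that doesn't finish within a short grace period indicates a leaked handle.

use std::io::Read;
use std::process::{Command, Stdio};
use std::sync::mpsc;
use std::time::Duration;

/// Returns Ok(true) if the test process exited but something still holds its stdout pipe open.
fn run_and_check_for_leak(mut cmd: Command) -> std::io::Result<bool> {
    let mut child = cmd.stdout(Stdio::piped()).spawn()?;
    let mut stdout = child.stdout.take().expect("stdout was piped");

    // Drain stdout on a helper thread; read_to_end returns only once all writers
    // (including any leaked grandchildren) have closed their end of the pipe.
    let (tx, rx) = mpsc::channel::<Vec<u8>>();
    std::thread::spawn(move || {
        let mut buf = Vec::new();
        let _ = stdout.read_to_end(&mut buf);
        let _ = tx.send(buf);
    });

    let _status = child.wait()?;

    // Grace period: if EOF doesn't arrive shortly after exit, report a leak.
    let leaked = rx.recv_timeout(Duration::from_millis(100)).is_err();
    Ok(leaked)
}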

slow down, leaking ansi escapes in output

Nice job with this! I was testing it on nushell and it started out nice, then slowed way down and started leaking ANSI escapes in the output.

My command line was cargo nextest run --all --all-features

[screenshot of the garbled output omitted]

Keep up the good work!

Tracking issue for reuse build options

cargo-nextest 0.9.10 introduces experimental support for reusing the build across machines and invocations. This issue tracks stabilizing this option:

  • How does this interact with #82?
  • Verify option names.
  • Ensure that the "cargo present on destination" and "cargo not present on destination" scenarios both work well.
  • Is the current path remapping support sufficient?

cc @Guiguiprim

Ensure graceful experience for trybuild users

Unlike #38, there are "custom test harnesses" that are just libtest tests. A very common one among proc-macro authors is trybuild (I also maintain trycmd which is modeled off of trybuild).

I have not tested this yet, but I'm assuming there are issues, because I think trybuild assumes it is the only test running, and the output becomes difficult to read when other tests are running (based on my trycmd experience).

Run --no-default-features and --all-features suites

Hi! Nextest looks pretty great!

One common thing I have in many crates is running tests with different feature sets. It would be nice to be able to merge those runs into a single set of tests instead of running them sequentially via separate commands, if that's at all possible.

Reduce the binary size needed for build reusing

Currently the entire target directory is needed when reusing a build on another machine. This is quite a setback for us, because previously we only needed to transfer a specific test binary under target/debug/deps/ generated by cargo test --no-run.

I'm wondering if it's possible to reduce the set of binaries that must be shared between the two machines, and to make this information easy enough to use. Or better yet, have nextest itself package the necessary test artifacts.

This would be especially useful for test partitioning. For our use case, test cases from one large repo are sharded to tens of partitions. With this feature, each partition wouldn't have to download all the test binaries.

Add filtering for packages, not just tests

Add filtering for packages (-p can be overloaded to mean this).

This would accept the package, deps and rdeps functions, but not test. The point of this would be to possibly build a smaller set of tests (at the cost of a different feature unification result, plus the inability to reuse builds).

This doesn't block the full release of filter expressions.

Add support for stress tests to nextest

Add support to run a single test multiple times in parallel. This is a somewhat different mode of operation from nextest as usual (the runner should be fed the same test multiple times, and should maybe not run other tests at the same time), but is worth doing as a future improvement.

Unexpected behavior when custom test harnesses don't parse arguments

Foremost, thank you for this project! It is saving me a lot of time in my core dev workflow.

As part of porting a monorepo to support this project, I encountered some friction around custom test harnesses that I thought worth reporting.

I'm using version 0.9.11.

https://nexte.st/book/custom-test-harnesses.html says that custom test harnesses MUST support being run with --list --format terse and a few other argument patterns. That's all fair. Price you pay for admission I suppose.

However, what tripped me up was how the current behavior of nextest list and nextest run can apparently silently ignore non-conformance in certain scenarios.

I have a few custom tests declared with [[test]] harness = false that are literally simple executables with fn main() that exit 0 on success or non-zero on failure. There is no test harness whatsoever and no argument parsing to be found, e.g. https://github.com/indygreg/PyOxidizer/blob/75d63db88d4e4823a94480b4dfa7750bf190ec74/pyoxy/tests/yaml.rs

Maybe I'm not abiding by best practices when it comes to using harness = false. However, cargo test runs these tests/executables just fine.

However, with nextest list and nextest run, these harness = false executables are effectively ignored, often silently.

I think the problem stems from nextest list not being able to discover tests since the test executable doesn't implement --list --format terse. Since output for this invocation is the actual test output and not the expected test list output, nextest assumes the test list is empty.

I suspect many people don't hit this because they are using actual custom test harnesses with command argument parsing and those get tripped up on unknown arguments. Since my binaries don't even look at argv, --list --format terse invocations just run the actual test!

Some questions / areas for potential improvement:

  • Should nextest complain when stdout from --list --format terse invocations doesn't conform to the my-test-1: test syntax? The docs clearly say Other output MUST NOT be written to stdout.
  • If nextest complains, should this be a warning or a fatal error? If configurable, should there be CLI flags to control the behavior?
  • Should there be some kind of special output from test harnesses to indicate "no tests" or "skip support"? My thinking here is that nextest might want to receive positive confirmation from a test harness that it is aware of the custom arguments nextest is passing in. If an executable can't demonstrate that it is nextest-aware, nextest might want to do something special with that knowledge, like tell the user their custom test harness doesn't support nextest. This would be actionable information for project developers.
  • What do you think about the idea of a simple crate for enabling test executables to implement handling of arguments like --list --format terse? This isn't the hardest thing to implement in my custom tests (especially when there is a single test in each binary), but if there were a crate where I could easily register named tests and that also handled argument parsing and test dispatch, I'd probably use it, since I don't want to have to think about the implementation details of the CLI argument protocol. (A rough sketch of what such handling could look like follows this list.)
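
Here's the kind of thing I mean: a rough, hypothetical sketch, not an existing crate or nextest's official protocol implementation. It's a harness = false binary that registers named tests and answers --list --format terse with the <name>: test lines nextest expects; the run-time arguments are simplified to "treat any non-flag argument as a name filter", which is an assumption on my part:

use std::process::ExitCode;

fn main() -> ExitCode {
    // Hypothetical named tests registered at compile time.
    let tests: &[(&str, fn() -> Result<(), String>)] = &[
        ("yaml_roundtrip", || Ok(())),
        ("yaml_rejects_bad_input", || Ok(())),
    ];

    let args: Vec<String> = std::env::args().skip(1).collect();

    if args.iter().any(|a| a == "--list") {
        // Terse listing: one `<test-name>: test` line per test, nothing else on stdout.
        for (name, _) in tests {
            println!("{name}: test");
        }
        return ExitCode::SUCCESS;
    }

    // Run every test whose name matches the (optional) non-flag filter argument.
    let filter = args.iter().find(|a| !a.starts_with("--"));
    let mut failed = false;
    for (name, test_fn) in tests {
        if filter.map_or(true, |f| name.contains(f.as_str())) {
            if let Err(msg) = test_fn() {
                eprintln!("test {name} failed: {msg}");
                failed = true;
            }
        }
    }
    if failed { ExitCode::FAILURE } else { ExitCode::SUCCESS }
}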

Thanks again for this terrific project. Even with the custom harness quirks, it is still light years ahead of my old workflow.

Add partition operator to filter expressions?

Consider adding a partition() operator to filter expressions, and possibly deprecating the current --partition flag.

This is in principle possible to do, since (assuming a specific state of source code) all partition assignments for tests are completely deterministic. However, the current partition scheme for count: uses mutable state, which is too complicated -- instead, we can give each test a numeric ID and simply use modulo N for the count partition operator.

Strictly speaking, this doesn't block the release of filter expressions. However, if we do it before releasing filter expressions, we can ban using --partition and -E simultaneously. If we implement this after the release, we'll have to scan the query to look for partition operators, raising the complexity of the implementation -- so there are some advantages to doing it now.
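
A sketch of the deterministic assignment described above (an assumption about the shape, not the actual implementation):

/// With tests given stable numeric indexes, membership in `count:shard/total_shards`
/// becomes a pure function of the index.
fn in_count_partition(test_index: usize, shard: usize, total_shards: usize) -> bool {
    assert!(shard >= 1 && shard <= total_shards);
    test_index % total_shards == shard - 1
}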

`list` (maybe `run`?) doesn't respect `CARGO_TARGET_{TRIPLE}_RUNNER`

We build and test Windows binaries from Linux, and use CARGO_TARGET_{TRIPLE}_RUNNER to specify (in our case) wine so that we can test the binaries. However, nextest doesn't seem to read this configuration and attempts to execute the test binaries directly when building the initial test list before running the tests.

I can add support for this tomorrow in a PR, just thought I would file this issue for visibility. Great work on this btw!

Incompatibility with rstest's once

Thank you very much for this project, it really makes the testing experience in Rust much more fun and productive.

I've come across what I suspect to be an incompatibility between nextest and another crate that helps with testing in Rust: rstest, specifically its once fixture attribute, whose purpose is to ensure that a fixture is executed only once per test run.

I use rstest's fixture in some tests, and wanted to write the data created by the fixture to a file, to be able to inspect it manually after the tests run. When using cargo nextest, the fixture (which is marked with #[once] and hence should only run once in the whole test run) fails because it tries to write to the file multiple times simultaneously. Executing the same test suite with plain cargo test does not suffer from this problem. So it looks like nextest's execution model defeats the rstest #[once] mechanism for ensuring single execution.
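
For reference, the pattern looks roughly like this (adapted from rstest's documented #[once] usage; exact signatures may vary by version). Under plain cargo test all cases share one process, so the fixture body runs once; a process-per-test runner re-initializes it in every test process, repeating the file-writing side effect:

use rstest::{fixture, rstest};

// `#[once]` memoizes the fixture in a process-wide static, so its body runs at
// most once per *process*. A process-per-test runner starts a fresh process for
// each test, so the body (and its side effects) runs again every time.
#[fixture]
#[once]
fn generated_data() -> String {
    // Imagine this also writes the generated data to a file for later inspection.
    String::from("expensive setup")
}

#[rstest]
fn first_test(generated_data: &String) {
    assert!(generated_data.contains("setup"));
}

#[rstest]
fn second_test(generated_data: &String) {
    assert!(!generated_data.is_empty());
}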

Is there any chance that these two excellent projects might be made usable together, in this particular instance?

I've opened a similar issue in rstest.

Skipping tests

Hi,

I am using cargo-nextest 0.9.14

I have a crate with the following test code

#[cfg(test)]
mod tests {
    #[test]
    fn simple() {    }

    #[test]
    fn extended() {    }

    mod extended
    {
        #[test]
        fn long_running() {  }
    }
}

In my CI I use two steps. The first one executes all tests which are not in the extended module and the second one
executes the 'extended' tests:

cargo test -- --skip extended::

   Compiling playground v0.1.0 (/tmp/playground/playground)
   Finished test [unoptimized + debuginfo] target(s) in 0.36s
   Running unittests (target/debug/deps/playground-7791abe006ea8c1f)

running 2 tests
test tests::extended ... ok
test tests::simple ... ok

test result: ok. 2 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s
cargo test --  extended::                                       

    Finished test [unoptimized + debuginfo] target(s) in 0.00s
    Running unittests (target/debug/deps/playground-7791abe006ea8c1f)

running 1 test
test tests::extended::long_running ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 2 filtered out; finished in 0.00s

Running the extended tests with nextest is easy.

cargo nextest run extended::  

But how can I run the not-extended tests?

I tried the following commands:

cargo nextest run -E not(extended::)
NEXTEST_EXPERIMENTAL_FILTER_EXPR=1 cargo nextest run -E 'not(extended::)' 

Default package filtering to exact while keeping default test filtering as contains

Having used filter expressions for a bit -- generally, most users expect package filtering to be based exactly on the package name, and test filtering to match on part of the test name. So default to these while allowing users to change them with the matchers that already exist.

This blocks the full release of filter expressions.

cc @Guiguiprim (sorry for ccing you to these, lmk if I should stop!)

feature suggestion: terminate slow tests after a given deadline and regard them as failed

Without this, terminating a frozen test suite relies on an external watcher that keeps track of time. Since the test harness and nextest itself already track time for each test, it would be great to have deadline support natively in the cargo stack.

For this potential feature it's actually an advantage that nextest runs each test in a separate process, because terminating processes works reliably, as opposed to terminating threads, which requires cooperation.
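
A minimal sketch of what that could look like (an illustration only, assuming each test runs in its own child process): poll the child, kill it once the deadline passes, and report the test as timed out.

use std::process::{Child, ExitStatus};
use std::time::{Duration, Instant};

enum TestOutcome {
    Finished(ExitStatus),
    TimedOut,
}

fn wait_with_deadline(child: &mut Child, deadline: Duration) -> std::io::Result<TestOutcome> {
    let start = Instant::now();
    loop {
        if let Some(status) = child.try_wait()? {
            return Ok(TestOutcome::Finished(status));
        }
        if start.elapsed() >= deadline {
            child.kill()?; // killing a process is reliable, unlike terminating a thread
            child.wait()?; // reap the child so it doesn't linger as a zombie
            return Ok(TestOutcome::TimedOut);
        }
        std::thread::sleep(Duration::from_millis(50));
    }
}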

Incorrect padded width of crate names when running in a workspace containing binaries

Running cargo nextest run in a workspace containing two libraries, lexer and parser, and a binary, soilc, produces an incorrectly padded width for crate names in the runner reporter:

~/d/soil > cargo nextest run
    Finished test [unoptimized] target(s) in 0.01s
  Executable unittests src/lib.rs (target/x86_64-unknown-linux-musl/debug/deps/lexer-f26b099b66f61ac0)
  Executable unittests src/lib.rs (target/x86_64-unknown-linux-musl/debug/deps/parser-10e48dd98bd493db)
  Executable unittests src/main.rs (target/x86_64-unknown-linux-musl/debug/deps/soilc-d15e0318b1d9c4c2)
    Starting 12 tests across 3 binaries
        PASS [   0.001s]            lexer tests::identifier_underscore
        PASS [   0.001s]            lexer tests::identifier_alphanumeric
        PASS [   0.001s]            lexer tests::identifier_mixed_case
        PASS [   0.001s]            lexer tests::identifier
        PASS [   0.001s]            lexer tests::identifier_single_character
        PASS [   0.040s]           parser tests::number
        PASS [   0.040s]            lexer tests::number
        PASS [   0.040s]            lexer tests::keyword_func
        PASS [   0.040s]            lexer tests::whitespace
        PASS [   0.040s]            lexer tests::invalid_identifier
        PASS [   0.040s]           parser tests::advancing_past_none
        PASS [   0.040s]            lexer tests::invalid_hex_number
     Summary [   0.040s] 12 tests run: 12 passed, 0 skipped

This happens because the width calculation also takes into consideration the binary soilc::bin/soilc, which contains no test cases.

The simplest fix is to filter out binaries whose testcases field is empty, which produces the expected output:

~/d/soil > cargo nextest run
    Finished test [unoptimized] target(s) in 0.01s
  Executable unittests src/lib.rs (target/x86_64-unknown-linux-musl/debug/deps/lexer-f26b099b66f61ac0)
  Executable unittests src/lib.rs (target/x86_64-unknown-linux-musl/debug/deps/parser-10e48dd98bd493db)
  Executable unittests src/main.rs (target/x86_64-unknown-linux-musl/debug/deps/soilc-d15e0318b1d9c4c2)
    Starting 12 tests across 3 binaries
        PASS [   0.001s]  lexer tests::identifier
        PASS [   0.047s]  lexer tests::identifier_mixed_case
        PASS [   0.047s] parser tests::number
        PASS [   0.047s]  lexer tests::number
        PASS [   0.047s]  lexer tests::invalid_identifier
        PASS [   0.047s]  lexer tests::identifier_single_character
        PASS [   0.047s] parser tests::advancing_past_none
        PASS [   0.047s]  lexer tests::keyword_func
        PASS [   0.047s]  lexer tests::identifier_underscore
        PASS [   0.047s]  lexer tests::whitespace
        PASS [   0.047s]  lexer tests::invalid_hex_number
        PASS [   0.047s]  lexer tests::identifier_alphanumeric
     Summary [   0.048s] 12 tests run: 12 passed, 0 skipped

Since it's a really small fix, here's a diff that fixes this issue:

diff --git a/nextest-runner/src/list/test_list.rs b/nextest-runner/src/list/test_list.rs
index 4afaabe..b3d8f86 100644
--- a/nextest-runner/src/list/test_list.rs
+++ b/nextest-runner/src/list/test_list.rs
@@ -341,6 +341,7 @@ impl<'g> TestList<'g> {
     pub fn iter(&self) -> impl Iterator<Item = (&Utf8Path, &RustTestSuite)> + '_ {
         self.rust_suites
             .iter()
+            .filter(|(_, info)| info.testcases.len() > 0)
             .map(|(path, info)| (path.as_path(), info))
     }

unable to parse benchmark

Any function marked as a benchmark with #[bench], using Rust's built-in bench support, will fail to be parsed.

   0: error building test list
   1: line 'byte::benches::bench_memcmp_decode_first_asc_large: benchmark' did not end with the string ': test'

If they are intentionally unsupported, then ignoring them may be a better alternative.

Collaborate with libtest-mimic?

Reading the expectations for custom test harnesses made me wonder whether there is room for some kind of collaboration with libtest-mimic, which provides a libtest-like interface for custom test harnesses.

Ideas

  • Create a conformance CLI test crate and update libtest-mimic to use it
  • Suggest libtest-mimic in the docs

Add support for running tests without building them

This might be a weird request, but:

To optimize CI resources, improve build reproducibility and other things, I have the following setup:

  • One CI step: cross compilation for Windows from a Linux CI runner (using a nix-docker image)
  • Another CI step: run the tests in a Windows VM

Currently I build the tests with cargo test --no-run and then execute them without cargo test in the VMs (with an ugly script).

If it was possible to:

  • cargo nextest --no-run in the CI runner
  • copy the target folder (or at least the relevant part) to the VM
  • cargo nextest --no-build in the VM

It would be really awesome.

As is, I don't know if it would be easily doable. If it is not too complex and is an acceptable feature, I might be open to doing it under mentoring.

Build fails on the latest 1.62.0-nightly (e85edd9a8 2022-04-28)

Steps to reproduce

  1. git clone https://github.com/nextest-rs/nextest
  2. cd nextest
  3. cargo +nightly build --release

Relevant log output

➜ cargo +nightly build --release
   Compiling nextest-filtering v0.1.0 (/Users/breathx/Work/nextest/nextest-filtering)
error[E0106]: missing lifetime specifier
   --> nextest-filtering/src/parsing.rs:160:78
    |
160 | fn silent_expect<'a, F, T>(mut parser: F) -> impl FnMut(Span<'a>) -> IResult<Option<T>>
    |                                                                              ^ expected named lifetime parameter
    |
    = help: this function's return type contains a borrowed value with an elided lifetime, but the lifetime cannot be derived from the arguments
help: consider using the `'a` lifetime
    |
160 | fn silent_expect<'a, F, T>(mut parser: F) -> impl FnMut(Span<'a>) -> IResult<'a, Option<T>>
    |                                                                              +++

error[E0106]: missing lifetime specifier
   --> nextest-filtering/src/parsing.rs:238:17
    |
238 | ) -> impl FnMut(Span) -> IResult<Option<NameMatcher>> {
    |                 ^^^^ expected named lifetime parameter
    |
    = help: this function's return type contains a borrowed value with an elided lifetime, but the lifetime cannot be derived from the arguments
help: consider using the `'static` lifetime
    |
238 | ) -> impl FnMut(Span<'static>) -> IResult<Option<NameMatcher>> {
    |                 ~~~~~~~~~~~~~

error[E0106]: missing lifetime specifier
   --> nextest-filtering/src/parsing.rs:238:34
    |
238 | ) -> impl FnMut(Span) -> IResult<Option<NameMatcher>> {
    |                                  ^ expected named lifetime parameter
    |
    = help: this function's return type contains a borrowed value with an elided lifetime, but the lifetime cannot be derived from the arguments
help: consider using the `'static` lifetime
    |
238 | ) -> impl FnMut(Span) -> IResult<'static, Option<NameMatcher>> {
    |                                  ++++++++

error[E0106]: missing lifetime specifier
   --> nextest-filtering/src/parsing.rs:307:17
    |
307 | ) -> impl FnMut(Span) -> IResult<Option<NameMatcher>> {
    |                 ^^^^ expected named lifetime parameter
    |
    = help: this function's return type contains a borrowed value with an elided lifetime, but the lifetime cannot be derived from the arguments
help: consider using the `'static` lifetime
    |
307 | ) -> impl FnMut(Span<'static>) -> IResult<Option<NameMatcher>> {
    |                 ~~~~~~~~~~~~~

error[E0106]: missing lifetime specifier
   --> nextest-filtering/src/parsing.rs:307:34
    |
307 | ) -> impl FnMut(Span) -> IResult<Option<NameMatcher>> {
    |                                  ^ expected named lifetime parameter
    |
    = help: this function's return type contains a borrowed value with an elided lifetime, but the lifetime cannot be derived from the arguments
help: consider using the `'static` lifetime
    |
307 | ) -> impl FnMut(Span) -> IResult<'static, Option<NameMatcher>> {
    |                                  ++++++++

For more information about this error, try `rustc --explain E0106`.
error: could not compile `nextest-filtering` due to 5 previous errors

cargo install build err

I tried installing via cargo install cargo-nextest, but multiple errors occurred:

[screenshot of the build errors omitted]

Here is my Rust toolchain version:

λ  cargo -V
cargo 1.58.0-nightly (294967c53 2021-11-29)

λ rustc -V
rustc 1.59.0-nightly (0b6f079e4 2021-12-07)

And my OS:

Windows 10 Enterprise 20H2

Support for running criterion benches as tests?

Hi all!

I'm a big fan of what this project is doing.

I noticed when trying to integrate this into https://github.com/vectordotdev/vector that it fails to run test binaries built from criterion benchmarks, which don't support the same --format flag that normal test binaries support:

cargo nextest run --workspace --no-fail-fast --no-default-features --features "default metrics-benches codecs-benches language-benches remap-benches statistic-benches dnstap-benches benches"
    Finished test [unoptimized + debuginfo] target(s) in 1.31s
error: Found argument '--format' which wasn't expected, or isn't valid in this context

USAGE:
    limit-2c27c6bee8522ca1 --list

For more information try --help
Error:
   0: error building test list
   1: running ''/Users/jesse.szwedko/workspace/vector/target/debug/deps/limit-2c27c6bee8522ca1 --list --format terse'' failed
   2: command ["/Users/jesse.szwedko/workspace/vector/target/debug/deps/limit-2c27c6bee8522ca1", "--list", "--format", "terse"] exited with code 1

Backtrace omitted.
Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
make: *** [test] Error 1

Running --help on the test binary:

Criterion Benchmark 

USAGE:
    limit-2c27c6bee8522ca1 [FLAGS] [OPTIONS] [FILTER]

FLAGS:
    -h, --help       Prints help information
        --list       List all benchmarks
    -n, --noplot     Disable plot and HTML generation.
    -v, --verbose    Print additional statistical information.

OPTIONS:
    -b, --baseline <baseline>                        Compare to a named baseline.
    -c, --color <color>
            Configure coloring of output. always = always colorize output, never = never colorize output, auto =
            colorize output if output is a tty and compiled for unix. [default: auto]  [possible values: auto, always,
            never]
        --confidence-level <confidence-level>        Changes the default confidence level for this run. [default: 0.95]
        --load-baseline <load-baseline>              Load a previous baseline instead of sampling new data.
        --measurement-time <measurement-time>        Changes the default measurement time for this run. [default: 5]
        --noise-threshold <noise-threshold>          Changes the default noise threshold for this run. [default: 0.01]
        --nresamples <nresamples>
            Changes the default number of resamples for this run. [default: 100000]

        --output-format <output-format>
            Change the CLI output format. By default, Criterion.rs will use its own format. If output format is set to
            'bencher', Criterion.rs will print output in a format that resembles the 'bencher' crate. [default:
            criterion]  [possible values: criterion, bencher]
        --plotting-backend <plotting-backend>
            Set the plotting backend. By default, Criterion.rs will use the gnuplot backend if gnuplot is available, or
            the plotters backend if it isn't. [possible values: gnuplot, plotters]
        --profile-time <profile-time>
            Iterate each benchmark for approximately the given number of seconds, doing no analysis and without storing
            the results. Useful for running the benchmarks in a profiler.
        --sample-size <sample-size>                  Changes the default size of the sample for this run. [default: 100]
    -s, --save-baseline <save-baseline>              Save results under a named baseline. [default: base]
        --significance-level <significance-level>
            Changes the default significance level for this run. [default: 0.05]

        --warm-up-time <warm-up-time>                Changes the default warm up time for this run. [default: 3]

ARGS:
    <FILTER>    Skip benchmarks whose names do not contain FILTER.


This executable is a Criterion.rs benchmark.
See https://github.com/bheisler/criterion.rs for more details.

To enable debug output, define the environment variable CRITERION_DEBUG.
Criterion.rs will output more debug information and will save the gnuplot
scripts alongside the generated plots.

To test that the benchmarks work, run `cargo test --benches`

NOTE: If you see an 'unrecognized option' error using any of the options above, see:
https://bheisler.github.io/criterion.rs/book/faq.html

I was just curious to get thoughts on handling this. Should I stick with normal cargo test --benches for that target for now and use nextest for the other targets?

Add a way to escape `/` in regexes

While testing out expression filtering, I noticed that there's currently no way to escape a / in a regex. Ideally we'd support this via a \/ sequence.

cc @Guiguiprim, what's the best way to handle this in your opinion?

Workspace support

How can I execute tests in the child projects within a workspace?

I have some projects that define a workspace like this:

ROOT
|_ Cargo.toml # workspace config and main bin config
|_ src
|___ main.rs # main file
|___ tests
|______ test.rs # test file
|_ crate1
|___ Cargo.toml # lib config
|___ src
|_____ lib.rs # lib file
|_____ tests
|_______ test.rs # test file
|_ crate2
|___ Cargo.toml # lib config
|___ src
|_____ lib.rs # lib file
|_____ tests
|_______ test.rs # test file

So I have tests in the root of the workspace and in each child crate. How can I consolidate the JUnit reports from the tests? The way I run them in CI is basically running cargo test in each folder, including the root.

Do I need to add a config for each crate?

Add support for rerunning failed tests at a later point

This needs to be done with a little care:

  • The initial, simpler use case requires keeping track of which Cargo arguments the build was run with -- this is relatively simple.
  • An extended use case to solve is "what if the user wants to grow the set of tests or binaries that are run?" This is going to require us keeping track of the exact set of binaries run and Cargo arguments passed. Probably worth discussing with some folks before doing so.

Overall this is a stateful operation, kind of like a source control bisect.

feature/Support for miri?

Hi

First, thank you for this awesome project!

Using nextest in medium-sized projects such as compact_str and bytes results in a significant speedup.

It would be great if we could also run Miri using nextest, as Miri is very slow at running tests and is single-threaded.

--recursive to test an entire workspace?

Hi, first of all awesome crate! :D
On just a dual core system it seems to be 10-15% faster than default cargo test. :)

While playing around with it, I wondered if it would be useful to have some kind of --recursive flag that not only runs tests in the current directory but also in all other crates of the current workspace.

Handle commas in filter expressions

We should leave commas open as an extension point for further arguments. This means rejecting commas in name matchers with unary set operators for now, and adding \, as an escape (this doesn't need to be done within a regex context).

This probably blocks the full release, so I'm filing this to track the issue.

cc @Guiguiprim if you're interested in solving this.

In .cargo/config `runner` can be an array of string

Recently, support for a custom runner was added, but it fails to parse some valid config files.

We currently assume that the runner parameter is a simple string, but it can also be an array of strings.

see target.runner: Type: string or array of strings ([program path and args])
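
One way this could be modeled, assuming the config is deserialized with serde (a sketch, not nextest's actual code):

use serde::Deserialize;

// Accept `runner = "wine"` as well as
// `runner = ["qemu-aarch64", "-L", "/usr/aarch64-linux-gnu"]`.
#[derive(Debug, Deserialize)]
#[serde(untagged)]
enum Runner {
    Command(String),
    CommandWithArgs(Vec<String>),
}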

Add support for doctests

Currently, nextest doesn't support Rust doctests. This is because doctests are not exposed in stable Rust the way regular test binaries are, and are instead treated as special by cargo test.

One nightly-only fix that might work is to collect doctest executables through -Z unstable-options --persist-doctests. However, this isn't a stable approach so it must be used with care.

Note: You can run cargo test --doc as a separate step from cargo nextest run. This will not incur a performance penalty: cargo nextest run && cargo test --doc will not cause any more builds than a plain cargo test that runs doctests.

Tests grouping

My next maybe weird feature request.

This is somehow related to #27 or at least could be a first step for it.

My initial use case

I have a bunch of tests (across multiple binaries) that need exclusive access to a system resource. Currently I run the tests with -j1, which is sub-optimal. I could run my tests in two passes: cargo nextest -j1 <some filtering> and cargo nextest <opposite filtering>, but that isn't ideal either and forces me to maintain two filters (hopefully without missing some tests).

Having a way of defining test groups with custom execution configuration would be great.

Proposed feature

Adding groups support!

It could allow:

  • only running predefined groups of tests
  • running different groups of tests differently
  • more?

Defining groups

I think it could go inside .config/nextest.toml. Maybe something like:

struct CustomProfileImpl {
  // ...
  groups: Vec<TestGroupConfig>,
}

struct TestGroupConfig {
  name: String,
  include: Vec<String>,          // filters
  exclude: Vec<String>,          // filters
  test_threads: Option<usize>,   // what I need
  processes: ExecutionProcesses, // for #27
  // maybe more options:
  // - execution config (does the group run concurrently with other groups or not, ...)
}

enum ExecutionProcesses {
  // one process per test (current behavior)
  OneByTest,
  // execute all tests from the same binary in the same process;
  // two binaries can be run at the same time
  OneByBinaryParallel,
  // execute all tests from the same binary in the same process;
  // two binaries can NOT be run at the same time
  OneByBinarySequential,
}

Constructing groups

Constructing groups could be as easy as:

  • Applying TestBuildFilter to get the tests pool
  • For every TestGroupConfig construct a group by taking out of the pool the matching tests
  • All the remaining tests are put in an unnamed group with the config given on the CLI

This would make the order of group definitions significant while also avoiding any duplication.

Running tests

Here things can be as complex as we want.

  • In the first version I would go with running groups one after the other.
  • In a following version we could add more complex scheduling schemes
    • If we have 3 groups with -j1 maybe they are allowed to run concurrently
    • ...

New CLI options

With all of this, we could probably add some options (for both list and run): --group <NAME> and --unnamed-group/--not-in-group (or any better name) to only run tests within certain groups.

Group related tests

First off, I really like this so far.
Would it be possible to group related tests? Right now tests are output in the order they run (I assume), which means that they can get printed out of order in relation to their mod. It would be helpful to me to have all tests in a mod grouped together after they have finished running, to make comparing times and looking at failures easier.
current behavior

 Starting 3 tests across 1 binaries
        PASS [   0.026s]                foo bar::whatever
        PASS [   0.036s]                foo baz::what
        PASS [   0.039s]                foo bar::something_else
     Summary [   0.300s] 3 tests run: 3 passed, 0 skipped

desired behavior

 Starting 3 tests across 1 binaries
        PASS [   0.026s]                foo bar::whatever
        PASS [   0.039s]                foo bar::something_else
        PASS [   0.036s]                foo baz::what
     Summary [   0.300s] 3 tests run: 3 passed, 0 skipped

Windows and macOS performance issues

Antivirus and Gatekeeper checks can cause performance issues with nextest on Windows and macOS, respectively.

I've added a note about them to the nextest site, but I'm going to leave this issue open as a catch-all in case people still have performance issues afterwards (I've definitely seen some reports on macOS even after the terminal was added to Developer Tools).
