common-workflow-language / cwltest

Framework for testing CWL tools and workflows

License: Apache License 2.0

Languages: Python 94.39%, Makefile 3.26%, Shell 1.77%, Common Workflow Language 0.58%
Topics: common-workflow-language, cwl

cwltest's Introduction

Common Workflow Language

Main website: https://www.commonwl.org

GitHub repository for www.commonwl.org: https://www.github.com/common-workflow-language/cwl-website

CWL v1.0.x: https://github.com/common-workflow-language/common-workflow-language (this repository)

CWL v1.1.x: https://github.com/common-workflow-language/cwl-v1.1/

CWL v1.2.x: https://github.com/common-workflow-language/cwl-v1.2/


[Video] Common Workflow Language explained in 64 seconds

The Common Workflow Language (CWL) is a specification for describing analysis workflows and tools in a way that makes them portable and scalable across a variety of software and hardware environments, from workstations to cluster, cloud, and high performance computing (HPC) environments. CWL is designed to meet the needs of data-intensive science, such as Bioinformatics, Medical Imaging, Astronomy, Physics, and Chemistry.

CWL is developed by a multi-vendor working group consisting of organizations and individuals aiming to enable scientists to share data analysis workflows. The CWL project is maintained on GitHub and we follow the Open-Stand.org principles for collaborative open standards development. Legally, CWL is a member project of Software Freedom Conservancy and is formally managed by the elected CWL leadership team; however, everyday project decisions are made by the CWL community, which is open for participation by anyone.

CWL builds on technologies such as JSON-LD for data modeling and Docker for portable runtime environments.

User Guide

The CWL user guide provides a gentle introduction to learning how to write CWL command line tool and workflow descriptions.

CWLの日本語での解説ドキュメント is a 15-minute introduction to the CWL project in Japanese.

CWL Recommended Practices

A series of video lessons about CWL is available in Russian as part of the Управление вычислениями (Computation Management) free online course.

Citation

To reference the CWL project in a scholarly work, please use the following citation:

Michael R. Crusoe, Sanne Abeln, Alexandru Iosup, Peter Amstutz, John Chilton, Nebojša Tijanić, Hervé Ménager, Stian Soiland-Reyes, Bogdan Gavrilović, Carole Goble, and The CWL Community. (2022): Methods Included: Standardizing Computational Reuse and Portability with the Common Workflow Language. Commun. ACM 65, 6 (June 2022), 54–63. https://doi.org/10.1145/3486897

To cite version 1.0 of the CWL standards specifically, please use the following citation inclusive of the DOI.

Peter Amstutz, Michael R. Crusoe, Nebojša Tijanić (editors), Brad Chapman, John Chilton, Michael Heuer, Andrey Kartashov, Dan Leehr, Hervé Ménager, Maya Nedeljkovich, Matt Scales, Stian Soiland-Reyes, Luka Stojanovic (2016): Common Workflow Language, v1.0. Specification, Common Workflow Language working group. https://w3id.org/cwl/v1.0/ doi:10.6084/m9.figshare.3115156.v2

A collection of existing references to CWL can be found at https://zotero.org/groups/cwl

Code of Conduct

The CWL Project is dedicated to providing a harassment-free experience for everyone, regardless of gender, gender identity and expression, sexual orientation, disability, physical appearance, body size, age, race, or religion. We do not tolerate harassment of participants in any form. This code of conduct applies to all CWL Project spaces, including the Google Group, the Gitter chat room, the Google Hangouts chats, both online and off. Anyone who violates this code of conduct may be sanctioned or expelled from these spaces at the discretion of the leadership team.

For more details, see our Code of Conduct.

For the following content:

  • Support, Community and Contributing
  • CWL Implementations
  • Repositories of CWL Tools and Workflows
  • Software for working with CWL
    • Editors and viewers
    • Utilities
    • Converters and code generators
    • Code libraries
  • Projects the CWL community is participating in
  • Participating Organizations
  • Individual Contributors
  • CWL Advisors
  • CWL Leadership team

Please see https://www.commonwl.org

cwltest's People

Contributors

adamnovak, bosonogi, dependabot-preview[bot], dependabot[bot], glassofwhiskey, halilozercan, joelarmstrong, kapilkd13, kinow, manabuishii, manu-chroma, mr-c, mvdbeek, nsoranzo, requires, rupertnash, tetron, tom-tan


cwltest's Issues

No feedback when --test file.yml fails to parse?

cwltest [--verbose] --test foo.yml where foo.yml doesn't exist (or doesn't conform to the required schema) fails quietly, with no feedback whatsoever, skipping any valid test items.

  • Expected: "Failed to find / parse foo.yml" with a non-zero exit code.
  • Nice to have: a descriptive message, e.g. failed to parse test with id: ... in foo.yml because [...]
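
A minimal sketch of the expected behavior, assuming ruamel.yaml for parsing (load_tests is a hypothetical helper, not cwltest's actual code path); sys.exit with a string prints the message to stderr and exits with status 1:

import sys

from ruamel.yaml import YAML
from ruamel.yaml.error import YAMLError


def load_tests(path):
    """Load the test descriptions, failing loudly instead of silently."""
    yaml = YAML(typ="safe")
    try:
        with open(path) as f:
            return yaml.load(f)
    except FileNotFoundError:
        sys.exit("Failed to find {}".format(path))
    except YAMLError as err:
        sys.exit("Failed to parse {}: {}".format(path, err))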

Examples and/or brief instructions?

Hi,

To use this externally to test CWL, an example or two and/or brief instructions would help. It looks to me as though I'd have to read the code to find out what to do, unless I missed something?

Sarah
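
For readers landing here, a minimal test file and invocation look roughly like this (the .cwl and job file names are placeholders; the fields match the schema used throughout the issues below):

- job: echo-job.yml
  tool: echo.cwl
  output:
    out:
      class: File
      location: hello.txt
  id: echo_test
  doc: Minimal example test

$ cwltest --test test.yml --tool cwl-runner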

JUnit result does not provide the results for individual testcases

cwltest supports outputting results in JUnit XML format.
However, it only provides summarized results and lacks results for individual test cases.

Here is an entry for the result of one test case.

<testcase class="required, command_line_tool" name="General test of command line generation" time="3.668437">
	<system-out>{
    &quot;args&quot;: [
        &quot;bwa&quot;,
        &quot;mem&quot;,
        &quot;-t&quot;,
        &quot;2&quot;,
        &quot;-I&quot;,
        &quot;1,2,3,4&quot;,
        &quot;-m&quot;,
        &quot;3&quot;,
        &quot;chr20.fa&quot;,
        &quot;example_human_Illumina.pe_1.fastq&quot;,
        &quot;example_human_Illumina.pe_2.fastq&quot;
    ]
}
	</system-out>
</testcase>

It would be nice if it provided another attribute that takes success, failure, or error.
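
For example, following common JUnit conventions, a failing case could carry a nested failure (or error) element alongside system-out, roughly:

<testcase class="required, command_line_tool" name="General test of command line generation" time="3.668437">
	<failure message="Compare failure">expected vs. actual diff here</failure>
	<system-out>...</system-out>
</testcase>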

Move label to id

  • In the tests YAML, replace id with the contents of label and remove label
  • Deprecate label
    • If both label and id are present, then label will be used for messages.
  • Explain all this in https://github.com/common-workflow-language/cwltest/blob/main/cwltest/cwltest-schema.yml
  • Make sure the test id is printed on each line along with the N/M
  • Adjust documentation of command line option -s to indicate it takes a list of ids
  • Add new command line option that takes a list of test ids to exclude: #111
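
Under this proposal, a test entry would carry only an id (illustrative values):

- job: echo-job.yml
  tool: echo.cwl
  output: {}
  id: general_echo_test    # formerly label: general_echo_test
  doc: General test of command line generation

and a subset could then be selected with something like cwltest --test test.yml -s general_echo_test.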

Ability to "mock" the base command

Let's say I have a CWL tool that interacts with an FTP server that requires a username and password. It would be nice to be able to mock the base command + expected output, so I don't need to pass credentials to test the tool, but can merely test that the right parameters are passed and that the expected output binds correctly.
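
One way to approximate this today, borrowing the dummy-runner trick from a later issue on this page: point --tool at a stub script that prints a canned output object, so neither credentials nor the network are involved (paths and JSON are placeholders):

#!/bin/bash
# mock-runner: ignore the actual tool and emit the expected output object
cat << EOS
{
  "downloaded": {
    "class": "File",
    "location": "result.bin"
  }
}
EOS

$ cwltest --tool $PWD/mock-runner --test test.yml

This only stubs the output side; asserting on the generated command line would need support in cwltest itself.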

Add option to disable stderr capture

What do you think about adding a flag --no-capture-stderr that disables capturing the stderr of the cwl-runner? This would be useful while debugging a cwl-runner (such as rabix/bunny).

I believe it would be as simple as not passing stderr=subprocess.PIPE here:
https://github.com/common-workflow-language/cwltest/blob/master/cwltest/__init__.py#L183

Not sure if not capturing stdout is valid/useful. For my case (bunny) I only need stderr. Possibly --no-capture for both stdout/err is better.
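
A sketch of that change, assuming a hypothetical argparse flag args.no_capture_stderr around the Popen call linked above:

import subprocess

stderr_target = None if args.no_capture_stderr else subprocess.PIPE
process = subprocess.Popen(
    test_command,
    stdout=subprocess.PIPE,  # stdout must stay captured: the output object is read from it
    stderr=stderr_target,    # None inherits the parent's stderr, so runner logs reach the terminal
)
outstr, outerr = process.communicate()  # outerr is None when stderr is not captured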

Losing JUnit output for timed-out tests

I tried using --junit-xml, and while it does keep the tests' outputs separate from each other, one of my tests that timed out got no output stored in the XML; there are no system-out or system-err elements like there are for other tests.

This is because we're using communicate() with a timeout, but not doing the special dance from the docs to actually get the output from the process if the timeout expires.

Originally posted by @adamnovak in #120 (comment)
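
The dance in question, from the subprocess documentation: on TimeoutExpired, kill the process and call communicate() again to collect whatever it wrote before dying:

import subprocess

try:
    outstr, outerr = process.communicate(timeout=timeout)
except subprocess.TimeoutExpired:
    process.kill()
    # A second communicate() drains the pipes, so the partial
    # stdout/stderr can still go into the JUnit XML.
    outstr, outerr = process.communicate()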

Print test number if test failed

If run with the -j option, a failing test's stdout can appear out of sync with the test number printing:

$ ./run_test.sh RUNNER=cwltool -j$(nproc)
--- Running conformance test v1.0 on /path/to/common-workflow-language/env/bin/cwltool ---
/path/to/common-workflow-language/env/bin/cwltool 1.0.20180224115000
Test [1/123]
Test [2/123]
...
Test [20/123]
Test [21/123]
Test [22/123]
Test <test-number-to-insert> failed: /path/to/common-workflow-language/env/bin/cwltool --outdir=/tmp/tmputmgA5 --quiet v1.0/any-type-compat.cwl v1.0/any-type-job.json
Testing Any type compatibility in outputSource
Returned non-zero
Tool definition failed validation:
...

support simple test names

These test names will be seen/used in three ways:

  • Printed when a test is running and when it fails
  • Available as a test identifier for running a specific test (instead of -n 42)
  • Presented in the JUnit XML

Add more unit tests

see also #24

Tests needed:

  • should_fail field
  • comparing directory

This list will be updated.
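
A sketch of what the should_fail case might look like at the command line level (pytest style; the fixture file is hypothetical, and the success message follows the output shown in other issues on this page):

import subprocess


def test_should_fail_entry():
    # tests/should-fail.yml: a single entry with should_fail: true
    # whose tool is expected to exit non-zero.
    result = subprocess.run(
        ["cwltest", "--test", "tests/should-fail.yml", "--tool", "cwl-runner"],
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0
    assert "All tests passed" in result.stdout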

After N timeouts, abort tests

If we're running the CWL conformance tests with a 15-minute timeout per test, and there are 133 tests, this can add up to about 33 hours of waiting when running against an unhealthy cluster.

Is there a way to parametrize "after N timeouts/failures, abort the remaining tests and mark them as failed"?

Thanks,
Nico
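
A minimal sketch of such a guard inside the scheduling loop, with a hypothetical --max-timeouts option and paraphrased helper names:

import subprocess

timeouts = 0
for test in tests:
    if timeouts >= args.max_timeouts:
        mark_failed(test, reason="aborted after too many timeouts")  # hypothetical helper
        continue
    try:
        run_single_test(test, timeout=args.timeout)  # hypothetical helper
    except subprocess.TimeoutExpired:
        timeouts += 1
        mark_failed(test, reason="timed out")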

Manage log streams when running multiple tests in parallel

I'm trying to debug Toil running the CWL conformance tests; a few random tests seem to hit their timeouts.

I'm running cwltest with -j8 or so to run multiple tests in parallel, and I've set Toil to log debug logs even with the --quiet that cwltest passes. But my terminal ends up full of logs from all the simultaneously running and passing tests, and I'm never going to be able to tease out the logs specifically from the tests that time out (different tests every time) to see what is going wrong in those cases.

I need cwltest to be able to capture the output streams from the runner processes for each test, and to print out the streams for the tests that fail or time out after the failure or timeout happens, without interleaving with the streams from other simultaneous successful tests.
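
One way to get there, sketched: give each runner its own pipes, keep stdout (the output object) and stderr (the logs) separate, and only emit the captured stderr for tests that fail or time out:

import subprocess


def run_buffered(command, timeout):
    """Run one test's runner with per-test buffering of its streams."""
    try:
        proc = subprocess.run(
            command,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            timeout=timeout,
            text=True,
        )
        return proc.returncode, proc.stdout, proc.stderr
    except subprocess.TimeoutExpired as err:
        return None, err.stdout or "", err.stderr or ""

# In the reporting step, print the captured stderr only for tests whose
# return code is nonzero or None, so passing tests stay silent.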

cwltest validates the field values in `File` and `Directory` objects but does not validate files and directories on disk

It would be nice if cwltest could validate that the File and Directory objects correspond to actual files and directories.
The current behavior does not catch some issues such as DataBiosphere/toil#4210.

How to reproduce

test.yml:

- job: issue-job.yml
  tool: issue.cwl
  output:
    file:
      class: File
      location: foo.txt
  id: 0

issue.cwl:

#!/usr/bin/env cwl-runner
class: CommandLineTool
cwlVersion: v1.0
baseCommand: touch
inputs:
  - id: file_name
    type: string
    inputBinding: {}
outputs:
  - id: file
    type: File
    outputBinding:
      glob: "$(inputs.file_name)"

issue-job.yml:

file_name: foo.txt

dummy-runner:

#!/bin/bash

cat << EOS
{
  "file": {
    "basename": "bar.txt",
    "class": "File",
    "location": "foo.txt"
  }
}
EOS

and execute the following command:

$ ls
dummy-runner  issue.cwl  issue-job.yml  test.yml
$ cwltest --tool $PWD/dummy-runner --test test.yml

Expected behavior

The test case should fail because the output object has the following problems:

  • foo.txt does not exist
  • The basename of the location field is not consistent with the value of the basename field

Actual behavior

The above command incorrectly succeeds:

$ cwltest --tool $PWD/dummy-runner --test test.yml 
Test [1/1] 
All tests passed
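
A sketch of the missing check (names are illustrative; it assumes locations have already been resolved relative to the output directory and ignores URI schemes):

import os


def check_on_disk(obj, outdir):
    """Recursively verify File/Directory objects against the filesystem."""
    if not isinstance(obj, dict):
        return
    cls = obj.get("class")
    if cls in ("File", "Directory"):
        location = obj.get("location", "")
        path = os.path.join(outdir, location)
        if not os.path.exists(path):
            raise AssertionError("{} does not exist".format(path))
        basename = obj.get("basename")
        if basename and basename != os.path.basename(location):
            raise AssertionError(
                "basename {!r} does not match location {!r}".format(basename, location)
            )
    children = obj.get("listing", []) if cls == "Directory" else obj.values()
    for child in children:
        check_on_disk(child, outdir)

Applied to the repro above, this would catch both the missing foo.txt and the basename "bar.txt" disagreeing with location "foo.txt".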

An invalid output object from the engine causes an exception internally

I reproduced it with cwltest commit: f43b98d in Debian 12 (bookworm).
I will send a pull request for it.

How to reproduce

  • dummy-executor.sh
#!/bin/sh

echo "it is not JSON format!"
  • test.yml
- job: empty.yml
  tool: true.cwl
  output: {}
  id: do_nothing
  doc: Example of doing nothing
- job: empty.yml
  tool: true.cwl
  output: {}
  id: do_nothing2
  doc: Example of doing nothing more
  • true.cwl
class: CommandLineTool
cwlVersion: v1.0
inputs: {}
outputs: {}
baseCommand: ["true"]
  • empty.yml
{}

And execute the following command:

$ cwltest --tool $PWD/dummy-executor.sh --test $PWD/test.yml --junit-xml=result.xml

Expected behavior

Obviously, dummy-executor.sh prints an invalid output object.
Thus we expect that cwltest runs the tests, shows that all the tests failed, and generates result.xml as shown below, for example.

$ cwltest --tool $PWD/dummy-executor.sh --test $PWD/test.yml --junit-xml=result.xml
Test [1/2] do_nothing: Example of doing nothing
Test 1 failed: /workspaces/cwltest/dummy-executor.sh --outdir=/tmp/tmpjd0k_i1i --quiet true.cwl empty.yml
...
Test [2/2] do_nothing2: Example of doing nothing more
Test 2 failed: /workspaces/cwltest/dummy-executor.sh --outdir=/tmp/tmpo3m3mw2t --quiet true.cwl empty.yml
...
0 tests passed, 2 failures, 0 unsupported features
$ ls result.xml 
result.xml

Actual behavior

It internally throws an exception in the first test, does not run the second test, and does not generate result.xml.

$ cwltest --tool $PWD/dummy-executor.sh --test $PWD/test.yml --junit-xml=result.xml
Test [1/2] do_nothing: Example of doing nothing
Test [2/2] do_nothing2: Example of doing nothing more
Traceback (most recent call last):
  File "/home/vscode/.local/bin/cwltest", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/vscode/.local/lib/python3.12/site-packages/cwltest/main.py", line 223, in main
    ) = utils.parse_results(
        ^^^^^^^^^^^^^^^^^^^^
  File "/home/vscode/.local/lib/python3.12/site-packages/cwltest/utils.py", line 233, in parse_results
    for i, test_result in enumerate(results):
  File "/home/vscode/.local/lib/python3.12/site-packages/cwltest/main.py", line 224, in <genexpr>
    (job.result() for job in jobs), tests, suite_name, report
     ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vscode/.local/lib/python3.12/site-packages/cwltest/main.py", line 69, in _run_test
    return utils.run_test_plain(config, test, test_number)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vscode/.local/lib/python3.12/site-packages/cwltest/utils.py", line 382, in run_test_plain
    out = json.loads(outstr) if outstr else {}
          ^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
$ ls result.xml
ls: cannot access 'result.xml': No such file or directory
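
A sketch of the fix at the call site in the traceback (inside run_test_plain): convert undecodable runner output into an ordinary test failure, so the remaining tests still run and result.xml still gets written. The TestResult arguments here are paraphrased, not cwltest's exact signature:

import json

try:
    out = json.loads(outstr) if outstr else {}
except json.JSONDecodeError as err:
    # Report a normal failure instead of letting the exception
    # propagate out of the worker thread.
    return TestResult(
        return_code=1,
        standard_output=outstr,
        error_output="Output object is not valid JSON: {}".format(err),
    )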

quotes need escaping

From https://ci.commonwl.org/job/cwltool-conformance/lastCompletedBuild/testReport/result/xml/_failed_to_read_/

org.dom4j.DocumentException: Error on line 65 of document file:///var/lib/jenkins/jobs/cwltool-conformance/workspace/common-workflow-language/v1.0/result.xml : Element type "testcase" must be followed by either attribute specifications, ">" or "/>". Nested exception: Element type "testcase" must be followed by either attribute specifications, ">" or "/>".

https://ci.commonwl.org/job/cwltool-conformance/ws/common-workflow-language/v1.0/result.xml/*view*/
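
The usual fix is to escape attribute values and element text when the XML is generated, e.g. with the standard library:

from xml.sax.saxutils import escape, quoteattr

name = 'Test with "quotes" in its name'
body = 'output with <angle brackets>, ampersands & "quotes"'
# quoteattr adds the surrounding quotes and escapes the contents;
# escape handles <, > and & in text nodes.
print("<testcase name={}><system-out>{}</system-out></testcase>".format(
    quoteattr(name), escape(body)))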

add category command line option

To record another axis, like python35 or similar.

It should propagate down to TestResult.__init__ for use in TestResult.create_test_case, which would pass it to junit_xml.TestCase as the category attribute.
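
A sketch of the plumbing (option name and attribute names are paraphrased; the category keyword on junit_xml.TestCase is the one the issue refers to):

import junit_xml

# Hypothetical wiring: a new --junit-category option stores its value on
# TestResult as self.category; TestResult.create_test_case forwards it.
def create_test_case(self):  # method on TestResult (sketch)
    return junit_xml.TestCase(
        name=self.test_name,
        elapsed_sec=self.duration,
        category=self.category,  # e.g. "python35"
    )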

Feature request: handling known failure testing

Sometimes we need to test that the workflow engine fails, as expected, with certain inputs.

For example, section 11 of the user guide shows an example CWL file and a job file whose execution will fail.
Another example is the conformance tests for CWL workflow engines: the specification says that execution should fail in some cases.

It would be nice if cwltest could handle this kind of test.

Related: Issue #37 in user_guide
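
This is what the should_fail field (mentioned in the unit test wish list above) covers; a test that expects the engine to fail would look roughly like this (file names are placeholders):

- job: v1.0/invalid-job.yml
  tool: v1.0/invalid-tool.cwl
  should_fail: true
  id: expect_failure
  doc: The runner is expected to exit non-zero on this input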

Add ability to run cwltest without testing the output

Hi all, is there a way to run cwltest without actually testing the output? For instance, if I had a tool that retrieved an access token that changed every single time, I couldn't actually compare the outputs.

I get errors that the key is missing if I leave it out. I also tried the wildcard.

- job: get-token.yaml
  output:
    "json_out": {
      "class": "File",
      "location": "output.json",
    }
    "accesskey_id": "*"
  tool: ../cwl/get-token.cwl
  label: get_token
  id: 0
  doc: Get token

The file validation did work, though, by leaving out the size and checksum. I can look into this feature.
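
For what it's worth, cwltest's comparison treats the bare string Any as a wildcard (the conformance tests use it for volatile fields), so an entry along these lines should skip the changing values (hedged: exact semantics may vary by version):

- job: get-token.yaml
  tool: ../cwl/get-token.cwl
  id: get_token
  doc: Get token
  output:
    json_out:
      class: File
      location: output.json
      checksum: Any    # skip comparing the volatile checksum
      size: Any        # likewise for size
    accesskey_id: Any  # Any (not "*") is the wildcard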

[Regression?] `--only-tools` does not work with Python 3.6.5

We expect that cwltest works with Python 3.6.5 because README.rst says that it is tested for Python 3.6.

However, cwltest 1.0.20180601100346 (the latest on PyPI) with --only-tools does not work with Python 3.6.5.
I saw the following message:

$ python --version
Python 3.6.5
$ cwltest --version
/Users/tom-tan/.pyenv/versions/3.6.5/bin/cwltest 1.0.20180601100346
$ cd common-workflow-language/v1.0
$ cwltest --test conformance_test_v1.0.yaml --only-tools
Traceback (most recent call last):
  File "/Users/tom-tan/.pyenv/versions/3.6.5/bin/cwltest", line 11, in <module>
    sys.exit(main())
  File "/Users/tom-tan/.pyenv/versions/3.6.5/lib/python3.6/site-packages/cwltest/__init__.py", line 261, in main
    raise Exception("Unexpected code path.")
Exception: Unexpected code path.

Also, I checked all released versions of cwltest from 1.0.20170715115658 to the latest, but none of them work with --only-tools.

In cwltest 1.0.20170715115658:

$ cwltest --test conformance_test_v1.0.yaml --only-tools
Traceback (most recent call last):
  File "/Users/tom-tan/.pyenv/versions/3.6.5/bin/cwltest", line 11, in <module>
    sys.exit(main())
  File "/Users/tom-tan/.pyenv/versions/3.6.5/lib/python3.6/site-packages/cwltest/__init__.py", line 269, in main
    args.testargs = [testarg for testarg in args.testargs if testarg.count('==') == 1]
TypeError: 'NoneType' object is not iterable

In the later versions: the same output as the latest version.

Does the --only-tools option really work with Python 3.6.5, or am I missing something?
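
For the 1.0.20170715115658 traceback at least, the crash is a plain None-handling bug; a sketch of the fix at that line:

# args.testargs defaults to None when no test arguments are given,
# so guard before iterating:
args.testargs = [
    testarg for testarg in (args.testargs or [])
    if testarg.count("==") == 1
]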

logging of debug messages not possible since --quiet hardcoded in cwltest

Hello,

This is a suggestion to help with debugging when running conformance tests. cwltest appends "--quiet" to the runner's command line:

"--quiet",

Also, removing "--quiet" in cwltest makes all test cases fail, probably because the output of the executor is read from stdout.
When running conformance tests, enabling debug messages and logging them is therefore no longer an option from the command line.

One suggestion is to provide an option for cwltool to write the output of the executor to a file instead of stdout. This would enable cwltest to compare output from a specified file instead of having to disable debug messages to get the output string from stdout.
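
A sketch of that idea on the cwltest side: point the runner's stdout at a per-test file and read the output object back, leaving stderr (and the terminal) free for debug logging; out_path and test_command are placeholders:

import json
import subprocess

with open(out_path, "w") as out_file:
    subprocess.run(test_command, stdout=out_file, check=False)
with open(out_path) as out_file:
    out = json.load(out_file)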

test failed, not sure why

Hi,
I am having problems passing test 107 with my implementation and do not understand what is wrong. The File object for file a should be what is expected, yet cwltest complains. Any ideas?

Test [107/197] Test if a writable input directory is recursively copied and writable
Test 107 failed: /go/bin/awe-cwl-submitter-wrapper.sh --outdir=/tmp/tmp2k39bljr --quiet v1.0/recursive-input-directory.cwl v1.0/recursive-input-directory.yml
Test if a writable input directory is recursively copied and writable
Compare failure expected: {
    "output_dir": {
        "basename": "work_dir",
        "class": "Directory",
        "listing": [
            {
                "basename": "a",
                "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                "class": "File",
                "location": "work_dir/a",
                "size": 0
            },
            {
                "basename": "b",
                "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                "class": "File",
                "location": "work_dir/b",
                "size": 0
            },
            {
                "basename": "c",
                "class": "Directory",
                "listing": [
                    {
                        "basename": "d",
                        "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                        "class": "File",
                        "location": "work_dir/c/d",
                        "size": 0
                    }
                ],
                "location": "work_dir/c"
            },
            {
                "basename": "e",
                "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                "class": "File",
                "location": "work_dir/e",
                "size": 0
            }
        ],
        "location": "work_dir"
    },
    "test_result": {
        "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
        "class": "File",
        "location": "output.txt",
        "size": 0
    }
}
got: {
    "output_dir": {
        "basename": "work_dir",
        "class": "Directory",
        "listing": [
            {
                "basename": "a",
                "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                "class": "File",
                "location": "work_dir/a",
                "size": 0
            },
            {
                "basename": "c",
                "class": "Directory",
                "listing": [
                    {
                        "basename": "d",
                        "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                        "class": "File",
                        "location": "work_dir/c/d",
                        "size": 0
                    }
                ],
                "location": "c"
            },
            {
                "basename": "e",
                "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                "class": "File",
                "location": "work_dir/e",
                "size": 0
            },
            {
                "basename": "b",
                "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                "class": "File",
                "location": "work_dir/b",
                "size": 0
            }
        ],
        "location": "work_dir"
    },
    "test_result": {
        "basename": "output.txt",
        "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
        "class": "File",
        "location": "input_807229205/output.txt",
        "nameext": ".txt",
        "size": 0
    }
}
caused by: failed comparison for key 'output_dir': expected: {
    "basename": "work_dir",
    "class": "Directory",
    "listing": [
        {
            "basename": "a",
            "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
            "class": "File",
            "location": "work_dir/a",
            "size": 0
        },
        {
            "basename": "b",
            "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
            "class": "File",
            "location": "work_dir/b",
            "size": 0
        },
        {
            "basename": "c",
            "class": "Directory",
            "listing": [
                {
                    "basename": "d",
                    "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                    "class": "File",
                    "location": "work_dir/c/d",
                    "size": 0
                }
            ],
            "location": "work_dir/c"
        },
        {
            "basename": "e",
            "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
            "class": "File",
            "location": "work_dir/e",
            "size": 0
        }
    ],
    "location": "work_dir"
}
got: {
    "basename": "work_dir",
    "class": "Directory",
    "listing": [
        {
            "basename": "a",
            "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
            "class": "File",
            "location": "work_dir/a",
            "size": 0
        },
        {
            "basename": "c",
            "class": "Directory",
            "listing": [
                {
                    "basename": "d",
                    "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
                    "class": "File",
                    "location": "work_dir/c/d",
                    "size": 0
                }
            ],
            "location": "c"
        },
        {
            "basename": "e",
            "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
            "class": "File",
            "location": "work_dir/e",
            "size": 0
        },
        {
            "basename": "b",
            "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
            "class": "File",
            "location": "work_dir/b",
            "size": 0
        }
    ],
    "location": "work_dir"
}
caused by: {
    "basename": "a",
    "checksum": "sha1$da39a3ee5e6b4b0d3255bfef95601890afd80709",
    "class": "File",
    "location": "work_dir/a",
    "size": 0
} not found
0 tests passed, 1 failures, 0 unsupported features

1 tool tests failed

thx
