
bowtie-json-schema / bowtie


JSON Schema in every programming language

Home Page: https://bowtie.report/

License: Other

Python 54.99% Dockerfile 1.69% JavaScript 3.24% C# 4.69% Rust 1.66% Lua 1.92% Go 1.50% Ruby 0.62% Clojure 0.41% C++ 1.52% TypeScript 15.63% Java 5.38% HTML 0.05% CSS 0.18% Kotlin 3.61% Just 0.09% PHP 0.56% Scala 2.25%
jsonschema json schema specification validation

bowtie's People

Contributors

adwait-godbole, agniveshchaubey, akshaybagai52, aku1310, ashmit-coder, dante381, davishmcclurg, dependabot[bot], github-actions[bot], gregsdennis, harrel56, hauner, jdesrosiers, jeelrajodiya, julian, jviotti, mwadams, nomandhoni-cs, optimumcode, pre-commit-ci[bot], sajal-j25, sanskar-soni-9, santhosh-tekuri, sd1p, siddharth-singh-2004, simondmc, skles, sudo-jarvis, vishrutaggarwal, xdreamist


bowtie's Issues

Run tests on reports, rather than ad hoc output

Probably depends on #5 but likely should include one or two that run against the HTML report.

Right now the integration tests just munge together some ad hoc assertions against the raw output. They should probably instead assert against the summary report.

Add `bowtie suite` for specifically reading the test suite format

We'll evolve the test suite format, and it's already close to bowtie's input format, but let's just make it trivial (and not require jq) to run it.

I.e. do the equivalent of:

jq -c 'walk( if type == "object" then with_entries( .key |= if . == "data" then "instance" else . end ) else . end) | .[]' ~/Development/JSON-Schema-Test-Suite/tests/draft2020-12/*.json | bowtie run

May depend on #14.
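For reference, a rough Python equivalent of the jq transformation above (a sketch only; the exact behavior of a future `bowtie suite` is an assumption):

```python
import json

def suite_to_bowtie(cases):
    """Recursively rename the test suite's "data" key to bowtie's
    "instance" key, mirroring the jq walk/with_entries filter above."""
    def rename(value):
        if isinstance(value, dict):
            return {
                ("instance" if k == "data" else k): rename(v)
                for k, v in value.items()
            }
        if isinstance(value, list):
            return [rename(each) for each in value]
        return value
    return [rename(case) for case in cases]

cases = [
    {
        "description": "a case",
        "schema": {},
        "tests": [{"description": "a test", "data": 37, "valid": True}],
    },
]
for case in suite_to_bowtie(cases):
    print(json.dumps(case))  # one JSON line per case, ready to pipe to bowtie run
```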

Indicate run metadata in the report

E.g.:

  • Bowtie's own version
  • A link to Bowtie's issue tracker
  • Which implementations were involved
  • How many cases ran
  • How many errors occurred
  • Whether we failed fast

Add support for implementation variants

An implementation container should be able to signal that it runs itself in multiple configurations, and pass back values for each.

E.g. ajv has a strict mode which can be enabled or disabled, and we should collect results from both.

The report should then be updated to group variants together, displaying the "default" variant first. Here "default" means the variant one gets with minimal or no additional configuration, or, if configuration is always necessary, the variant produced by following the first documented or most recommended configuration in the implementation's documentation.

Worth considering whether to treat versions (of the implementation) as variants, or whether these are separate axes.

(The presumption is that we can handle this within the same runtime rather than running multiple instances of a container, which we already support)

Though it may complicate the just-mentioned presumption, we may also wish to support testing CLIs as if they were variants of the implementations they're based on.
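One possible shape for variant signaling, sketched below — the field names ("variants", "default") and the start-response structure are assumptions, not a settled protocol:

```python
import json

# Hypothetical start response from a container advertising its variants.
start_response = {
    "implementation": "js-ajv",
    "variants": [
        {"name": "strict", "config": {"strict": True}},
        {"name": "default", "config": {}},
    ],
}

def ordered_variants(response):
    """Order variants for display, putting the "default" variant first."""
    variants = response.get("variants", [])
    # sorted is stable, so non-default variants keep their given order.
    return sorted(variants, key=lambda v: v["name"] != "default")

print(json.dumps([v["name"] for v in ordered_variants(start_response)]))
```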

Investigate clojure-json-schema's slow startup

⊙  cat foo.json
{"description": "test case 1", "schema": {}, "tests": [{"description": "a test", "instance": {}}] }

⊙  hyperfine --warmup 3 -L implementation python-jsonschema,js-ajv,lua-jsonschema,dotnet-json-everything,go-jsonschema,js-hyperjump,python-fastjsonschema,python-jschon,ruby-json_schemer,badsonschema,rust-jsonschema,envsonschema,lintsonschema,clojure-json-schema 'bowtie run -i {implementation} foo.json'
Summary
  'bowtie run -i lua-jsonschema foo.json' ran
    1.01 ± 0.07 times faster than 'bowtie run -i rust-jsonschema foo.json'
    1.10 ± 0.06 times faster than 'bowtie run -i go-jsonschema foo.json'
    1.95 ± 0.11 times faster than 'bowtie run -i python-fastjsonschema foo.json'
    2.12 ± 0.09 times faster than 'bowtie run -i python-jsonschema foo.json'
    2.57 ± 0.14 times faster than 'bowtie run -i python-jschon foo.json'
    3.10 ± 0.12 times faster than 'bowtie run -i ruby-json_schemer foo.json'
    4.40 ± 0.14 times faster than 'bowtie run -i dotnet-json-everything foo.json'
    4.83 ± 0.22 times faster than 'bowtie run -i js-ajv foo.json'
    8.69 ± 0.30 times faster than 'bowtie run -i js-hyperjump foo.json'
   14.77 ± 0.47 times faster than 'bowtie run -i clojure-json-schema foo.json'

Refs: #45

Implementation / language-specific REPL environments / `bowtie repl`

This is clearly not something that can be done (easily) in general, but when things fail, it'd be nice to have a general way to drop into an interactive environment for a particular implementation.

E.g. bowtie repl -i bowtie/clojure-json-schema could drop into a Clojure REPL with the library available, ready to interactively validate schemas. Obviously doing so now requires authors to know about the library they're using.

If we want to get really fancy, perhaps hooking up interactive environments in the browser while looking at a report is also a thing.

Support case groups

Or generalize the notion of cases itself to groups.

Right now in the test suite we group cases by file (often by keyword).

We should reflect this structure in the report emitted by bowtie.

Backing off should perhaps take into account groups.

schema and registry (#14) are also probably in need of consideration after this is done -- right now they're per-case, but probably can be per-case-group, especially if the notions merge.

Add --dialect

Should be supported either per input case, or via a separate start-speaking-dialect kind of request.

Should support the implementation signalling it doesn't support the dialect (in which case all test cases should get skipped), and should allow passing either a dialect URI or a short form for "well known" dialects.
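A sketch of the short-form resolution — the short-form spellings are assumptions, though the URIs are the official metaschema IDs:

```python
# Possible mapping from short forms to well-known dialect URIs.
WELL_KNOWN_DIALECTS = {
    "2020-12": "https://json-schema.org/draft/2020-12/schema",
    "2019-09": "https://json-schema.org/draft/2019-09/schema",
    "7": "http://json-schema.org/draft-07/schema#",
    "6": "http://json-schema.org/draft-06/schema#",
    "4": "http://json-schema.org/draft-04/schema#",
}

def resolve_dialect(value):
    """Accept either a short form or a full dialect URI, returning the URI."""
    return WELL_KNOWN_DIALECTS.get(value, value)
```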

Add --set-schema

For explicitly ensuring $schema is always present in tests passed to implementation containers.
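A minimal sketch of what such an option might do per case (the option's exact semantics are an assumption; boolean schemas, which cannot carry $schema, are passed through):

```python
def set_schema(case, dialect="https://json-schema.org/draft/2020-12/schema"):
    """Return a copy of the case whose schema carries an explicit $schema,
    leaving an existing $schema (or a boolean schema) untouched."""
    schema = case["schema"]
    if isinstance(schema, dict):
        schema = dict(schema)
        schema.setdefault("$schema", dialect)
    return {**case, "schema": schema}
```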

Investigate ruby-json_schemer's slow execution

⊙  nox -s bench'(suite)' -- ~/Development/JSON-Schema-Test-Suite/tests/draft7
nox > Running session bench(suite)
nox > Creating virtual environment (virtualenv) using python in .nox/bench-suite
nox > python -m pip install /Users/julian/Development/bowtie
nox > hyperfine --warmup 1 --ignore-failure -L implementation js-ajv,js-hyperjump,go-jsonschema,clojure-json-schema,cpp-valijson,ruby-json_schemer,dotnet-json-everything,python-jsonschema,python-fastjsonschema,python-jschon,rust-jsonschema,lua-jsonschema '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i {implementation} /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
Benchmark 1: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-ajv /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      3.908 s ±  0.272 s    [User: 0.228 s, System: 0.042 s]
  Range (min … max):    3.618 s …  4.383 s    10 runs
 
Benchmark 2: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-hyperjump /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      9.213 s ±  0.755 s    [User: 0.233 s, System: 0.049 s]
  Range (min … max):    8.552 s … 10.858 s    10 runs
 
Benchmark 3: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i go-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      5.322 s ±  0.167 s    [User: 0.289 s, System: 0.048 s]
  Range (min … max):    5.117 s …  5.692 s    10 runs
 
Benchmark 4: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i clojure-json-schema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7                                                                                                                          
  Time (mean ± σ):     11.723 s ±  0.126 s    [User: 0.289 s, System: 0.057 s]
  Range (min … max):   11.593 s … 11.990 s    10 runs
 
Benchmark 5: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i cpp-valijson /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      4.603 s ±  0.297 s    [User: 0.287 s, System: 0.051 s]
  Range (min … max):    4.126 s …  5.124 s    10 runs
 
Benchmark 6: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i ruby-json_schemer /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):     11.905 s ±  0.536 s    [User: 0.270 s, System: 0.057 s]
  Range (min … max):   11.197 s … 12.852 s    10 runs
 
Benchmark 7: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i dotnet-json-everything /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      4.283 s ±  0.061 s    [User: 0.213 s, System: 0.038 s]
  Range (min … max):    4.228 s …  4.423 s    10 runs
 
Benchmark 8: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      5.941 s ±  0.033 s    [User: 0.223 s, System: 0.037 s]
  Range (min … max):    5.912 s …  6.000 s    10 runs
 
Benchmark 9: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-fastjsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):     11.075 s ±  0.061 s    [User: 0.314 s, System: 0.049 s]
  Range (min … max):   11.025 s … 11.227 s    10 runs
 
Benchmark 10: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jschon /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):     801.2 ms ±   8.2 ms    [User: 131.4 ms, System: 25.8 ms]
  Range (min … max):   790.4 ms … 818.7 ms    10 runs
 
  Warning: Ignoring non-zero exit code.
 
Benchmark 11: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i rust-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      1.374 s ±  0.031 s    [User: 0.213 s, System: 0.036 s]
  Range (min … max):    1.289 s …  1.400 s    10 runs
 
Benchmark 12: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i lua-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
  Time (mean ± σ):      5.093 s ±  0.073 s    [User: 0.285 s, System: 0.043 s]
  Range (min … max):    5.004 s …  5.213 s    10 runs
 
Summary
  '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jschon /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7' ran
    1.72 ± 0.04 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i rust-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
    4.88 ± 0.34 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-ajv /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
    5.35 ± 0.09 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i dotnet-json-everything /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
    5.74 ± 0.38 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i cpp-valijson /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
    6.36 ± 0.11 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i lua-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
    6.64 ± 0.22 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i go-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
    7.42 ± 0.09 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
   11.50 ± 0.95 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-hyperjump /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
   13.82 ± 0.16 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-fastjsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
   14.63 ± 0.22 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i clojure-json-schema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
   14.86 ± 0.69 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i ruby-json_schemer /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
nox > Session bench(suite) was successful.

Consider how to discover out-of-tree bowtie image implementations

Right now all of bowtie's implementation images live in-tree (in the implementations folder). Perhaps this won't always be the case though, and it may be useful to have a discovery mechanism if/when bowtie learns to run over all known implementations (and therefore may want to include those additional images).

E.g. perhaps we look for a particular GitHub topic.
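A sketch of what topic-based discovery could look like via GitHub's repository search API — the topic name "bowtie-implementation" is hypothetical:

```python
from urllib.parse import urlencode

def discovery_url(topic="bowtie-implementation"):
    """Build a GitHub search-API URL finding repositories tagged with a
    bowtie-specific topic. (Sketch; the topic name is an assumption.)"""
    query = urlencode({"q": f"topic:{topic}", "per_page": 100})
    return f"https://api.github.com/search/repositories?{query}"
```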

Finish writing the documentation page

  • SHOULD catch errors broadly
  • uncaught errors count against backoff, can be caught and then do not
  • add a formatter or linter to pre-commit
  • flush stdout

Make the default set of bowtie run implementations smarter

Right now one has to pass -i explicitly for each image.

We should support a number of easier options:

  • Run all known implementations in the registry
  • Run all local images from implementations/
  • Run all implementations supporting the current dialect
  • Run all implementations written in language X
  • Run all implementations from --implementations-file which contains a list of images

bowtie smoke and other commands may also want to default to a wide set of implementations once we can do this (i.e. "smoke test all implementations", if none are otherwise provided)

Add --validate / --no-validate

Which should enable bowtie to validate requests/responses against the IO schema at runtime.

Likely related to #20, and should apply validation regardless of whether this appears in the schema.
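As a stand-in sketch of the toggle: in practice each message would be validated against bowtie's actual IO schema (e.g. with the jsonschema library); here the message kinds and their required keys are invented for illustration:

```python
# Assumed message shapes -- NOT bowtie's real IO schema.
REQUIRED = {"run": {"seq", "case"}, "response": {"seq", "results"}}

def check_message(kind, message, validate=True):
    """Raise if a message is missing required keys; a no-op when
    validation is disabled (the --no-validate path)."""
    if not validate:
        return True
    missing = REQUIRED.get(kind, set()) - message.keys()
    if missing:
        raise ValueError(f"{kind} message missing {sorted(missing)}")
    return True
```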

Failed test cases should output a command to run just the failure

E.g. whenever an implementation barfs / throws an error / has an unexpected response, the interactive output should show something copy-pastable to run just that example in two ways:

  • by filtering the input command (e.g. via bowtie suite -k)
  • in a fully self contained example (e.g. via bowtie run)
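A sketch of assembling both reproductions (the -k flag comes from the bullet above; the exact CLI shape and quoting are assumptions):

```python
import json
import shlex

def repro_commands(image, case):
    """Build copy-pastable commands reproducing a single failed case:
    one filtering the suite by description, one fully self-contained."""
    by_filter = f"bowtie suite -i {image} -k {shlex.quote(case['description'])}"
    self_contained = f"echo {shlex.quote(json.dumps(case))} | bowtie run -i {image}"
    return by_filter, self_contained
```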

Support passing a schema registry alongside cases

Should support including a set of schemas via specified retrieval URIs which tests in the case may reference.

Probably worth doing per-case, though perhaps a single global registry is also desirable (a separate request with schemas, say).
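One possible per-case shape, sketched below — the "registry" key name and its mapping of retrieval URIs to schemas are assumptions:

```python
import json

case = {
    "description": "a case with a registry",
    "schema": {"$ref": "https://example.com/string"},
    # Hypothetical: schemas the tests may reference, keyed by retrieval URI.
    "registry": {
        "https://example.com/string": {"type": "string"},
    },
    "tests": [{"description": "a string", "instance": "hello", "valid": True}],
}
print(json.dumps(case))
```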

Collect timing information during runs

This is slightly tricky as we don't synchronously wait for responses, which may arrive in later messages read from the stream -- especially so if we move away from reading from implementations line-by-line.

But it'd be nice to time how long it takes to get responses back.
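Since responses arrive asynchronously, one approach (a sketch, not bowtie's actual bookkeeping) is to record a monotonic start time per seq number and compute the elapsed time whenever the matching response shows up:

```python
import time

class ResponseTimer:
    """Track elapsed time between sending a request and reading its response,
    keyed by seq so out-of-order responses are handled."""

    def __init__(self):
        self._started = {}
        self.elapsed = {}

    def sent(self, seq):
        self._started[seq] = time.monotonic()

    def received(self, seq):
        self.elapsed[seq] = time.monotonic() - self._started.pop(seq)

timer = ResponseTimer()
timer.sent(1)
timer.received(1)
```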

Add `bowtie diff` for diffing reports with each other

Should take as input 2 (or more?) reports and output a diff of changes between them -- i.e. tests that fail in one and not the other in an implementation, etc.

More thought required to deal with how we align tests between the two (seq may differ) as well as what to do about missing or additional implementations in one or the other, etc.
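The core comparison might look like the sketch below, where each report is flattened to a mapping from (implementation, test id) to its result — real reports have more structure, and the flattening itself is the hard alignment problem mentioned above:

```python
def diff_reports(before, after):
    """Compare two flattened reports, returning changed outcomes plus the
    keys present in only one report (missing/additional implementations)."""
    changed = {
        key: (before[key], after[key])
        for key in before.keys() & after.keys()
        if before[key] != after[key]
    }
    only_before = before.keys() - after.keys()
    only_after = after.keys() - before.keys()
    return changed, only_before, only_after
```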

Add `bowtie fuzz` for running fuzz testing generated test cases against implementations

Given that Bowtie provides uniform interfaces to downstream implementations, an "obvious" complementary tool might be to fuzz-test across all implementations, looking for cases where they disagree, blow up, or more generally produce behavior non-compliant with the specification which isn't already covered by an explicit test in the suite.
Doing this likely simply means hooking such a tool up to Bowtie and letting it rip.

We likely can do this in-process most easily using hypothesis-jsonschema, feeding the generated cases through implementations.
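A minimal in-process sketch of the disagreement-hunting loop — a naive random generator and plain callables stand in here; a real run would generate schema-conforming instances with hypothesis-jsonschema and dispatch through bowtie:

```python
import random

def random_instance(depth=2):
    """Generate a small random JSON-ish value."""
    choices = [None, True, False, random.randint(-10, 10), "x" * random.randint(0, 3)]
    if depth > 0:
        choices.append([random_instance(depth - 1)])
        choices.append({"k": random_instance(depth - 1)})
    return random.choice(choices)

def disagreements(validators, trials=100):
    """Run the same instance through every validator and collect cases where
    verdicts differ, counting a raised exception as its own verdict."""
    found = []
    for _ in range(trials):
        instance = random_instance()
        verdicts = {}
        for name, validate in validators.items():
            try:
                verdicts[name] = validate(instance)
            except Exception:
                verdicts[name] = "error"
        if len(set(verdicts.values())) > 1:
            found.append((instance, verdicts))
    return found
```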

Add --max-fail / --max-error to bowtie suite and run

Specifically:

  • bowtie suite --max-fail N should stop (i.e. exit) if N tests fail in total across implementations
  • bowtie suite --max-error N should stop if N errors occur in total across implementations

The same option should be added to bowtie run.
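The counting itself is simple; a sketch of the fail-fast bookkeeping (flag semantics as described above, outcome labels assumed):

```python
class StopCounter:
    """Accumulate failures and errors across implementations and say when
    a --max-fail / --max-error threshold has been crossed."""

    def __init__(self, max_fail=None, max_error=None):
        self.max_fail, self.max_error = max_fail, max_error
        self.fails = self.errors = 0

    def record(self, outcome):
        if outcome == "fail":
            self.fails += 1
        elif outcome == "error":
            self.errors += 1

    @property
    def should_stop(self):
        return (self.max_fail is not None and self.fails >= self.max_fail) or (
            self.max_error is not None and self.errors >= self.max_error
        )
```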

Structure the IO schema more like request/responses, and display them in the docs

It sort of is already structured in this way, but can be improved a bit, and isn't documented.

We should also split the schema into the bit meant for end-users (which specs input rows) and the bit meant for implementers (which specs the protocol spoken between bowtie and containers).

AsyncAPI is probably worth looking into here (especially considering that, unlike OpenAPI, it reportedly makes no assumptions about the protocol being spoken).
