bowtie-json-schema / bowtie

JSON Schema in every programming language

Home Page: https://bowtie.report/
License: Other
And run some sort of check in CI
Probably depends on #5 but likely should include one or two that run against the HTML report.
Right now the integration tests just munge together some ad hoc assertions against the raw output. They should probably instead assert against the summary report.
We'll evolve the test suite format, and it's already close to bowtie's input format, but let's just make it trivial (and not require jq) to run it.
I.e. do the equivalent of:
jq -c 'walk( if type == "object" then with_entries( .key |= if . == "data" then "instance" else . end ) else . end) | .[]' ~/Development/JSON-Schema-Test-Suite/tests/draft2020-12/*.json | bowtie run
May depend on #14.
We should have a shortcut for running a bunch of examples across all implementations for a single schema (useful for interactive use cases).
E.g.:
An implementation container should be able to signal it's running itself in multiple configurations, and pass back values for each.
E.g. ajv has a strict mode which can be enabled or disabled, and we should collect results from both.
The report should then be updated to group variants together, displaying the "default" variant first. Here "default" means the variant one gets when passing minimal or no additional configuration, or, if configuration is always necessary, the one obtained by following the first-documented or most recommended configuration in the implementation's documentation.
Worth considering whether to treat versions (of the implementation) as variants, or whether these are separate axes.
(The presumption is that we can handle this within the same runtime rather than running multiple instances of a container, which we already support)
Though it may complicate the just-mentioned presumption, we may also wish to support testing CLIs as if they were variants of the implementations they're based on.
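As a strawman (every field name here is hypothetical, not part of the current io-schema), a container's startup response might enumerate its variants like so:

```python
import json

# Hypothetical "started" response from a container which runs itself in two
# configurations. The "variants" field is invented for illustration; nothing
# like it exists in the protocol yet. The "default" variant is listed first,
# matching the report-ordering rule described above.
started = {
    "implementation": "js-ajv",
    "variants": [
        {"name": "default", "config": {}},
        {"name": "strict", "config": {"strict": True}},
    ],
}
print(json.dumps(started))
```
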
Right now we hand-write classes corresponding to the types in the io-schema.json file.
It'd be nice to pick a library (or write one) which does so for us and generates attrs-using classes.
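Purely as a sketch of the shape involved (using stdlib dataclasses as a stand-in for attrs, with field names guessed from the test-case format shown elsewhere on this page), the hand-written classes look roughly like this, and a generator would emit the equivalent from io-schema.json:

```python
from dataclasses import dataclass, field
from typing import Any, Optional


# Hand-written today; the goal is to generate classes like these from
# io-schema.json. Field names here are illustrative, not authoritative.
@dataclass
class Test:
    description: str
    instance: Any
    valid: Optional[bool] = None


@dataclass
class TestCase:
    description: str
    schema: Any
    tests: list[Test] = field(default_factory=list)
```
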
jq / jmespath?

⊙ cat foo.json
{"description": "test case 1", "schema": {}, "tests": [{"description": "a test", "instance": {}}] }
⊙ hyperfine --warmup 3 -L implementation python-jsonschema,js-ajv,lua-jsonschema,dotnet-json-everything,go-jsonschema,js-hyperjump,python-fastjsonschema,python-jschon,ruby-json_schemer,badsonschema,rust-jsonschema,envsonschema,lintsonschema,clojure-json-schema 'bowtie run -i {implementation} foo.json'
Summary
'bowtie run -i lua-jsonschema foo.json' ran
1.01 ± 0.07 times faster than 'bowtie run -i rust-jsonschema foo.json'
1.10 ± 0.06 times faster than 'bowtie run -i go-jsonschema foo.json'
1.95 ± 0.11 times faster than 'bowtie run -i python-fastjsonschema foo.json'
2.12 ± 0.09 times faster than 'bowtie run -i python-jsonschema foo.json'
2.57 ± 0.14 times faster than 'bowtie run -i python-jschon foo.json'
3.10 ± 0.12 times faster than 'bowtie run -i ruby-json_schemer foo.json'
4.40 ± 0.14 times faster than 'bowtie run -i dotnet-json-everything foo.json'
4.83 ± 0.22 times faster than 'bowtie run -i js-ajv foo.json'
8.69 ± 0.30 times faster than 'bowtie run -i js-hyperjump foo.json'
14.77 ± 0.47 times faster than 'bowtie run -i clojure-json-schema foo.json'
Refs: #45
This is clearly not something that can be done (easily) in general, but when things fail, it'd be nice to have a general way to drop into an interactive environment for a particular implementation.
E.g. bowtie repl -i bowtie/clojure-json-schema could drop into a Clojure REPL with the library available, ready to interactively validate schemas. Doing so obviously requires authors to know about the library they're using.
If we want to get really fancy, perhaps hooking up interactive environments in the browser while looking at a report is also a thing.
E.g. a JSON summarization
Or generalize the notion itself of cases to groups.
Right now in the test suite we group cases by file (often by keyword).
We should reflect this structure in the report emitted by bowtie.
Backing off should perhaps take into account groups.
schema and registry (#14) are also probably in need of consideration after this is done -- right now they're per-case, but probably can be per-case-group, especially if the notions merge.
We currently build a shiv but only upload it to GH Actions runs, not releases.
Should either be supported per-inputted case, or with a separate start-speaking-dialect kind of request.
Should support the implementation signalling it doesn't support the dialect (in which case all test cases should get skipped), and should allow passing either a dialect URI or a short form for "well known" dialects.
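A sketch of what that exchange might look like (the message shapes are hypothetical, invented here for illustration rather than taken from the current protocol):

```python
import json

# Hypothetical start-speaking-dialect request: either a full dialect URI or,
# for well-known dialects, a short form like "2020-12" could be accepted.
request = {"cmd": "dialect", "dialect": "https://json-schema.org/draft/2020-12/schema"}

# A container which doesn't support the dialect could answer negatively,
# in which case bowtie would skip all of its test cases.
response = {"ok": False}

print(json.dumps(request))
print(json.dumps(response))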
I.e. be able to harness into a container running perhaps locally, or over HTTP, or over some general interface, rather than via a managed docker container.
I.e. we should emit an output format version number from bowtie run, so bowtie report knows it understands the format it's receiving.
For explicitly ensuring $schema is always present in tests passed to implementation containers.
The intention is that these values should be compared against implementations' responses (and if they differ, to indicate as such in the report).
⊙ nox -s bench'(suite)' -- ~/Development/JSON-Schema-Test-Suite/tests/draft7
nox > Running session bench(suite)
nox > Creating virtual environment (virtualenv) using python in .nox/bench-suite
nox > python -m pip install /Users/julian/Development/bowtie
nox > hyperfine --warmup 1 --ignore-failure -L implementation js-ajv,js-hyperjump,go-jsonschema,clojure-json-schema,cpp-valijson,ruby-json_schemer,dotnet-json-everything,python-jsonschema,python-fastjsonschema,python-jschon,rust-jsonschema,lua-jsonschema '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i {implementation} /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
Benchmark 1: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-ajv /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 3.908 s ± 0.272 s [User: 0.228 s, System: 0.042 s]
Range (min … max): 3.618 s … 4.383 s 10 runs
Benchmark 2: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-hyperjump /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 9.213 s ± 0.755 s [User: 0.233 s, System: 0.049 s]
Range (min … max): 8.552 s … 10.858 s 10 runs
Benchmark 3: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i go-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 5.322 s ± 0.167 s [User: 0.289 s, System: 0.048 s]
Range (min … max): 5.117 s … 5.692 s 10 runs
Benchmark 4: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i clojure-json-schema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 11.723 s ± 0.126 s [User: 0.289 s, System: 0.057 s]
Range (min … max): 11.593 s … 11.990 s 10 runs
Benchmark 5: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i cpp-valijson /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 4.603 s ± 0.297 s [User: 0.287 s, System: 0.051 s]
Range (min … max): 4.126 s … 5.124 s 10 runs
Benchmark 6: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i ruby-json_schemer /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 11.905 s ± 0.536 s [User: 0.270 s, System: 0.057 s]
Range (min … max): 11.197 s … 12.852 s 10 runs
Benchmark 7: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i dotnet-json-everything /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 4.283 s ± 0.061 s [User: 0.213 s, System: 0.038 s]
Range (min … max): 4.228 s … 4.423 s 10 runs
Benchmark 8: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 5.941 s ± 0.033 s [User: 0.223 s, System: 0.037 s]
Range (min … max): 5.912 s … 6.000 s 10 runs
Benchmark 9: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-fastjsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 11.075 s ± 0.061 s [User: 0.314 s, System: 0.049 s]
Range (min … max): 11.025 s … 11.227 s 10 runs
Benchmark 10: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jschon /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 801.2 ms ± 8.2 ms [User: 131.4 ms, System: 25.8 ms]
Range (min … max): 790.4 ms … 818.7 ms 10 runs
Warning: Ignoring non-zero exit code.
Benchmark 11: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i rust-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 1.374 s ± 0.031 s [User: 0.213 s, System: 0.036 s]
Range (min … max): 1.289 s … 1.400 s 10 runs
Benchmark 12: /Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i lua-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7
Time (mean ± σ): 5.093 s ± 0.073 s [User: 0.285 s, System: 0.043 s]
Range (min … max): 5.004 s … 5.213 s 10 runs
Summary
'/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jschon /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7' ran
1.72 ± 0.04 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i rust-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
4.88 ± 0.34 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-ajv /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
5.35 ± 0.09 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i dotnet-json-everything /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
5.74 ± 0.38 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i cpp-valijson /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
6.36 ± 0.11 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i lua-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
6.64 ± 0.22 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i go-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
7.42 ± 0.09 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-jsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
11.50 ± 0.95 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i js-hyperjump /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
13.82 ± 0.16 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i python-fastjsonschema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
14.63 ± 0.22 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i clojure-json-schema /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
14.86 ± 0.69 times faster than '/Users/julian/Development/bowtie/.nox/bench-suite/bin/bowtie suite -i ruby-json_schemer /Users/julian/Development/JSON-Schema-Test-Suite/tests/draft7'
nox > Session bench(suite) was successful.
E.g. cargo fmt for Rust, go fmt for Go, rubocop for Ruby, black for Python, etc., and configure either pre-commit or CI to check them.
Probably once twbs/bootstrap#35857 is merged.
Right now all of bowtie's implementation images live in-tree (in the implementations folder). Perhaps this won't always be the case though, and it may be useful to have a discovery mechanism if/when bowtie learns to run over all known implementations (and therefore may want to include those additional images).
E.g. perhaps we look for a particular GitHub topic.
Possibly of #15 across all implementations, which regardless is a goal.
Right now one has to pass -i explicitly for each image. We should support a number of easier options:
- defaulting to every image in the implementations/ folder
- an --implementations-file option which contains a list of images

bowtie smoke and other commands may also want to default to a wide set of implementations once we can do this (i.e. "smoke test all implementations", if none are otherwise provided).
Which should enable bowtie to validate requests/responses against the IO schema at runtime.
Likely related to #20, and should apply validation regardless of whether this appears in the schema.
What's there now is hacked together.
They should be shown syntax highlighted (and not cut off).
(We likely will want to be able to test e.g. output formats via the same structured runners)
E.g.:
bowtie suite -i bowtie/clojure-json-schema https://github.com/json-schema-org/JSON-Schema-Test-Suite/blob/main/tests/draft2020-12/anyOf.json
We could either use the GitHub API or do so via GitPython or dulwich.
E.g. whenever an implementation barfs / throws an error / has an unexpected response, the interactive output should show something copy-pastable to run just that example in two ways:
- via the test suite (bowtie suite -k)
- directly (bowtie run)

Should support including a set of schemas via specified retrieval URIs which tests in the case may reference.
Probably worth doing per-case, though perhaps a single global registry is also desirable (a separate request with schemas, say).
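For example (the shape and the URIs are hypothetical), a case might carry a registry mapping retrieval URIs to the schemas its own schema references:

```python
import json

# Hypothetical test case bundling a per-case registry of additional schemas,
# keyed by the retrieval URIs under which implementations should see them.
case = {
    "description": "a case with a registry",
    "schema": {"$ref": "http://example.com/node"},
    "registry": {
        "http://example.com/node": {"type": "object"},
    },
    "tests": [{"description": "an object", "instance": {}}],
}
print(json.dumps(case))
```
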
We should ensure the examples in the documentation stay accurate by doctesting them. Likely depends on #5.
Possibly depends on #6 or #5, and maybe even on revisiting json-schema-org/JSON-Schema-Test-Suite#53.
Would be useful for tracking changes to implementations, or changes to the input suite, or both.
We should deploy a benchmark runner that times runs for bowtie itself -- right now there's a bit of busy looping / inefficient parallelization that it'd be nice to track (and fix).
When building container images, smoke test each one via some trivial input before pushing them to the registry.
A $ref may send an implementation to a schema whose dialect is unsupported by the implementation.
This is slightly tricky as we don't synchronously wait for responses, which may arrive in later messages read from the stream -- especially so if we move away from reading from implementations line-by-line.
But it'd be nice to time how long it takes to get responses back.
Or alternatively teach bowtie report to convert a report into subunit.
Perhaps we should pass the JSON text along as-is, or be very pedantic about it. As is, deserializing 1.00 obviously will lead to the float 1.0 ultimately making it to implementations.
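The behavior in question, demonstrated with Python's stdlib json module:

```python
import json

# Deserializing collapses 1.00 and 1.0 into the same float, so the original
# JSON text is unrecoverable by the time it reaches an implementation.
value = json.loads("1.00")
print(value)               # 1.0
print(json.dumps(value))   # 1.0
```

(A `parse_float` hook, or passing the raw JSON text through untouched, would be ways to preserve the distinction.)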
Should take as input 2 (or more?) reports and output a diff of changes between them -- i.e. tests that fail in one and not the other in an implementation, etc.
More thought required to deal with how we align tests between the two (seq may differ) as well as what to do about missing or additional implementations in one or the other, etc.
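A minimal sketch of the per-implementation comparison, assuming results have already been aligned into {implementation: {test_id: valid}} mappings (an assumed shape, not bowtie's actual report format; the alignment and missing-implementation questions above are left open here too):

```python
def diff_results(old, new):
    """Yield (implementation, test, before, after) for each changed result.

    ``old`` and ``new`` map implementation names to {test_id: bool} results.
    Implementations or tests present in only one report are ignored, since
    handling them needs the policy decision discussed above.
    """
    for implementation in old.keys() & new.keys():
        before, after = old[implementation], new[implementation]
        for test in before.keys() & after.keys():
            if before[test] != after[test]:
                yield implementation, test, before[test], after[test]
```
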
Really they should be schemas valid under the dialect specified by #13 (or whatever the default is, if e.g. we require $schema
without --dialect
).
We'd want to differentiate between failures on optional tests and required ones.
Doing this likely depends on #27 as we likely will use separate groups to differentiate these.
E.g. we likely will want to include the test suite commit hash or version number when running its tests, and that metadata should be shown in the outputted report.
(Relates to #15)
Given that Bowtie provides uniform interfaces to downstream implementations, an "obvious" complementary tool might be to fuzz-test across all implementations, looking for cases where they disagree, or blow up, or more generally produce behavior non-compliant with the specification which isn't already covered by an explicit test in the suite.
Doing this likely simply means hooking such a tool up to Bowtie and letting it rip.
We likely can do this in-process most easily by using hypothesis-jsonschema to generate cases and feeding them through implementations.
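The shape of such a differential fuzzer, with stdlib-only stand-ins (a crude random generator instead of hypothesis-jsonschema, and two toy validators instead of real bowtie-harnessed implementations):

```python
import random


def random_instance(depth=0):
    """Crude stand-in for a real generator like hypothesis-jsonschema."""
    choices = [None, True, False, 0, 1, "a", [], {}]
    if depth < 2:
        choices += [[random_instance(depth + 1)], {"k": random_instance(depth + 1)}]
    return random.choice(choices)


def fuzz(validators, schema, runs=100):
    """Collect instances on which the implementations disagree (or blow up)."""
    disagreements = []
    for _ in range(runs):
        instance = random_instance()
        results = set()
        for validate in validators:
            try:
                results.add(validate(schema, instance))
            except Exception:
                results.add("error")
        if len(results) > 1:
            disagreements.append((instance, results))
    return disagreements


# Two toy "implementations" of {"type": "integer"} that disagree on booleans,
# mirroring the kind of compliance bug a fuzzer might surface.
impl_a = lambda schema, instance: isinstance(instance, int) and not isinstance(instance, bool)
impl_b = lambda schema, instance: isinstance(instance, int)
```
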
(e.g. hide the traceback)
Specifically:
- bowtie suite --max-fail N should stop (i.e. exit) if N tests fail in total across implementations
- bowtie suite --max-error N should stop if N errors occur in total across implementations

The same options should be added to bowtie run.
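The counting itself is straightforward; a sketch of bailing out once a limit is hit (the (implementation, status) result shape is an assumption for illustration, not bowtie's actual output):

```python
def run_until(results, max_fail=None, max_error=None):
    """Consume (implementation, status) pairs, stopping early at the limits.

    ``status`` is assumed to be "pass", "fail", or "error"; real result
    objects would be richer than this.
    """
    fails = errors = 0
    consumed = []
    for implementation, status in results:
        consumed.append((implementation, status))
        fails += status == "fail"
        errors += status == "error"
        if max_fail is not None and fails >= max_fail:
            break
        if max_error is not None and errors >= max_error:
            break
    return consumed
```
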
It sort of is already structured in this way, but can be improved a bit, and isn't documented.
We should also split the schema into the bit meant for end-users (which specs input rows) and the bit meant for implementers (which specs the protocol spoken between bowtie and containers).
AsyncAPI is probably worth looking into here (especially since, unlike OpenAPI, it supposedly makes no assumptions about the protocol being spoken).