hasura / graphql-bench
A super simple tool to benchmark GraphQL queries
License: Apache License 2.0
Hi,
I'm completely new to Docker and a bit lost as to what my problem is when trying to use this tool.
When I start the server using cat bench.yaml | docker run -i --rm -p 8050:8050 -v C:/github/graphql-bench/examples/starwars/queries.graphql hasura/graphql-bench:v0.3
I get:
====================
benchmark: query-comparison
candidate: HeroNameQuery on hero_name at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
candidate: HeroNameFriendsQuery on hero_name_friends at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
400Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
500Req/s Duration:300s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
benchmark: webserver-comparison
candidate: HeroNameQuery on uwsgi at http://172.17.0.1:5001/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
200Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
++++++++++++++++++++
300Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5001 Connection refused
candidate: HeroNameQuery on dev-server at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:60s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
Benchmark:
++++++++++++++++++++
100Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
200Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
++++++++++++++++++++
300Req/s Duration:100s open connections:20
unable to connect to 172.17.0.1:5000 Connection refused
- Serving Flask app "bench" (lazy loading)
- Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
- Debug mode: off
- Running on http://0.0.0.0:8050/ (Press CTRL+C to quit)
As you can see from the logs, it can't connect to 172.17.0.1:5000, so when I open http://127.0.0.1:8050 in my browser I can see the front end but not the graphs, like so:
I would very much appreciate any advice.
Thank you
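For what it's worth, 172.17.0.1 is the Docker bridge gateway on Linux; on Docker Desktop for Windows a container generally cannot reach services on the host through that address, and the usual workaround is the special name host.docker.internal. A hypothetical excerpt of bench.yaml (assuming it carries the target URL for each candidate, which is worth checking against your actual file):

```yaml
# Hypothetical bench.yaml excerpt: on Docker Desktop (Windows/macOS),
# point candidates at host.docker.internal instead of the Linux bridge
# gateway 172.17.0.1 to reach a server running on the host.
url: 'http://host.docker.internal:5000/graphql'
```

If the target server itself runs in another container, putting both containers on the same Docker network and addressing it by container name is another common option.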
It would be cool to define custom header values, to use this tool with API-key-based endpoints.
For example, AWS's GraphQL service AppSync uses an X-Api-Key header
with the API key as its value.
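To make the request concrete, a sketch of what such a config could look like (the headers key is the proposed feature, not something the tool currently supports; all names here are hypothetical):

```yaml
# Hypothetical config sketch for the proposed feature: custom headers
# per benchmark target, e.g. AppSync's X-Api-Key.
url: 'https://<appsync-id>.appsync-api.us-east-1.amazonaws.com/graphql'
headers:
  X-Api-Key: '<your-api-key>'
queries:
  - name: HeroNameQuery
    query: '{ hero { name } }'
```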
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1151, in dispatch
response.set_data(self.callback_map[output]['callback'](*args, **kwargs))
File "/usr/local/lib/python3.7/site-packages/dash/dash.py", line 1037, in add_context
output_value = func(*args, **kwargs)
File "/graphql-bench/plot.py", line 93, in updateGraph
benchMarkIndex=int(benchMarkIndex)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
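The failure itself is plain Python behavior: int(None) raises TypeError, and Dash fires callbacks with None inputs before a control has a value. A minimal reproduction with a defensive guard (hypothetical names loosely mirroring plot.py's updateGraph, not the actual code):

```python
# Sketch of the failure mode and a guard (assumption: the callback
# receives None on initial page load, as Dash callbacks commonly do).

def update_graph(bench_mark_index):
    # int(None) raises TypeError, which is exactly the traceback above,
    # so bail out early until a real value arrives.
    if bench_mark_index is None:
        return None  # or a placeholder figure
    return int(bench_mark_index)

assert update_graph(None) is None   # no longer a 500
assert update_graph("3") == 3
```

A guard like this turns the server-side TypeError into a blank initial graph instead of a 500 response.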
I was running a benchmark for Hasura Pro -
Set to 10 req/s for 1 second, it resulted in 20 requests (should have been 10).
Set to 10 req/s for 5 seconds, it resulted in 81 requests (should have been 50).
The request count is taken from Pro metrics.
A warmup time was set initially, which I removed before getting these results.
@0x777
Following the example directory:
====================
benchmark: query-comparison
--------------------
candidate: HeroNameQuery on hero_name at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
Running 1m test @ http://172.17.0.1:5000/graphql
4 threads and 20 connections
Thread calibration: mean lat.: 24.057ms, rate sampling interval: 102ms
Thread calibration: mean lat.: 37.867ms, rate sampling interval: 101ms
Thread calibration: mean lat.: 40.488ms, rate sampling interval: 106ms
Thread calibration: mean lat.: 36.296ms, rate sampling interval: 96ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 35.98ms 16.65ms 133.76ms 71.89%
Req/Sec 24.77 21.41 52.00 41.66%
5980 requests in 1.00m, 1.03MB read
Socket errors: connect 0, read 0, write 0, timeout 1
Requests/sec: 99.66
Transfer/sec: 17.52KB
++++++++++++++++++++
200Req/s Duration:60s open connections:20
Running 1m test @ http://172.17.0.1:5000/graphql
4 threads and 20 connections
Thread calibration: mean lat.: 38.743ms, rate sampling interval: 121ms
Thread calibration: mean lat.: 45.739ms, rate sampling interval: 123ms
Thread calibration: mean lat.: 47.466ms, rate sampling interval: 124ms
Thread calibration: mean lat.: 44.317ms, rate sampling interval: 110ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 42.98ms 23.42ms 962.05ms 87.69%
Req/Sec 49.72 12.15 120.00 77.33%
12004 requests in 1.00m, 2.06MB read
Requests/sec: 200.05
Transfer/sec: 35.17KB
Benchmark:
++++++++++++++++++++
100Req/s Duration:300s open connections:20
wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
++++++++++++++++++++
200Req/s Duration:300s open connections:20
wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
--------------------
candidate: HeroNameFriendsQuery on hero_name_friends at http://172.17.0.1:5000/graphql
Warmup:
++++++++++++++++++++
100Req/s Duration:60s open connections:20
Running 1m test @ http://172.17.0.1:5000/graphql
4 threads and 20 connections
Thread calibration: mean lat.: 44.057ms, rate sampling interval: 125ms
Thread calibration: mean lat.: 48.205ms, rate sampling interval: 123ms
Thread calibration: mean lat.: 49.266ms, rate sampling interval: 121ms
Thread calibration: mean lat.: 46.410ms, rate sampling interval: 112ms
Thread Stats Avg Stdev Max +/- Stdev
Latency 46.20ms 14.88ms 105.92ms 77.60%
Req/Sec 24.81 17.46 45.00 63.81%
6001 requests in 1.00m, 1.50MB read
Non-2xx or 3xx responses: 1
Requests/sec: 99.96
Transfer/sec: 25.58KB
++++++++++++++++++++
200Req/s Duration:60s open connections:20
wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
Benchmark:
++++++++++++++++++++
100Req/s Duration:300s open connections:20
wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
++++++++++++++++++++
200Req/s Duration:300s open connections:20
wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
* Serving Flask app "bench" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:8050/ (Press CTRL+C to quit)
localhost:8050
I don't see any histograms. This seems to be the culprit from the logs:
wrk2: src/hdr_histogram.c:54: counts_index: Assertion `bucket_index < h->bucket_count' failed.
This would make testing easier
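For context, that assertion comes from HdrHistogram's bucket indexing: a recorded value maps to a bucket index past the allocated bucket count, which typically happens when a latency exceeds the histogram's configured maximum trackable value. A toy illustration of the failure mode (illustrative only; wrk2's hdr_histogram uses exponential buckets, not the linear ones sketched here):

```python
# Toy fixed-range histogram: recording a value beyond the configured
# maximum produces an out-of-range bucket index, the moral equivalent
# of wrk2's `bucket_index < h->bucket_count` assertion.

class Histogram:
    def __init__(self, max_value, bucket_count):
        self.max_value = max_value
        self.bucket_count = bucket_count
        self.counts = [0] * bucket_count

    def record(self, value):
        index = value * self.bucket_count // (self.max_value + 1)
        assert index < self.bucket_count, "bucket_index < h->bucket_count"
        self.counts[index] += 1

h = Histogram(max_value=1000, bucket_count=10)
h.record(250)     # fine: lands in an allocated bucket
# h.record(5000)  # would trip the assertion, like an outlier latency
```

That would suggest the server occasionally responded far slower than the histogram was sized for, rather than a problem with the queries themselves.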
Just took a look at this project, and it's a great framework for getting started with queries. Are you planning to expand graphql-bench to cover mutations and subscriptions?
The current README clearly emphasizes queries.
Having some sort of "driver" interface would be really nice, especially if we could use a faker-like syntax for mutations. It's no easy undertaking.
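As a rough sketch of what such a driver might look like (all names hypothetical; plain random stands in for faker), the key idea is generating fresh variables per iteration so mutations don't collide on unique constraints:

```python
import random
import string

# Hypothetical mutation-driver sketch: each benchmark iteration gets a
# freshly generated set of variables, faker-style.

def fake_name(rng):
    return "".join(rng.choices(string.ascii_lowercase, k=8))

def next_request(rng):
    return {
        "query": "mutation($name: String!) { insert_hero(name: $name) { id } }",
        "variables": {"name": fake_name(rng)},
    }

rng = random.Random(42)
req = next_request(rng)  # a distinct payload every call
```

A seeded generator like this also keeps runs reproducible, which matters when comparing benchmark numbers across servers.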
We are trying to benchmark our GraphQL server implementation for Doublets (a database engine based on the associative model of data).
This is the YAML file we tried:
url: 'http://linksplatform.ddns.net:29018/graphql'
queries:
  - name: GetSingleLink
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links(where: {from_id: {_eq: 2}, to_id: {_eq: 1}}) { id from_id to_id } }'
  - name: UseFromIndex
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links(where: {from_id: {_eq: 1}}) { id } }'
  - name: UseToIndex
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links(where: {to_id: {_eq: 1}}) { id } }'
  - name: FullScan
    tools: [k6, wrk2, autocannon]
    execution_strategy: REQUESTS_PER_SECOND
    rps: 1
    duration: 1s
    query: '{ links { id } }'
But all requests ended up with 400 or 500 status codes, and I'm not able to see the exact error in the benchmark tool. Is there a way to see how the request is sent and what response is received via the benchmark tool?
If I use any other client (Insomnia's UI, or a plain JavaScript ApolloClient from Node.js), I do not get any errors with these requests. Only graphql-bench is unable to make the requests for some reason.
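One way to see the raw response is to reproduce the request yourself: GraphQL benchmark tools generally POST a JSON body of the shape {"query": "..."} with Content-Type: application/json (an assumption worth verifying per tool). A minimal standard-library sketch:

```python
import json
import urllib.request

def build_request(url, query):
    # The typical GraphQL-over-HTTP POST: JSON body with a "query" key.
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    req = build_request(
        "http://linksplatform.ddns.net:29018/graphql",
        "{ links { id } }",
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            print(resp.status, resp.read().decode())
    except urllib.error.HTTPError as e:
        # 400/500 responses land here; the body usually says why.
        print(e.code, e.read().decode())
```

If this POST also fails while Insomnia succeeds, comparing the two requests' headers and bodies usually pinpoints what the server objects to.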
On docker build, the error "Package 'libssl1.0.2' has no installation candidate" is output.
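libssl1.0.2 was dropped after Debian stretch, so images based on a newer Debian release can no longer install it. Assuming the Dockerfile pulls it in via apt-get, a likely fix is to depend on the newer runtime library instead:

```dockerfile
# Hypothetical Dockerfile change: libssl1.0.2 only exists in Debian
# stretch and earlier; buster and later ship libssl1.1.
RUN apt-get update && apt-get install -y libssl1.1
```

Pinning the base image to the Debian release the Dockerfile was written for is the other common workaround.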