hatoo / oha
Ohayou(おはよう), HTTP load generator, inspired by rakyll/hey with tui animation.
License: MIT License
The calculation of the histogram differs from hey's.
#157 (comment)
The relevant code is here:
https://github.com/hatoo/oha/blob/master/src/histogram.rs
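For comparison, hey buckets latencies linearly between the fastest and slowest samples. Below is a minimal sketch of that scheme; it is an assumption based on hey's report output, not oha's actual src/histogram.rs (whose difference is the point of this issue):

```rust
// Linear bucketing sketch: bucket i covers [fastest + width*i, fastest + width*(i+1)).
fn histogram(samples: &[f64], num_buckets: usize) -> Vec<(f64, usize)> {
    let fastest = samples.iter().cloned().fold(f64::INFINITY, f64::min);
    let slowest = samples.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let width = (slowest - fastest) / num_buckets as f64;
    let mut counts = vec![0usize; num_buckets];
    for &s in samples {
        // Clamp so the slowest sample lands in the last bucket.
        let idx = (((s - fastest) / width) as usize).min(num_buckets - 1);
        counts[idx] += 1;
    }
    // Label each bucket by its upper bound.
    (0..num_buckets)
        .map(|i| (fastest + width * (i + 1) as f64, counts[i]))
        .collect()
}

fn main() {
    // Latencies in milliseconds, chosen so the arithmetic stays exact.
    let h = histogram(&[100.0, 200.0, 300.0, 400.0, 500.0], 2);
    assert_eq!(h, vec![(300.0, 2), (500.0, 3)]);
    println!("{:?}", h);
}
```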
oha on master is 📦 v0.1.4 via 🦀 v1.43.0-nightly
❯ cargo run --release -- --no-tui -z 6m http://192.168.10.201 | grep Requests/sec:
    Finished release [optimized] target(s) in 0.09s
Running `target/release/oha --no-tui -z 6m 'http://192.168.10.201'`
^Cthread 'tokio-runtime-worker' panicked at 'failed printing to stdout: Broken pipe (os error 32)', src/libstd/io/stdio.rs:805:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Do any examples exist with POST login credentials?
When trying out oha with a custom header placed before the URL, it throws an error that no URL was provided:
oha -n 100 -H "Authorization: Bearer $TOKEN_VALUE" 'https://github.com/llala'
error: The following required arguments were not provided:
<url>
USAGE:
oha [FLAGS] [OPTIONS] <url>
For more information try --help
As headers take a Vec, the URL is parsed as a header, which is unexpected. The following orderings do work:
oha -H "Authorization: Bearer $TOKEN_VALUE" -n 100 'https://github.com/llala'
oha -n 100 'https://github.com/llala' -H "Authorization: Bearer $TOKEN_VALUE"
The simple "fix" would be to make -H take a single value instead of a Vec and let users repeat -H to set multiple headers. But this would break existing users' workflows; as an alternative, maybe allow users to provide the URL as a named parameter?
Or structopt could verify that the provided argument is a valid HTTP header, but that might add more complexity.
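The swallowing behavior can be modeled with a toy parser; this is purely illustrative and is neither oha's nor structopt's actual code:

```rust
// Toy model of a greedy multi-value -H: it consumes every following
// positional token, including what the user meant to be the URL.
fn parse(args: &[&str]) -> (Vec<String>, Option<String>) {
    let mut headers = Vec::new();
    let mut url = None;
    let mut i = 0;
    while i < args.len() {
        if args[i] == "-H" {
            i += 1;
            // Greedy: keep taking values until the next flag or end of input.
            while i < args.len() && !args[i].starts_with('-') {
                headers.push(args[i].to_string());
                i += 1;
            }
        } else {
            url = Some(args[i].to_string());
            i += 1;
        }
    }
    (headers, url)
}

fn main() {
    let (headers, url) = parse(&["-H", "Authorization: Bearer TOKEN", "https://github.com/llala"]);
    assert_eq!(headers.len(), 2); // the URL was swallowed as a header value
    assert_eq!(url, None);        // hence "required arguments were not provided: <url>"
}
```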
reqwest supports gzip and brotli, but they must be enabled via feature flags.
It would be neat if oha had an option to automatically scale the time bar depending on how long the whole load test takes.
Is it possible to set up a GitHub Action that will publish a release once you tag one? If you think it's acceptable, I can set it up and open a PR.
It seems oha does not report errors at the end of a run.
# Start Nginx
$ docker run --name nginx -d -p 8080:80 nginx
# Set the limit for file descriptors to its max value.
$ ulimit -n
1024
$ ulimit -H -n
524288
$ ulimit -n $(ulimit -H -n)
$ ulimit -n
524288
# Run Oha with a large number of workers.
$ ./target/release/oha -n 100000 -c 2000 'http://[::1]:8080'
Summary:
...
Response time histogram:
0.102 [314] |■■■■■■■■■■■■■■■■■
0.129 [582] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.155 [97] |■■■■■
0.182 [0] |
0.209 [0] |
0.235 [3] |
0.262 [10] |
0.289 [23] |■
0.316 [24] |■
0.342 [35] |■
0.369 [14] |
Latency distribution:
...
Status code distribution:
[200] 1102 responses
As you can see, only 1,102 responses were received for 100,000 requests. There may have been 98,898 errors, but none were reported.
FYI, hey reports errors like the following.
$ ulimit -n
524288
$ ./hey -n 100000 -c 2000 'http://[::1]:8080'
...
Status code distribution:
[200] 99669 responses
Error distribution:
[137] Get http://[::1]:8080: EOF
[194] Get http://[::1]:8080: http: server closed idle connection
99,669 + 137 + 194 = 100,000
Environment
oha -z 20s -H "Authorization: Bearer $TOKEN" -T application/json -d to=+25078xxxxxx text="Hello" sender="Hello" 'https://api.pindo.io/v1/sms/'
error: Found argument 'sender=Hello' which wasn't expected, or isn't valid in this context
USAGE:
oha [FLAGS] [OPTIONS] <url>
For more information try --help
Hey, not really an issue, but I think this is a cool tool and I packaged it for Arch Linux here. Once you make a release including the LICENSE file, I'll add that as well. If it's popular, it has a good chance of becoming part of the [community] repository.
This may show more appropriate statistics for short running times, but I don't know how effective it is.
Windows and macOS are supported now.
I'm trying to load test an internal host that is only resolvable via a special internal DNS, and only over IPv4. However, I can resolve that name just fine via nslookup, host, curl, and even hey, but oha tells me
[69] no record found for name: internal.host type: A class: IN
I'm calling
oha --ipv4 -q 200 -n 2000 https://internal.host/test
However:
$ nslookup internal.host
Server: 127.0.0.1
Address: 127.0.0.1#53
internal.host canonical name = redacted.
Name: redacted
Address: 10.1.241.80
I think for quickly eyeballing the output of your load test, it would be cool if oha used some colors with sane defaults for how long something takes (maybe green 0s-0.3s, orange 0.3s-0.8s, red >=0.8s?).
I suggest using colors in the TUI view and in the final output. Of course, make it into an option but I think it should be "auto" by default so that if the terminal is detected to be capable of colors, it should output using colors.
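A minimal sketch of the suggested mapping, using the thresholds proposed above (the function name and the exact cutoffs are illustrative, not an existing oha option):

```rust
// Map a latency to a display color: green below 0.3 s, orange below 0.8 s, red otherwise.
fn latency_color(secs: f64) -> &'static str {
    if secs < 0.3 {
        "green"
    } else if secs < 0.8 {
        "orange"
    } else {
        "red"
    }
}

fn main() {
    assert_eq!(latency_color(0.1), "green");
    assert_eq!(latency_color(0.5), "orange");
    assert_eq!(latency_color(1.2), "red");
}
```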
# oha -q 0 http://server:8080
panicked at 'divide by zero error when dividing duration by scalar', src/libcore/time.rs:794:9
# hey -q 0 http://server:8080
Summary:
Total: 0.0168 secs
Slowest: 0.0127 secs
Fastest: 0.0003 secs
Average: 0.0036 secs
Requests/sec: 11938.9656
Would be nice to support this. Not a big issue, but it's pretty common for load clients to behave this way, especially hey, which has the same command-line parameters.
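A sketch of one possible guard, assuming `-q 0` should mean "no rate limit" (matching hey's behavior) rather than dividing a Duration by zero:

```rust
use std::time::Duration;

// Assumed semantics: qps == 0 means unlimited, so never divide by zero.
fn interval_for_qps(qps: u32) -> Option<Duration> {
    if qps == 0 {
        None // unlimited: no pause between requests
    } else {
        Some(Duration::from_secs(1) / qps)
    }
}

fn main() {
    assert_eq!(interval_for_qps(0), None);
    assert_eq!(interval_for_qps(100), Some(Duration::from_millis(10)));
}
```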
would be good
Yay! https://tokio.rs/blog/2020-10-tokio-0-3
assert_cmd looks good.
This is probably a rare use case (but it's ours!). curl provides a --connect-to option to override DNS resolution and always connect directly to an IP address for a given hostname+port. Since oha uses hyper, and hyper allows passing a Resolver, it seems this would be reasonably easy to implement. I can try to give it a shot if you'd accept a PR for that in oha. Let me know what you think!
Currently, only amd64 binaries are published. It would be useful to additionally build arm64 binaries.
Currently, oha plots per 1 sec. It would be good to make the scale configurable via a keyboard shortcut.
Termion doesn't work on Windows.
Since I'm currently packaging this for Arch, I noticed that you're missing a LICENSE file. You should probably add one for MIT.
tokio-tls will be deprecated.
https://docs.rs/hyper/0.13.4/hyper/client/conn/index.html
Example:
use anyhow::Context;
use futures_util::stream::*;
use tokio::prelude::*;
use std::str::FromStr;
trait AsyncRW: AsyncRead + AsyncWrite {}
impl<T: AsyncRead + AsyncWrite> AsyncRW for T {}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let url = url::Url::from_str("https://www.google.com")?;
let addr = (
url.host_str().context("get host")?,
url.port_or_known_default().context("get port")?,
);
let addr = tokio::net::lookup_host(addr)
.await?
.next()
.context("get addr")?;
let stream: Box<dyn AsyncRW + Unpin + Send + 'static> = if url.scheme() == "https" {
let stream = tokio::net::TcpStream::connect(addr).await?;
let connector = native_tls::TlsConnector::new()?;
let connector = tokio_tls::TlsConnector::from(connector);
Box::new(
connector
.connect(url.domain().context("get domain")?, stream)
.await?,
)
} else {
Box::new(tokio::net::TcpStream::connect(addr).await?)
};
let (mut send, conn) = hyper::client::conn::handshake(stream).await?;
// The connection future must be driven (polled) for requests to make progress
let join = tokio::spawn(conn);
// keep_alive
for _ in 0..2 {
let request = http::Request::builder()
.version(http::Version::HTTP_11)
.uri("/")
.body(hyper::Body::empty())?;
let res = send.send_request(request).await?;
dbg!(
res.into_body()
.map(|bytes| bytes.unwrap().len())
.collect::<Vec<_>>()
.await
);
}
Ok(())
}
Currently, oha shows the total size of the HTTP bodies. But showing in/out TCP bandwidth would be better. I think it's hard to implement because all communication is abstracted by hyper.
Hello,
It would be nice to have a proxy (socks/http) support as hey does.
Thank you.
Summary:
Success rate: 1.0000
Total: 50.0131 secs
Slowest: 0.0768 secs
Fastest: 0.0006 secs
Average: 0.0053 secs
Requests/sec: 18847.3182
Total data: 10.79 MiB
Size/request: 12.00 B
Size/sec: 220.87 KiB
Response time histogram:
0.001 [18563] |■■
0.002 [118922] |■■■■■■■■■■■■■
0.003 [284620] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.005 [275058] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.006 [127936] |■■■■■■■■■■■■■■
0.008 [62017] |■■■■■■
0.009 [30270] |■■■
0.010 [12834] |■
0.012 [5407] |
0.013 [2621] |
0.015 [4365] |
...
ref https://blog.cardina1.red/2020/12/14/dont-leave-cargo-toml-broken/
I want to run CI with cargo +nightly update -Z minimal-versions to ensure the deps in Cargo.toml aren't broken. But unfortunately, I don't know much about GitHub Actions.
Currently, errors such as connection errors are shown in the summary but not in real time. They should be shown in real time.
Print final summary as JSON
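A hypothetical sketch of what such output could look like, built with only the standard library; the struct and field names are assumptions, not an agreed schema (a real implementation would likely use serde):

```rust
// Hand-rolled JSON for a few summary fields (illustrative only).
struct Summary {
    success_rate: f64,
    total_secs: f64,
    requests_per_sec: f64,
}

fn to_json(s: &Summary) -> String {
    format!(
        "{{\"successRate\":{},\"total\":{},\"requestsPerSec\":{}}}",
        s.success_rate, s.total_secs, s.requests_per_sec
    )
}

fn main() {
    let s = Summary { success_rate: 1.0, total_secs: 50.0, requests_per_sec: 18847.0 };
    println!("{}", to_json(&s));
}
```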
Here I set the rate limit to 10 qps, and my response times are crazy high, up to 18s. This service is absolutely not that slow.
# oha -q 10 -c 100 "${DESTINATION}"
Summary:
Success rate: 1.0000
Total: 19.9057 secs
Slowest: 19.4002 secs
Fastest: 0.1017 secs
Average: 7.4746 secs
Requests/sec: 10.0473
Total data: 200.00 KiB
Size/request: 1024 B
Size/sec: 10.05 KiB
Response time histogram:
1.754 [19] |■■■■■■■■■■■■■■■■
3.509 [23] |■■■■■■■■■■■■■■■■■■■■
5.263 [32] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■
7.018 [36] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
8.772 [33] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.526 [7] |■■■■■■
12.281 [9] |■■■■■■■■
14.035 [13] |■■■■■■■■■■■
15.790 [14] |■■■■■■■■■■■■
17.544 [9] |■■■■■■■■
19.298 [5] |■■■■
Latency distribution:
10% in 1.9994 secs
25% in 4.3002 secs
50% in 6.6000 secs
75% in 10.6993 secs
90% in 14.7995 secs
95% in 16.6002 secs
99% in 18.6991 secs
Details (average, fastest, slowest):
DNS+dialup: 5.5760 secs, 0.0336 secs, 16.1002 secs
DNS-lookup: 5.4315 secs, 0.0298 secs, 16.0029 secs
Status code distribution:
[200] 200 responses
Without a high rate limit, this drops down to more reasonable levels:
# oha -q 10000 -c 100 "${DESTINATION}"
Summary:
Success rate: 1.0000
Total: 0.0714 secs
Slowest: 0.0631 secs
Fastest: 0.0010 secs
Average: 0.0235 secs
Requests/sec: 2801.0961
Total data: 200.00 KiB
Size/request: 1024 B
Size/sec: 2.74 MiB
Response time histogram:
0.006 [100] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.011 [0] |
0.017 [0] |
0.023 [0] |
0.028 [0] |
0.034 [0] |
0.039 [6] |■
0.045 [51] |■■■■■■■■■■■■■■■■
0.051 [42] |■■■■■■■■■■■■■
0.056 [0] |
0.062 [1] |
Latency distribution:
10% in 0.0012 secs
25% in 0.0014 secs
50% in 0.0365 secs
75% in 0.0456 secs
90% in 0.0489 secs
95% in 0.0497 secs
99% in 0.0512 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0398 secs, 0.0309 secs, 0.0624 secs
DNS-lookup: 0.0370 secs, 0.0272 secs, 0.0622 secs
Status code distribution:
[200] 200 responses
One other note: in hey, the total rate is -q * -c:
# hey -q 10 -c 100 "${DESTINATION}"
Summary:
Total: 0.2059 secs
Slowest: 0.0145 secs
Fastest: 0.0023 secs
Average: 0.0082 secs
Requests/sec: 971.3312
https://docs.rs/reqwest/0.10.4/reqwest/struct.ClientBuilder.html#method.tcp_nodelay
Should this be enabled by default, or should an option be added for it?
Hi,
in order to make life a bit easier for us package maintainers, I wonder if it would be possible to also release properly rolled, immutable source archives along with the binary files.
The way it is done now relies on GitHub-created archives, which can change under some circumstances, causing checksums to change as well (among other things).
What do you think about it?
Thanks.
I wrote the code for this, so I'm to blame 🙈 but it just splits on : and expects 4 tokens, which obviously doesn't work for IPv6 addresses. Since curl supports it, and oha's feature is modelled after curl's, I think it should support the bracketed IPv6 syntax:
$ curl -I https://example.org --connect-to 'example.org:443:[2606:2800:220:1:248:1893:25c8:1946]:443'
HTTP/2 200
content-encoding: gzip
accept-ranges: bytes
age: 295951
cache-control: max-age=604800
content-type: text/html; charset=UTF-8
date: Fri, 27 May 2022 09:52:13 GMT
etag: "3147526947+gzip"
expires: Fri, 03 Jun 2022 09:52:13 GMT
last-modified: Thu, 17 Oct 2019 07:18:26 GMT
server: ECS (bsa/EB23)
x-cache: HIT
content-length: 648
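A sketch of a splitter that honors bracketed IPv6 literals, following the curl syntax above (a proposed approach, not oha's current code):

```rust
// Split HOST1:PORT1:HOST2:PORT2 into 4 tokens, where a host may be a
// bracketed IPv6 literal like [2606:2800::1]. Splitting on every ':'
// would shatter the address; instead, treat [...] as a single token.
fn split_connect_to(s: &str) -> Option<Vec<String>> {
    let mut parts = Vec::new();
    let mut rest = s;
    for _ in 0..4 {
        let (token, tail) = if let Some(r) = rest.strip_prefix('[') {
            // Bracketed literal: take everything up to the closing ']'.
            let end = r.find(']')?;
            let after = &r[end + 1..];
            (&r[..end], after.strip_prefix(':').unwrap_or(after))
        } else {
            match rest.find(':') {
                Some(i) => (&rest[..i], &rest[i + 1..]),
                None => (rest, ""),
            }
        };
        parts.push(token.to_string());
        rest = tail;
    }
    Some(parts)
}

fn main() {
    let parts = split_connect_to("example.org:443:[2606:2800:220:1:248:1893:25c8:1946]:443").unwrap();
    assert_eq!(parts, ["example.org", "443", "2606:2800:220:1:248:1893:25c8:1946", "443"]);
}
```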
This command should not result in this error. I also tried without the header, and the URL is parsed correctly; only when the header is added does the command stop working.
> oha -H "foo: bar" http://localhost:8080/
error: The following required arguments were not provided:
<url>
USAGE:
oha [FLAGS] [OPTIONS] <url>
And the headers flag -H resides in the OPTIONS section, so this should be fine. -H "foo: bar" was taken from the example in the help.
oha version 0.4.7
It would be interesting to emit logs that are compatible with vegeta.
https://github.com/tsenart/vegeta
$ ./oha --version
oha 0.4.3
$ ./oha http://127.0.0.1:8888
Error: Error parsing resolv.conf: InvalidOption(17)
$ cat /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "resolvectl status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 trust-ad
search onsec.ru lan
❯ cargo run
Finished dev [unoptimized + debuginfo] target(s) in 0.06s
Running `target/debug/oha`
error: The following required arguments were not provided:
<url>
USAGE:
oha <url> --fps <fps> --method <method> -n <n-requests> -c <n-workers> --redirect <redirect>
For more information try --help
It would be better to show:
USAGE:
oha <url>
Summary:
Success rate: 1.0000
Total: 199.9830 secs
Slowest: 0.1966 secs
Fastest: 0.0112 secs
Average: 0.0165 secs
Requests/sec: 50.0042
Total data: 40.45 MiB
Size/request: 4.14 KiB
Size/sec: 207.10 KiB
Response time histogram:
0.003 [4391] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.007 [3777] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.010 [1398] |■■■■■■■■■■
0.014 [178] |■
0.017 [52] |
0.021 [18] |
0.024 [37] |
0.028 [39] |
0.031 [25] |
0.035 [7] |
0.038 [78] |
Latency distribution:
10% in 0.0128 secs
25% in 0.0136 secs
50% in 0.0150 secs
75% in 0.0173 secs
90% in 0.0196 secs
95% in 0.0212 secs
99% in 0.0401 secs
Details (average, fastest, slowest):
DNS+dialup: 0.0006 secs, 0.0001 secs, 0.0019 secs
DNS-lookup: 0.0000 secs, 0.0000 secs, 0.0002 secs
Status code distribution:
[200] 10000 responses
It says that the fastest is 0.0112 secs, but the response time histogram contains lower values: 0.003, 0.007, and 0.010, all of them lower than the fastest 0.0112. How can this happen?
P.S. oha was executed with the following params:
$ ./oha-linux-amd64 -V
oha 0.5.0
$ ./oha-linux-amd64 $'http://127.0.0.1:8080/api/graphql?orgId=77129' \
-H $'ACCEPT: */*' \
-H $'ACCEPT-ENCODING: gzip, deflate, br' \
-H $'ACCEPT-LANGUAGE: en-US,en;q=0.9' \
-H $'HOST: experiment.amplitude.com' \
-H $'REFERER: https://experiment.amplitude.com/ford/296855/config/3818/overview' \
-H $'USER-AGENT: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36' \
-H $'CONTENT-TYPE: application/json' \
-H $'X-FORWARDED-FOR: 94.192.0.171' \
-H $'X-FORWARDED-PROTO: https' \
-H $'X-FORWARDED-PORT: 443' \
-H $'X-AMZN-TRACE-ID: Root=1-623b20da-1a78a5ad4133d446797ec370' \
-H $'SEC-CH-UA: " Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"' \
-H $'SEC-CH-UA-MOBILE: ?0' \
-H $'SEC-CH-UA-PLATFORM: "macOS"' \
-H $'ORIGIN: https://experiment.amplitude.com' \
-H $'SEC-FETCH-SITE: same-origin' \
-H $'SEC-FETCH-MODE: cors' \
-H $'SEC-FETCH-DEST: empty' \
-H $'COOKIE: corp_utm={%22utm_source%22:%22adwordsb%22%2C%22utm_medium%22:%22ppc%22%2C%22utm_campaign%22:%22Search_EMEA_UK_EN_Brand%22%2C%22utm_content%22:%22Brand_Exact%22%2C%22utm_term%22:%22amplitude%22%2C%22gclid%22:%22Cj0KCQiA3rKQBhCNARIsACUEW_bybYh1mI2zCPPjvvkkGlgLLvjFCQ0b9eoqpZmNu8CZN9PnOn9RgOUaAgXmEALw_wcB%22%2C%22blaid%22:%22%22%2C%22referrer%22:%22https://www.google.com/%22%2C%22referring_domain%22:%22www.google.com%22}; amp_9ff40c=Jy-igGuAUSYA6WsRm7kQS8...1fs1kjeqs.1fs1kjerb.0.2.2; __utmzz=utmcsr=google|utmcmd=organic|utmccn=(not set)|utmctr=(not provided); __utmzzses=1; amp_e3e918=...0.0.0.0.0; CookieControl={"necessaryCookies":["__utmzz","__utmzzses","corp_utm","membership_token_*","wordpress_pricing_page_uuid","wordpress_pricing_page_variant","wordpress_pricing_page_uuid_http_only","wordpress_pricing_page_variant_http_only"],"optionalCookies":{"performance":"accepted","functional":"accepted","advertising":"accepted"},"statement":{"shown":true,"updated":"25/04/2018"},"consentDate":1645027639419,"consentExpiry":90,"interactedWith":true,"user":"782D5A6E-935A-4FC6-A1FF-CDFFBA494BD7"}; amp_e3e918_amplitude.com=Jy-igGuAUSYA6WsRm7kQS8...1fs1kjg38.1fs1kjg44.7.3.a; _mkto_trk=id:138-CDN-550&token:_mch-amplitude.com-1645027639544-70830; _rdt_uuid=1645027639614.88316fc4-a3d7-4116-81d8-d13d928da49d; _ga=GA1.2.1995760543.1645027639; _biz_uid=4c1a4c78f8924bc1d9f8d406bb6cb30f; _biz_nA=3; _biz_flagsA=%7B%22Version%22%3A1%2C%22Mkto%22%3A%221%22%2C%22ViewThrough%22%3A%221%22%2C%22XDomain%22%3A%221%22%7D; _biz_pendingA=%5B%5D; _ga_2FY44PPV92=GS1.1.1645087713.2.0.1645087713.60; org_login_production="2|1:8|10:1647855916|20:org_login_production|116:eyJlbWFpbCI6ImxtY2dyYXQ4QGZvcmQuY29tIiwibG9naW5zIjpbeyJvcmdfaWQiOjc3MTI5LCJ0aW1lIjoxNjQ3ODU1OTE2LjIxNTQ1MTQ3OX1dfQ==|b3de52d364947420d9a6e2255ed6f6c6e5181b4c16c82ed6f6497c9cf095f715"; 
access_token_production=2|1:8|10:1647855916|23:access_token_production|48:NWU2ODNiNjUtZDNlNC00OTQ5LTk3MGItYjYzYzYxMjRmYmMz|48363b3f62e6cf4d49d38f8f3b2cd179b65d21b540fc59c421fb28003c67d5fe; amp_e5a2c9=ZP09Aj69GRhOvJDBIGm41h...1furej3on.1furej3os.l.i.17; amp_7f21dc=OdeEULvqqrw1AiAEjRT9x6...1furekh1d.1furekh1d.0.0.0; intercom-session-gjvo8fgi=d0dWcUluVENXbE5IVm5LMWpRQVpzRUd2azlreWxKRXZtSHFKSUhPRjZBWXhCK3BCSHRqUndiMzN1N1gwdytLYi0tL2E4SThrTEpiNVQ1dnBOMFBudm5BZz09--a648b8f55ebb74baa798dabe54fbd5a1531cce86; amp_e5a2c9_amplitude.com=UbutnSW8aK-u0AkJodB6LK.bG1jZ3JhdDhAZm9yZC5jb20=..1furej3p5.1furev6kd.1sf.jo.2g7; amp_6d2283=uqpetFBJEpQYUMDcOQ8kqZ.bG1jZ3JhdDhAZm9yZC5jb20=..1furej3p3.1furev6kh.6l.g1.mm; amp_fb0efa=DOctVsiLXCSJa1RIc-_XG2.bG1jZ3JhdDhAZm9yZC5jb20=..1furej3p0.1furff3d5.55q.gb.5m5; amp_7f21dc_amplitude.com=OdeEULvqqrw1AiAEjRT9x6.bG1jZ3JhdDhAZm9yZC5jb20=..1furekh1p.1furfgl02.di.8r.md; amp_99dd8b=X8LeGQH6I5XD5G0FNPUDqF.bG1jZ3JhdDhAZm9yZC5jb20=..1furekh1h.1furfgl12.4i.8r.dd' \
-d $'{"operationName":"flagKeysInEnv","variables":{"withDeleted":true,"projectId":"296855"},"query":"query flagKeysInEnv($projectId: ID!, $deploymentId: ID, $withDeleted: Boolean = false) {\\n configs: flagConfigsInEnv(\\n projectId: $projectId\\n deploymentId: $deploymentId\\n withDeleted: $withDeleted\\n ) {\\n key\\n __typename\\n }\\n}\\n"}' \
-n 10000 \
-q 50
https://docs.rs/async-h1/1.0.1/async_h1/
async_h1's API looks good.