quic-interop-runner's People

Contributors

a-denoyelle, anrossi, fierralin, fiestajetsam, gfx, ghedo, haproxyfred, huitema, jaikiran, janaiyengar, jlaine, junhochoi, kulsk, larseggert, lnicco, marten-seemann, martinthomson, mpiraux, neild, nibanks, ptrd, ralith, route443, ruiqizhou, sedrubal, stammw, tatsuhiro-t, thresheek, wesleyrosenblum, yanmei-liu

quic-interop-runner's Issues

Add version variable

We should consider adding something to indicate the version supported by the interop runner.

At a minimum, supplying version as a variable allows implementations to return with not_supported during transition periods between versions.

Eventually, this can be turned into a compliance test.
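
A rough sketch of what this could look like, shown in Python for brevity; the variable name VERSION, the example version string, and the exit-code convention are assumptions, not the runner's documented interface:

# runner side: export the targeted version to both endpoints, next to the
# existing TESTCASE/REQUESTS/... variables in the docker-compose invocation
env = "VERSION=0xff00001d TESTCASE=handshake ..."   # illustrative only

# implementation side: bail out early if the requested version is unsupported,
# analogous to how an unsupported test case is reported
import os
import sys

SUPPORTED = {"0xff00001d"}                  # placeholder version set
if os.environ.get("VERSION") not in SUPPORTED:
    sys.exit(127)                           # "not supported" exit code is an assumption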

Say which files were not transferred

Instead of just saying "Downloaded the wrong number of files. Got 49, expected 50.", would it be possible to print which ones were not downloaded? I used to look at the served and downloaded files, but those are no longer part of the runner output.

CC @nibanks
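
A minimal sketch of how the runner could report the names, assuming it still has access to the served (WWW) and downloaded (DOWNLOADS) directories it compares, and that both are flat:

import logging
import os

def missing_files(www_dir: str, download_dir: str) -> list:
    """Names of served files that were not downloaded."""
    served = set(os.listdir(www_dir))
    downloaded = set(os.listdir(download_dir))
    return sorted(served - downloaded)

missing = missing_files(www_dir, download_dir)
if missing:
    # e.g. instead of only "Downloaded the wrong number of files. Got 49, expected 50."
    logging.info("Missing files: %s", missing)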

Byte count in zerortt scenario looks funny.

I am trying to debug why the Z scenario is marked as failing between the picoquic client and server in the interop run started 4/7/2020 4:21:05 AM. The output.txt file says:

2020-04-08 03:24:37,236 0-RTT size: 11119
2020-04-08 03:24:37,236 1-RTT size: 5462
2020-04-08 03:24:37,236 Client sent too much data in 1-RTT packets.

The weird part is that the client sent less than 3K in 1-RTT packets, not 5.4K. How is the byte count computed?
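
For reference, a rough sketch of one way such a split could be computed from the client-side trace with pyshark. The field names and the use of the full frame length (which includes IP/UDP headers, and counts a whole datagram even when 0-RTT and 1-RTT packets are coalesced) are assumptions; if the runner does something similar, coalescing and header overhead could explain a larger-than-expected 1-RTT number:

import pyshark

pcap_path = "trace_node_left.pcap"            # example trace file
cap = pyshark.FileCapture(pcap_path, display_filter="quic && ip.src == 193.167.0.100")
zero_rtt_bytes = one_rtt_bytes = 0
for p in cap:
    size = int(p.length)                      # full frame length
    if hasattr(p.quic, "long_packet_type"):
        if p.quic.long_packet_type == "1":    # 0-RTT
            zero_rtt_bytes += size
    else:
        one_rtt_bytes += size                 # short header -> 1-RTT
cap.close()
print("0-RTT size:", zero_rtt_bytes, "1-RTT size:", one_rtt_bytes)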

improve logging output

Removing all the pyshark output definitely helped, but I'd like a bit more. For instance, take this output:

Saving logs to ./logs.
Expected exactly 1 handshake. Got: 3
Expected at least 2 ClientHellos. Got: 0
Expected exactly 2 handshake. Got: 4
Missing files: ['dleshiaewf']
Expected 50 handshakes. Got: 46
Missing files: ['wjrfeuzvmi']
Expected 50 handshakes. Got: 48
Missing files: ['arwuloafqh']
Missing files: ['nxgwhmfkkg']
Missing files: ['htjehiteld']
Run took 0:11:58.096941
Server: quant. Client: msquic. Running test case: handshake
Server: quant. Client: msquic. Running test case: transfer
Server: quant. Client: msquic. Running test case: longrtt
Server: quant. Client: msquic. Running test case: chacha20
Server: quant. Client: msquic. Running test case: multiplexing
Server: quant. Client: msquic. Running test case: retry
Server: quant. Client: msquic. Running test case: resumption
Server: quant. Client: msquic. Running test case: zerortt
Server: quant. Client: msquic. Running test case: http3
Server: quant. Client: msquic. Running test case: blackhole
Server: quant. Client: msquic. Running test case: handshakeloss
Server: quant. Client: msquic. Running test case: transferloss
Server: quant. Client: msquic. Running test case: handshakecorruption
Server: quant. Client: msquic. Running test case: transfercorruption
Server: quant. Client: msquic. Running test case: goodput
Server: quant. Client: msquic. Running test case: crosstraffic
+--------+------------------+
|        |      quant       |
+--------+------------------+
| msquic |        H         |
|        |      Z3C20       |
|        | LRRDCSMC1L1BC2L2 |

IMO, things would be a lot better if the output was modified so that it prints which test is being run, then any errors, and then the final result. Something more like:

Saving logs to ./logs.

Server: quant. Client: msquic. Running test case: handshake
Success

Server: quant. Client: msquic. Running test case: transfer
Expected exactly 1 handshake. Got: 3
Failure

Server: quant. Client: msquic. Running test case: longrtt
Expected at least 2 ClientHellos. Got: 0
Failure
...
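
A minimal sketch of that grouping, assuming the checks log through Python's logging module (which the timestamped output in other issues suggests) and that the run loop can buffer those records per test case; handler choices and names here are made up:

import logging
from logging.handlers import MemoryHandler

def run_testcase(server, client, testcase):
    print(f"Server: {server}. Client: {client}. Running test case: {testcase}")
    # buffer everything the checks log instead of letting it hit the console
    # immediately (the default console handler would have to be detached)
    buf = MemoryHandler(capacity=10000, flushLevel=logging.CRITICAL,
                        target=logging.StreamHandler())
    logging.getLogger().addHandler(buf)
    try:
        succeeded = execute(server, client, testcase)   # placeholder for the real run
    finally:
        buf.flush()                                     # emit buffered messages under the header
        logging.getLogger().removeHandler(buf)
    print("Success" if succeeded else "Failure")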

fyi, @marten-seemann I don't seem to have the permissions to reactivate the issue.

Originally posted by @nibanks in #158 (comment)

add a test case for HTTP/3 Push

Discussed with @martinthomson on Slack: Apparently Firefox wants to have seen the URL before it accepts a Push.
We could generate n files to be pushed, and a .txt file that contains links to these files.
We'd then tell the client the URL of the .txt file. The server would then be expected to push these resources to the client.

How does this sound? quic-go doesn't have a push implementation (yet), so I can't really judge what makes sense here. @mjoras, @ghedo, @kazuho, @huitema, @tatsuhiro-t Any opinions on this?
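
A sketch of the server-side test setup as I imagine it (file sizes, the index file name, and the URL layout are all placeholders):

import os
import random
import string

def random_name(k=10):
    return "".join(random.choices(string.ascii_lowercase, k=k))

def generate_push_setup(www_dir, n=5, size=1024):
    pushed = []
    for _ in range(n):
        name = random_name()
        with open(os.path.join(www_dir, name), "wb") as f:
            f.write(os.urandom(size))
        pushed.append(name)
    index = random_name() + ".txt"
    with open(os.path.join(www_dir, index), "w") as f:
        f.write("\n".join("https://server:443/" + name for name in pushed))
    # the client is told only the index URL; the server is expected to push the rest
    return index, pushed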

flake8 suddenly failing

interop.py:104:57: E741 ambiguous variable name 'l'
interop.py:105:45: E741 ambiguous variable name 'l'
interop.py:384:64: E741 ambiguous variable name 'l'

The only reasonable explanation is that they updated the linter, and we're always installing the newest version on CI.

Clean up Test Output

There's a bunch of "noise" spit out during the tests that not only clutters things up but also confuses the user (me) and the CI system (Azure Pipelines), and sometimes gets treated as a failure. For instance:

Saving logs to ./logs.
Expected at least 2 ClientHellos. Got: 0
Expected only ChaCha20 cipher suite to be offered. Got: set()
Exception ignored in: <bound method Capture.__del__ of <FileCapture /tmp/logs_sim_dafz6mfg/trace_node_left.pcap>>
Traceback (most recent call last):
  File "/home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py", line 435, in __del__
    self.close()
  File "/home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py", line 426, in close
    self.eventloop.run_until_complete(self.close_async())
  File "/usr/lib/python3.5/asyncio/base_events.py", line 375, in run_until_complete
    self.run_forever()
  File "/usr/lib/python3.5/asyncio/base_events.py", line 340, in run_forever
    raise RuntimeError('Event loop is running.')
RuntimeError: Event loop is running.
Task exception was never retrieved
future: <Task finished coro=<Capture.close_async() done, defined at /home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py:428> exception=TSharkCrashException('TShark seems to have crashed (retcode: 2). Try rerunning in debug mode [ capture_obj.set_debug() ] or try updating tshark.',)>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py", line 430, in close_async
    await self._cleanup_subprocess(process)
  File "/home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py", line 423, in _cleanup_subprocess
    % process.returncode)
pyshark.capture.capture.TSharkCrashException: TShark seems to have crashed (retcode: 2). Try rerunning in debug mode [ capture_obj.set_debug() ] or try updating tshark.
Expected exactly 2 handshake. Got: 0
Task exception was never retrieved
future: <Task finished coro=<Capture._get_tshark_process() done, defined at /home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py:375> exception=TSharkCrashException("TShark seems to have crashed. Try updating it. (command ran: '/usr/local/bin/tshark -l -n -T pdml -Y (quic && !icmp) && ip.src==193.167.100.100 && quic.long.packet_type -d udp.port==443,quic -r /tmp/logs_sim_rah4eln3/trace_node_right.pcap')",)>
Traceback (most recent call last):
  File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
    result = coro.send(None)
  File "/home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py", line 396, in _get_tshark_process
    self._created_new_process(parameters, tshark_process)
  File "/home/vsts/.local/lib/python3.5/site-packages/pyshark/capture/capture.py", line 404, in _created_new_process
    process_name, " ".join(parameters)))
pyshark.capture.capture.TSharkCrashException: TShark seems to have crashed. Try updating it. (command ran: '/usr/local/bin/tshark -l -n -T pdml -Y (quic && !icmp) && ip.src==193.167.100.100 && quic.long.packet_type -d udp.port==443,quic -r /tmp/logs_sim_rah4eln3/trace_node_right.pcap')
File size of /tmp/download_kxavf_0v/ppzgwecuih doesn't match. Original: 1024 bytes, downloaded: 0 bytes.

IMO, output here should not be present unless it's a useful status message or a legitimate error. For instance:

Saving logs to ./logs.
Running test X.
SUCCESS
Running test Y.
SUCCESS
Running test Z.
FAILURE: Expected at least 2 ClientHellos. Got: 0
...

For those completely new to all this, it would make things a lot easier to process and understand.
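
Much of the noise above is pyshark teardown output. A sketch of one way to keep it off the console: close each FileCapture explicitly (so pyshark's __del__ never has to, which is where the "Exception ignored in ..." lines come from), swallow the teardown error, and silence asyncio's "Task exception was never retrieved" logger; where exactly this would hook into the runner is an assumption:

import logging

logging.getLogger("asyncio").setLevel(logging.CRITICAL)

def quiet_close(cap):
    try:
        cap.close()
    except Exception as exc:              # e.g. pyshark's TSharkCrashException
        logging.debug("ignoring pyshark teardown error: %s", exc)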

Add performance test for when network has reordering

We could implement a perf test that introduces reordering based on known reordering patterns in the wild. There's the common 1-packet swap, and then one packet jumping way ahead of the others. These are the common patterns I've seen. I think this would be best done as a separate test implementation in ns3.

Thanks @ghedo for the suggestion.

Interop runner counts one connection attempt too many when parsing logs

Looking at the logs of this handshake loss trial, I find that the test failed with "Expected 50 handshakes. Got: 51". Yet, the client and server logs only mention 50 connections, and the client downloaded the expected 50 files.

The test of the number of handshakes is probably mistaking a failed attempt for a successful one, leading to a false negative in the result. False negatives are unhelpful. They waste time when people investigate, and they push people to just dismiss test results without investigating them further.

Filtered views

The results presentation continues to grow with new test cases and implementations. This is a mark of success! However, it's getting a bit unwieldy, and I predict it will continue to do so. So I have a feature request that I think will benefit a typical workflow for many people who are using the results for their implementation.

As a project maintainer,
I'd like to be able to filter out other projects
So that I can quickly scan the results of my [client, server].

For example, at the time of writing there are 12 server implementations but I mostly care about quiche, which is in the middle. It would be easier if I could filter the other servers out and have the following

[Filtered matrix (screenshot)]

There are a bunch of ways this could be done. The first one that springs to mind is to have a collapsed section just above the results matrix that contains filters. This section contains a checkbox for each server and client implementation. By default, there is an "all" checkbox that is selected. Users can check one or more client and server implementations, and the matrix will hide unchecked implementations.

Bonus points: make the filters part of the URL, so that I can bookmark or share them without having to manually apply filters each time.

Add testcase descriptions to results json

Could we add some descriptions of the testcases to the results.json file? I would then grab them and present them on the web page. Doing it this way (i.e., getting them from the code) means that they can be maintained in the place where it makes the most sense.
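
A small sketch of what that could look like, assuming each test case class keeps a short description in its docstring and exposes something like a name() method (class/attribute names are assumptions):

def export_descriptions(testcases) -> dict:
    return {tc.name(): (tc.__doc__ or "").strip() for tc in testcases}

# when writing results.json:
out["test_descriptions"] = export_descriptions(TESTCASES)   # TESTCASES: the runner's test list (assumed name)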

speed up interop run by caching the wireshark build

We can cache the Wireshark executable we build.
To make sure we get a recent enough build, we can save the minimum git commit that we require for Wireshark to function, and check the Wireshark commit history to see if the cached build originated after that commit.
Note that this involves checking out the Wireshark repository every time, but this is fast (~20s) compared to building Wireshark (which requires checking out the repo plus ~6 minutes of build time).
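
A sketch of the freshness check, assuming the cache also stores the commit hash the cached Wireshark was built from:

import subprocess

MIN_REQUIRED_COMMIT = "<oldest wireshark commit we depend on>"   # placeholder

def cache_is_fresh(wireshark_repo: str, cached_commit: str) -> bool:
    """True if the cached build already contains the commit we require."""
    # exit status 0 means MIN_REQUIRED_COMMIT is an ancestor of cached_commit
    return subprocess.run(
        ["git", "-C", wireshark_repo, "merge-base", "--is-ancestor",
         MIN_REQUIRED_COMMIT, cached_commit]
    ).returncode == 0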

prevent cheating on the multiplexing test

We should validate that servers set a (bidirectional) stream limit smaller than the number of files transferred. Otherwise, we can't test that MAX_STREAMS frames are sent and acted upon.

Support endpoints in addition to Docker images

Some implementations may not be able to publish a Docker container for various reasons, but are able to host a publicly accessible server instance. It would be nice to be able to supply a hostname/port as an alternative.

-r option to run.py and BYO images

So I build my own docker image and run this to test that the image is basically functional:

./run.py -r <name>=<somehash> -c <name> -s <name> -t handshake -d

This works the first time. Great.

I have a few things to tweak in my image, so I delete the old image (with docker rmi -f <somehash>) and regenerate it.

Running run.py again with the updated hash stalls indefinitely. After some investigation, what happens is that docker-compose is stalling on a prompt, e.g.:

WARNING: The SERVER variable is not set. Defaulting to a blank string.
WARNING: The SERVER_PARAMS variable is not set. Defaulting to a blank string.
WARNING: The TESTCASE_SERVER variable is not set. Defaulting to a blank string.
WARNING: The CLIENT_PARAMS variable is not set. Defaulting to a blank string.
WARNING: The REQUESTS variable is not set. Defaulting to a blank string.
sim is up-to-date
Recreating 7370afc20c25_client ... error

ERROR: for 7370afc20c25_client  no such image: sha256:f2365d3874d02cd068be2f9743490dd89fbbba2bfed2d13cabc3815688cb2d3e: No such image: sha256:f2365d3874d02cd068be2f9743490dd89fbbba2bfed2d13cabc3815688cb2d3e

ERROR: for client  no such image: sha256:f2365d3874d02cd068be2f9743490dd89fbbba2bfed2d13cabc3815688cb2d3e: No such image: sha256:f2365d3874d02cd068be2f9743490dd89fbbba2bfed2d13cabc3815688cb2d3e
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.

Continue with the new image? [yN]

The workaround is to run docker-compose rm as well when removing images. But adding -V to the docker-compose arguments seems to fix the problem cleanly.
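
In other words, the fix would amount to something like the following (the exact command string in interop.py is assumed from the command the runner logs):

cmd = (
    env_variables    # TESTCASE=..., WWW=..., SERVER=..., etc.
    + " docker-compose up -V --abort-on-container-exit --timeout 1 sim client server"
)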

Website Needs Legend

I've shown https://interop.seemann.io/ to a few coworkers and most can't tell at a glance what everything is, both for the Interop and Measurements tables. There should be some way, either via a legend directly on the page or a link to the info somewhere else, for readers to get all the necessary explanation of what information is being displayed.

On a side note, the page seems to be getting really wide. It might be worth modifying the way the results are displayed in each cell so that it all fits in the width of the average monitor.

another pyshark error

Server: quicly. Client: quicgo. Running test case: retry
Traceback (most recent call last):
  File "run.py", line 66, in <module>
    tests=get_tests(get_args().test),
  File "/Users/jiyengar/quic-interop-runner/interop.py", line 211, in run
    status = self._run_testcase(server, client, testcase)
  File "/Users/jiyengar/quic-interop-runner/interop.py", line 170, in _run_testcase
    if testcase.check():
  File "/Users/jiyengar/quic-interop-runner/testcases.py", line 208, in check
    return self._check_trace()
  File "/Users/jiyengar/quic-interop-runner/testcases.py", line 193, in _check_trace
    if p.quic.long_packet_type != "0" or p.quic.token_length == "0":
  File "/usr/local/Cellar/python/3.7.4_1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/pyshark/packet/packet.py", line 126, in __getattr__
    raise AttributeError("No attribute named %s" % item)
AttributeError: No attribute named quic

Certificate authority settings

I was looking in the documentation for guidelines regarding server certificates, but couldn't find anything. Is the expectation that they be self-signed and that clients should trust any root or should they be issued by a well-known authority?

If the expectation is trusting any root, could we instead expose a SERVER_CA_PATH environment variable to clients? This would get us closer to how clients will operate in the wild.

Thanks!

CPU Limited Throughput Test

It'd be interesting to see what throughput we're getting between the different implementations when not limited by the network. It might help identify interop performance issues. If you're not already running on real HW, this would most likely require it in order to get consistent measurements. Additionally, logging would likely have a big impact on perf and might need to be disabled, though obviously that'd make it difficult to debug any issues.

Command Line Option to Manually Specify Image Tag

For (automated) testing purposes on PR runs (not official, main-line branch commits) we'd like to be able to run the interop runner in our CI system. But to do that, we would need to point the interop runner to a different tag than latest. So, it would be nice to have the ability to specify, on the command line, that a given implementation's tag (or full image path) should be replaced with something else.

cc @anrossi.

add a HTTP/3 POST test case

Servers could implement a SHA256 endpoint (for example), and the client would save the hash to a file to be verified by the interop runner.
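
A sketch of the runner-side check under one possible layout: the server replies with the hex SHA-256 of the POSTed body, and the client stores that reply under DOWNLOADS using the same file name as the uploaded file (all of this layout is an assumption):

import hashlib
import os

def check_post(www_dir: str, download_dir: str, filename: str) -> bool:
    with open(os.path.join(www_dir, filename), "rb") as f:
        expected = hashlib.sha256(f.read()).hexdigest()
    with open(os.path.join(download_dir, filename)) as f:
        got = f.read().strip()
    return got == expected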

sim image binaries use AVX instructions

https://en.wikipedia.org/wiki/Advanced_Vector_Extensions

This causes the test to fail with "illegal instruction" on machines prior to Intel SandyBridge and AMD Bulldozer. These came out around 10 years ago.

The workaround (I believe) is rebuilding the sim and iperf-endpoint containers on the machine without AVX, which causes AVX instructions not to be emitted. I'm not sure how much AVX is really needed, but given the workaround and the age of the hardware, opening this issue is really just to make you aware in the unlikely event someone else reports this.

Need Post-Test Step for Converting Logs

Right now, MsQuic cannot collect logs because it requires commands to be executed after the test runs. But the current setup runs the server continuously and then just kills the container. This never gives us a chance to collect/convert our log files. We need either:

  • An explicit new step/script to be run after killing the server process.
  • Or, an explicit signal to shutdown the server and then time enough to run the necessary scripts after the server app process exits.

handshake count is inaccurate

Counting the number of unique DCIDs leads to overcounting if an implementation immediately acts upon receipt of an NCID frame and then retransmits a Handshake packet using the new DCID.
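
One way around this might be to count only client-sent Initial packets and their distinct DCIDs instead of every Handshake packet; the quic.long.packet_type and quic.dcid fields are the ones already used elsewhere on this page, but the filter and the approach itself are assumptions:

import pyshark

def count_handshakes(pcap_path: str, client_ip: str) -> int:
    cap = pyshark.FileCapture(
        pcap_path,
        display_filter=f"quic.long.packet_type == 0 && ip.src == {client_ip}",
    )
    dcids = {p.quic.dcid for p in cap}
    cap.close()
    return len(dcids)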

Version Negotiation test doesn't work

The problem is that the simulator container buffers the writing of the pcap, which is why we end up with a pcap that doesn't contain the Version Negotiation packet, although it was both sent and received.

Windows Docker Image Support

It'd be really nice to test out our Windows code here, since it uses totally different TLS, Crypto and UDP stacks. I'm not exactly sure how it would all work, but you might be able to get things working by making the host machine run Windows and then you can host either Windows or Linux (via WSL2) VMs.

crash while counting handshakes

Using selector: KqueueSelector
Traceback (most recent call last):
  File "run.py", line 89, in <module>
    output=get_args().json,
  File "/Users/marten/src/quic-interop-runner/interop.py", line 334, in run
    status = self._run_testcase(server, client, testcase)
  File "/Users/marten/src/quic-interop-runner/interop.py", line 203, in _run_testcase
    return self._run_test(server, client, sim_log_dir, None, testcase)
  File "/Users/marten/src/quic-interop-runner/interop.py", line 271, in _run_test
    if testcase.check():
  File "/Users/marten/src/quic-interop-runner/testcases.py", line 422, in check
    num_handshakes = self._count_handshakes()
  File "/Users/marten/src/quic-interop-runner/testcases.py", line 114, in _count_handshakes
    conn_ids = [ p.quic.dcid for p in cap_handshakes ]
  File "/Users/marten/src/quic-interop-runner/testcases.py", line 114, in <listcomp>
    conn_ids = [ p.quic.dcid for p in cap_handshakes ]
  File "/usr/local/lib/python3.7/site-packages/pyshark/packet/packet.py", line 126, in __getattr__
    raise AttributeError("No attribute named %s" % item)
AttributeError: No attribute named quic
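
The immediate crash could be avoided by making the offending line defensive, since not every captured packet exposes a quic layer (or a dcid field) to pyshark; only the guard is the point here, the surrounding code is assumed:

conn_ids = [
    p.quic.dcid
    for p in cap_handshakes
    if hasattr(p, "quic") and hasattr(p.quic, "dcid")
]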

Define Acceptable Ranges for Measurement Values

We should define acceptable ranges for both the goodput and cross-traffic measurements. Successful values in those ranges would continue to be marked as green on the interop site. Successful values outside of those ranges would be marked as yellow or orange.

At least for now, we should probably go with conservative ranges. IMO, these should be good:

goodput: x >= 7500
cross-traffic: 3500 <= x <= 6500

Ideally, I'd prefer values closer to:

goodput: x >= 9000
cross-traffic: 4250 <= x <= 5750
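
A sketch of the classification using the conservative ranges above (units are whatever the runner already reports for these measurements; the color names just follow this proposal):

def classify(measurement: str, value: float) -> str:
    in_range = {
        "goodput": lambda x: x >= 7500,
        "crosstraffic": lambda x: 3500 <= x <= 6500,
    }.get(measurement, lambda x: True)
    return "green" if in_range(value) else "yellow"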

Specify directory for structured logs

Currently, the README specifies where console log output is written.
However, no guidance is provided for where to potentially store other types of logs, such as for example qlog output.

I am not sure if this is necessary, since containers could just choose themselves to write to /logs/#server_#client/#testcase/server/qlog or equivalent, but maybe some more "default guidance" is welcome if we want to automate log extraction / aggregation / visualization in the future.

Repeat test until it fails

Would love to see a feature that re-runs a set of tests until at least one fails. Alternatively, make run.py exit with an error code, so I can shell-script a loop around it.
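
Assuming run.py is changed to exit non-zero when at least one test fails (which is what this asks for), a wrapper could be as simple as the following; the server/client/test arguments are just an example using the -s/-c/-t flags shown elsewhere on this page:

import subprocess
import sys

runs = 0
while True:
    runs += 1
    print(f"--- run {runs} ---")
    result = subprocess.run(
        [sys.executable, "run.py", "-s", "quant", "-c", "msquic", "-t", "handshake"]
    )
    if result.returncode != 0:
        print(f"failure after {runs} runs")
        break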

Show run end time and/or estimated current run completion time

I find myself looking at the start time and adding the duration to see how fresh the displayed results are, and just showing the end time would make life easier. (I still have to convert UTC to my local time, but whatever)

Also, since the end time of one run is the start of the next run, adding duration to the end time would give an estimate of when the current run will complete.

pip3 install -r requirements.txt Fails

I'm using https://labs.play-with-docker.com/ to try this out, and I get the following error:

$ pip3 install -r requirements.txt
Collecting pycrypto
  Downloading pycrypto-2.6.1.tar.gz (446 kB)
     |████████████████████████████████| 446 kB 26.5 MB/s 
Collecting termcolor
  Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Collecting prettytable
  Downloading prettytable-0.7.2.tar.bz2 (21 kB)
Collecting pyshark
  Downloading pyshark-0.4.2.11-py3-none-any.whl (30 kB)
Collecting lxml
  Downloading lxml-4.5.1.tar.gz (4.5 MB)
     |████████████████████████████████| 4.5 MB 17.1 MB/s 
    ERROR: Command errored out with exit status 1:
     command: /usr/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-9liw8idl/lxml/setup.py'"'"'; __file__='"'"'/tmp/pip-install-9liw8idl/lxml/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-un5de16c
         cwd: /tmp/pip-install-9liw8idl/lxml/
    Complete output (3 lines):
    Building lxml version 4.5.1.
    Building without Cython.
    Error: Please make sure the libxml2 and libxslt development packages are installed.
    ----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

My Linux knowledge isn't great. What else should be installed? Should the requirements.txt be updated?

trace analyzer doesn't handle coalesced packets correctly

For example, a packet is only recognized as a Handshake packet if the first coalesced packet in the datagram is of type Handshake. When that is the case, the whole datagram is returned.
We should probably "split" packets before we return them. That means that we won't have access to the UDP header at that point any more, but we're not using that anyway at the moment.
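
pyshark can already hand back every coalesced QUIC packet in a frame as a separate layer, so the split might look roughly like this (whether the dissector separates coalesced packets this way in every case is an assumption, and handle_handshake_packet is a hypothetical handler):

for p in cap:
    for layer in p.get_multiple_layers("quic"):      # one layer per coalesced packet
        if hasattr(layer, "long_packet_type") and layer.long_packet_type == "2":
            handle_handshake_packet(p, layer)        # "2" = Handshake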

Client may start earlier than Server

Sometimes the client container and its script start before the server, making the client fail before the server is up.

The attached log (output.txt) is one example from a quiche-quiche multiplexing (goodput) test. Here the client (quiche client) fails because there is no server running on the other side.

I can put a sleep before the client script, but it would be great if the runner could make sure the server always starts first.

2020-02-05 10:07:42,839 Generated random file: sqjhlalpyl of size: 10485760
2020-02-05 10:07:42,840 Requests: https://server:443/sqjhlalpyl
2020-02-05 10:07:42,840 Command: TESTCASE=transfer WWW=/tmp/www_yq2lzhz4/ DOWNLOADS=/tmp/download_l0z_1bz3/ SERVER_LOGS=/tmp/logs_server_ayl83vm7 CLIENT_LOGS=/tmp/logs_client_usgjj_0i SCENARIO="simple-p2p --delay=15ms --bandwidth=10Mbps --queue=25" CLIENT=cloudflare/quiche-qns:latest SERVER=cloudflare/quiche-qns:latest REQUESTS="https://server:443/sqjhlalpyl"  docker-compose up --abort-on-container-exit --timeout 1 sim client server
2020-02-05 10:07:46,808 The SERVER_PARAMS variable is not set. Defaulting to a blank string.
The CLIENT_PARAMS variable is not set. Defaulting to a blank string.
Starting sim ... done
Recreating server ... done
Recreating client ... done
Attaching to sim, client, server
client          | Setting up routes...
client          | Actual changes:
client          | tx-checksumming: off
client          |       tx-checksum-ip-generic: off
client          |       tx-checksum-sctp: off
client          | tcp-segmentation-offload: off
client          |       tx-tcp-segmentation: off [requested on]
client          |       tx-tcp-ecn-segmentation: off [requested on]
client          |       tx-tcp-mangleid-segmentation: off [requested on]
client          |       tx-tcp6-segmentation: off [requested on]
client          | supported
client          | wait-for-it.sh: waiting 30 seconds for sim:57832
client          | wait-for-it.sh: sim:57832 is available after 0 seconds
client          | ## Starting quiche client...
client          | ## Client params:
client          | ## Requests: https://server:443/sqjhlalpyl
client          | ## Test case: transfer
client          | [2020-02-05T09:07:44.775629500Z INFO  quiche_client] connecting to 193.167.100.100:443 from 193.167.0.100:38526 with scid 61f627377a08628009fea10f1881c708a39c09e1
client          | thread 'main' panicked at 'recv() failed: Os { code: 111, kind: ConnectionRefused, message: "Connection refused" }', src/bin/quiche-client.rs:194:21
client          | note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
server          | Setting up routes...
server          | Actual changes:
server          | tx-checksumming: off
server          |       tx-checksum-ip-generic: off
server          |       tx-checksum-sctp: off
server          | tcp-segmentation-offload: off
server          |       tx-tcp-segmentation: off [requested on]
server          |       tx-tcp-ecn-segmentation: off [requested on]
server          |       tx-tcp-mangleid-segmentation: off [requested on]
server          |       tx-tcp6-segmentation: off [requested on]
server          | supported
server          | ## Starting quiche server...
server          | ## Server params:
server          | ## Test case: transfer
sim             | Using scenario: simple-p2p --delay=15ms --bandwidth=10Mbps --queue=25
client exited with code 101
Stopping server ... done
Stopping sim ... done
Aborting on container exit...

pyshark error

Server: quicgo. Client: quicgo. Running test case: retry
Exception ignored in: <function Capture.__del__ at 0x10b9ed3b0>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/pyshark/capture/capture.py", line 446, in __del__
    self.close()
  File "/usr/local/lib/python3.7/site-packages/pyshark/capture/capture.py", line 437, in close
    self.eventloop.run_until_complete(self.close_async())
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 566, in run_until_complete
    self.run_forever()
  File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py", line 521, in run_forever
    raise RuntimeError('This event loop is already running')
RuntimeError: This event loop is already running

add a test case that corrupts packets

I imagine this to be similar to the handshakeloss test case (as corrupted packets will show up as packet loss), although this should lead to some interesting corner cases if only one packet in a coalesced packet is corrupted.

Obviously, we'll need another network simulator scenario for this.

vneg confuses interop runner

When a client sends an Initial with an unsupported version and the server responds with a Version Negotiation packet, the parsing logic in the runner becomes confused.
