tohojo / flent
The FLExible Network Tester.
Home Page: https://flent.org
License: Other
I would like directly comparable results between different TCP congestion-control algorithms on a variety of tests.
--test-parameter=key=value looks like the right place to do that, e.g. CC={reno,cubic,whatever}.
It is not clear how to extend that to the rest of the codebase.
I like that settings has now moved to a menu item.
I have been using metadata more and more (see the bug about collecting more of it by default, without the -x option), and I think a better place for it to go by default is across the bottom (rather than the right + bottom), with space for the "name" field vastly expanded. These days I am continually tearing off the panel, then expanding the name to make it readable, and so on.
Having a way to expand all would be nice also.
The metadata proved darn useful in tearing apart some flows in the cdg results, notably in spotting the (mis)use of netem.
This captures a very fine grain of detail that is satisfying on shorter RTTs especially (this is a 10ms path)
http://snapon.lab.bufferbloat.net/~cero3/shortened_step_size_bug/shortened_stepsize_bug.svg
but the ping and udp flows last too long.
Would probably make sense to convert the documentation to Sphinx and use that for generating web site and man pages. http://sphinx-doc.org/
Hi,
I am using Python 3 and netperf 2.6.0 as mentioned here:
http://archive.tohojo.dk/apt/dists/squeeze/Release
I could successfully install "netperf-wrapper_0.5.6", but when I run the command "netperf-wrapper -H localhost ping" it takes a few seconds and then shows me the error below:
root@xenServer:/home/ahmed/Downloads# netperf-wrapper -H localhost ping
Traceback (most recent call last):
File "/usr/local/bin/netperf-wrapper", line 62, in
results[0].dump_dir(os.path.dirname(settings.OUTPUT) or ".")
File "/usr/local/lib/python3.1/dist-packages/netperf_wrapper/resultset.py", line 174, in dump_dir
fp = gzip_open(self._dump_file, "wt")
File "/usr/local/lib/python3.1/dist-packages/netperf_wrapper/util.py", line 110, in gzip_open
binary_file = _gzip_open(filename, mode)
File "/usr/lib/python3.1/gzip.py", line 132, in __init__
self.closed = False
AttributeError: can't set attribute
Exception AttributeError: "can't set attribute" in <bound method GzipFile.__del__ of <gzip on 0xb718d08c>> ignored
I am not sure of what -x collects that is actually something someone would prefer kept private.
Certainly the GATEWAY info is private-ish. Also collecting the fe80 addresses is somewhat pointless (and in my metadata duplicated once). Also the IP_ADDRS stuff is sensitive.
LOCAL_HOST, sans domain, seems private enough. With a full domain not so.
KERNEL_NAME and KERNEL_RELEASE are good to keep and I think that should always be captured but it is debatable.
As for the sysctls, well, I just added a couple useful ones.
I would like a traceroute (that is definitely a -x option) at minimum, and the original rrul spec required mtr output over the course of the test (which could now be a variant thereof). mtr can output CSV in particular.
My guess is the last value here is the result in usec.
root@nuc-client:~/public_html/renovscubic# mtr --report-wide --csv -n -c 2 snapon.lab.bufferbloat.net
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;1;172.21.2.21;248
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;2;???;0
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;3;67.180.184.1;9618
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;4;68.85.102.189;9704
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;5;162.151.79.9;9365
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;6;162.151.78.249;10241
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;7;???;0
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;8;68.86.86.166;9366
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;9;173.167.56.62;9390
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;10;149.20.65.20;9436
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;11;149.20.65.10;11771
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;12;149.20.63.30;10883
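For automated collection, the report above could be parsed with a few lines of Python. This is only a sketch: the field layout (and the guess that the last field is the RTT in microseconds) is inferred from the sample output above, not from mtr documentation.

```python
def parse_mtr_csv(lines):
    """Parse mtr --report-wide --csv output (semicolon-separated).

    Field layout guessed from the sample: version tag, timestamp,
    status, target, hop number, hop address, and a final value that
    appears to be the RTT in microseconds."""
    hops = []
    for line in lines:
        parts = line.strip().split(";")
        if len(parts) < 7 or not parts[0].startswith("MTR"):
            continue  # skip anything that is not a report row
        hops.append({
            "hop": int(parts[4]),
            "address": parts[5],
            "rtt_usec": int(parts[6]),
        })
    return hops
```

Unresponsive hops show up as `???` with a value of 0, so a consumer would want to filter those before plotting.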
I generally never use the log scales except on pfifo_fast.
If you inverted the sense of the test you could use "log scale" for the check box.
Presently, the GUI tabs represent open data files, but the feature to
combine plots can also load multiple data files in one tab. This is not
really visible, and there's only an "add more / remove all" granularity
to the files open in each tab.
Having a better way to manage which data files are open and which are
used in each plot (tab) would make flipping through many data files
easier.
Plenty of examples show timeseries aggregation, but none shows iterated aggregation.
Setting ITERATIONS to 3 for some reason requires collect() to return exactly 3 results.
Can you please create a simple show-case?
I'm trying to run single netperf tcp upload with different message sizes (-m) multiple times.
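To make the intent concrete, here is a sketch (outside flent, not its iteration mechanism) of the netperf invocations such a show-case would loop over; the host and message sizes are placeholders:

```python
# Hypothetical: build the netperf command lines for one TCP upload
# repeated at several message sizes (-m is a netperf test option).
def build_commands(host, msg_sizes, length=60):
    return [
        ["netperf", "-H", host, "-t", "TCP_STREAM",
         "-l", str(length), "--", "-m", str(size)]
        for size in msg_sizes
    ]
```

Each command would be run ITERATIONS times and the results aggregated per message size.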
The plotting code has grown quite large and unmanageable in a flat
namespace in the PlotFormatter class. This should be refactored so plots
can get their own classes, and be split out into their own module.
This also entails splitting out the code that combines several datafiles
into one plot and making it accessible orthogonally to the plotting code
(and thus hopefully more widely useful).
This work is ongoing in the plotting-refactor branch.
After installing the flent package, 'flent' produces 'command not found'; 'flent-gui' does work.
/usr/bin/flent points to ../share/flent/flent, which is actually a directory.
/usr/bin/flent-gui points to ../share/flent/flent-gui, which does exist.
I suspect that 'flent' should live in ../share/flent but has instead been moved into that directory because it is called the same thing as the directory.
I'm using netperf-wrapper to monitor latency while I test out changes trying to sort out some VoIP lag issues.
For long term monitoring I'm running netperf-wrapper in 15 minute chunks pinging our upstream VoIP provider 1/sec. Chopping things up in 15 minute files helps me go back and just look at the times where lag is reported.
netperf-wrapper is a great tool and has been massively useful in my testing, however two additional features would help me:
I'd like to be able to feed multiple test files to ping_cdf and have all the results of all files summed up in 1 cdf trace rather than 1 trace per file.
I'd like the option of processing traces wrt absolute time so that I could provide several traces that were taken sequentially and have them concatenated onto a single graph rather than show up as separate traces on the same graph.
Thanks for all the work you have put into this tool.
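Both requests above amount to simple post-processing. A rough sketch (not a flent feature) of concatenating sequential traces by absolute timestamp, and of summing several traces into one CDF:

```python
def merge_traces(traces):
    """Concatenate several traces of (absolute_timestamp, rtt) points
    taken sequentially into one series, sorted by wall-clock time."""
    merged = [point for trace in traces for point in trace]
    merged.sort(key=lambda p: p[0])
    return merged

def cdf(values):
    """Return (value, cumulative_fraction) pairs for a combined CDF
    built from the pooled samples of all input files."""
    ordered = sorted(values)
    n = len(ordered)
    return [(v, (i + 1) / n) for i, v in enumerate(ordered)]
```

Pooling the RTT samples from all fifteen-minute files and feeding them to `cdf` gives the single summed trace asked for above.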
ping -D is not working; because of that I am not getting ping data in the graphs.
What do I have to do to fix that?
There would be mildly more room for metadata if the settings panel became a top level menu item instead (file, settings, view, data)
also log scale would be "log scale"
This is a meta-issue tracking things related to being less surprising and more useful on first run.
All other Unix-derived tools drop their files in the local directory by default.
-D somewhere_else is OK, and a way to set that as a default in the rcfile is good, but the default behavior should be the local dir (and my beef with the GUI is that it should also default to saving out to the local dir).
There are a zillion things wrong with the idea of dropping files into /tmp by default: more than one user using the tool at the same time; using a dirname as the basic means of distinguishing between test runs and having shorter filenames as a result; having to move stuff from /tmp to somewhere else...
and...
breaking the most common semantic for Unix.
Sure: use /tmp to store a temporary file, but not the final output (and even that is handled by grabbing the standard TMPDIR env var from the environment).
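The convention being asked for is small enough to state in code. A sketch, where `scratch_dir` and `output_path` are hypothetical helpers, not flent's actual functions:

```python
import os
import tempfile

def scratch_dir():
    """Scratch files honour the standard TMPDIR environment variable
    (tempfile.gettempdir() already consults it)."""
    return os.environ.get("TMPDIR", tempfile.gettempdir())

def output_path(filename, output_dir=None):
    """Final output defaults to the current working directory unless
    the user passed -D / set a directory in the rcfile."""
    return os.path.join(output_dir or os.getcwd(), filename)
```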
d@nuc-client:~/public_html/cake_500$ flent-gui rrul_50_codel_.gz
Traceback (most recent call last):
File "/usr/local/bin/flent-gui", line 9, in
load_entry_point('flent==0.11.1-git-a5e14b3', 'gui_scripts', 'flent-gui')()
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/__init__.py", line 81, in run_flent_gui
return run_flent(gui=True)
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/__init__.py", line 44, in run_flent
return run_gui(settings)
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/gui.py", line 68, in run_gui
mainwindow = MainWindow(settings)
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/gui.py", line 187, in __init__
self.read_settings()
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/gui.py", line 203, in read_settings
self.restoreGeometry(settings.value("mainwindow/geometry"))
TypeError: QWidget.restoreGeometry(QByteArray): argument 1 has unexpected type 'QVariant'
Using https://mpld3.github.io/ it should be possible to output plots as interactive .html files with embedded javascript.
Warning: Program exited non-zero.
Command: netperf -P 0 -v 0 -D -0.2 -4 -Y CS5,CS5 -H localhost -t TCP_STREAM -l 60 -f m
Program output:
netperf: invalid option -- '0'
Usage: netperf [global options] -- [test options]
Global options:
-a send,recv Set the local send,recv buffer alignment
-A send,recv Set the remote send,recv buffer alignment
-B brandstr Specify a string to be emitted with brief output
-c [cpu_rate] Report local CPU usage
-C [cpu_rate] Report remote CPU usage
-d Increase debugging output
-D [secs,units] * Display interim results at least every secs seconds
using units as the initial guess for units per second
-f G|M|K|g|m|k Set the output units
-F fill_file Pre-fill buffers with data from fill_file
-h Display this text
-H name|ip,fam * Specify the target machine and/or local ip and family
-i max,min Specify the max and min number of iterations (15,1)
-I lvl[,intvl] Specify confidence level (95 or 99) (99)
and confidence interval in percentage (10)
-l testlen Specify test duration (>0 secs) (<0 bytes|trans)
-L name|ip,fam * Specify the local ip|name and address family
-o send,recv Set the local send,recv buffer offsets
-O send,recv Set the remote send,recv buffer offset
-n numcpu Set the number of processors for CPU util
-N Establish no control connection, do 'send' side only
-p port,lport* Specify netserver port number and/or local port
-P 0|1 Don't/Do display test headers
-r Allow confidence to be hit on result only
-t testname Specify test to perform
-T lcpu,rcpu Request netperf/netserver be bound to local/remote cpu
-v verbosity Specify the verbosity level
-W send,recv Set the number of send,recv buffers
-v level Set the verbosity level (default 1, min 0)
-V Display the netperf version and exit
For those options taking two parms, at least one must be specified;
specifying one value without a comma will set both parms to that
value, specifying a value with a leading comma will set just the second
parm, a value with a trailing comma will set just the first. To set
each parm to unique values, specify both and separate them with a
comma.
Traceback (most recent call last):
File "netperf-wrapper", line 62, in
results[0].dump_dir(os.path.dirname(settings.OUTPUT) or ".")
File "/home/ahmed/Downloads/netperf-wrapper-master/netperf_wrapper/resultset.py", line 178, in dump_dir
fp.close()
File "/usr/lib/python2.6/io.py", line 1492, in close
if not self.closed:
File "/usr/lib/python2.6/io.py", line 1498, in closed
return self.buffer.closed
AttributeError: GzipFile instance has no attribute 'closed'
These use the handy-dandy :: syntax to create 50 flows, but there is no way to see the graph of all those flows interrelating.
The lack of proper logging has been an issue for debugging for a while.
Increasing verbosity would definitely be useful for debugging.
Proper logging should be implemented with the Python logging framework,
configured appropriately.
Iteratively parsing the test tool output would allow interactivity
during test runs (running tests from the GUI - yay!), and would allow us
to detect problems sooner.
Hi @tohojo,
thanks again for the help with http-getter
I have one question for you: is there any way (parameters, settings) to include MEAN_VALUE for HTTP latency in SERIES_META? Because, for now, it is empty:
"TCP upload BE2": {
"MEAN_VALUE": 936.8
},
"HTTP latency": {},
"Ping (ms) avg": {
"MEAN_VALUE": 1736.3999999999999
},
"TCP download BE4": {
"MEAN_VALUE": 725.62
},
"TCP upload BE4": {
"MEAN_VALUE": 897.17
},
"TCP download BE2": {
"MEAN_VALUE": 712.97
},
"TCP upload avg": {
"MEAN_VALUE": 918.6175
},
"Ping (ms) ICMP": {},
It would be good to have it there because it allows quickly getting the needed results of http-rrul from the metadata output without parsing the results in the json.gz. Ping (ms) ICMP would be helpful too.
Thank you for your work.
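Until the metadata is filled in at run time, the missing means can be recomputed from the raw series in the json.gz file. A sketch where `fill_mean_values` and the raw-data layout are hypothetical; the key names mirror the metadata excerpt above:

```python
def fill_mean_values(series_meta, raw_data):
    """For each series lacking MEAN_VALUE, compute it from the raw
    samples (series name -> list of samples, None = missing sample)."""
    for name, samples in raw_data.items():
        valid = [s for s in samples if s is not None]
        if valid and not series_meta.get(name, {}).get("MEAN_VALUE"):
            series_meta.setdefault(name, {})["MEAN_VALUE"] = sum(valid) / len(valid)
    return series_meta
```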
Hi,
Would it be possible to get these data into the test metadata? I am willing to add it myself, but I don't know where you parse the output of netperf.
Thanks
Periodically I receive the below exception when processing ping_cdf plots. It's not clear to me if this is the same UDP ping issue described in BUGS. I get results if I use totals rather than ping_cdf.
I have a short ping test file that creates the exception. I tried processing with -L but no logfile is created.
(BTW, BUGS says to make a log file with -l, but it is supposed to be -L.)
I would attach the file to this ticket, but I don't see how; only images seem to be supported. I can email it to you if you want it.
rsmith@thinko:~/rrul$ netperf-wrapper -p ping_cdf -i daily/ping-2014-06-16T100414.372309.json.gz
/usr/lib/pymodules/python2.7/matplotlib/axes.py:2452: UserWarning: Attempting to set identical left==right results
in singular transformations; automatically expanding.
left=0.0, right=0.0
We really do need a test suite for this thing... Including some sort of
self-test for checking if the tests can run.
Testing the tests... It's testception!
I think a good test would be one that starts, say, 10 streams, one every 5 seconds,
and tries to measure how long it takes for them to reach equilibrium.
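One possible definition of "reached equilibrium", sketched in Python: the aggregate throughput stays within a tolerance band of its sliding-window mean. The window size and tolerance here are arbitrary guesses, not values from any flent test:

```python
def time_to_equilibrium(samples, window=5, tolerance=0.05):
    """Return the index of the first sample after which a sliding
    window of aggregate throughput stays within +/- tolerance of its
    own mean, i.e. the flows have roughly converged; None if never."""
    for i in range(len(samples) - window + 1):
        win = samples[i:i + window]
        mean = sum(win) / window
        if mean and all(abs(s - mean) <= tolerance * mean for s in win):
            return i
    return None
```

Multiplying the returned index by the sampling interval gives the convergence time.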
Each TCP flow on later versions of netperf can dump its TCP statistics, which seems like useful information to collect.
d@nuc-client:~/git/netperf$ DUMP_TCP_INFO=1 netperf -H ranger
MIGRATED TCP STREAM TEST from ::0 (::) port 0 AF_INET6 to ranger () port 0 AF_INET6 : demo
tcpi_rto 204000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 28800
tcpi_rtt 3482 tcpi_rttvar 455 tcpi_snd_ssthresh 91 tpci_snd_cwnd 105
tcpi_reordering 3 tcpi_total_retrans 9
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 10.04 294.61
d@nuc-client:~/git/netperf$ DUMP_TCP_INFO=1 netperf -H snapon.lab.bufferbloat.net
MIGRATED TCP STREAM TEST from ::0 (::) port 0 AF_INET6 to snapon.lab.bufferbloat.net () port 0 AF_INET6 : demo
tcpi_rto 224000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 28800
tcpi_rtt 23011 tcpi_rttvar 1872 tcpi_snd_ssthresh 7 tpci_snd_cwnd 9
tcpi_reordering 3 tcpi_total_retrans 2
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec
87380 16384 16384 10.42 5.29
d@nuc-client:~/git/netperf$
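Each tcpi_* line is just whitespace-separated key/value pairs, so collecting them is straightforward. A sketch (note netperf's own output spells one key `tpci_snd_cwnd`, which this keeps verbatim):

```python
import re

def parse_tcp_info(line):
    """Parse one 'tcpi_* value' line from netperf's DUMP_TCP_INFO
    output into a dict of integer counters."""
    return {key: int(val) for key, val in re.findall(r"(\w+)\s+(\d+)", line)}
```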
Some graphs use a combination of orange, red, and greenish blue that is kind of hard to see. The one bugging me at the moment is the total bandwidth and average ping plot in rrul. I like ping in red; I think the orange should be green, and the greenish blue, blue.
I am totally hopeless at color selection; it is my hope that someone with an artistic bent shows up and reviews the plot types for better color matches and patterns.
The rrul test produces an error on Mac OS X, since the -w option is unknown there; it should use -t instead.
Data series: Ping (ms) ICMP
Runner: PingRunner
Command: /sbin/ping -n -D -i 0.20 -w 70 tptest.carl.kau.se
Standard error output:
/sbin/ping: illegal option -- w
usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
[-g sweepminsize] [-h sweepincrsize] [-i wait] [-k trafficclass]
[-l preload] [-M mask | time] [-m ttl] [-p pattern]
[-S src_addr] [-s packetsize] [-t timeout][-W waittime] [-z tos]
host
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/gui.py", line 530, in load_files
widget.load_results(f)
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/gui.py", line 1163, in load_results
self.results = ResultSet.load_file(unicode(results))
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/resultset.py", line 412, in load_file
r = cls.load(fp, absolute)
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/resultset.py", line 397, in load
obj = cls.unserialise(json.load(fp), absolute, SUFFIX=ext)
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/resultset.py", line 346, in unserialise
metadata[t] = parse_date(metadata[t])
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/util.py", line 71, in parse_date
ts = time.mktime(dt.timetuple())
NameError: global name 'time' is not defined
Given the enormous number of tests in the suite, having it be able to test each test would be good (as would getting to where all the tests actually run, predefined defaults for more servers, etc.).
So either the --self-test thing could be embedded in flent (using localhost? or a set of predefined servers on the internet?) or run from an external driver script parsing --list-tests.
Installation from pip using the command sudo pip install netperf-wrapper fails due to an unexpected parameter (--single-version-externally-managed).
ubuntu@noname:~$ sudo pip install netperf-wrapper
Downloading/unpacking netperf-wrapper
Downloading netperf-wrapper-0.8.1.tar.gz (97kB): 97kB downloaded
Running setup.py (path:/tmp/pip_build_root/netperf-wrapper/setup.py) egg_info for package netperf-wrapper
Installing collected packages: netperf-wrapper
Running setup.py install for netperf-wrapper
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: option --single-version-externally-managed not recognized
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/netperf-wrapper/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-yBJWJH-record/install-record.txt --single-version-externally-managed --compile:
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: option --single-version-externally-managed not recognized
----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/netperf-wrapper/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-yBJWJH-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/netperf-wrapper
Storing debug log for failure in /home/ubuntu/.pip/pip.log
Tested on Ubuntu 14.10 (my desktop machine) and Ubuntu 14.04.1 (on Amazon EC2).
In the general case I like to keep the plots I make in the data directory itself, and to retain whatever directory I last saved in for future plots.
Saving in the home dir is a PITA.
/opt/local/Library/Frameworks/Python.framework/Versions/3.4/bin/flent-gui *.flent.gz
is a tad long. Is there a way to convince setup.py install to put something in the path?
I was pleased to see python3.4 setup.py install work at all on osx, actually.
It would be good to restructure the file format a little bit. In no
particular order:
Throughout all this, keep a compatibility layer so we can read old data
files (and convert them to the new format), but breaking compatibility
so old versions can't read the new files is fine.
Currently, the settings are stored in one big object with no module
namespacing. This object is then passed around everywhere, leading to
tight coupling, and making it difficult to see which parts of the code
use which settings.
Doing this differently, and having each module / class explicitly take
their needed parameters as call arguments should clear this up.
While we're at it, might as well upgrade the argument parsing code to
use the argparse module. Should be more flexible, and we won't have to
do our own ini file parsing anymore.
Depends on #24 since argparse is Python 2.7+.
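For illustration, a sketch of what a few existing flags might look like under argparse; the option set, dests, and defaults here are assumptions, not the real flent option table:

```python
import argparse

def make_parser():
    parser = argparse.ArgumentParser(description="The FLExible Network Tester")
    # -H may be repeated to test several hosts
    parser.add_argument("-H", "--host", action="append", dest="hosts", default=[])
    parser.add_argument("-l", "--length", type=int, default=60)
    parser.add_argument("-f", "--format", default="default")
    parser.add_argument("test_name", nargs="?")
    return parser

args = make_parser().parse_args(["-H", "localhost", "-l", "30", "rrul"])
```

An ini-style rcfile could then be read with the stdlib configparser and merged in as parser defaults, removing the hand-rolled parsing.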
Old python versions are never tested, and probably don't work well
anyway. And even Debian stable has Python 2.7 now.
As far as CentOS is concerned, see: http://www.curiousefficiency.org/posts/2015/04/stop-supporting-python26.html
Related to #30 -- is it worth converting to setuptools entirely?
Python(29085,0x7fff798a2310) malloc: *** error for object 0x7fa8acd71978: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
I had a metric ton of files open
I was delighted to see I could output all the statistics in a given batch with:
netperf-wrapper -f csv tcp_upload*.json.gz
However, having the first bit of output carry column names, and the left side the filename, would be useful when importing into a spreadsheet.
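A sketch of the requested output shape using the stdlib csv module: a header row first, with the source filename as the leftmost column. The field names are placeholders:

```python
import csv
import io

def write_csv(results, fieldnames):
    """Emit one row per input file: header row first, filename in the
    leftmost column, then the per-file statistics."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["filename"] + fieldnames)
    writer.writeheader()
    for filename, stats in results:
        writer.writerow(dict(stats, filename=filename))
    return buf.getvalue()
```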
The issue is reproducible on Ubuntu with the latest pip installed (7.0.1 as of now). The error is the following:
$ flent --list-tests
Traceback (most recent call last):
File "/usr/local/bin/flent", line 27, in
sys.exit(run_flent())
File "/usr/local/lib/python2.7/dist-packages/flent/__init__.py", line 41, in run_flent
settings = load(sys.argv[1:])
File "/usr/local/lib/python2.7/dist-packages/flent/settings.py", line 623, in load
list_tests()
File "/usr/local/lib/python2.7/dist-packages/flent/settings.py", line 656, in list_tests
tests = sorted([os.path.splitext(i)[0] for i in os.listdir(TEST_PATH) if i.endswith('.conf')])
OSError: [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/tests'
The same works fine if flent is installed by pip 6.1 (or when it was installed by 6.1 and pip later upgraded).
The issue is also reproducible in a virtualenv created by version 13:
$ .venv/bin/flent --list-tests
Traceback (most recent call last):
File ".venv/bin/flent", line 27, in
sys.exit(run_flent())
File "/tmp/.venv/local/lib/python2.7/site-packages/flent/__init__.py", line 41, in run_flent
settings = load(sys.argv[1:])
File "/tmp/.venv/local/lib/python2.7/site-packages/flent/settings.py", line 623, in load
list_tests()
File "/tmp/.venv/local/lib/python2.7/site-packages/flent/settings.py", line 656, in list_tests
tests = sorted([os.path.splitext(i)[0] for i in os.listdir(TEST_PATH) if i.endswith('.conf')])
OSError: [Errno 2] No such file or directory: '/tmp/.venv/local/lib/python2.7/site-packages/tests'
$ virtualenv --version
13.0.1
in the ath9k at least, there are several other useful sorts of stats that can be collected.
/sys/kernel/debug/ieee80211/phy0/netdev:sw00/stations/b8:3e:59:1a:00:63/rc_stats#
/netdev:/stations/*/rc_stats
is one of them. The hard part to parse is what channels are being selected and why. There is a new version of this code in the latest minstrel-blues and minstrel-variance output that is rc_stats_csv and should be easier to parse.
/sys/kernel/debug/ieee80211/phy0/ath9k/xmit
/phy*/
is the other. TX-Pkts-All and TX-Bytes-All compared against aggregates and mpdus in their various forms, for the queues available, is good.
Hello,
I am trying to establish a connection from my laptop at home to a desktop at school.
I set up a static IP on the desktop (for use as the server): 203.241.247.250
The laptop (used as client) and the desktop have netperf-wrapper set up in the same way (also with config-demo).
I tried to plot data when choosing the host demo.tohojo.dk (might be your server) and it worked.
However, when I tried with my server,
./netperf-wrapper -H 203.241.247.250 -p ping_cdf -o rrul-test-file.ps -f plot rrul
it always gives the warning: program exited non-zero (1) (displayed on the client/laptop)
Command: /bin/ping -n -D -i 0.20 -w 70 203.241.247.250
Program output:
PING 203.241.247.250: 100% packet loss
When turning on debug logging, currently the output of the commands are
included one after another, slightly indented. It would probably be
better to store them directly, maybe build a tar file of the input.
Also, re-parsing from such a file would be nice, if nothing else for
debugging purposes.
I.e. RRUL and RRUL-be should be plottable on the same plot.
If the HOSTS variable was a closure, where every reference to it became HOST = HOSTS[(X + 1) % NUMBER_OF_HOSTS], it would be simpler to add more different RTTs to the mix and expand out to the huge numbers of flows I am attempting to look at.
Similarly, more robust handling/plotting of multiple flows would be nice.
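The closure idea can be sketched directly with itertools.cycle; `host_cycler` is a hypothetical helper, not part of flent's test configuration language:

```python
from itertools import cycle

def host_cycler(hosts):
    """Each call to the returned closure yields the next host,
    wrapping around, so N flows spread evenly over M hosts/RTTs."""
    it = cycle(hosts)
    return lambda: next(it)

next_host = host_cycler(["ns1", "ns2", "ns3"])
assigned = [next_host() for _ in range(5)]
```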
Mikael's dataset had no queue stats in it.
in case this gets mangled: http://snapon.lab.bufferbloat.net/~d/cake.out
These are the three (thus far) known modes of "cake"
qdisc cake 8002: root refcnt 2 unlimited diffserv4 flows raw
Sent 1672 bytes 6 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3
rate 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms
interval 105.0ms 105.0ms 105.0ms 105.0ms
Pk delay 0us 0us 0us 0us
Av delay 0us 0us 0us 0us
Sp delay 0us 0us 0us 0us
pkts 0 2 0 4
way inds 0 0 0 0
way miss 0 2 0 1
way cols 0 0 0 0
bytes 0 1200 0 472
drops 0 0 0 0
marks 0 0 0 0
qdisc cake 8002: root refcnt 2 unlimited diffserv8 flows raw
Sent 3302 bytes 12 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
rate 0bit 0bit 0bit 0bit 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms
interval 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms
Pk delay 0us 0us 0us 0us 0us 0us 0us 0us
Av delay 0us 0us 0us 0us 0us 0us 0us 0us
Sp delay 0us 0us 0us 0us 0us 0us 0us 0us
pkts 0 4 0 8 0 0 0 0
way inds 0 0 0 0 0 0 0 0
way miss 0 2 0 1 0 0 0 0
way cols 0 0 0 0 0 0 0 0
bytes 0 2400 0 902 0 0 0 0
drops 0 0 0 0 0 0 0 0
marks 0 0 0 0 0 0 0 0
qdisc cake 8002: root refcnt 2 unlimited besteffort flows raw
Sent 3732 bytes 16 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0
rate 0bit
target 5.0ms
interval 105.0ms
Pk delay 0us
Av delay 0us
Sp delay 0us
pkts 2
way inds 0
way miss 1
way cols 0
bytes 228
drops 0
marks 0
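If flent is to collect these, the per-class table needs parsing. A rough sketch against the sample output above; cake's format may change, and the two-word row labels are hard-coded guesses:

```python
def parse_cake_classes(text):
    """Turn the per-class table from cake's `tc -s qdisc` output into
    {row_label: [values...]}, keeping values as strings since the
    units vary (0bit, 5.0ms, 0us, plain counts)."""
    table = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts or parts[0].startswith(("qdisc", "Sent", "backlog", "Class")):
            continue  # skip headers and qdisc summary lines
        # some row labels are two words ("way miss", "Pk delay", ...)
        label_len = 2 if parts[0] in ("way", "Pk", "Av", "Sp") else 1
        label = " ".join(parts[:label_len])
        table[label] = parts[label_len:]
    return table
```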
It appears that sometimes the data files (matplotlibrc.dist, etc) are
not properly installed when running 'python setup.py install'.
This has been observed on at least OSX and Ubuntu 14.04. Need to find a
way to reliably reproduce this issue.