
flent's People

Contributors

cherusk, dtaht, emys-alb, freysteinn, heistp, hrishikeshathalye, jcristau, kasraghu, kris-anderson, maybe-hello-world, mlouielu, moeller0, netoptimizer, richb-hanover, shashank68, sourcejedi, tohojo, zi0r


flent's Issues

Changing CC algos on the fly

I would like directly comparable results between different TCP congestion control algorithms on a variety of tests.

--test-parameter=key=value looks like the right place to do that, e.g. CC={reno,cubic,whatever}.

It is not clear how to extend that to the rest of the codebase.
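A minimal sketch of how this could be driven from the outside today, assuming a Linux host with root access; the server name is a placeholder, and the CC= key is only the proposed test parameter, not an existing flent mechanism:

import subprocess

def set_congestion_control(algo):
    # net.ipv4.tcp_congestion_control is the standard Linux sysctl; needs root
    subprocess.check_call(["sysctl", "-w",
                           "net.ipv4.tcp_congestion_control=%s" % algo])

for cc in ("reno", "cubic"):
    set_congestion_control(cc)
    # netperf.example.org is a placeholder; CC= mirrors the key suggested above
    subprocess.check_call(["flent", "-H", "netperf.example.org",
                           "--test-parameter", "CC=%s" % cc, "rrul"])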

flent layout for metadata

I like that the settings panel has now moved to a menu item.

I have been using metadata more and more (see the bug about collecting more of it by default, without the -x option), and I think a better default place for it is across the bottom (rather than the right plus bottom), with the space for the "name" field vastly expanded. These days I am continually tearing off the panel and then widening the name column to make it readable.

Having a way to expand all entries would also be nice.

The metadata proved very useful in tearing apart some flows in the CDG results, notably in spotting the (mis)use of netem:

http://folk.uio.no/kennetkl/experiments/

Bug in netperf-wrapper_0.5.6 with python3

Hi,
I am using Python 3 and netperf 2.6.0 as mentioned here:
http://archive.tohojo.dk/apt/dists/squeeze/Release
I could successfully install netperf-wrapper_0.5.6, but when I run the command "netperf-wrapper -H localhost ping" it takes a few seconds and then shows the error below:

root@xenServer:/home/ahmed/Downloads# netperf-wrapper -H localhost ping
Traceback (most recent call last):
File "/usr/local/bin/netperf-wrapper", line 62, in
results[0].dump_dir(os.path.dirname(settings.OUTPUT) or ".")
File "/usr/local/lib/python3.1/dist-packages/netperf_wrapper/resultset.py", line 174, in dump_dir
fp = gzip_open(self._dump_file, "wt")
File "/usr/local/lib/python3.1/dist-packages/netperf_wrapper/util.py", line 110, in gzip_open
binary_file = _gzip_open(filename, mode)
File "/usr/lib/python3.1/gzip.py", line 132, in init
self.closed = False
AttributeError: can't set attribute
Exception AttributeError: "can't set attribute" in <bound method GzipFile.__del__ of <gzip on 0xb718d08c>> ignored
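For reference, a hedged sketch of a text-mode gzip wrapper in the spirit of the util.gzip_open call shown in the traceback, for Python versions whose gzip.open() does not accept "wt"; the function name here is illustrative, not flent's:

import gzip
import io

def gzip_open_text(filename, mode="rt"):
    # wrap the binary gzip stream in a TextIOWrapper instead of relying on
    # gzip.open() understanding text modes
    if "t" in mode:
        return io.TextIOWrapper(gzip.GzipFile(filename, mode.replace("t", "b")))
    return gzip.GzipFile(filename, mode)

with gzip_open_text("example.json.gz", "wt") as fp:
    fp.write('{"ok": true}\n')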

improving the -x option and other metadata

I am not sure what -x collects that someone would actually prefer to keep private.

Certainly the GATEWAY info is private-ish. Collecting the fe80 addresses is somewhat pointless (and in my metadata it is duplicated once). The IP_ADDRS stuff is also sensitive.

LOCAL_HOST, without the domain, seems private enough; with a full domain, not so much.

KERNEL_NAME and KERNEL_RELEASE are good to keep; I think they should always be captured, though that is debatable.

As for the sysctls, well, I just added a couple of useful ones.

I would like a traceroute (that is definitely a -x option) at a minimum, and the original rrul spec required mtr output over the course of the test (which could now be a variant thereof; mtr can output CSV in particular).

My guess is that the last value here is the result in usec.

root@nuc-client:~/public_html/renovscubic# mtr --report-wide --csv -n -c 2 snapon.lab.bufferbloat.net
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;1;172.21.2.21;248
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;2;???;0
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;3;67.180.184.1;9618
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;4;68.85.102.189;9704
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;5;162.151.79.9;9365
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;6;162.151.78.249;10241
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;7;???;0
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;8;68.86.86.166;9366
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;9;173.167.56.62;9390
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;10;149.20.65.20;9436
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;11;149.20.65.10;11771
MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;12;149.20.63.30;10883
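A rough sketch of parsing that report into per-hop records, assuming (as guessed above) that the final field is the round-trip result in microseconds:

def parse_mtr_report(lines):
    # each line: MTR.<version>;<timestamp>;<status>;<target>;<hop>;<address>;<value>
    hops = []
    for line in lines:
        parts = line.strip().split(";")
        if len(parts) < 7 or not parts[0].startswith("MTR"):
            continue
        hops.append({"hop": int(parts[4]), "address": parts[5],
                     "usec": int(parts[6])})
    return hops

sample = ["MTR.0.85;1432498345;OK;snapon.lab.bufferbloat.net;3;67.180.184.1;9618"]
print(parse_mtr_report(sample))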

Better handling of multiple open data files in the GUI

Presently, the GUI tabs represent open data files, but the feature to
combine plots can also load multiple data files in one tab. This is not
really visible, and there's only an "add more / remove all" granularity
to the files open in each tab.

Having a better way to manage which data files are open and which are
used in each plot (tab) would make flipping through many data files
easier.

No examples of iterated jobs.

Plenty of examples show timeseries aggregation, but none show iterated aggregation.
Setting ITERATIONS to 3 for some reason requires collect() to return exactly 3 results.
Could you please create a simple showcase?
I'm trying to run a single netperf TCP upload with different message sizes (-m) multiple times.

Restructure the plotting code

The plotting code has grown quite large and unmanageable in a flat
namespace in the PlotFormatter class. This should be refactored so plots
can get their own classes, and be split out into their own module.

This also entails splitting out the code that combines several datafiles
into one plot and making it accessible orthogonally to the plotting code
(and thus hopefully more widely useful).

This work is ongoing in the plotting-refactor branch.

Fails to install correctly on Ubuntu 14.04

After installing the flent package, 'flent' produces 'command not found', while 'flent-gui' does work.

/usr/bin/flent points to ../share/flent/flent, which is actually a directory.
/usr/bin/flent-gui points to ../share/flent/flent-gui, which does exist.

I suspect that the 'flent' script should live in ../share/flent, but has instead been moved into that directory because it is called the same thing as the directory.

Want methods to aggregate many ping test results in a single plot

I'm using netperf-wrapper to monitor latency while I test out changes trying to sort out some VoIP lag issues.

For long term monitoring I'm running netperf-wrapper in 15 minute chunks pinging our upstream VoIP provider 1/sec. Chopping things up in 15 minute files helps me go back and just look at the times where lag is reported.

netperf-wrapper is a great tool and has been massively useful in my testing, however two additional features would help me:

  1. I'd like to be able to feed multiple test files to ping_cdf and have the results of all files combined into a single CDF trace rather than one trace per file (a rough sketch of the idea follows below).

  2. I'd like the option of processing traces with respect to absolute time, so that I could provide several traces taken sequentially and have them concatenated onto a single graph rather than showing up as separate traces on the same graph.

Thanks for all the work you have put into this tool.
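A rough sketch of the first request done outside the tool for now: pool the ping series from several data files and plot one combined CDF. The "results" and "Ping (ms) ICMP" keys are assumptions about the data file layout and may need adjusting for a given test:

import glob
import gzip
import json

import matplotlib.pyplot as plt
import numpy as np

samples = []
for fname in sorted(glob.glob("daily/ping-*.json.gz")):
    with gzip.open(fname, "rt") as fp:
        data = json.load(fp)
    # adjust the series key to whatever your test actually records
    samples.extend(v for v in data["results"]["Ping (ms) ICMP"] if v is not None)

values = np.sort(np.array(samples, dtype=float))
cdf = np.arange(1, len(values) + 1) / float(len(values))
plt.plot(values, cdf)
plt.xlabel("Ping (ms)")
plt.ylabel("Cumulative probability")
plt.show()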

PING -D option

ping -D is not working; because of that I am not getting ping data in the graphs.
What do I have to do to fix that?

The settings panel

There would be mildly more room for metadata if the settings panel became a top-level menu item instead (File, Settings, View, Data).

Also, the log scale option would then just read "log scale".

-D option and /tmp default breaks std unix semantics

All other Unix-derived tools drop their files in the local directory by default.

-D somewhere_else is OK, and a way to set that as a default in the rcfile is good, but the default behavior should be the local directory (and my beef with the GUI is that it should also default to saving to the local directory).

There are a zillion things wrong with the idea of dropping files into /tmp by default: more than one user using the tool at the same time; using a directory name as the basic means of distinguishing between test runs (and having shorter filenames as a result); having to move stuff from /tmp to somewhere else...

and...

breaking the most common semantic for Unix tools...

Sure, use /tmp to store a temporary file, but not the final output (and even that should be handled by honoring the standard TMPDIR environment variable).

flent-gui goes boom with latest git

d@nuc-client:~/public_html/cake_500$ flent-gui rrul_50_codel_.gz
Traceback (most recent call last):
File "/usr/local/bin/flent-gui", line 9, in
load_entry_point('flent==0.11.1-git-a5e14b3', 'gui_scripts', 'flent-gui')()
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/init.py", line 81, in run_flent_gui
return run_flent(gui=True)
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/init.py", line 44, in run_flent
return run_gui(settings)
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/gui.py", line 68, in run_gui
mainwindow = MainWindow(settings)
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/gui.py", line 187, in init
self.read_settings()
File "/usr/local/lib/python2.7/dist-packages/flent-0.11.1_git_a5e14b3-py2.7.egg/flent/gui.py", line 203, in read_settings
self.restoreGeometry(settings.value("mainwindow/geometry"))
TypeError: QWidget.restoreGeometry(QByteArray): argument 1 has unexpected type 'QVariant'
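A hedged sketch of a defensive workaround (not necessarily the actual fix): with the old PyQt QVariant API, the stored value has to be unwrapped before it can be passed to restoreGeometry():

def saved_geometry(qsettings, key="mainwindow/geometry"):
    # with the QVariant API the stored value needs explicit conversion;
    # with the v2 API it already comes back as a QByteArray (or None)
    value = qsettings.value(key)
    if hasattr(value, "toByteArray"):
        value = value.toByteArray()
    return value

The caller would then only call restoreGeometry() when the returned value is not None.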

can't get and plot data

When I execute this:

python2.6 netperf-wrapper -H localhost -p ping_cdf rrul

I get this console output:

Warning: Program exited non-zero.
Command: netperf -P 0 -v 0 -D -0.2 -4 -Y CS5,CS5 -H localhost -t TCP_STREAM -l 60 -f m
Program output:
netperf: invalid option -- '0'

Usage: netperf [global options] -- [test options]

Global options:
-a send,recv Set the local send,recv buffer alignment
-A send,recv Set the remote send,recv buffer alignment
-B brandstr Specify a string to be emitted with brief output
-c [cpu_rate] Report local CPU usage
-C [cpu_rate] Report remote CPU usage
-d Increase debugging output
-D [secs,units] * Display interim results at least every secs seconds
using units as the initial guess for units per second
-f G|M|K|g|m|k Set the output units
-F fill_file Pre-fill buffers with data from fill_file
-h Display this text
-H name|ip,fam * Specify the target machine and/or local ip and family
-i max,min Specify the max and min number of iterations (15,1)
-I lvl[,intvl] Specify confidence level (95 or 99) (99)
and confidence interval in percentage (10)
-l testlen Specify test duration (>0 secs) (<0 bytes|trans)
-L name|ip,fam * Specify the local ip|name and address family
-o send,recv Set the local send,recv buffer offsets
-O send,recv Set the remote send,recv buffer offset
-n numcpu Set the number of processors for CPU util
-N Establish no control connection, do 'send' side only
-p port,lport* Specify netserver port number and/or local port
-P 0|1 Don't/Do display test headers
-r Allow confidence to be hit on result only
-t testname Specify test to perform
-T lcpu,rcpu Request netperf/netserver be bound to local/remote cpu
-v verbosity Specify the verbosity level
-W send,recv Set the number of send,recv buffers
-v level Set the verbosity level (default 1, min 0)
-V Display the netperf version and exit

For those options taking two parms, at least one must be specified;
specifying one value without a comma will set both parms to that
value, specifying a value with a leading comma will set just the second
parm, a value with a trailing comma will set just the first. To set
each parm to unique values, specify both and separate them with a
comma.

  • For these options taking two parms, specifying one value with no comma
    will only set the first parms and will leave the second at the default
    value. To set the second value it must be preceded with a comma or be a
    comma-separated pair. This is to retain previous netperf behaviour.

Traceback (most recent call last):
File "netperf-wrapper", line 62, in
results[0].dump_dir(os.path.dirname(settings.OUTPUT) or ".")
File "/home/ahmed/Downloads/netperf-wrapper-master/netperf_wrapper/resultset.py", line 178, in dump_dir
fp.close()
File "/usr/lib/python2.6/io.py", line 1492, in close
if not self.closed:
File "/usr/lib/python2.6/io.py", line 1498, in closed
return self.buffer.closed
AttributeError: GzipFile instance has no attribute 'closed'

rrul_50_down and rrul_50_up

These use the handy-dandy :: syntax to create 50 flows, but there is no way to see the graph of all those flows interrelating.

Introduce proper logging

The lack of proper logging has been an issue for debugging for a while.
Increasing verbosity would definitely be useful for debugging.

Proper logging should be implemented with the Python logging framework,
configured appropriately.
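A minimal sketch of the kind of setup this implies, using the standard logging module; the logger name and verbosity mapping here are illustrative rather than flent's actual layout:

import logging
import sys

def setup_logging(verbosity=0):
    # map -v / -vv style verbosity onto standard logging levels
    level = {0: logging.WARNING, 1: logging.INFO}.get(verbosity, logging.DEBUG)
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s: %(message)s"))
    logger = logging.getLogger("flent")
    logger.setLevel(level)
    logger.addHandler(handler)
    return logger

log = setup_logging(verbosity=2)
log.debug("debug output now goes through the logging framework")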

'HTTP latency' in "SERIES_META" is empty

Hi @tohojo,
Thanks again for the help with http-getter.

I have one question for you: is there any way (a parameter or setting) to include a MEAN_VALUE for HTTP latency in SERIES_META? For now, it is empty:

            "TCP upload BE2": {
                "MEAN_VALUE": 936.8
            }, 
            "HTTP latency": {}, 
            "Ping (ms) avg": {
                "MEAN_VALUE": 1736.3999999999999
            }, 
            "TCP download BE4": {
                "MEAN_VALUE": 725.62
            }, 
            "TCP upload BE4": {
                "MEAN_VALUE": 897.17
            }, 
            "TCP download BE2": {
                "MEAN_VALUE": 712.97
            }, 
            "TCP upload avg": {
                "MEAN_VALUE": 918.6175
            }, 
            "Ping (ms) ICMP": {}, 

It would be good to have it there because it allows quickly getting the needed http-rrul results from the metadata output without parsing the results in the json.gz file. Having it for Ping (ms) ICMP would be helpful too.

Thank you for your work.

Recv socket size / Send socket size

Hi,

Would it be possible to get this data into the test metadata? I am willing to add it myself, but I don't know where you parse the output of netperf.

Thanks

ping_cdf exception

Periodically I receive the below exception when processing ping_cdf plots. It's not clear to me if this is the same UDP ping issue described in BUGS. I get results if I use totals rather than ping_cdf.

I have a short ping test file that triggers the exception. I tried processing with -L, but no logfile is created.
(BTW, BUGS says to make a log file with -l, but it is supposed to be -L.)

I would attach the file to this ticket, but I don't see how; only images seem to be supported. I can email it to you if you want it.

rsmith@thinko:~/rrul$ netperf-wrapper -p ping_cdf -i daily/ping-2014-06-16T100414.372309.json.gz
/usr/lib/pymodules/python2.7/matplotlib/axes.py:2452: UserWarning: Attempting to set identical left==right results
in singular transformations; automatically expanding.
left=0.0, right=0.0

'left=%s, right=%s') % (left, right))
Traceback (most recent call last):
File "/usr/local/bin/netperf-wrapper", line 37, in <module>
b.run()
File "/usr/local/lib/python2.7/dist-packages/netperf_wrapper/batch.py", line 414, in run
return self.load_input(self.settings)
File "/usr/local/lib/python2.7/dist-packages/netperf_wrapper/batch.py", line 410, in load_input
formatter.format(results)
File "/usr/local/lib/python2.7/dist-packages/netperf_wrapper/formatters.py", line 1112, in format
artists = getattr(self, 'do_%s_plot' % self.config['type'])(results)
File "/usr/local/lib/python2.7/dist-packages/netperf_wrapper/formatters.py", line 890, in do_cdf_plot
self._do_cdf_plot(results[0], config=config, axis=axis)
File "/usr/local/lib/python2.7/dist-packages/netperf_wrapper/formatters.py", line 936, in _do_cdf_plot
max_val = max(data[i])
ValueError: max() arg is an empty sequence
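A hedged sketch of the kind of guard the CDF plotting path appears to need: skip empty data series instead of calling max() on an empty sequence:

def series_bounds(series):
    # drop missing samples and refuse to compute bounds for an empty series
    values = [v for v in series if v is not None]
    if not values:
        return None
    return min(values), max(values)

print(series_bounds([]))            # None instead of a ValueError from max()
print(series_bounds([1.2, 3.4]))    # (1.2, 3.4)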

Test suite

We really do need a test suite for this thing... Including some sort of
self-test for checking if the tests can run.

Testing the tests... It's testception!

TCP staircase test

I think a good test would be one that starts, say, 10 streams, one every 5 seconds,
and tries to measure how long it takes to reach equilibrium.

capturing DUMP_TCP_INFO stats when possible would be nice

Each TCP flow on later versions of netperf can dump its TCP statistics, which seems like useful information to collect (a parsing sketch follows the output below).

d@nuc-client:~/git/netperf$ DUMP_TCP_INFO=1 netperf -H ranger
MIGRATED TCP STREAM TEST from ::0 (::) port 0 AF_INET6 to ranger () port 0 AF_INET6 : demo
tcpi_rto 204000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 28800
tcpi_rtt 3482 tcpi_rttvar 455 tcpi_snd_ssthresh 91 tpci_snd_cwnd 105
tcpi_reordering 3 tcpi_total_retrans 9
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 16384 16384 10.04 294.61

d@nuc-client:~/git/netperf$ DUMP_TCP_INFO=1 netperf -H snapon.lab.bufferbloat.net
MIGRATED TCP STREAM TEST from ::0 (::) port 0 AF_INET6 to snapon.lab.bufferbloat.net () port 0 AF_INET6 : demo
tcpi_rto 224000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 28800
tcpi_rtt 23011 tcpi_rttvar 1872 tcpi_snd_ssthresh 7 tpci_snd_cwnd 9
tcpi_reordering 3 tcpi_total_retrans 2
Recv Send Send
Socket Socket Message Elapsed
Size Size Size Time Throughput
bytes bytes bytes secs. 10^6bits/sec

87380 16384 16384 10.42 5.29
d@nuc-client:~/git/netperf$
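A rough sketch of pulling those counters into a dict; the field names follow the output above (including netperf's "tpci_snd_cwnd" spelling), but this is not existing flent code:

import re

def parse_tcp_info(output):
    # collect "tcpi_<name> <int>" pairs from the banner lines
    pairs = re.findall(r"\b(tcpi_\w+|tpci_\w+)\s+(\d+)", output)
    return {key: int(value) for key, value in pairs}

sample = ("tcpi_rto 204000 tcpi_ato 0 tcpi_pmtu 1500 tcpi_rcv_ssthresh 28800\n"
          "tcpi_rtt 3482 tcpi_rttvar 455 tcpi_snd_ssthresh 91 tpci_snd_cwnd 105\n"
          "tcpi_reordering 3 tcpi_total_retrans 9\n")
print(parse_tcp_info(sample))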

Better color selection for graphs

Some graphs use a combination of orange, red, and greenish blue that is kind of hard to see. The one bugging me at the moment is the total bandwidth and average ping plot in rrul. I like ping in red; I think the orange should be green, and the greenish blue should be blue.

I am totally hopeless at color selection; my hope is that someone with an artistic bent shows up and reviews the plot types for better color matches and patterns.

Mac OSX ping uses -t, not -w

The rrul test produces an error on Mac OS X, since the -w option is unknown there; ping should be invoked with -t instead.

Data series: Ping (ms) ICMP
Runner: PingRunner
Command: /sbin/ping -n -D -i 0.20 -w 70   tptest.carl.kau.se
Standard error output:
  /sbin/ping: illegal option -- w
  usage: ping [-AaDdfnoQqRrv] [-b boundif] [-c count] [-G sweepmaxsize]
              [-g sweepminsize] [-h sweepincrsize] [-i wait] [-k trafficclass]
              [-l preload] [-M mask | time] [-m ttl] [-p pattern]
              [-S src_addr] [-s packetsize] [-t timeout][-W waittime] [-z tos]
              host
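A hedged sketch of how a runner could pick the right deadline flag per platform; only the flag choice is the point here, the rest of the command line is illustrative:

import sys

def ping_command(host, length=70, interval=0.2):
    # Linux ping takes the overall deadline as -w <secs>; the macOS/BSD ping
    # shown in the usage text above takes its timeout as -t <secs>
    deadline_flag = "-t" if sys.platform == "darwin" else "-w"
    return ["ping", "-n", "-i", str(interval), deadline_flag, str(length), host]

print(" ".join(ping_command("tptest.carl.kau.se")))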

latest git has broken rtt-fair test

Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/gui.py", line 530, in load_files
widget.load_results(f)
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/gui.py", line 1163, in load_results
self.results = ResultSet.load_file(unicode(results))
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/resultset.py", line 412, in load_file
r = cls.load(fp, absolute)
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/resultset.py", line 397, in load
obj = cls.unserialise(json.load(fp), absolute, SUFFIX=ext)
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/resultset.py", line 346, in unserialise
metadata[t] = parse_date(metadata[t])
File "/usr/local/lib/python2.7/dist-packages/flent-0.12.4_git_4319244-py2.7.egg/flent/util.py", line 71, in parse_date
ts = time.mktime(dt.timetuple())
NameError: global name 'time' is not defined

--self-test option

Given the enormous number of tests in the suite, having flent able to self-test each test would be good (as would getting to the point where all the tests can actually run: predefined defaults for more servers, etc.).

So either the --self-test thing could be embedded in flent (using localhost? or a set of predefined servers on the internet?), or it could be run from an external driver script that parses --list-tests.

pip installation fails

Installation from pip using the command sudo pip install netperf-wrapper fails due to an unexpected parameter (--single-version-externally-managed).

ubuntu@noname:~$ sudo pip install netperf-wrapper
Downloading/unpacking netperf-wrapper
  Downloading netperf-wrapper-0.8.1.tar.gz (97kB): 97kB downloaded
  Running setup.py (path:/tmp/pip_build_root/netperf-wrapper/setup.py) egg_info for package netperf-wrapper

Installing collected packages: netperf-wrapper
  Running setup.py install for netperf-wrapper
    usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
       or: -c --help [cmd1 cmd2 ...]
       or: -c --help-commands
       or: -c cmd --help

    error: option --single-version-externally-managed not recognized
    Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/netperf-wrapper/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-yBJWJH-record/install-record.txt --single-version-externally-managed --compile:
    usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]

   or: -c --help [cmd1 cmd2 ...]

   or: -c --help-commands

   or: -c cmd --help



error: option --single-version-externally-managed not recognized

----------------------------------------
Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/netperf-wrapper/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-yBJWJH-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/netperf-wrapper
Storing debug log for failure in /home/ubuntu/.pip/pip.log

Tested on Ubuntu 14.10 (my desktop machine) and Ubuntu 14.04.1 (on Amazon EC2).

symbolic link to /usr/local/bin/flent on osx?

/opt/local/Library/Frameworks/Python.framework/Versions/3.4/bin/flent-gui *.flent.gz

is a tad long. Is there a way to convince setup.py install to put something in the path?

I was pleased to see python3.4 setup.py install work at all on osx, actually.

Restructuring the file format

It would be good to restructure the file format a little bit. In no
particular order:

  • Add a proper version field (yeah, should have done that from the start, I
    guess).
  • Move the raw data from SERIES_META into the top-level object and always
    include it. Possibly even get rid of the aligned data and re-compute it
    as needed?
    • Possibly restructure the metadata so SERIES_META lives somewhere
      else/different?
  • Include the data units in the series metadata.
  • Use machine IDs (no spaces!) for the data series keys and convert them
    to human-readable values on output (stored in the metadata, or in the
    test config?).
    • This needs to (also) be an automatic mapping, so we can import old
      data files.
  • bzip2 instead of gzip?

Throughout all this, keep a compatibility layer so we can read old data
files (and convert them to the new format), but breaking compatibility
so old versions can't read the new files is fine.
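Purely as an illustration of the direction sketched above (none of these keys are final, and this is not an actual format), the restructured top-level object could look something like this:

# every key below is illustrative only
example = {
    "VERSION": 4,
    "metadata": {"NAME": "rrul", "TITLE": "example run"},
    "raw_values": {
        "tcp_download": [{"t": 0.2, "val": 94.1}, {"t": 0.4, "val": 95.3}],
    },
    "series_meta": {
        "tcp_download": {"UNITS": "Mbits/s", "MEAN_VALUE": 94.7},
    },
}
print(example["VERSION"])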

Restructure the settings handling

Currently, the settings are stored in one big object with no module
namespacing. This object is then passed around everywhere, leading to
tight coupling, and making it difficult to see which parts of the code
use which settings.

Doing this differently, and having each module / class explicitly take
their needed parameters as call arguments should clear this up.

While we're at it, might as well upgrade the argument parsing code to
use the argparse module. Should be more flexible, and we won't have to
do our own ini file parsing anymore.

Depends on #24 since argparse is Python 2.7+.
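A bare-bones sketch of what the argparse move might look like; the option names mirror existing flent flags, but the structure here is purely illustrative:

import argparse

def make_parser():
    parser = argparse.ArgumentParser(prog="flent")
    parser.add_argument("-H", "--host", action="append", dest="hosts", default=[])
    parser.add_argument("-l", "--length", type=int, default=60)
    parser.add_argument("--test-parameter", action="append", default=[],
                        dest="test_parameters", metavar="KEY=VALUE")
    parser.add_argument("test_name", nargs="?")
    return parser

print(make_parser().parse_args(["-H", "localhost", "rrul"]))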

malloc error on exit

Python(29085,0x7fff798a2310) malloc: *** error for object 0x7fa8acd71978: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug

I had a metric ton of files open

csv stats output with wildcards

I was delighted to see I could output all the statistics in a given batch with:

netperf-wrapper -f csv tcp_upload*.json.gz

However, having the first line of output contain the column names, and the left-hand column contain the filename, would be useful when importing into a spreadsheet.

flent --list-tests fails if flent is installed by pip version 7.0.1

The issue is reproducible on Ubuntu with the latest pip installed (7.0.1 as of now). The error is as follows:
$ flent --list-tests
Traceback (most recent call last):
File "/usr/local/bin/flent", line 27, in
sys.exit(run_flent())
File "/usr/local/lib/python2.7/dist-packages/flent/init.py", line 41, in run_flent
settings = load(sys.argv[1:])
File "/usr/local/lib/python2.7/dist-packages/flent/settings.py", line 623, in load
list_tests()
File "/usr/local/lib/python2.7/dist-packages/flent/settings.py", line 656, in list_tests
tests = sorted([os.path.splitext(i)[0] for i in os.listdir(TEST_PATH) if i.endswith('.conf')])
OSError: [Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/tests'

The same works fine if flent is installed by pip 6.1 (or when it was installed with 6.1 and pip was upgraded later).

The issue is also reproducible in a virtualenv created by version 13:
$ .venv/bin/flent --list-tests
Traceback (most recent call last):
File ".venv/bin/flent", line 27, in <module>
sys.exit(run_flent())
File "/tmp/.venv/local/lib/python2.7/site-packages/flent/__init__.py", line 41, in run_flent
settings = load(sys.argv[1:])
File "/tmp/.venv/local/lib/python2.7/site-packages/flent/settings.py", line 623, in load
list_tests()
File "/tmp/.venv/local/lib/python2.7/site-packages/flent/settings.py", line 656, in list_tests
tests = sorted([os.path.splitext(i)[0] for i in os.listdir(TEST_PATH) if i.endswith('.conf')])
OSError: [Errno 2] No such file or directory: '/tmp/.venv/local/lib/python2.7/site-packages/tests'
$ virtualenv --version
13.0.1
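The traceback suggests TEST_PATH ends up pointing outside the installed package. A hedged sketch of one possible direction (not necessarily the actual fix, and it assumes the test configs are shipped inside the flent package) is to resolve the directory relative to the package itself:

import os

import flent  # assumes the installed flent package is importable

# the name TEST_PATH follows the traceback above; the "tests" subdirectory
# inside the package is an assumption about where the configs could live
TEST_PATH = os.path.join(os.path.dirname(os.path.abspath(flent.__file__)), "tests")
print(TEST_PATH, os.path.isdir(TEST_PATH))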

collecting better wifi stats

In the ath9k at least, there are several other useful sorts of stats that can be collected.

/sys/kernel/debug/ieee80211/phy0/netdev:sw00/stations/b8:3e:59:1a:00:63/rc_stats#
/netdev:/stations/*/rc_stats

is one of them. The hard part to parse is which channels are being selected and why. The latest minstrel-blues and minstrel-variance code has a new version of this output, rc_stats_csv, which should be easier to parse.

/sys/kernel/debug/ieee80211/phy0/ath9k/xmit
/phy*/
is the other. Comparing TX-Pkts-All and TX-Bytes-All against aggregates and MPDUs in their various forms, for the queues available, would be good.

Netperf-wrapper: How to set up my own netserver (vmware)...

Hello,

I am trying to establish a connection from my laptop at home to a desktop at school.

I set up a static IP on the desktop (to use as the server): 203.241.247.250

The laptop (used as the client) and the desktop both have netperf-wrapper set up in the same way (also with config-demo).

I tried plotting data with the host set to demo.tohojo.dk (which might be your server) and it worked.

However, when I tried with my own server,
./netperf-wrapper -H 203.241.247.250 -p ping_cdf -o rrul-test-file.ps -f plot rrul

it always gives the warning "program exited non-zero (1)" (displayed on the client/laptop):

Command: /bin/ping -n -D -i 0.20 -w 70 203.241.247.250
Program output:

PING 203.241.247.250: 100% packet loss

Improve 'debug logging' to include raw command output

When turning on debug logging, currently the output of the commands are
included one after another, slightly indented. It would probably be
better to store them directly, maybe build a tar file of the input.
Also, re-parsing from such a file would be nice, if nothing else for
debugging purposes.
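A hedged sketch of the "tar file" idea: stash each runner's raw output as a member of a gzipped tar archive next to the data file. The names and layout are illustrative:

import io
import tarfile
import time

def dump_raw_outputs(filename, outputs):
    # outputs maps a series name to the raw text its runner produced
    with tarfile.open(filename, "w:gz") as tar:
        for name, text in outputs.items():
            data = text.encode("utf-8")
            info = tarfile.TarInfo(name=name.replace(" ", "_") + ".txt")
            info.size = len(data)
            info.mtime = int(time.time())
            tar.addfile(info, io.BytesIO(data))

dump_raw_outputs("raw-output.tar.gz", {"Ping (ms) ICMP": "PING example ...\n"})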

Being able to test multiple hosts, no matter how many flows

If the HOSTS variable were a closure, where every reference to it became HOST = HOSTS[X++ % NUMBER_OF_HOSTS] (i.e. round-robin over the host list; see the sketch below), it would be simpler to add more different RTTs to the mix and expand out to the huge numbers of flows I am attempting to look at.

Similarly, more robust handling/plotting of multiple flows would be nice.
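A small sketch of the round-robin idea with itertools.cycle; the hostnames are placeholders:

from itertools import cycle

hosts = cycle(["host-a.example.org", "host-b.example.org", "host-c.example.org"])

for flow in range(7):
    # each new flow picks up the next host, wrapping around the list
    print("flow %d -> %s" % (flow, next(hosts)))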

qdisc_stats test does not parse cake

In case this gets mangled: http://snapon.lab.bufferbloat.net/~d/cake.out

These are the three (thus far) known modes of "cake":

qdisc cake 8002: root refcnt 2 unlimited diffserv4 flows raw
Sent 1672 bytes 6 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3
rate 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms
interval 105.0ms 105.0ms 105.0ms 105.0ms
Pk delay 0us 0us 0us 0us
Av delay 0us 0us 0us 0us
Sp delay 0us 0us 0us 0us
pkts 0 2 0 4
way inds 0 0 0 0
way miss 0 2 0 1
way cols 0 0 0 0
bytes 0 1200 0 472
drops 0 0 0 0
marks 0 0 0 0
qdisc cake 8002: root refcnt 2 unlimited diffserv8 flows raw
Sent 3302 bytes 12 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0 Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Class 7
rate 0bit 0bit 0bit 0bit 0bit 0bit 0bit 0bit
target 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms 5.0ms
interval 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms 105.0ms
Pk delay 0us 0us 0us 0us 0us 0us 0us 0us
Av delay 0us 0us 0us 0us 0us 0us 0us 0us
Sp delay 0us 0us 0us 0us 0us 0us 0us 0us
pkts 0 4 0 8 0 0 0 0
way inds 0 0 0 0 0 0 0 0
way miss 0 2 0 1 0 0 0 0
way cols 0 0 0 0 0 0 0 0
bytes 0 2400 0 902 0 0 0 0
drops 0 0 0 0 0 0 0 0
marks 0 0 0 0 0 0 0 0
qdisc cake 8002: root refcnt 2 unlimited besteffort flows raw
Sent 3732 bytes 16 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Class 0
rate 0bit
target 5.0ms
interval 105.0ms
Pk delay 0us
Av delay 0us
Sp delay 0us
pkts 2
way inds 0
way miss 1
way cols 0
bytes 228
drops 0
marks 0
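A rough sketch of how the per-class table could be parsed (not the qdisc_stats test's actual code); it assumes the layout above, where a "Class N ..." header row is followed by rows whose leading tokens name the statistic and whose trailing tokens hold one value per class:

def parse_cake_classes(text):
    classes = None
    stats = {}
    for line in text.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] == "qdisc":          # a new qdisc block; wait for its header
            classes = None
            continue
        if tokens[0] == "Class":
            classes = len(tokens) // 2    # "Class 0 Class 1 ..." pairs
            stats = {i: {} for i in range(classes)}
            continue
        if classes is None or len(tokens) <= classes:
            continue                      # banner lines such as "Sent ..." / "backlog ..."
        name = " ".join(tokens[:-classes])
        for i, value in enumerate(tokens[-classes:]):
            stats[i][name] = value
    return stats

For concatenated output like the dump above, this returns the classes of the last qdisc block; a fuller version would key the result by qdisc handle as well.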

Data files sometimes not installed

It appears that sometimes the data files (matplotlibrc.dist, etc) are
not properly installed when running 'python setup.py install'.

This has been observed on at least OSX and Ubuntu 14.04. Need to find a
way to reliably reproduce this issue.
