
Network SpeedTest Suite

NSTS is a network meta-benchmarking suite created to automate and standardize the network performance estimation process without defining explicit algorithms. As a meta-benchmarking tool it knows how to execute other established benchmarking tools and gather their results, or even run real-scenario network services and monitor their performance.

Installation

Before installing NSTS, ensure that you have Python 2.7+ on your system.

Download the latest release of NSTS from https://github.com/sque/nsts/releases and unzip it into a folder. You can then run NSTS by executing nsts.py:

cd nsts-latest-release/src
python nsts.py --help

Concepts

NSTS uses some terms and concepts to describe the benchmarking procedure. It is best to familiarize yourself with them before starting to use NSTS.

Profile

A profile is a "wrapper" around another benchmarking tool or network service. A profile describes the wrapped tool, its possible results and options, and provides the scripts needed to execute it.

Sampling

Although you could run a profile once and gather its results, this is not always the best idea. Results vary due to system/network state and other parameters that we cannot control. To overcome this problem, NSTS executes a profile multiple times and returns statistics on the results (average, minimum, maximum, deviation). Every execution of a profile is called a sample, and there is a dead-time interval between samples.
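To illustrate the aggregation step (this is a sketch, not NSTS's actual code), the statistics reported over a set of samples could be computed like this:

```python
import math

def aggregate(samples):
    """Summarize a list of per-sample measurements (e.g. Mbps per run)."""
    avg = sum(samples) / len(samples)
    # Population standard deviation of the samples around their average
    dev = math.sqrt(sum((s - avg) ** 2 for s in samples) / len(samples))
    return {"average": avg, "minimum": min(samples),
            "maximum": max(samples), "deviation": dev}

print(aggregate([94.1, 95.3, 93.8, 96.0]))
```

A large deviation relative to the average is a hint that more samples, or a longer dead-time interval, may be needed.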

Execution Direction

Each profile defines a one-way speed test: one end transmits data and the other receives it. When you execute a profile you need to define the direction of execution; NSTS will coordinate both peers to achieve it.

Test

A test is a complete description of how to execute a profile in a reproducible way. It comprises the profile options, the direction of execution, the number of samples, the interval between samples and some other parameters.

Suite

A test suite is a list of multiple tests that can be described in a suite file. A suite provides a way to standardize a benchmarking procedure in a given environment. You can have suites that focus more on transfer rates, packet loss or latency, depending on the scenario you want to benchmark.

Suite File

A suite file is an ini file that contains all tests of a given suite (see the "Suite Files" section). Instead of defining tests on the command line, you can pass a suite file to execute.

Usage

After downloading and unzipping the software, you can run NSTS by executing:

python nsts.py --help

To run NSTS you have to run the server on one endpoint and the client on the other endpoint of the link you want to benchmark.

Example: Get list of installed profiles and their options

python nsts.py --list-profiles

Example: Run a simple TCP throughput test

Server:

python nsts.py -s

Client:

python nsts.py -c servername --tests=iperf_tcp

Example: Run transmission latency tests

Server:

python nsts.py -s

Client:

python nsts.py -c servername --tests=iperf_jitter-s,ping-s

Example: Run a suite

Server:

python nsts.py -s

Client:

python nsts.py -c servername --suite=filename.ini

Example: Run on IPv6 and a different port

Server:

python nsts.py -s -6 -p 15000

Client:

python nsts.py -6 -p 15000 -c servername --suite=filename.ini

Suite Files

A suite file is a configuration file (ini format) that contains all tests of a given suite. Each section of the ini file is a test, except the "global" section which is used for suite-wide options. The name of each section also defines the id of the test, so it must be unique inside a suite.

Example:

[global]
interval = 1 sec
samples = 10

[short_tcp]
profile = iperf_tcp
name = Fast connections
samples = 30
interval = 0
iperf_tcp.time = 1 sec

[long_tcp]
profile = iperf_tcp
name = Long lasting connections
samples = 5
interval = 20 sec
iperf_tcp.time = 20 sec

[low_rate_latency]
name = Low Rate latency jittering
profile = iperf_jitter
samples = 6
interval = 0
iperf_jitter.time = 10
iperf_jitter.rate = 1 Mbps

[fast_rate_latency]
name = Fast Rate latency jittering
profile = iperf_jitter
samples = 6
interval = 0
iperf_jitter.time = 10
iperf_jitter.rate = 10 Mbps

[estimate_latency]
name = Latency estimations
profile = ping

The supported keys in a test section are:

  • interval : The time between samples. You can define it globally and override its value per test.
  • samples : The number of profile executions per test. You can define it globally and override its value per test.
  • name : A friendly name for the test; it will be shown in the results section.
  • profile : (mandatory) The id of the profile.
  • direction : By default tests run bidirectionally. You can restrict a test to the "send" or "receive" direction.
  • foo.bar : Set the option bar of the profile foo. foo must be the id of the profile and bar must be the id of a valid option of profile foo.
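As a rough sketch of how a file of this shape can be consumed (not NSTS's actual code; shown with Python 3's standard configparser), every non-global section becomes a test and the global values act as defaults:

```python
from configparser import ConfigParser
from io import StringIO

SUITE = """
[global]
interval = 1 sec
samples = 10

[short_tcp]
profile = iperf_tcp
samples = 30
"""

def load_suite(text):
    cp = ConfigParser()
    cp.read_file(StringIO(text))
    defaults = dict(cp.items("global")) if cp.has_section("global") else {}
    tests = {}
    for section in cp.sections():
        if section == "global":
            continue
        # Per-test values override the global defaults
        opts = dict(defaults)
        opts.update(cp.items(section))
        tests[section] = opts
    return tests

print(load_suite(SUITE))
# {'short_tcp': {'interval': '1 sec', 'samples': '30', 'profile': 'iperf_tcp'}}
```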

Feedback

Please file your ideas, bugs and comments at https://github.com/sque/nsts

Contributors

sque

Issues

XML results output option

It would be nice to write test results in an XML format. It should be something easy and intuitive.

A draft could be

<?xml version="1.0" encoding="UTF-8"?>
<nsts>
  <meta>
    <suite_filename>foobar.ini</suite_filename>
  </meta>
  <profiles>
    <profile id="fooprofile" version="1"></profile>
  </profiles>
  <results>
    <test id="testid" name="nameid" profile="fooprofile">
      <execution started_at="">
        <result id="foovalue" type="time">10 sec</result>
        <result id="barvalue" type="time">10 sec</result>
        <!-- ... -->
      </execution>
      <!-- ... -->
      <statistics>
      </statistics>
    </test>
    <test>...</test>
    <!-- ... -->
  </results>
</nsts>

Add support for test options loaded from .ini file

This could also be called a test profile.

An example could be:

[global]
interval = 10
samples = 10

[iperf_tcp]
test_time = 10 sec

[iperf_jitter]
bandwidth = 10 Mbits

[ping]
samples = 50
interval = 1

Support for manually setting the connect-back IP via arguments

Some tests need to connect back to the client. The connect-back address is automatically estimated from the remote address of the NSTS client when it connects to the server. However, sometimes (e.g. behind NAT) the connect-back IP is different.

The user should be able to define this IP through command-line arguments.

Poor performance when sampling on apache profile

Right now the apache profile has to recreate its files, re-estimate the speed and finally run the test on every execution.

What is done right now is:

prepare
 run
finish
prepare
 run
finish
...

If we could safely change the profile API, it could instead permit:

prepare
 run
 run
 run
 ...
finish
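The proposed flow could be sketched as follows (the class and method names here are hypothetical, not the real profile API):

```python
class FakeApacheProfile:
    """Hypothetical stand-in for the apache profile; records its lifecycle calls."""

    def __init__(self):
        self.calls = []

    def prepare(self):
        self.calls.append("prepare")   # expensive: recreate files, re-estimate speed

    def run(self):
        self.calls.append("run")       # take one sample

    def finish(self):
        self.calls.append("finish")    # tear down

def sample(profile, n):
    # Pay the preparation cost once per test, not once per sample.
    profile.prepare()
    try:
        for _ in range(n):
            profile.run()
    finally:
        profile.finish()

p = FakeApacheProfile()
sample(p, 3)
print(p.calls)  # ['prepare', 'run', 'run', 'run', 'finish']
```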

Estimate execution time

Currently there is no prediction of how much time it will take to execute a test or a suite. However, in the majority of cases it is possible to make a rough, if not accurate, prediction of how long it will take.

This issue proposes to extend ProfileExecutor API to query the estimated time for the executor to run.
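For the common case, a naive estimate follows directly from the test parameters (a hypothetical helper, assuming a profile runs for roughly its configured duration):

```python
def estimate_test_seconds(samples, interval_sec, profile_time_sec):
    """Rough lower bound: n sample runs plus the dead time between them."""
    return samples * profile_time_sec + (samples - 1) * interval_sec

# e.g. 5 samples of a 20-second iperf_tcp run with a 20-second interval
print(estimate_test_seconds(5, 20, 20))  # 180
```

A suite estimate would then simply sum the per-test estimates.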

Crash on host reverse lookup

For hosts where a reverse lookup is impossible, NSTS crashes with the following error:

./src/nsts.py -c 10.32.53.23 --suite test.ini 
Network SpeedTest Suite [NSTS] Version 1.0.beta2
Free-software published under GPLv3 license.
http://github.com/sque/nsts

Unknown error:  [Errno 1] Unknown host

Compare results between different executions

A common usage scenario of nsts is that the user wants to monitor improvements or regressions in the quality of a link. Currently the user has to visually compare results from different executions. This is not always straightforward, especially when running a complete test suite that involves different tests.

This issue proposes to create a new command on nsts that could take two or more results and calculate the difference between them.
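Such a command could, for instance, report the percentage change of each metric between two runs (sketch only; the flat metric-to-value result format is hypothetical):

```python
def compare(run_a, run_b):
    """Percent change of each metric from run_a to run_b, for shared metrics."""
    return {metric: 100.0 * (run_b[metric] - run_a[metric]) / run_a[metric]
            for metric in run_a if metric in run_b}

print(compare({"transfer_rate": 94.0}, {"transfer_rate": 47.0}))  # {'transfer_rate': -50.0}
```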

Server authentication mechanism

Currently anyone can connect to a running NSTS server and make a connection benchmark. In some uncontrolled environments it is not safe to permit anyone to make resource-hungry benchmarks.

This issue proposes the introduction of authentication between NSTS client & server. This will effectively protect the server from anonymous abuse.

Apache is left running in some cases

If NSTS is closed for some reason (like CTRL+C) while the apache server is running, the server is left running and NSTS is unable to restart this test. The user must kill the server manually:
killall apache2

Add support for apache test module

The idea is that the test module will create configuration files in a custom folder and directly execute the apache module using a port that does not require root permissions.

The web root will be filled with random files, a big one and some small ones.

Missing ipv6 support

Full support means that NSTS must communicate over IPv6 and all tests should run over IPv6. Tests should also run with link-local addresses.

Add support for external sensors

It would be very useful to have external information available, like CPU usage, both locally and remotely (over ssh?).
A sensor is a way to monitor resource usage locally or remotely. In general, an abstraction can be used in which all sensors are shell-based (top, iotop, ps etc.) and there is an abstract way of referring to shells.

Possible configuration:

[router.foo]
shell=ssh
ssh.remote=router.foobar.local
ssh.user=root
#if pass is omitted, it could interactively ask for password?
# or ssh.key=pkey could also be used
# or ssh.pass for saving password

[fooroutercpu]
shell=router.foo
sensor=cpu,memory,network
network.interfaces = eth0
cpu.method=top

[localusage]
shell=local
sensor=cpu
cpu.method=ps
cpu.process_name=python nsts.py
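A local "ps-based" CPU sensor like the one configured above could look roughly like this (illustrative sketch only, assuming a Unix-like system with ps available; the function name is hypothetical):

```python
import subprocess

def local_cpu_percent(process_name):
    """Sum the %CPU of all processes whose command line contains process_name."""
    out = subprocess.check_output(["ps", "-eo", "pcpu,args"], text=True)
    total = 0.0
    for line in out.splitlines()[1:]:          # skip the header row
        pcpu, _, args = line.strip().partition(" ")
        if process_name in args:
            total += float(pcpu)
    return total

print(local_cpu_percent("python"))
```

A remote sensor would run the same command through an ssh shell instead of a local one, which is exactly the abstraction the shell sections above describe.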

HTML output with interactive graphs

It would be cool to have HTML output with graphs of each test's performance. This will be very helpful once sensors are fully implemented, as it will permit creating timecharts of resource usage over the lifetime of the tests.
