
ansemjo / speedtest-plotter

running scheduled speedtests inside docker and plotting the results with gnuplot

License: MIT License

Shell 6.72% Dockerfile 11.07% Python 82.20%
speedtest speedtest-cli docker gnuplot plot bandwidth

speedtest-plotter's Introduction

speedtest-plotter

This is a collection of scripts that takes internet speedtest measurements against the speedtest.net network with taganaka/SpeedTest and plots them with gnuplot. A crontab schedule automates measurements every few minutes and saves them to a database. The results can be displayed through a simple Flask webserver.

example plot of speedtest results

USAGE

For changes between releases check the changelog.

CONTAINER


The main distribution method is the automatically built container ghcr.io/ansemjo/speedtest. Obviously, you need to have a container runtime like docker or podman installed to run the container.

Note: please update your image name to use the GitHub container registry. I will delete the DockerHub project sometime in the future.

To start the container with default settings run:

docker run -d -p 8000:8000 ghcr.io/ansemjo/speedtest

This will take a measurement every 15 minutes, save the results to an SQLite database at /data/speedtests.db and run the webserver on port 8000. Visit http://localhost:8000 to look at the plotted results. (Note: the smoothed Bézier curves require at least two measurements and the image will stay blank otherwise, so you might have to wait a while first.)
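If you do not want to wait for the first scheduled run, you can also trigger a measurement by hand (see SHORTHAND COMMANDS below); a sketch, assuming this is the only container running from that image:

docker exec $(docker ps -q -f ancestor=ghcr.io/ansemjo/speedtest) measure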

TIMEZONE

Your local timezone can be set with the TZ environment variable and a string from tzselect. If none is set, UTC is assumed. For example, users in Japan should use:

docker run -d -p 8000:8000 -e TZ=Asia/Tokyo ghcr.io/ansemjo/speedtest

DATABASE

For data persistence, either mount a volume at /data to save the database file or set the environment variable DATABASE to an SQLAlchemy-compatible URI. A PostgreSQL URI might look like this:

docker run -d \
  -p 8000:8000 \
  -e TZ=Europe/Berlin \
  -e DATABASE='postgresql://user:password@hostname:5432/database' \
  ghcr.io/ansemjo/speedtest
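
If you stay with the default SQLite database, mounting a named volume at /data is enough for persistence, for example:

docker run -d -p 8000:8000 -v speedtests:/data ghcr.io/ansemjo/speedtest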

SCHEDULE

You can modify the measurement schedule with the environment variables MINUTES and SCHEDULE. The former takes a measurement every n minutes and the latter may define an entirely custom cron schedule like "four times a day":

docker run -d -p 8000:8000 -e SCHEDULE="0 3,9,15,21 * * *" ghcr.io/ansemjo/speedtest
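
Or, to simply take a measurement every 30 minutes with MINUTES:

docker run -d -p 8000:8000 -e MINUTES=30 ghcr.io/ansemjo/speedtest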

MARKERS AND SCALING

To add horizontal dashed lines in the plot (e.g. to mark your expected bandwidths) you can use environment variables MARKER_DOWNLOAD and MARKER_UPLOAD. The values are given in MBit/s.

In addition to, or independently of, the markers you can also set a range scaling for the upload plot relative to the download range with UPLOAD_SCALE. For highly asymmetrical connections this makes it easier to see the upload bandwidth. For example, the example picture above was created with:

docker run -d \
  [...] \
  -e MARKER_DOWNLOAD=800 \
  -e MARKER_UPLOAD=40 \
  -e UPLOAD_SCALE=10 \
  ghcr.io/ansemjo/speedtest

DEFAULT FETCH LIMIT

By default, the webserver will fetch the last seven days (7d) for plotting. This can be changed per request with the limit= query parameter, e.g. http://localhost:8000/?limit=30d will fetch the last 30 days; you can bookmark such a URL. Alternatively, set the environment variable FETCH_LIMIT to configure a different default for all requests without the query parameter.
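
For example, to make the last 30 days the default for all requests:

docker run -d -p 8000:8000 -e FETCH_LIMIT=30d ghcr.io/ansemjo/speedtest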

FONT AND RESOLUTION

The resolution and font of the SVG output can be configured with the environment variables RESOLUTION and FONT respectively. The output resolution is expected as a comma-separated value of x- and y-size; the default is 1280,800. The font can take either only a name (Arial), only a size (,18) or both (Arial, 18). Note that for a font to render in the SVG, the client viewing the image needs to have the font installed, not the server. For example:

docker run -d \
  [...] \
  -e RESOLUTION=1920,1080 \
  -e FONT="Fira Sans, 14" \
  ghcr.io/ansemjo/speedtest

SPECIFIC TESTSERVER

If you want to test against a specific server, you can give a host:port combination in the environment variable TESTSERVER. You can use the API at www.speedtest.net/api/js/servers to pick a suitable host key from the JSON; supply a parameter for ?search=... if you need to. By default it lists servers close to you. Note that this is different from the SERVERID used previously! But you can use ?id=... to search for a specific ID.
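A quick way to query that API from a shell might look like this (a sketch; jq is optional and the field names are assumed from the returned JSON):

curl -s 'https://www.speedtest.net/api/js/servers?search=Norderstedt' | jq '.[] | {id, host, sponsor}'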

For example, to test against wilhelm.tel in Norderstedt with the server ID 4087, you'd use:

docker run -d \
  [...] \
  -e TESTSERVER=speedtest.wtnet.de:8080 \
  ghcr.io/ansemjo/speedtest

DISABLE WEBSERVER

The webserver is a single-threaded Flask application and pipes the data to gnuplot in a subprocess, which may not be suitable for production usage. To disable the webserver completely set the PORT environment variable to an empty string. This will only take measurements and save them to the database.

docker run -d -e PORT="" -v speedtests:/data ghcr.io/ansemjo/speedtest
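
Even with the webserver disabled, you can still dump the collected results and plot them offline with the gnuplot script (both described below):

docker exec $containerid dump > results.csv
gnuplot -c plotscript results.csv plot.svg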

SHORTHAND COMMANDS

To dump the results as CSV from a running container use the dump command:

docker exec $containerid dump > results.csv

To trigger a measurement manually use the measure command:

docker exec $containerid measure

To reimport a previous dump into a fresh container, use import:

docker exec -i $containerid import < results.csv

This can also be used to import results obtained manually with speedtest-cli.
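
A rough sketch of that workflow, assuming speedtest-cli is installed on the host and $containerid points at the running container:

speedtest-cli --csv-header > manual.csv
speedtest-cli --csv >> manual.csv
docker exec -i $containerid import < manual.csv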

PYTHON SCRIPT

You can use the Python script by itself locally, too. First install the requirements:

pip install -r requirements.txt

Choose a database location and take any number of measurements:

./speedtest-plotter -d sqlite:///$PWD/measurements.db measure
...

Then start the flask webserver to look at the results:

TZ=Europe/Berlin ./speedtest-plotter -d sqlite:///$PWD/measurements.db serve
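
To schedule measurements outside the container as well, a plain user crontab entry can invoke the script; a minimal sketch, assuming the repository is checked out at /opt/speedtest-plotter:

*/15 * * * * cd /opt/speedtest-plotter && ./speedtest-plotter -d sqlite:////opt/speedtest-plotter/measurements.db measure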

GNUPLOT SCRIPT

To keep things really simple, you can also take measurements manually with speedtest-cli and only plot an image with gnuplot.

The plotscript expects the format that speedtest-cli outputs when using the --csv flag and a header line from --csv-header. To take some measurements manually with a simple sleep-loop:

speedtest-cli --csv-header > results.csv
while true; do speedtest-cli --csv | tee -a results.csv; sleep 600; done
^C

Afterwards plot the results to an SVG picture with:

gnuplot -c plotscript results.csv plot.svg

BREITBANDMESSUNG

If you're in Germany and you have found that your measured speed regularly does not meet minimum contractual obligations ("erhebliche, kontinuierliche oder regelmäßig wiederkehrende Abweichung bei der Geschwindigkeit") and your provider is not responsive to your complaints, you could use the Breitbandmessung App as the next step. It helps you prepare a well-formatted measurement report, which you could use to file a complaint with the Bundesnetzagentur (BNetzA).

LICENSE

Copyright (c) 2019 Anton Semjonov. Licensed under the MIT License.

speedtest-plotter's People

Contributors

ansemjo, dependabot[bot], tuxpeople


speedtest-plotter's Issues

Possibility to plot tests against multiple servers?

I would like to test multiple Speedtest servers to eliminate single origin variances and to get a better picture of internet connectivity health.
e.g.

-e TESTSERVER=stosat-dvre-01.sys.comcast.net.prod.hosts.ooklaserver.net:8080 \
-e TESTSERVER=speedtest-lgmt.mynextlight.net.prod.hosts.ooklaserver.net:8080 \
-e TESTSERVER=speedtest.implicit.systems.prod.hosts.ooklaserver.net:8080

or

-e TESTSERVER=stosat-dvre-01.sys.comcast.net.prod.hosts.ooklaserver.net:8080,speedtest-lgmt.mynextlight.net.prod.hosts.ooklaserver.net:8080,speedtest.implicit.systems.prod.hosts.ooklaserver.net:8080

Is this possible?

PermissionError: [Errno 1] Operation not permitted

Hi,

I ran this app on my Synology in Docker to measure the VPN. Now I want to do the same on my RPi.
Here I get this error:

created directory: '/data'
Traceback (most recent call last):
  File "/opt/speedtest-plotter/./speedtest-plotter", line 6, in <module>
    import subprocess, csv, json, tempfile, sys, argparse, os, io, signal, re, datetime, logging
  File "/usr/local/lib/python3.10/logging/__init__.py", line 57, in <module>
    _startTime = time.time()
PermissionError: [Errno 1] Operation not permitted

Can you tell me what I'm doing wrong?

Thanks

Release Note and Changelog

Hi

It would be nice to write a release note or changelog for every release or push to DockerHub, so the improvements and changes are known.
Thanks

sqlite errors in docker image?

After spending a couple hours looking for an image that plots speed, it looked like your image was the best, light weight etc. Thank you for your work and making this public!

The error below was produced with:

docker run -d -p 8000:8000 ansemjo/speedtest

I am, however, having an issue. The image seems to be doing this:

created directory: '/data'
 * Connected database: sqlite:////data/speedtests.db
 * Serving Flask app "speedtest" (lazy loading)
 * Environment: development
 * Debug mode: off
 * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
[2020-03-26 13:39:04,885] ERROR in app: Exception on / [GET]
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 580, in do_execute
    cursor.execute(statement, parameters)
sqlite3.OperationalError: no such table: speedtest

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 2446, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1951, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1820, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
    raise value
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1949, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.8/site-packages/flask/app.py", line 1935, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "./speedtest-plotter", line 112, in home
    rows = retrieve_measurements(limit, after)
  File "./speedtest-plotter", line 82, in retrieve_measurements
    return db.query("SELECT * FROM speedtest WHERE "Timestamp" IS NOT NULL ORDER BY "Timestamp" DESC LIMIT :li", li=limit)
  File "/usr/local/lib/python3.8/site-packages/dataset/database.py", line 253, in query
    rp = self.executable.execute(query, *args, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 988, in execute
    return meth(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1101, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1252, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1473, in _handle_dbapi_exception
    util.raise_from_cause(sqlalchemy_exception, exc_info)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/util/compat.py", line 152, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.8/site-packages/sqlalchemy/engine/default.py", line 580, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: speedtest
[SQL: SELECT * FROM speedtest WHERE "Timestamp" IS NOT NULL ORDER BY "Timestamp" DESC LIMIT ?]
[parameters: (1344,)]
(Background on this error at: http://sqlalche.me/e/e3q8)

Error when starting measurement

When a measurement is started I get the following error and no measurement is running:

Error relocating /usr/local/bin/SpeedTest: _ZSt28__throw_bad_array_new_lengthv: symbol not found

It makes no difference if the measurement is started via scheduler or manually.

`MINUTES`: max 60?

Just a minor issue: setting MINUTES to anything above 60 is currently ignored - i.e. it will still run the test every 60 minutes instead of e.g. every 120 minutes, if you set it to 120.

Just wanted to confirm whether this is a known issue or a known limitation currently - as an alternative you can still use SCHEDULE="0 */2 * * *", for example.

How to specify local timezone?

The Measurement Date on the speedtest-plotter result graph is in UTC.
I want to specify JST (I'm Japanese).
How do I specify the local timezone?
The TZ environment variable doesn't work.

Improvement proposal: Allow defining expected connection bandwidth

I am using speedtest-plotter to demonstrate to my ISP that they are not even close to delivering the bandwidth I pay for. For a stronger visualization it would be great if:

  • I can define the expected/supposed bandwidth (downstream and upstream)
  • Choose whether to plot with the axis maximum as the expected bandwidth or the highest measured value

Improvement proposal: Don't rely on speedtest-cli for speed measurements

Thanks for the easy-to-use tool. I run it on my NAS but found that it is not very reliable when it comes to download speeds.

Below is my plot of the last days.

Screenshot_20200404_113026

I have a 50/10 Mbit connection. Especially when the download speed was slow I double-checked with other browser-based services.

Even speedtest.net showed higher results when using the browser-based service.

Furthermore I did some tests with

wget -O /dev/null http://speedtest.tele2.net/200MB.zip

Again the readings were higher.

Checking the speedtest-cli README gives:

There is the potential for this tool to report results inconsistent with Speedtest.net. There are several concepts to be aware of that factor into the potential inconsistency:

  1. Speedtest.net has migrated to using pure socket tests instead of HTTP based tests
  2. This application is written in Python
  3. Different versions of Python will execute certain parts of the code faster than others
  4. CPU and Memory capacity and speed will play a large part in inconsistency between Speedtest.net and even other machines on the same network

Issues relating to inconsistencies will be closed as wontfix and without additional reason or context.

The HTTP-based test in Python does not seem to be very reliable.

Some further details here:

https://superuser.com/questions/1434610/speedtest-cli-very-slow-although-network-speed-is-fine/1519159#1519159

I suggest changing the tool used for speed measurement to either one of them:

Another interesting fact:

Comparing ftp based download speed (wget -O /dev/null ftp://speedtest.tele2.net/100MB.zip --report-speed=bits) with http based speed (wget -O /dev/null http://speedtest.tele2.net/100MB.zip --report-speed=bits) shows that http is about 3x faster (45.9Mb/s vs 17.5Mb/s).

There were rumors some years ago that some providers prioritize anything which has speedtest in it - but only for http.
So it could make sense to include ftp speedtest as well.

On every reboot the docker container shows a status of EXITED.

I have a remote site with cellular internet. I use speedtest-plotter to keep an eye on the performance of the connection. The Raspberry Pi 4 running speedtest-plotter in a docker container also reboots daily. After every reboot speedtest-plotter shows as exited. Looking at the logs via Portainer, there are no errors. When restarting manually, it starts up normally. Any ideas?

Graphs fail due to plotscript error

Hi,

After some time, the graphs throw an HTTP 500 and the following entry shows up in the docker logs:

"plotscript" line 120: undefined variable: i

Full log:

* FILE:
  Records:           5
  Out of range:      0
  Invalid:           0
  Header records:    0
  Blank:             0
  Data Blocks:       1

* COLUMN:
  Mean:          3.03075e+09
  Std Dev:       5.22050e+08
  Sample StdDev: 5.83669e+08
  Skewness:           1.3634
  Kurtosis:           3.0707
  Avg Dev:       4.09878e+08
  Sum:           1.51538e+10
  Sum Sq.:       4.72899e+19

  Mean Err.:     2.33468e+08
  Std Dev Err.:  1.65087e+08
  Skewness Err.:      1.0954
  Kurtosis Err.:      2.1909

  Minimum:       2.64506e+09 [3]
  Maximum:       4.05545e+09 [4]
  Quartile:      2.68159e+09
  Median:        2.88485e+09
  Quartile:      2.88681e+09

"plotscript" line 120: undefined variable: i

xx.xx.xx.xx - - [13/Feb/2023 01:51:45] "GET /results.svg?limit=1d&order=desc&date=now HTTP/1.1" 500 -

Improvement on nan test results

Occasionally, you may encounter "nan" test results. For instance, within my home network, it is not unusual to receive a "nan" result for a download test.

When there is a "nan" value, the plot becomes non-consecutive. In reality, it requires two consecutive successful test results to display a continuous line. I am curious if there is any room for improvement, such as disregarding "nan" test results to maintain the continuity of the trend line.

I appreciate your efforts, as your work is both elegant and extremely useful.

Screenshot for this issue: IMG_4871

terminate called after throwing an instance of 'std::invalid_argument'

There's an issue in the SpeedTest binary used: taganaka/SpeedTest#60

Root cause is that the binary fetches IP address and location information from a deprecated Ookla API, which returns errors and causes the stof (string-to-float) parser to throw an error – which is not caught. There's a pull request, which simply ignores this error in an empty try-catch block.

A proper fix would be to swap the entire endpoint and use a new parser but I'll wait for upstream to hopefully implement something before writing my own patch.

handle errors in SpeedTest++ subprocess

The altclient branch uses taganaka/SpeedTest to run the speedtests but this was cobbled together very quickly. After restarting my Docker host there was a networking problem and the speedtest failed. The results however were stored as zero in the database.

In case of errors there is an "error": "..." key in the JSON output. For example:

{
  "client": {
    "ip": "REDACTED",
    "lat": "REDACTED",
    "lon": "REDACTED",
    "isp": "Vodafone Kabel Deutschland"
  },
  "error": "unable to download server list"
}

Feature Request: Plot Pings with Bars or Line

Hello, the speedtest-plotter is great. I'm only missing one little thing: ping times are really important when it comes to VoIP etc., and because of increased provider network usage during Covid I have temporary spikes throughout the day. Unfortunately this important information is not well visualized: you have to search for the ping crosses, e.g. sometimes they "hide" near the upload/download lines.
Proposed solution: plot pings with upright bars or with a line in a different color to clearly see spikes.
Thanks for considering!
Best regards,
Steven

Possible to define multiple testserver?

My ISP wants me to use 3 different test servers to show the problem isn't with one test destination. Is it possible to define multiple test servers? Ideally they would be run sequentially and each have their own separate plot lines.

Improvement proposal: Allow for different y-axes for down- and upstream

As most consumer ISP offerings are asymmetric, it would be great if the plot would offer an additional y axis so that downstream and upstream could both be plotted at “full height”.

Optionally, in combination with #5 the scale could be changed to % of expected bandwidth instead of absolute bandwidth.

(Of course the most amazing thing would be to have a client-side graph rendering that allows this configuration on the fly and also lets one hover over specific data points.)

speedtest-cli: Cannot retrieve speedtest configuration

When running the container without daemon mode, docker run -p 8000:8000 ansemjo/speedtest, here is my output:

created directory: '/data'
 * Connected database: sqlite:////data/speedtests.db
 * Serving Flask app "speedtest" (lazy loading)
 * Environment: development
 * Debug mode: off
 * Running on http://0.0.0.0:8000/ (Press CTRL+C to quit)
^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^Cb'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

^C^Cb'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'


b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'
^C^C
b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'
b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'


b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'
b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'

^C
b'Cannot retrieve speedtest configuration\nERROR: <urlopen error [Errno -3] Try again>\n'
10.0.0.7 - - [24/Feb/2021 02:19:47] "GET / HTTP/1.1" 500 -
10.0.0.7 - - [24/Feb/2021 02:19:47] "GET /favicon.ico HTTP/1.1" 404 -
172.17.0.1 - - [24/Feb/2021 02:20:53] "GET / HTTP/1.1" 500 -
172.17.0.1 - - [24/Feb/2021 02:20:56] "GET / HTTP/1.1" 500 -
172.17.0.1 - - [24/Feb/2021 02:20:57] "GET / HTTP/1.1" 500 -
172.17.0.1 - - [24/Feb/2021 02:20:57] "GET / HTTP/1.1" 500 -
172.17.0.1 - - [24/Feb/2021 02:20:58] "GET / HTTP/1.1" 500 -
172.17.0.1 - - [24/Feb/2021 02:21:03] "GET / HTTP/1.1" 500 -
172.17.0.1 - - [24/Feb/2021 02:23:19] "GET / HTTP/1.1" 500 -

As you can see, CTRL+C doesn't quit.
In addition I get this on the web page:

No measurements taken yet!
(sqlite3.OperationalError) no such table: speedtest
[SQL: SELECT * FROM speedtest WHERE "Timestamp" IS NOT NULL ORDER BY "Timestamp" DESC LIMIT ?]
[parameters: (1344,)]
(Background on this error at: http://sqlalche.me/e/e3q8)

Environment Variable "MINUTES" is being ignored?

I am running speedtest-plotter with "MINUTES=15" but it is being ignored. Tests are run every 30 minutes no matter whether I remove the variable completely or set it to anything else. Is there a rate limit?

Unable to download server list. Try again later

I have a problem; it doesn't work.

My setup is:

docker-compose.yml

version: '3'
services:
  speedtest:
    image: ansemjo/speedtest
    ports:
      - 8000:8000
    volumes:
      - ./data:/data
    restart: always
    environment:
      TZ: Asia/Tokyo

      # default is 15
      MINUTES: 60

Execute docker-compose up -d.
Then a few hours later, the docker-compose logs -f output is:

speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later
speedtest_1  | 
speedtest_1  | Unable to download server list. Try again later

How to fix this problem?

Investigate SpeedTest++ compilation warnings on ARM

Now that we have a compilation pipeline that builds this container on various architectures, including arm/v6 and arm/v7, I've noticed some warnings in the first compiler stage:

#68 [linux/arm/v6 compiler 4/4] RUN git clone https://github.com/taganaka/SpeedTest.git .   && cmake -DCMAKE_BUILD_TYPE=Release .   && make
#68 sha256:da37175a7050f3e4cc6c92334dc5967106d1f4b0a6e3ea645ac123503988aa5c
#68 142.3 In file included from /usr/include/c++/10.2.1/bits/stl_algo.h:61,
#68 142.3                  from /usr/include/c++/10.2.1/algorithm:62,
#68 142.3                  from /build/SpeedTest.h:19,
#68 142.3                  from /build/SpeedTest.cpp:7:
#68 142.3 /usr/include/c++/10.2.1/bits/stl_heap.h: In function 'void std::__adjust_heap(_RandomAccessIterator, _Distance, _Distance, _Tp, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<double*, std::vector<double> >; _Distance = int; _Tp = double; _Compare = __gnu_cxx::__ops::_Iter_less_iter]':
#68 142.3 /usr/include/c++/10.2.1/bits/stl_heap.h:223:5: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<double*, std::vector<double> >' changed in GCC 7.1
#68 142.3   223 |     __adjust_heap(_RandomAccessIterator __first, _Distance __holeIndex,
#68 142.3       |     ^~~~~~~~~~~~~
#68 142.9 In file included from /usr/include/c++/10.2.1/algorithm:62,
#68 142.9                  from /build/SpeedTest.h:19,
#68 142.9                  from /build/SpeedTest.cpp:7:
#68 142.9 /usr/include/c++/10.2.1/bits/stl_algo.h: In function 'void std::__insertion_sort(_RandomAccessIterator, _RandomAccessIterator, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<double*, std::vector<double> >; _Compare = __gnu_cxx::__ops::_Iter_less_iter]':
#68 142.9 /usr/include/c++/10.2.1/bits/stl_algo.h:1839:5: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<double*, std::vector<double> >' changed in GCC 7.1
#68 142.9  1839 |     __insertion_sort(_RandomAccessIterator __first,
#68 142.9       |     ^~~~~~~~~~~~~~~~
#68 142.9 /usr/include/c++/10.2.1/bits/stl_algo.h:1839:5: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<double*, std::vector<double> >' changed in GCC 7.1
#68 142.9 /usr/include/c++/10.2.1/bits/stl_algo.h:1839:5: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<double*, std::vector<double> >' changed in GCC 7.1
#68 143.6 /usr/include/c++/10.2.1/bits/stl_algo.h: In function 'void std::__introsort_loop(_RandomAccessIterator, _RandomAccessIterator, _Size, _Compare) [with _RandomAccessIterator = __gnu_cxx::__normal_iterator<double*, std::vector<double> >; _Size = int; _Compare = __gnu_cxx::__ops::_Iter_less_iter]':
#68 143.6 /usr/include/c++/10.2.1/bits/stl_algo.h:1945:5: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<double*, std::vector<double> >' changed in GCC 7.1
#68 143.6  1945 |     __introsort_loop(_RandomAccessIterator __first,
#68 143.6       |     ^~~~~~~~~~~~~~~~
#68 143.6 /usr/include/c++/10.2.1/bits/stl_algo.h:1945:5: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<double*, std::vector<double> >' changed in GCC 7.1
#68 143.6 /usr/include/c++/10.2.1/bits/stl_algo.h:1959:25: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<double*, std::vector<double> >' changed in GCC 7.1
#68 143.6  1959 |    std::__introsort_loop(__cut, __last, __depth_limit, __comp);
#68 143.6       |    ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#68 ...

I'm not sure if I need to follow up on this. It doesn't fail the pipeline – for now.
StackOverflow seems to suggest it's benign.

Use of TESTSERVER

I tried to use the TESTSERVER environment variable. It's not documented, I think, but it is used as far as I can see from the Python code. In the "old" version, which made use of speedtest-cli (?), I successfully used SERVERID. But TESTSERVER does not work correctly. I tried it with two different servers. If I set TESTSERVER, I see the correct Server ID in the log, but Ping, Download and Upload are all 0:

{'Server ID': '12743', 'Sponsor': '', 'Server Name': '', 'Timestamp': '2021-04-17T16:00:00+00:00', 'Distance': '', 'Ping': '0', 'Download': '0.000000', 'Upload': '0.000000', 'Share': '', 'IP Address': 'xxxxxxxxxxxxxxxx'} 

If I do not specify a specific server, it works:

{'Server ID': 'sonic.wingo.ch:8080', 'Sponsor': 'Wingo', 'Server Name': 'Herden', 'Timestamp': '2021-04-17T16:15:01+00:00', 'Distance': '19.1232', 'Ping': '1', 'Download': '573563353.904221', 'Upload': '610659238.040191', 'Share': '', 'IP Address': 'xxxxxxxxxxxxxxx'}

I'd like to choose the server I want to use, because I think it's important to use your own provider's server if they have one. Otherwise you are testing your provider's network, the interconnection to another provider, and that other provider's network as well.

Any idea what's going wrong here? Is it the other client you are using internally now? Or is it just me, being stupid?

seems like import doesn't work

Hey,

I run the docker container like this docker run -d -p 8000:8000 -e MARKER_DOWNLOAD=40 -e MARKER_UPLOAD=10 --privileged --volume /home/pi/data:/data ansemjo/speedtest

And I wanted to import some data that I exported previously (haven't used the --volume before), but the database stays empty.

I did:
docker exec 812827bd398f dump > results.csv
and wanted to import it into the new one with
docker exec 1f3fcca1a2fb import < results.csv
but the database in ~/data stays empty as well as the graph.

Any ideas?

Migrate from DockerHub automation to GitHub Actions?

Tagging @tuxpeople here because I noticed they have a fork with GitHub Actions CI that builds the container image. I like that.

If you read this, Thomas:

  • be aware that there have been a number of changes and fixes in this repository since your fork, you might want to rebase.
  • would you like to prepare a pull request with your CI stuff in this repository?

Server ID with SpeedTest++

The new client expects a test server in the form --test-server host:port, so the "ID" has no meaning anymore.
It is currently always replaced with 0 when parsing the JSON results.

  • How to find / present a list of known server host:port combinations?
  • Maybe remove it from the web interface? It is currently appended in parentheses.
  • Does choosing a server this way still use multi-connection etc. to saturate the bandwidth?

reimport dumped csv files

A user reached out by mail and asked whether the data that was dumped with speedtest-plotter dump > out.csv could be reimported:

is there any possibility to import a dumped csv?
because in the readme.md you wrote a dump is possible:

docker exec $containerid speedtest-plotter dump > results.csv

This is not currently supported, but internally r = parse_result(res.stdout) is used to parse the result from the speedtest-cli invocation. So that shouldn't be too hard to add.

Incorrect result parsing by splitting on comma

See #2 (comment):

Nice! Thanks again for your work.

Is it possible your speed test data string has new elements in it? It almost looks like they are offset slightly or are breaking at the wrong point in my case, maybe near "Seattle WA". Maybe you are splitting elements on "," and the sponsor is returning "Seattle, WA".

{'Server ID': '6199', 'Sponsor': 'Wowrack', 'Server Name': '"Seattle', 'Timestamp': ' WA"', 'Distance': '2020-03-26T14:46:57.495763Z', 'Ping': '3.901019554563146', 'Download': '16.944', 'Upload': '86800814.61200348', 'Share': '6072783.422078191', 'IP Address': ''}

The problem is the simple .split(','):

return dict(zip(FIELDNAMES, result.decode().strip().split(',')))

workflow for ppc64le broken

After bumping the Alpine base image to 3.14 various packages fail to install on ppc64le:

#51 9.199 (9/55) Installing libgcc (10.3.1_git20210424-r0)
#51 9.316 ERROR: libgcc-10.3.1_git20210424-r0: package mentioned in index not found (try 'apk update')
#51 9.326 (10/55) Installing libstdc++ (10.3.1_git20210424-r0)
#51 9.839 ERROR: libstdc++-10.3.1_git20210424-r0: package mentioned in index not found (try 'apk update')
#51 9.839 (11/55) Installing lzip (1.22-r0)

I've tried adding apk update in the Dockerfile but that does not appear to fix it either.

Is the Alpine base image just "too new" and the package mirrors are not fully synchronized yet?

Old data disappeared

Did the database format change since you moved from speedtest-cli to SpeedTest++? I'm asking because I updated my main deployment now, and I can't see any data. Also, the csv is empty. But the database has some size:

[root@forseti pvc-119498c9-2b38-478e-9d0c-7af88562c44d_speedtest-plotter_speedtest-plotter-data]# ls -l
total 248
-rw-r--r-- 1 root root 217088 Apr 17 19:16 speedtests.db
-rw-r--r-- 1 root root  32768 Apr 17 19:16 speedtests.db-shm
-rw-r--r-- 1 root root      0 Apr 17 19:16 speedtests.db-wal
