
gumby's Introduction

Gumby

An experiment runner framework for local and distributed experiments. Gumby allows developers and scientists to design complex experiments and run them on the DAS5 supercomputer.

Notable features:

  • Run IPv8/Tribler experiments with thousands of instances in a local or remote (DAS5) environment.
  • A built-in experiment coordinator, facilitating coordination and message passing between all running instances.
  • Scenario files to schedule custom actions during an experiment run.
  • Resource monitoring (CPU, memory, I/O, etc.).
  • Post-processing functionality to visualize statistics gathered during an experiment with R.

Installation

Prior to installing Gumby, install the required dependencies for basic tests on Ubuntu/Debian-based systems by executing the following command:

sudo apt-get install python-psutil python-configobj r-base

These dependencies can also be installed using pip. Please note that more elaborate experiments might require additional dependencies.

Next, clone this repository from GitHub by running the following command:

git clone https://github.com/tribler/gumby

Tutorials

A tutorial for creating your first Gumby experiment is available here.


gumby's Issues

Spawned processes are never killed

After running a single Gumby experiment, some processes are never killed.

E.g.
/bin/bash -xe /var/scratch/zeilemak/gumby/scripts/run_tracker.sh
python -O -c from Tribler.dispersy.tool.tracker import main; main() --port 7788
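
A minimal sketch of the kind of cleanup that would prevent this, assuming the tracker and other helpers are started in their own process group (via setsid) so the whole group can be signalled at teardown; the script path is taken from the example above:

import os
import signal
import subprocess

# Start the helper in its own process group so it can be torn down as a unit.
tracker = subprocess.Popen(["/bin/bash", "-xe", "scripts/run_tracker.sh"],
                           preexec_fn=os.setsid)

# ... experiment runs ...

# At teardown, signal the whole group, not just the direct child.
try:
    os.killpg(os.getpgid(tracker.pid), signal.SIGTERM)
except OSError:
    pass  # the group already exited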

build_virtualenv.sh issues

On gorillas:

  • Boost build problems --> solved!

On my localhost:

  • DTrace installation should be added to the build script (the script checks for dtrace to trigger the SystemTap installation).
  • The URL for libdwarf is down; the alternative URL seems to deliver an old/wrong libdwarf version. Output:

2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: dwarf-20130207/Makefile.in
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: ~/venv/src/DyninstAPI-8.1.2 ~/venv/src ~/workspace
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for a BSD-compatible install... /usr/bin/install -c
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for gcc... gcc
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for C compiler default output file name... a.out
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether the C compiler works... yes
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether we are cross compiling... no
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for suffix of executables...
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for suffix of object files... o
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether we are using the GNU C compiler... yes
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether gcc accepts -g... yes
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for gcc option to accept ISO C89... none needed
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for g++... g++
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether we are using the GNU C++ compiler... yes
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether g++ accepts -g... yes
2014-02-18 17:06:29+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether g++ supports C++11 features by default... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking whether g++ supports C++11 features with -std=c++11... yes
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for initializer list support... yes
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for icc... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for icpc... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for pgcc... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for pgCC... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for cc... /usr/bin/cc
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for CC... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for library containing dlopen... -ldl
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for grep that handles long lines and -e... /bin/grep
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking how to run the C++ preprocessor... g++ -E
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for a sed that does not truncate output... /bin/sed
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for Boost headers version >= 0.0.0... yes
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for Boost's header version... 1_53
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for gawk... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for mawk... mawk
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: configure: Building support for parsing systap sections
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for tcl.h... tcl.h not found in
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for Tcl_Eval in -ltcl8.4... no
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: Cant find libtcl8.4.
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking build system type... x86_64-unknown-linux-gnu
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking host system type... x86_64-unknown-linux-gnu
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: configure: Linux, not requiring thread_db...
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for egrep... /bin/grep -E
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for ANSI C header files... yes
2014-02-18 17:06:30+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for sys/types.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for sys/stat.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for stdlib.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for string.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for memory.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for strings.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for inttypes.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for stdint.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for unistd.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking libelf.h usability... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking libelf.h presence... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for libelf.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: configure: ELF include directory:
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for elf_memory in -lelf... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for cplus_demangle in -liberty... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking libdwarf.h usability... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking libdwarf.h presence... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for libdwarf.h... yes
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: configure: DWARF include directory: /home/corpaul/venv/inst/include
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" OUT: checking for dwarf_next_cu_header_c in -ldwarf... no
2014-02-18 17:06:31+0100 [-] CMD "tribler_experiment_setup.sh" ERR: configure: error: couldn't find sufficiently recent libdwarf (> 2011-12) with dwarf_next_cu_header_c
2014-02-18 17:06:32+0100 [-] CMD "tribler_experiment_setup.sh" exit code 1
2014-02-18 17:06:32+0100 [-] 'Experiment execution failed, exiting with error.'

hiddenservices_client not working

The HiddenServicesClient seeder and downloader use different infohashes, which causes the file not to download.

This can be fixed by using a TorrentDefNoMetainfo for the downloader with the same infohash as the seeder. I have an example implementation for the downloader's TorrentDef on my branch for a bbq ➡️ DAS5 experiment (so you can't copy-paste my experiment).

@devos50 @pimveldhuisen

EDIT: Adding peers to the download is also required to get it back up and running, see https://github.com/qstokkink/gumby/tree/fix_hiddenseeding for current fix.
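
A minimal sketch of the suggested downloader change, assuming Tribler's TorrentDefNoMetainfo(infohash, name) constructor from that era; seeder_infohash and the file name are hypothetical placeholders:

from Tribler.Core.TorrentDef import TorrentDefNoMetainfo

# Build a metainfo-less torrent definition around the seeder's infohash, so both
# sides agree on the same swarm; seeder_infohash is assumed to be shared via the
# experiment variables. Start the download with this tdef using the same call
# the downloader path already uses.
tdef = TorrentDefNoMetainfo(seeder_infohash, "hiddenservices_test_file")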

multiline config file documentation

@CONF_OPTION NETEM_DELAY: Netem delay for the leechers. Note: for a homogeneous network of leechers, set 1 value

@CONF_OPTION NETEM_DELAY: for a heterogeneous network separate values by , e.g. netem_delay = "0ms,100ms"

results in:

Netem delay for the leechers. Note: for a homogeneous network of leechers, set 1 value

netem_delay =

for a heterogeneous network separate values by , e.g. netem_delay = "0ms,100ms"

netem_delay =

when generating the config file

SGE_KEEP_TMPFILES

Can we add SGE_KEEP_TMPFILES=no to .bashrc on the DAS if it isn't there?

# Check if we already have SGE_KEEP_TMPFILES=no in .bashrc; if not, add it.
grep -q SGE_KEEP_TMPFILES ~/.bashrc || echo 'SGE_KEEP_TMPFILES=no' >> ~/.bashrc

Problems with GO command

Currently, the SyncServer sends a GO command to all clients, ordering them to start the experiment. However, this results in two problems:

  • The GO command will not arrive at all clients at the same time, hence peers start at different times.
  • The scenario file is only parsed after the GO command is received, causing huge differences between nodes. In one run I saw more than a 2-second difference in when nodes actually started the experiment.

Suggested changes:

  • Add a timestamp to the GO command indicating when a client should start.
  • The scenario file should be parsed before the vars are sent; after receiving the GO command + timestamp, the tasks should be registered (see the sketch below).
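
A minimal sketch of the suggested flow, assuming a Twisted reactor on the client side; parse_scenario and register_tasks are hypothetical helper names:

import time
from twisted.internet import reactor

class ExperimentClient(object):

    def on_vars_received(self, all_vars):
        # Parse the scenario file as soon as the vars arrive, before the GO command.
        self.scenario = self.parse_scenario(all_vars["scenario_file"])  # hypothetical helper

    def on_go(self, start_timestamp):
        # The GO command now carries the absolute wall-clock time at which every
        # client should start, so slow delivery no longer skews the start.
        delay = max(0.0, start_timestamp - time.time())
        reactor.callLater(delay, self.register_tasks, self.scenario)  # hypothetical helper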

add IO read support to stap logging

Add I/O read support in a similar way to the I/O writes, but note that a problem occurs when a function reads from and writes to multiple files, since we only store the last filename written to. (When monitoring writes only, this does not happen too often; for reads and writes combined it occurs frequently.)

Improving the testing infrastructure of Gumby

Currently, there are no regression tests for the Gumby framework. These would be really helpful, as I'm about to make some changes to the way Gumby starts up the unit tests (I'm trying to run the unit tests on OS X and Windows). Creating tests for Gumby also helps when we eventually want to support psutil graphs (besides the data procfs generates).

As a first step for this, we can run the basic experiment (local_processguard) and archive the screenshots. This should take very little time. After that, we could create some more sophisticated experiments.

Inject configuration variables through jenkins

I would like to be able to inject config vars for Gumby through Jenkins. Currently I have to push the config file to Git every time I want to make a small change to an experiment.

I tried using the 'inject env vars' config option in Jenkins, but those vars are overwritten by Gumby.
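
A minimal sketch of the precedence being asked for, assuming the experiment config is loaded with ConfigObj; load_config is a hypothetical helper, not an existing Gumby function:

import os
from configobj import ConfigObj

def load_config(path):
    config = ConfigObj(path)
    # Let variables injected by Jenkins (environment) override the config file,
    # instead of the config file silently overwriting them.
    for key in list(config.keys()):
        if key.upper() in os.environ:
            config[key] = os.environ[key.upper()]
    return config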

Dispersy cannot start with Tribler experiment client

In the latest devel branch, I found that calling start_session in TriblerDispersyExperimentScriptClient produces an assertion error. I guess it is caused by commit 47f647c.

You can test it by running the hiddenservices experiment. Below is the log produced:

ERROR   1478524088.08 TriblerDispersyClient:43    Starting Tribler Session
ERROR   1478524088.08 TriblerDispersyClient:101   Dispersy port set to 21002
ERROR   1478524088.12 TriblerDispersyClient:53    Upgrader
INFO    1478524088.80 tunnel_community:298   TunnelCommunity: setting become_exitnode = False
ERROR   1478524089.20 TriblerDispersyClient:48    Tribler Session started
Unhandled Error
Traceback (most recent call last):
  File "/home/ardhipoetra/git/thesis/gumby/experiments/dispersy/hiddenservices_client.py", line 244, in <module>
    main(HiddenServicesClient)
  File "/home/ardhipoetra/git/thesis/gumby/gumby/experiments/dispersyclient.py", line 516, in main
    reactor.run()
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 1194, in run
    self.mainLoop()
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 1203, in mainLoop
    self.runUntilCurrent()
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 798, in runUntilCurrent
    f(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 392, in callback
    assert not isinstance(result, Deferred)
exceptions.AssertionError:

ERROR   1478524089.21             log:143   Unhandled Error
Traceback (most recent call last):
  File "/home/ardhipoetra/git/thesis/gumby/experiments/dispersy/hiddenservices_client.py", line 244, in <module>
    main(HiddenServicesClient)
  File "/home/ardhipoetra/git/thesis/gumby/gumby/experiments/dispersyclient.py", line 516, in main
    reactor.run()
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 1194, in run
    self.mainLoop()
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 1203, in mainLoop
    self.runUntilCurrent()
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/base.py", line 798, in runUntilCurrent
    f(*a, **kw)
  File "/usr/local/lib/python2.7/dist-packages/twisted/internet/defer.py", line 392, in callback
    assert not isinstance(result, Deferred)
exceptions.AssertionError:

ERROR   1478524091.05 hiddenservices_client:72    This peer is exit node: Yes

Note that it runs fine at commit 89510a6.

Unhandled Errors do not stop process

While Dispersy quits because strict mode is enabled, the Gumby client does not quit when an exception occurs while calling a callback.

This needs to be fixed.
Example:

2013-10-29 12:19:46+0100 [-] Unhandled Error
    Traceback (most recent call last):
      File "/var/scratch/emilon/jenkins/workspace/Experiment_PrivateSocial_gumby/gumby/experiments/dispersy/social_client.py", line 126, in <module>
        main(SocialClient)
      File "/var/scratch/emilon/jenkins/workspace/Experiment_PrivateSocial_gumby/gumby/gumby/experiments/dispersyclient.py", line 346, in main
        reactor.run()
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/base.py", line 1192, in run
        self.mainLoop()
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/base.py", line 1201, in mainLoop
        self.runUntilCurrent()
    --- <exception caught here> ---
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/base.py", line 824, in runUntilCurrent
        call.func(*call.args, **call.kw)

Moreover, this exception is not logged to stderr but to stdout, similar to #50.
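
A minimal sketch of one way to make the client fail fast, using a Twisted log observer that stops the reactor on the first error event (a sketch only, not the current Gumby behaviour):

from twisted.internet import reactor
from twisted.python import log

_stopping = [False]

def fail_fast(event):
    # Any "isError" log event (including Unhandled Error) aborts the experiment.
    if event.get("isError") and not _stopping[0]:
        _stopping[0] = True
        reactor.callFromThread(reactor.stop)

log.addObserver(fail_fast)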

Small gumby exception

2013-12-22 12:26:00+0100 [-] Killing leftover local sub processes...
Traceback (most recent call last):
  File "./gumby/run.py", line 93, in <module>
    _killGroup()
  File "./gumby/run.py", line 65, in _killGroup
    if getpgid(pid) == mypid and pid != mypid:
OSError: [Errno 3] No such process
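
A minimal sketch of the defensive check that avoids this crash: a process can exit between listing the group and calling getpgid, so ESRCH should be ignored (a sketch against the traceback above, not the actual run.py code):

import errno
from os import getpgid, kill
from signal import SIGTERM

def _kill_group_member(pid, mypid):
    try:
        if getpgid(pid) == mypid and pid != mypid:
            kill(pid, SIGTERM)
    except OSError as e:
        if e.errno != errno.ESRCH:  # "No such process": it already exited, ignore
            raise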

Venv packages not compiled?

I got this error:

2013-11-16 09:08:29+0100 [Uninitialized] Unhandled Error
    Traceback (most recent call last):
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/log.py", line 88, in callWithLogger
        return callWithContext({"system": lp}, func, *args, **kw)
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/log.py", line 73, in callWithContext
        return context.call({ILogContext: newCtx}, func, *args, **kw)
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
        return self.currentContext().callWithContext(ctx, func, *args, **kw)
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
        return func(*args,**kw)
    --- <exception caught here> ---
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 619, in _doReadOrWrite
        why = selectable.doWrite()
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/tcp.py", line 593, in doConnect
        self._connectDone()
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/tcp.py", line 607, in _connectDone
        self.protocol = self.connector.buildProtocol(self.getPeer())
      File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/base.py", line 1071, in buildProtocol
        return self.factory.buildProtocol(addr)
      File "/var/scratch/emilon/jenkins/workspace/Experiment_PrivateSocial/COMMUNITY_TYPE/poli/label/das4_vu/gumby/gumby/sync.py", line 289, in buildProtocol
        p = self.protocol(self.vars)
      File "/var/scratch/emilon/jenkins/workspace/Experiment_PrivateSocial/COMMUNITY_TYPE/poli/label/das4_vu/gumby/experiments/dispersy/social_client.py", line 55, in __init__
        from Tribler.community.privatesocial.community import PoliSocialCommunity
      File "/var/scratch/emilon/jenkins/workspace/Experiment_PrivateSocial/COMMUNITY_TYPE/poli/label/das4_vu/tribler/Tribler/community/privatesocial/community.py", line 4, in <module>
        from conversion import SocialConversion
      File "/var/scratch/emilon/jenkins/workspace/Experiment_PrivateSocial/COMMUNITY_TYPE/poli/label/das4_vu/tribler/Tribler/community/privatesocial/conversion.py", line 8, in <module>
        from Tribler.community.privatesemantic.rsa import get_bits
      File "/var/scratch/emilon/jenkins/workspace/Experiment_PrivateSocial/COMMUNITY_TYPE/poli/label/das4_vu/tribler/Tribler/community/privatesemantic/rsa.py", line 18, in <module>
        from numpy.oldnumeric.random_array import random
      File "/home/emilon/venv/lib/python2.7/site-packages/numpy/__init__.py", line 153, in <module>
        from . import add_newdocs
      File "/home/emilon/venv/lib/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
        from numpy.lib import add_newdoc
      File "/home/emilon/venv/lib/python2.7/site-packages/numpy/lib/__init__.py", line 9, in <module>
        from .index_tricks import *
      File "/home/emilon/venv/lib/python2.7/site-packages/numpy/lib/index_tricks.py", line 19, in <module>
        from . import function_base
    exceptions.EOFError: EOF read where object expected

candidate_id not used in barter_clientclient.request_stat

@corpaul showed me the method request_stat as an example of how to generate a single candidate in Gumby for my own tests. But this method does not use a single candidate; instead, it sends a stat request to every candidate.

I don't know what the intended behaviour is, but the parameter candidate_id is never used, and it is currently not possible to send a stat request to a single candidate using this method. It could easily be fixed by using candidate_id as a key into self.all_vars; the corresponding value allows creating the candidate in the same way c is currently created (see the sketch below).
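
A minimal sketch of the proposed fix as the method could look inside the client class, assuming self.all_vars maps candidate ids to host/port entries as in the other Dispersy experiment clients; the import path and send helper are assumptions:

from Tribler.dispersy.candidate import Candidate  # import path assumed from that era's layout

def request_stat(self, candidate_id):
    # Use candidate_id to address exactly one peer instead of looping over all of them.
    peer = self.all_vars[candidate_id]
    candidate = Candidate((str(peer["host"]), int(peer["port"])), False)
    self.send_stats_request(candidate)  # hypothetical send helper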

gumby default config for experiment

It would be nice to have a mechanism in Gumby that automatically checks whether an experiment config variable is set and, if not, defaults it to a value from another config file (e.g. one that is read-only).

Now I do something like
[ -z "$EXPERIMENT_TIME" ] && EXPERIMENT_TIME=30
but if EXPERIMENT_TIME is used in local_setup_cmd, local_instance_cmd and tracker_cmd, this has to be repeated everywhere.

It would also result in much clearer documentation.
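
A minimal sketch of such defaulting with ConfigObj, which Gumby already depends on; the file names are hypothetical:

from configobj import ConfigObj

defaults = ConfigObj("experiment_defaults.conf")  # hypothetical read-only defaults file
experiment = ConfigObj("my_experiment.conf")

# Start from the defaults and let the experiment config override whatever it sets;
# anything it leaves out keeps the default value.
merged = ConfigObj(defaults)
merged.merge(experiment)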

Gumby terminating immediately after running the local_processguard.conf experiment on OS X

When running local_processguard.conf on OS X with the following command:

gumby/run.py gumby/experiments/dummy/local_processguard.conf

The experiment immediately exits:

MacBook-Pro-van-Martijn:Documents martijndevos$ gumby/run.py gumby/experiments/dummy/local_processguard.conf
INFO:ExperimentRunner:Syncing workspaces on remote head nodes...
INFO:ExperimentRunner:Great copying success!
INFO:ExperimentRunner:Running local and remote setup scripts
INFO:ExperimentRunner:Remote setup successful!
INFO:ExperimentRunner:Starting local and remote instances
INFO:ExperimentRunner:Locally running command process_guard.py -c "(yes CPU > /dev/null & find /etc /usr > /dev/null ; wait)" -t 10 -m $OUTPUT_DIR -o $OUTPUT_DIR
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT: Project root is: /Users/martijndevos/Documents
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT: NOT activating virtualenv.
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT: Running process_guard.py -c "(yes CPU > /dev/null & find /etc /usr > /dev/null ; wait)" -t 10 -m /Users/martijndevos/Documents/output -o /Users/martijndevos/Documents/output
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT: All child processes have died, exiting
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT: TERMinating group. Still 1 process(es) running:
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT: Command:
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT:    (yes CPU > /dev/null & find /etc /usr > /dev/null ; wait)
INFO:OneShotProcessProtocol:[process_guard.py -c...] OUT:    exited with status: -15
INFO:OneShotProcessProtocol:[process_guard.py -c "(yes CPU > /dev/null & find /etc /usr > /dev/null ; wait)" -t 10 -m $OUTPUT_DIR -o $OUTPUT_DIR] exit code 0
INFO:ExperimentRunner:Syncing output data back from head nodes...
...

This also happens when running nosetests inside Gumby, so it should be fixed before the unit tests can run on OS X.

Update: an IOError is triggered (see #251) because the /proc directory is not available on OS X. I suggest we use either psutil or sysctl(3) to get statistics about a specific process. @whirm what do you think?
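
A minimal sketch of what a psutil-based fallback could collect, roughly matching the counters the /proc-based monitor records (a sketch only; process_guard.py currently reads /proc directly):

import psutil

def sample_process(pid):
    p = psutil.Process(pid)
    cpu = p.cpu_times()      # user/system CPU time, like utime/stime
    mem = p.memory_info()    # rss/vms, like rsize/vsize
    sample = {"utime": cpu.user, "stime": cpu.system,
              "rss": mem.rss, "vms": mem.vms}
    try:
        io = p.io_counters()  # not available on every platform (e.g. OS X)
        sample["read_bytes"] = io.read_bytes
        sample["write_bytes"] = io.write_bytes
    except (AttributeError, psutil.AccessDenied):
        pass
    return sample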

Exception in import of client causes the .err files not to be copied back

After logging into a node, the following exception was found in the 00009.err file:

ERROR   1391774980.87             log:141   Unhandled Error
Traceback (most recent call last):
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/log.py", line 88, in callWithLogger
    return callWithContext({"system": lp}, func, *args, **kw)
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/log.py", line 73, in callWithContext
    return context.call({ILogContext: newCtx}, func, *args, **kw)
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
--- <exception caught here> ---
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 619, in _doReadOrWrite
    why = selectable.doWrite()
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/tcp.py", line 593, in doConnect
    self._connectDone()
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/tcp.py", line 607, in _connectDone
    self.protocol = self.connector.buildProtocol(self.getPeer())
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/base.py", line 1071, in buildProtocol
    return self.factory.buildProtocol(addr)
  File "/var/scratch/emilon/jenkins/workspace/Experiment_Metadatacommunity_lipu/gumby/gumby/sync.py", line 424, in buildProtocol
    p = self.protocol(self.vars)
  File "/var/scratch/emilon/jenkins/workspace/Experiment_Metadatacommunity_lipu/gumby/experiments/dispersy/metadata_client.py", line 57, in __init__
    from Tribler.community.metadata.community import MetadataCommunity
  File "/var/scratch/emilon/jenkins/workspace/Experiment_Metadatacommunity_lipu/tribler/Tribler/community/metadata/community.py", line 8, in <module>
    from Tribler.dispersy.resoultion import PublicResolution
exceptions.ImportError: No module named resoultion

Unhandled Error
Traceback (most recent call last):
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/log.py", line 88, in callWithLogger
    return callWithContext({"system": lp}, func, *args, **kw)
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/log.py", line 73, in callWithContext
    return context.call({ILogContext: newCtx}, func, *args, **kw)
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/context.py", line 118, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/python/context.py", line 81, in callWithContext
    return func(*args,**kw)
--- <exception caught here> ---
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/posixbase.py", line 619, in _doReadOrWrite
    why = selectable.doWrite()
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/tcp.py", line 593, in doConnect
    self._connectDone()
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/tcp.py", line 607, in _connectDone
    self.protocol = self.connector.buildProtocol(self.getPeer())
  File "/home/emilon/venv/lib/python2.7/site-packages/twisted/internet/base.py", line 1071, in buildProtocol
    return self.factory.buildProtocol(addr)
  File "/var/scratch/emilon/jenkins/workspace/Experiment_Metadatacommunity_lipu/gumby/gumby/sync.py", line 424, in buildProtocol
    p = self.protocol(self.vars)
  File "/var/scratch/emilon/jenkins/workspace/Experiment_Metadatacommunity_lipu/gumby/experiments/dispersy/metadata_client.py", line 57, in __init__
    from Tribler.community.metadata.community import MetadataCommunity
  File "/var/scratch/emilon/jenkins/workspace/Experiment_Metadatacommunity_lipu/tribler/Tribler/community/metadata/community.py", line 8, in <module>
    from Tribler.dispersy.resoultion import PublicResolution
exceptions.ImportError: No module named resoultion
ERROR   1391774980.87             log:141   "Failed to connect to experiment server (will retry in a while), error was: An error occurred while connecting: [Failure instance: Traceback (failure with no frames): <type 'exceptions.ImportError                                                                                                 '>: No module named resoultion\n]."
"Failed to connect to experiment server (will retry in a while), error was: An error occurred while connecting: [Failure instance: Traceback (failure with no frames): <type 'exceptions.ImportError'>: No module named resoultion\n]."

This error caused the sync server to time out, which may also have prevented the files from being copied back.

Refactoring the base classes used for Tribler and Dispersy experiments

As we discussed, we should refactor the base classes that power all our Tribler and Dispersy experiments. We discovered various issues with the current design:

  • Much core logic (like the scenario runner system and various callbacks) is located in dispersyclient.py, while it should live in the client base class.
  • In the current design, the TriblerDispersyClient has no clear description of what it does and how it works. It should be simplified (and the file should be renamed to tribler_client.py).

After the meeting, we settled on the following design (a class skeleton is sketched after the list):

  • ExperimentClient is the base class of all clients and contains the scenario file mechanism and various callbacks that are general to running experiments (like the stop callback; please look at the callbacks available in DispersyClient and see which ones should be moved).
  • DispersyClient will become a subclass of ExperimentClient and is responsible for managing a Dispersy session.
  • TriblerClient will become a subclass of ExperimentClient and is responsible for managing a Tribler session. This client implements some basic methods:
    • Starting/stopping a download from a file/URL/magnet link. Optional arguments can specify the number of hops and whether safe seeding is enabled.
    • Creating a file with some random data that can be downloaded/seeded.
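
A minimal skeleton of the hierarchy described in the list above; class and method names follow the bullets, the bodies are placeholders:

class ExperimentClient(object):
    """Base class: scenario file mechanism plus generic experiment callbacks."""
    def load_scenario(self, path):
        pass
    def stop(self):
        pass

class DispersyClient(ExperimentClient):
    """Manages a Dispersy session on top of the generic experiment machinery."""
    def start_dispersy(self):
        pass

class TriblerClient(ExperimentClient):
    """Manages a Tribler session and basic download/seed helpers."""
    def start_download(self, source, hops=0, safe_seeding=False):
        pass
    def stop_download(self, infohash):
        pass
    def create_test_file(self, size):
        pass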

In general, we wish to adhere to the following classification:

  • Experiment configuration file: responsible for allocating resources for the experiment
  • Experiment client: setting up the environment for the experiment
  • Scenario file: specifies the behaviour during the experiment

Finally, the README should address the changes above and clearly describe how to set up an experiment, either with Dispersy or Tribler.

Error when parsing utime

Project root is: /home/jenkins/workspace/GH_Tribler_pull-request-tester-isolated_devel@2
NOT activating virtualenv.
Running graph_process_guard_data.sh
Parsing resource_usage file ./resource_usage.log
Traceback (most recent call last):
  File "/home/jenkins/workspace/GH_Tribler_pull-request-tester-isolated_devel@2/gumby/scripts/extract_process_guard_stats.py", line 211, in <module>
    main(argv[1], argv[2])
  File "/home/jenkins/workspace/GH_Tribler_pull-request-tester-isolated_devel@2/gumby/scripts/extract_process_guard_stats.py", line 199, in main
    parse_resource_files(input_directory, output_directory, start_time)
  File "/home/jenkins/workspace/GH_Tribler_pull-request-tester-isolated_devel@2/gumby/scripts/extract_process_guard_stats.py", line 108, in parse_resource_files
    utime = long(parts[14])
IndexError: list index out of range
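
A minimal sketch of a defensive fix for this parser: skip malformed or truncated lines instead of crashing (Python 2, matching the existing script; a sketch against the traceback above, not the actual extract_process_guard_stats.py code):

def parse_utime(line):
    parts = line.split()
    if len(parts) <= 14:
        return None  # truncated sample line: skip it instead of raising IndexError
    return long(parts[14])  # long() as in the existing Python 2 code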

Propagate stderr immediately

Currently, the process guard collects all stdout and stderr for each process. The resulting files can only be inspected once the experiment has finished.

We can improve this by passing along the stderr immediately. The output will then be collected by prun, giving feedback while the experiment is running.
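
A minimal sketch of the idea: keep writing a child's stderr to its log file, but also echo each chunk straight to the guard's own stderr so prun shows it live (a sketch, not the current process_guard.py code):

import sys

def on_child_stderr(chunk, log_file):
    # Keep the per-process .err file as before...
    log_file.write(chunk)
    log_file.flush()
    # ...and forward the chunk immediately so prun can display it while running.
    sys.stderr.write(chunk)
    sys.stderr.flush()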

generate_config_file.py docs

This script could benefit from some immediate feedback on the expected input. When I run it without an argument it prints:
Traceback (most recent call last):
File "./gumby/scripts/generate_config_file.py", line 129, in <module>
config_file_path = argv[1]
IndexError: list index out of range

When I run it with a path that does not exist, it prints:
Traceback (most recent call last):
File "gumby/scripts/generate_config_file.py", line 134, in <module>
config_file = open(config_file_path, 'w')
IOError: [Errno 2] No such file or directory: 'gumby/experiments/my_experiment/new_experiment.conf'
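
A minimal sketch of the kind of early argument checking being asked for (a sketch only, not the actual generate_config_file.py code):

import os
import sys

def check_args(argv):
    """Fail with a readable message instead of an IndexError/IOError."""
    if len(argv) < 2:
        sys.exit("Usage: generate_config_file.py <path/to/experiment.conf>")
    config_file_path = argv[1]
    parent = os.path.dirname(config_file_path) or "."
    if not os.path.isdir(parent):
        sys.exit("Cannot write %s: directory %s does not exist" % (config_file_path, parent))
    return config_file_path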

Dispersy errors not in stderr

Even though the exception was logged with the ERROR level, it did not show up in stderr.

2013-10-29 11:59:10+0100 [-] ERROR 1383044350.34 callback:214 reassessing as fatal exception, attempting proper shutdown

Venv

I got this error while building the venv on the Delft DAS4:
configure: error: system tiff library not found! Use --with-libtiff=builtin to use built-in version

Yappi not Installed on gorilla / bbq

Not entirely Gumby-related, but I get the following error log when trying to run a Gumby config on either of those machines.

Traceback (most recent call last):
  File "/home/jenkins/workspace/Relative_CPU_Performance/gumby/experiments/dispersy/tunnel_client_local_blind.py", line 7, in <module>
    import yappi
ImportError: No module named yappi

No graphs in isolated test due to missing function "ddply"

Problem

When I run an experiment using the isolated configuration, no graphs are produced in the ./output/ folder.

Environment

Package      Version
Ubuntu       Ubuntu 14.04.2 LTS
vnc4server   vnc4server:i386/vivid 4.1.1+xorg4.3.0-37ubuntu5.0.1
r-base-dev   r-base-dev:all/vivid 3.1.2-2
Experiment set-up
  • I have created the folder Tribler/Test/MyUniqueFolder/, which contains only a single file: a copy of test_url.py from Tribler/Test/.
  • I use the following .conf file, which is a copy of experiments/tribler/run_all_tests_isolated.conf with fewer instances, running only the tests from MyUniqueFolder (which is just test_url.py):
experiment_name = TriblerTests

local_setup_cmd = tribler_experiment_setup.sh

nose_run_dir = tribler

nose_tests_to_run = Tribler/Test/MyUniqueFolder/

isolated_tribler_instances_to_spawn = 4

local_instance_cmd = isolated_tribler_network.sh

isolated_cmd = wrap_in_vnc.sh run_nosetests_for_jenkins.sh

post_process_cmd = graph_process_guard_data.sh

use_local_venv = False

tracker_port = __unique_port__
Output

The console output contains errors around the [graph_process_guard_...] output:

18:55:05 [-] Syncing output data back from head nodes...
18:55:05 [-] Great copying success!
18:55:05 [-] Post processing collected data
18:55:05 [-] Locally running command graph_process_guard_data.sh
18:55:05 [-] [graph_process_guard_...] OUT: Project root is: /home/quinten/tribler
18:55:05 [-] [graph_process_guard_...] OUT: NOT activating virtualenv.
18:55:05 [-] [graph_process_guard_...] OUT: Running graph_process_guard_data.sh
18:55:05 [-] [graph_process_guard_...] ERR: Parsing resource_usage file ./resource_usage.log
18:55:05 [-] [graph_process_guard_...] ERR: . ./utimes.txt ./utimes_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./stimes.txt ./stimes_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./wchars.txt ./wchars_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./rchars.txt ./rchars_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./writebytes.txt ./writebytes_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./readbytes.txt ./readbytes_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./vsizes.txt ./vsizes_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./rsizes.txt ./rsizes_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./utimes_node.txt ./utimes_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./stimes_node.txt ./stimes_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./wchars_node.txt ./wchars_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./rchars_node.txt ./rchars_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./writebytes_node.txt ./writebytes_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./readbytes_node.txt ./readbytes_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./vsizes_node.txt ./vsizes_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] ERR: . ./rsizes_node.txt ./rsizes_node_reduced.txt
18:55:05 [-] [graph_process_guard_...] OUT: > is.installed <- function(mypkg) is.element(mypkg, installed.packages()[,1])
18:55:05 [-] [graph_process_guard_...] OUT: >
18:55:05 [-] [graph_process_guard_...] OUT: > toInstall <- c("ggplot2", "reshape", "stringr", "plyr")
18:55:05 [-] [graph_process_guard_...] OUT: > for (package in toInstall){
18:55:05 [-] [graph_process_guard_...] OUT: +   if (is.installed(package) == FALSE){
18:55:05 [-] [graph_process_guard_...] OUT: +       install.packages(package, repos = "http://cran.r-project.org")
18:55:05 [-] [graph_process_guard_...] OUT: +   }
18:55:05 [-] [graph_process_guard_...] OUT: + }
18:55:05 [-] [graph_process_guard_...] OUT: >
18:55:06 [-] [graph_process_guard_...] OUT: Using annotation as id variables
18:55:06 [-] [graph_process_guard_...] OUT: Error in load_annotations() : could not find function "ddply"
18:55:06 [-] [graph_process_guard_...] OUT: Execution halted
18:55:06 [-] [graph_process_guard_...] OUT: Using annotation as id variables
18:55:06 [-] [graph_process_guard_...] OUT: Error in load_annotations() : could not find function "ddply"
18:55:06 [-] [graph_process_guard_...] OUT: Execution halted
18:55:06 [-] [graph_process_guard_...] OUT: Using annotation as id variables
18:55:06 [-] [graph_process_guard_...] OUT: Error in load_annotations() : could not find function "ddply"
18:55:06 [-] [graph_process_guard_...] OUT: Execution halted
18:55:06 [-] [graph_process_guard_...] OUT: Using annotation as id variables
18:55:06 [-] [graph_process_guard_...] OUT: Error in load_annotations() : could not find function "ddply"
18:55:06 [-] [graph_process_guard_...] OUT: Execution halted
18:55:06 [-] [graph_process_guard_data.sh] exit code 0
18:55:06 [-] experiment suceeded
18:55:06 [-] Main loop terminated.
18:55:06 [-] Killing leftover local sub processes...
18:55:06 [-] Waiting for 1 subprocess(es) to die...
18:55:16 [-] Done.
Temporary fix

I managed to fix this by inserting the following line at the top of the scripts/r/annotation.r file:

library(plyr)

Even though this works, I feel like I'm missing some setting or dependency somewhere. What is the correct way to fix this?

Move Dispersy system tests into gumby

There are two Dispersy system tests:

  • test_overlay.py
  • test_bootstrap.py

These can be moved to the gumby repository. The associated R scripts can be moved there as well.

Create a MultiChain experiment

We should create a stable MultiChain experiment. For the first iteration, this should be tested as a standalone community without dependencies on hidden seeding or anonymous downloading. These dependencies can be added at a later stage.

Using artificial triggers, we should initiate the creation of MultiChain blocks. Maybe plot them after the experiment has finished?

Running Dummy Profiler Yields Exception in twisted.defer

I tried running various dummy profile configurations, but they all result in a similar-looking exception.
Depending on the exact config, certain code in the ExperimentRunner fails, but the traceback points towards the twisted 'defer' module each time.

Trying to run a remote dummy config shows that SSH access is denied, giving a stack trace to the same lines in twisted.defer as below.

Do you have any clue as to what might go wrong?
If you want, I can come by in the break at 15:30 quickly before my next lecture starts.

This is the error message for local_prun_sync.conf:

pathemeous@pathemeous-laptop:~/Documents/repos/gumbyFork$ gumby/run.py gumby/experiments/dummy/local_prun_sync.conf
INFO:ExperimentRunner:Syncing workspaces on remote head nodes...
INFO:ExperimentRunner:Great copying success!
INFO:ExperimentRunner:Running local and remote setup scripts
INFO:ExperimentRunner:Remote setup successful!
INFO:OneShotProcessProtocol:[] OUT: Project root is: /home/pathemeous/Documents/repos/gumbyFork
INFO:OneShotProcessProtocol:[] OUT: NOT activating virtualenv.
INFO:OneShotProcessProtocol:[] OUT: Running das4_setup.sh
INFO:OneShotProcessProtocol:[] ERR: ++ hostname
INFO:OneShotProcessProtocol:[] ERR: ++ grep '^fs[0-9]$'
INFO:OneShotProcessProtocol:[] ERR: + '[' '!' -z ']'
INFO:OneShotProcessProtocol:[] ERR: + quota -q
INFO:OneShotProcessProtocol:[] ERR: /home/pathemeous/Documents/repos/gumbyFork/gumby/scripts/das4_setup.sh: line 46: quota: command not found
INFO:OneShotProcessProtocol:[] ERR: + '[' 127 -ne 0 ']'
INFO:OneShotProcessProtocol:[] ERR: + echo 'Quota exceeded!'
INFO:OneShotProcessProtocol:[] ERR: + echo 'Aborting experiment.'
INFO:OneShotProcessProtocol:[] ERR: + exit 1
INFO:OneShotProcessProtocol:[] OUT: Quota exceeded!
INFO:OneShotProcessProtocol:[] OUT: Aborting experiment.
INFO:OneShotProcessProtocol:[das4_setup.sh] exit code 1
ERROR:ExperimentRunner:Experiment execution failed, exiting with error.
ERROR:ExperimentRunner:<twisted.python.failure.Failure <class 'twisted.internet.defer.FirstError'>>
Unhandled Error
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 416, in fireEvent
    DeferredList(beforeResults).addCallback(self._continueFiring)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 306, in addCallback
    callbackKeywords=kw)
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 295, in addCallbacks
    self._runCallbacks()
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
--- <exception caught here> ---
  File "/usr/lib/python2.7/dist-packages/twisted/internet/base.py", line 429, in _continueFiring
    callable(*args, **kwargs)
exceptions.SystemExit: 1
INFO:root:Killing leftover local sub processes...
INFO:root:Done.

Running Gumby on OS X and Windows

In order to be able to run Gumby on OS X and Windows (for the unit tests of Tribler/Dispersy), some major changes are needed:

procfs is not available on OS X, and the port (https://github.com/osxfuse/filesystems/tree/master/filesystems-c/procfs) seems to be outdated. On Windows, we have no really good alternative for procfs. psutil provides some tools to get statistics, so we might fall back on psutil when procfs is not available.

Another issue is that Windows does not support process groups.

This issue is open for discussion, ideas and suggestions on how to make Gumby cross-platform.
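
A minimal sketch of how the monitoring backend could be chosen at startup (a sketch only, not existing Gumby code):

import os

def pick_monitor_backend():
    # Prefer procfs where it exists (Linux); otherwise fall back to psutil.
    if os.path.exists("/proc/self/stat"):
        return "procfs"
    try:
        import psutil  # noqa: imported only to test availability
        return "psutil"
    except ImportError:
        raise RuntimeError("Neither procfs nor psutil is available for resource monitoring")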

Das4 #processes not flexible

I want to run 4023 processes on 20 nodes. This isn't possible, because Gumby requires choosing a number of nodes (20) and a number of processes per node (which would be 201.15).

Can we switch to specifying a number of nodes plus a total number of processes, and drop the inflexible processes-per-node requirement? The intended distribution is sketched below.
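
The requested distribution is simple arithmetic; a minimal sketch (no Gumby APIs involved):

def spread_processes(total_processes, node_count):
    """E.g. 4023 processes over 20 nodes: 3 nodes run 202, the other 17 run 201."""
    base, remainder = divmod(total_processes, node_count)
    return [base + 1 if i < remainder else base for i in range(node_count)]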

In process_guard.py, self.network_monitor_file is sometimes undefined.

In process_guard.py, self.network_monitor_file is sometimes undefined, which crashes the monitoring loop.

INFO:OneShotProcessProtocol:[process_guard.py  -c...] ERR: Traceback (most recent call last):
INFO:OneShotProcessProtocol:[process_guard.py  -c...] ERR:   File "/home/jenkins/workspace/Experiment_multichain_100/gumby/scripts/process_guard.py", line 384, in <module>
INFO:OneShotProcessProtocol:[process_guard.py  -c...] ERR:     exit(pm.monitoring_loop())
INFO:OneShotProcessProtocol:[process_guard.py  -c...] ERR:   File "/home/jenkins/workspace/Experiment_multichain_100/gumby/scripts/process_guard.py", line 305, in monitoring_loop
INFO:OneShotProcessProtocol:[process_guard.py  -c...] ERR:     if self.network_monitor_file:
INFO:OneShotProcessProtocol:[process_guard.py  -c...] ERR: AttributeError: 'ProcessMonitor' object has no attribute 'network_monitor_file'
INFO:OneShotProcessProtocol:[process_guard.py  -c...] OUT: No logger.conf found.
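
A minimal sketch of the obvious fix: always define the attribute so the check in monitoring_loop works even when network monitoring is disabled (a sketch against the traceback above; the constructor argument is hypothetical):

class ProcessMonitor(object):

    def __init__(self, network_monitor_path=None):
        # Always initialise the attribute so monitoring_loop() can safely test it.
        self.network_monitor_file = None
        if network_monitor_path:
            self.network_monitor_file = open(network_monitor_path, "w")

    def monitoring_loop(self):
        if self.network_monitor_file:
            pass  # record network statistics as before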

Gumby should support a custom LibTorrent tracker

To allow experiments to use full-fledged libtorrent, a functional libtorrent tracker is required. This should probably be implemented like the Dispersy tracker, using a twistd service (see run_tracker.sh) on the sync node.

Implementation details of the libtorrent UDP tracker API can be found here.

Communities should connect better with candidates during experiments

When running Gumby experiments, the behaviour of a community's candidate list should be more controllable. Currently, a node in a Gumby experiment does not necessarily know every other node, while this is required for a large set of experiments.

There are two problems:

  1. The nodes are not introduced properly at the start of the experiment. There is some functionality for introducing candidates, but these methods do not always succeed. You should be able to have an instruction in your scenario file that makes a node know every other node.
  2. The nodes forget candidates during the experiment. This is probably due to Dispersy refreshing the list of candidates. You should be able to turn off this behaviour.

An example of an experiment that fails due to these problems is experiments/multichain/multichain_synthetic.scenario.

I think I solved the second problem by overriding the cleanup_candidates method in a MultiChainGumbyCommunity and using that community in Gumby experiments (see the sketch below), but this should be implemented in a generic way in Gumby.
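
A minimal sketch of the workaround described above, assuming a Dispersy community whose periodic cleanup_candidates hook can be neutralised for experiments; the import path and the return value are assumptions:

from Tribler.community.multichain.community import MultiChainCommunity  # import path assumed

class MultiChainGumbyCommunity(MultiChainCommunity):
    """Experiment-only community that never forgets its candidates."""

    def cleanup_candidates(self):
        # Dispersy normally prunes stale candidates here; during experiments we
        # want a stable, fully-known candidate list, so prune nothing.
        return 0  # assumption: the hook reports how many candidates were removed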

Compile all gumby python files before starting experiment?

I think I'm seeing a problem similar to the one we had before precompiling Tribler: this strange EOF error reported by a single peer.

2013-10-31 02:29:45+0100 [-] STDOUT: Found in: ./output/localhost/node307/00022.err
2013-10-31 02:29:45+0100 [-] STDOUT: Traceback (most recent call last):
      File "/var/scratch/emilon/jenkins/workspace/Experiment_AllChannel+Channelcommunity_devel_with_gumby/gumby/experiments/dispersy/allchannel_client.py", line 46, in <module>
        from gumby.experiments.dispersyclient import DispersyExperimentScriptClient, call_on_dispersy_thread, main
    EOFError: EOF read where object expected
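
A minimal sketch of precompiling the Gumby sources up front, mirroring what was done for Tribler (one of several possible places to hook this in):

import compileall

# Byte-compile every .py file under the gumby tree before spawning peers, so
# hundreds of instances never race on writing the same .pyc files mid-start.
compileall.compile_dir("gumby", quiet=True)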

graph_process_guard_data.sh crashes on empty file

when resource_usage.log is empty:

Running gumby/scripts/graph_process_guard_data.sh
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: Parsing resource_usage file src/resource_usage.log
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: Traceback (most recent call last):
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: File "/home/corpaul/workspace/gumby/scripts/extract_process_guard_stats.py", line 178, in
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: main(argv[1], argv[2])
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: File "/home/corpaul/workspace/gumby/scripts/extract_process_guard_stats.py", line 166, in main
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: parse_resource_files(input_directory, output_directory, start_time)
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: File "/home/corpaul/workspace/gumby/scripts/extract_process_guard_stats.py", line 85, in parse_resource_files
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: metainfo = json.loads(line)
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: File "/home/corpaul/venv/inst/lib/python2.7/json/init.py", line 326, in loads
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: return _default_decoder.decode(s)
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: File "/home/corpaul/venv/inst/lib/python2.7/json/decoder.py", line 366, in decode
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: obj, end = self.raw_decode(s, idx=_w(s, 0).end())
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: File "/home/corpaul/venv/inst/lib/python2.7/json/decoder.py", line 384, in raw_decode
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: raise ValueError("No JSON object could be decoded")
2013-12-03 15:51:55+0100 [-] CMD "gumby/scripts/graph_process_guard_data.sh" ERR: ValueError: No JSON object could be decoded
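
A minimal sketch of a guard that skips empty or malformed resource_usage.log files instead of crashing on the JSON header line (a sketch against the traceback above, not the actual extract_process_guard_stats.py code):

import json
import os

def read_metainfo(path):
    if not os.path.exists(path) or os.path.getsize(path) == 0:
        print("Skipping empty resource_usage file %s" % path)
        return None
    with open(path) as log_file:
        try:
            return json.loads(log_file.readline())
        except ValueError:
            print("Skipping resource_usage file with a malformed header: %s" % path)
            return None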

Argument passing

I want to pass a JSON document to a method using a scenario file.
However, because the JSON document contains spaces, Gumby splits it up and passes it as separate arguments; e.g. I'm getting this error: add_friend() takes exactly 3 arguments (14 given).

Can we agree upon a "this is a single argument" delimiter in order to pass such an argument? One possible approach is sketched below.
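
One possible approach, sketched with shlex: treat quoted text in a scenario line as a single argument, so a JSON document survives intact (a sketch, not current Gumby behaviour; the scenario line is hypothetical):

import shlex

# Hypothetical scenario line with a quoted JSON argument:
line = '@0:15 add_friend peer_7 \'{"name": "alice", "keys": ["k1", "k2"]}\''

parts = shlex.split(line)
# parts -> ['@0:15', 'add_friend', 'peer_7', '{"name": "alice", "keys": ["k1", "k2"]}']
# The quoted JSON document is passed as one argument instead of being split on spaces.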
