brian-team / brian2genn

Brian 2 frontend to the GeNN simulator

Home Page: http://brian2genn.readthedocs.io/

License: GNU General Public License v2.0

Python 77.28% C++ 21.17% MATLAB 0.58% Makefile 0.21% Shell 0.71% M 0.05%
brian genn genn-simulator gpu-computing python simulation spiking-neural-networks

brian2genn's People

Contributors

azure-pipelines[bot], denisalevi, jangmarker, justasb, kernfel, mstimberg, neworderofjamie, thesamovar, tnowotny


brian2genn's Issues

Wrong number output format during code generation?

I just found this in generated code:
#define SCALAR_MAX 179769313486231570814527423731704356798070567525844996598917476803157260780028538760589558632766878171540458953514382464234321326889464182768467546703537516986049910576551282076245490090389328944075868508455133942304583236903222948165808559332123348274797826204144723168738177180919299881250404026184124858368.000000
I don't think this is sensible (even if C++ accepts it): the constant should have been emitted in scientific notation.
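For reference, the difference is just the format specifier used when the constant is written out; a minimal Python sketch (illustrative only, not the actual Brian2GeNN code):

# Writing DBL_MAX with '%f' produces the unwieldy fixed-point literal quoted
# above; '%e' (or '%g') keeps it compact and unambiguous for the C++ compiler.
import sys

scalar_max = sys.float_info.max  # DBL_MAX
print('#define SCALAR_MAX %f' % scalar_max)  # 179769313486...368.000000
print('#define SCALAR_MAX %e' % scalar_max)  # 1.797693e+308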

Docs are not building on readthedocs

There is an error when installing brian2genn on readthedocs during the docs build.
I can't make out what the problem is; it points to line 24 of codeobject.py:

File "/home/docs/checkouts/readthedocs.org/user_builds/brian2genn/envs/latest/local/lib/python2.7/site-packages/Brian2GeNN-0.9b0-py2.7.egg/brian2genn/codeobject.py", line 24, in GeNNCodeObject
'constant_or_scalar': constant_or_scalar})
TypeError: __init__() got multiple values for keyword argument 'env_globals'

The details are here:
https://readthedocs.org/projects/brian2genn/builds/4450559/

Any insights @mstimberg ?

Synapse model

Implemented in principle but not tested yet I think.

Errors when using the brian2genn interface

From @Kwartke on August 24, 2015 12:58

When running the attached code (adapted from tests/features/speed.py) using the brian2genn interface, the compilation fails.
Recent and clean versions of brian2 (master branch), brian2genn (develop branch) and genn (master branch) from the respective git repositories were used.
When running the attached examples (HHNeuronsOnly and CUBAFixedConnectivity), the following compiler errors show up (the errors are the same for both examples but differ between CUDA versions):

on a GTX Titan Black (CUDA 6.5):

magicnetwork_model.cc: error: ‘struct neuronModel’ has no member named ‘supportCode’
   n.supportCode= tS("\n\
     ^

on a GTX 580 (CUDA 7.0):

  File "p.py", line 36, in <module>
    group.v = El
...
File "/home/wartke/brian2genn/brian2genn/device.py", line 150, in code_object_class
    raise ValueError("Cannot specify codeobj_class for GeNN device.")
ValueError: Cannot specify codeobj_class for GeNN device.

Our aim is to run all examples from tests/features/speed.py using brian2genn. However, errors similar to those above show up for the other examples as well.

Kind regards, Konrad Wartke & Moritz Augustin


attached example no. 1:


from brian2 import *
import brian2genn

set_device("genn")
num_neurons = 10
# Parameters
area = 20000 * umetre**2
Cm = 1 * ufarad * cm**-2 * area
gl = 5e-5 * siemens * cm**-2 * area
El = -65 * mV
EK = -90 * mV
ENa = 50 * mV
g_na = 100 * msiemens * cm**-2 * area
g_kd = 30 * msiemens * cm**-2 * area
VT = -63 * mV
# The model
eqs = Equations('''
    dv/dt = (gl*(El-v) - g_na*(m*m*m)*h*(v-ENa) - g_kd*(n*n*n*n)*(v-EK) + I)/Cm : volt
    dm/dt = 0.32*(mV**-1)*(13.*mV-v+VT)/
        (exp((13.*mV-v+VT)/(4.*mV))-1.)/ms*(1-m)-0.28*(mV**-1)*(v-VT-40.*mV)/
        (exp((v-VT-40.*mV)/(5.*mV))-1.)/ms*m : 1
    dn/dt = 0.032*(mV**-1)*(15.*mV-v+VT)/
        (exp((15.*mV-v+VT)/(5.*mV))-1.)/ms*(1.-n)-.5*exp((10.*mV-v+VT)/(40.*mV))/ms*n : 1
    dh/dt = 0.128*exp((17.*mV-v+VT)/(18.*mV))/ms*(1.-h)-4./(1+exp((40.*mV-v+VT)/(5.*mV)))/ms*h : 1
    I : amp
    ''')
# Threshold and refractoriness are only used for spike counting
group = NeuronGroup(num_neurons, eqs,
                    threshold='v > -40*mV',
                    refractory='v > -40*mV')
group.v = El
group.I = '0.7*nA * i / num_neurons'
run(1*second)
device.build(directory="genn", compile=True, run=True)

attached example no. 2:


from brian2 import *
import brian2genn

set_device("genn")
N = 10
Ne = int(.8 * N)
taum = 20 * ms
taue = 5 * ms
taui = 10 * ms
Vt = -50 * mV
Vr = -60 * mV
El = -49 * mV
eqs = '''
    dv/dt  = (ge+gi-(v-El))/taum : volt (unless refractory)
    dge/dt = -ge/taue : volt (unless refractory)
    dgi/dt = -gi/taui : volt (unless refractory)
    '''
P = NeuronGroup(N, eqs, threshold='v>Vt', reset='v = Vr', refractory=5 * ms)
P.v = 'Vr + rand() * (Vt - Vr)'
P.ge = 0 * mV
P.gi = 0 * mV
we = (60 * 0.27 / 10) * mV  # excitatory synaptic weight (voltage)
wi = (-20 * 4.5 / 10) * mV  # inhibitory synaptic weight
Ce = Synapses(P, P, pre='ge += we')
Ci = Synapses(P, P, pre='gi += wi')
Ce.connect('i<Ne', p=80. / N)
Ci.connect('i>=Ne', p=80. / N)
s_mon = SpikeMonitor(P)
run(1*second)
device.build(directory="genn2", compile=True, run=True)

Copied from original issue: brian-team/brian2#545

Expose synapse span type choices

I have made a simple change that allows setting the default synapse span type for all synapse groups in a model via a Brian 2 preference, devices.genn.synapse_span_type.
It would be better if we had a method to do this per synapse population.
The simple method is implemented in the expose_blocksize_prefs branch if you want to have a look, @mstimberg
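For illustration, usage would look roughly like this (a sketch based on the preference name above; the accepted value strings are my assumption, mirroring GeNN's span types):

from brian2 import *
import brian2genn

set_device('genn')
# Assumed value strings ('POSTSYNAPTIC' corresponding to the current behaviour):
prefs.devices.genn.synapse_span_type = 'PRESYNAPTIC'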

update order in SynapsesSTDP feature test

I have not yet found a workaround for specifying an explicit update order in the SynapsesSTDP feature test (test.features.synapses) in order to have brian2genn pass this test. I thought setting the default schedule in prefs before running the test would work (see the sketch below), but apparently other devices override it with their own defaults. So for the time being, using the standard test (without an explicitly changed update schedule), this test will fail with roughly an 8% error in brian2genn.
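For reference, the attempted workaround looked roughly like this (a sketch; the exact ordering required by the feature test is from memory and may differ):

from brian2 import prefs

# Move 'synapses' ahead of 'groups' so that synaptic updates run before the
# neuronal state updaters; other devices overwrite this with their own defaults.
prefs.core.network.default_schedule = ['start', 'synapses', 'groups',
                                       'thresholds', 'resets', 'end']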

Support run-regularly for synapses

Support run_regularly for synapses, likely using the synapse_dynamics mechanism of GeNN.
An example application would be a weight decay that is only calculated every 100 ms, i.e. at an interval much larger than the simulation timestep (see the sketch below).
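A sketch of the intended use case in plain Brian 2 syntax (the weight variable w and the decay factor are illustrative):

from brian2 import *

G = NeuronGroup(100, 'dv/dt = -v/(10*ms) : 1', threshold='v > 1', reset='v = 0')
S = Synapses(G, G, 'w : 1', on_pre='v_post += w')
S.connect(p=0.1)
S.w = '0.1*rand()'
# Weight decay evaluated only every 100 ms (much larger than dt); this is the
# kind of code that would need to map onto GeNN's synapse_dynamics mechanism.
S.run_regularly('w *= 0.99', dt=100*ms)
run(1*second)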

Models with heterogeneous delays don't give warning or error?

When specifying heterogeneous delays in brian2genn, they are silently ignored. I know they are not supported, but shouldn't there be at least a warning or even an error?

This old Brian 2 example demonstrates this well when comparing the plots produced by the cpp_standalone and genn devices. Even though synapses with different delays are created, the simulation just uses no delay (or maybe just the first delay value? I didn't check).

from brian2 import *
import brian2genn

set_device('genn')
#set_device('cpp_standalone')

G1 = NeuronGroup(10, 'dv/dt = -v / (10*ms) : 1',
                threshold='v > 1',
                reset='v=0.')
G1.v = 1.2
G2 = NeuronGroup(10, 'dv/dt = -v / (10*ms) : 1',
                 threshold='v > 1',
                 reset='v=0')

syn = Synapses(G1, G2, 'dw/dt = -w / (50*ms): 1', pre='v+=w')

syn.connect('i == j', p=0.75)

# Set the delays
syn.delay[:] = '1*ms + i * ms + 0.25*randn()*ms'
# Set the initial values of the synaptic variable
syn.w[:] = 1

mon = StateMonitor(G2, 'v', record=True)
run(20*ms)
plot(mon.t / ms, mon.v.T)
show()

Note: I'm not using the latest brian2genn version (I'm on commit 0553caf), in case there have been changes since then.

`name must be a namespace name` error in synapseKrnl.cc

One of our speed test scripts fails with Brian2GeNN. Running this script

from brian2 import *
import brian2genn

set_device('genn', directory='genn', compile=True, run=True, debug=False)


# configuration options
duration = 1*second

N = 10
NE = int(0.8 * N)           # Number of excitatory cells
NI = NE/4          # Number of inhibitory cells 
tau_ampa = 5.0*ms   # Glutamatergic synaptic time constant
tau_gaba = 10.0*ms  # GABAergic synaptic time constant
epsilon = 0.02      # Sparseness of synaptic connections
tau_stdp = 20*ms    # STDP time constant
gl = 10.0*nsiemens   # Leak conductance
el = -60*mV          # Resting potential
er = -80*mV          # Inhibitory reversal potential
vt = -50.*mV         # Spiking threshold
memc = 200.0*pfarad  # Membrane capacitance
bgcurrent = 200*pA   # External current
eta = 0

eqs_neurons='''
dv/dt=(-gl*(v-el)-(g_ampa*v+g_gaba*(v-er))+bgcurrent)/memc : volt (unless refractory)
dg_ampa/dt = -g_ampa/tau_ampa : siemens
dg_gaba/dt = -g_gaba/tau_gaba : siemens
'''

neurons = NeuronGroup(NE+NI, model=eqs_neurons, threshold='v > vt',
                      reset='v=el', refractory=5*ms)
Pe = neurons[:NE]
Pi = neurons[NE:]

con_e = Synapses(Pe, neurons, on_pre='g_ampa += 0.3*nS')
con_e.connect('rand()<epsilon')
con_ii = Synapses(Pi, Pi, on_pre='g_gaba += 3*nS')                                       
con_ii.connect('rand()<epsilon')

eqs_stdp_inhib = '''
w : 1
dA_pre/dt=-A_pre/tau_stdp : 1
dA_post/dt=-A_post/tau_stdp : 1
'''
alpha = 3*Hz*tau_stdp*2  # Target rate parameter
gmax = 100               # Maximum inhibitory weight

con_ie = Synapses(Pi, Pe, model=eqs_stdp_inhib,
                  on_pre='''A_pre += 1.
                         w = clip(w+(A_post-alpha)*eta, 0, gmax)
                         g_gaba += w*nS''',
                  on_post='''A_post += 1.
                          w = clip(w+A_pre*eta, 0, gmax)
                       '''
                 )
con_ie.connect('rand()<epsilon')
con_ie.w = 1e-10

run(duration)

fails with this output (I left the brian INFO stuff out)

...
building genn executable ...
ar -rcs /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/lib/libgenn.a /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/global.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/modelSpec.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/neuronModels.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/synapseModels.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/postSynapseModels.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/utils.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/stringUtils.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/sparseUtils.o /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/obj/hr_time.o
g++ -std=c++11 -DNVCC=\""/usr/local/cuda/bin/nvcc"\" -DMODEL=\"/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/python_test_networks/test_feature_tests/genn/magicnetwork_model.cpp\" -o /mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/python_test_networks/test_feature_tests/genn/generateALL /home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/src/generate*.cc -I"/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/include" -I"/usr/local/cuda/include" -L"/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/lib" -L"/usr/local/cuda/lib64" -lgenn -lcuda -lcudart
call was ./generateALL /mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/python_test_networks/test_feature_tests/genn
optimizing block size...
Global memory required for core model: 0.000654 MB.
6440894464 for device 0
dry-run compile for device 0
"/usr/local/cuda/bin/nvcc" -cubin -x cu -arch sm_35 -O3 -I"$GENN_PATH/lib/include" -o "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/python_test_networks/test_feature_tests/genn/runner.cubin" "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/python_test_networks/test_feature_tests/genn/magicnetwork_model_CODE/runner.cc"
genn-buildmodel.sh:70: error 50: command failure
/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/python_test_networks/test_feature_tests/genn/magicnetwork_model_CODE/synapseKrnl.cc(23): error: name must be a namespace name

1 error detected in the compilation of "/tmp/tmpxft_00007388_00000000-7_runner.cpp1.ii".
/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/genn/lib/src/generateALL.cc: 258: cuda driver error 301: CUDA_ERROR_FILE_NOT_FOUND
ERROR      Brian 2 encountered an unexpected error. If you think this is bug in Brian 2, please report this issue either to the mailing list at <http://groups.google.com/group/brian-development/>, or to the issue tracker at <https://github.com/brian-team/brian2/issues>. Please include this file with debug information in your report: /tmp/brian_debug_FQg9Wu.log  Additionally, you can also include a copy of the script that was run, available at: /tmp/brian_script_wU0GlI.py You can also include a copy of the redirected std stream outputs, available at /tmp/brian_stdout_8V3yfq.log and /tmp/brian_stderr_94OxVL.log Thanks! [brian2]
Traceback (most recent call last):
  File "GENN_vogels_with_synaptic_dynamics_test.py", line 60, in <module>
    run(duration)
  File "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/brian2/brian2/units/fundamentalunits.py", line 2428, in new_f
    result = f(*args, **kwds)
  File "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/brian2/brian2/core/magic.py", line 371, in run
    namespace=namespace, profile=profile, level=2+level)
  File "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/brian2/brian2/core/magic.py", line 231, in run
    namespace=namespace, profile=profile, level=level+1)
  File "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/brian2/brian2/core/base.py", line 276, in device_override_decorated_function
    return getattr(curdev, name)(*args, **kwds)
  File "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/brian2genn/brian2genn/device.py", line 1205, in network_run
    super(GeNNDevice, self).network_run(net=net, duration=duration, report=report, report_period=report_period, namespace=namespace, level=level+1)
  File "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/brian2/brian2/devices/cpp_standalone/device.py", line 1170, in network_run
    self.build(direct_call=False, **self.build_options)
  File "/mnt/antares_raid/home/denisalevi/projects/dev_brian2cuda/brian2cuda_repo/frozen_repos/brian2genn/brian2genn/device.py", line 582, in build
    returncode=ex.returncode)
RuntimeError: Project compilation failed (Command ['genn-buildmodel.sh', 'magicnetwork_model.cpp'] failed with error code 50).
See the output above (if any) for more details.

I don't think it's a GeNN configuration problem, since all other speed tests work fine. Is there an unsupported feature I am missing, or is this a Brian2GeNN bug?

mod function

There are some problems around using fmod with integer arguments and the MSVC compiler. Dan and Marcel made a hack with universal support code defining all kinds of overloads for _brian_mod.
For now I have simply re-translated _brian_mod to fmod. It needs testing whether this causes problems or whether nvcc is fine with it on all platforms.

Failing brian2 test: test_synapses.test_vectorisation_STDP_like

The test test_synapses.test_vectorisation_STDP_like fails when brian2genn is run in GPU mode. I have investigated the source of the problem, and it comes down to the following:
Within the test, (1) a pre-synaptic neuron variable is incremented in the pre-synaptic code, and (2) a post-synaptic neuron variable is incremented by the post-synaptic code. In this particular example, this is done on a sparse synaptic connection.
Because the synapse updates are done in parallel over post-synaptic neurons in case 1 and along pre-synaptic spikes in case 2, both lead to race conditions on the target neuron variables; the correct solution would be to use atomicAdd, and I have verified by manually substituting atomicAdd in the generated code that this fixes the problem.
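For context, a minimal sketch of the pattern that triggers the race (not the actual test code; the variable names are made up):

from brian2 import *

G = NeuronGroup(100, '''v : 1
                        counter : 1''',
                threshold='v > 1', reset='v = 0')
S = Synapses(G, G, 'w : 1',
             on_pre='counter_pre += 1',  # parallel over post neurons, all writing to one pre neuron
             on_post='v_post += w')      # parallel over pre neurons, all writing to one post neuron
S.connect(p=0.2)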
Fixing the problem permanently may be more difficult. @mstimberg, @thesamovar: what kinds of actions on pre- and post-synaptic neuron variables are allowed in synaptic code in Brian 2? I can see what you would expect for incrementing (like here), but it appears that an action like a simple value assignment would have to lead to undefined behaviour? Also, other non-commuting operations should be undefined?

Default linker flags broke compilation

Hi @mstimberg, at Capocaccia we had a participant who wanted to try brian2genn on Windows, and he was unable to compile because in makefile_common_win.mk from the included GeNN version, LINK_FLAGS were added to rather than replaced. As a result they contained a strange default of "-MD" and something else which I can't recall. The "-MD" forced cl to do some kind of dynamic linking, while nvcc (by default) seems to do static linking equivalent to what cl would do with a "-MT" flag. Linking then failed with an error message saying that dynamically linked and statically linked object files can't be linked together. When we replaced LINK_FLAGS rather than adding to it, everything seemed to work.

I wonder whether we should make the effort to patch the GeNN version you packaged into brian2genn (or, at this point, update to something more recent), repackage brian2genn and release, to fix this problem?

Relatedly, I am wondering whether you have tested brian2genn on Windows with a GPU? I am not sure whether this is a problem with this participant's particular setup or a general problem (it wouldn't appear in the CPU_ONLY version, since there you wouldn't link cl-compiled object files together with nvcc-compiled ones).

Threshold conditions that depend on subexpressions fail

The threshold code generated by Brian 2 is in the form of a variable assignment:

double _cond = something

In brian2genn, this is passed into GeNN as the thresholdConditionCode, which becomes the argument of an if statement:

if (double _cond = something )

This works "by accident", since an assignment can also be evaluated as an expression. However, this fails when the condition is a multi-line string. This happens whenever the threshold condition refers to a subexpression, like in this example:

from brian2 import *
import brian2genn
set_device('genn')
group = NeuronGroup(100, 'rate = sin(2*pi*t*1*Hz)**2*100*Hz : Hz',
                    threshold='rand() < rate*dt')
spike_mon = SpikeMonitor(group)
run(1*second)

The code generated by GeNN will look like this:

            if (double rate = (_brian_pow(sin(((2 * (3.14159)) * t) * 1), 2)) * 100
  bool _cond = _rand(l_seed) < (rate * DT)) 

In this specific case, rate could of course be directly replaced by its defining expression by the user, but in general subexpressions are used to structure the code and to avoid the repetition of identical expressions. Now I see three solutions:

  1. Support this from the GeNN side -- the easiest "hack" to make it work would be to only include the last line of a multi-line string as the if condition and add the other lines on their own before it.
  2. Replace subexpressions in threshold code on the brian2genn level
  3. Support Brian's new constant over dt subexpressions (which are evaluated once per time step and stored, instead of inserted into the code whenever used) and require that thresholds only refer to such subexpressions.

When I started to write the issue, I was actually thinking about doing 1), but now I feel that 3) should be an easier solution and should be enough in almost all cases (subexpressions that change over a single time step are something that is only needed for numerical accuracy purposes in differential equations, and you would rarely refer to such expressions from the threshold). As a bonus, it would support the "constant over dt" mechanism in general (in quite a few cases, this mechanism can save you some headache, so we are recommending it more and more).
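For illustration, option 3 applied to the example above would look roughly like this (a sketch; it assumes the "(constant over dt)" flag is accepted for the rate subexpression):

from brian2 import *
import brian2genn
set_device('genn')

# The subexpression is evaluated once per time step and stored, so the
# threshold condition handed to GeNN stays a single expression.
group = NeuronGroup(100,
                    'rate = sin(2*pi*t*1*Hz)**2*100*Hz : Hz (constant over dt)',
                    threshold='rand() < rate*dt')
spike_mon = SpikeMonitor(group)
run(1*second)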
What do you think?

Are there situations where variables are not decorated because of mixed variable usage?

We are filling the synapse_model description incrementally while preparing the code for pre- and post-synaptic effects. It occurs to me that if pre-synaptic code uses a variable that is introduced in the post-synaptic code, it might not be collected early enough for the pre-synaptic code to be decorated fully. (Unless the list of variables for the pre-synaptic code is already populated by Brian with the post-synaptic variables ...)

reset_device() and run() results in 'Only a single run statement is supported'

The Multiple Runs section of the documentation states that:

code can be run repeatedly “in multiple runs” that are completely independent. This just needs a reset_device command issued after the run(runtime) command

However, I get the 'Only a single run statement is supported' error when I try to run() after reset_device().

Smallest code to reproduce the error:

from brian2 import *                   
import brian2genn   
set_device('genn')
run(1*second)

from brian2.devices import reset_device
reset_device('genn')

run(1*second) # <- results in above error

v.dimension error

Hi,

I just updated brian2, brian2genn and genn to the latest versions, and when running my code I got an error about v.dimension, stating that the dimension attribute doesn't exist. brian2genn's device.py, line 614:

                if isinstance(v, ArrayVariable):
                    try:
                        if isinstance(v, DynamicArrayVariable):
                            if v.dimension == 1:

So I checked, and it seems the attribute has been renamed to v.dim.

Decide how to package Brian2GeNN

I think we are ready to publish a first official (beta?) release of Brian2GeNN: it now detects unsupported features reasonably well and runs non-trivial examples like Kremer et al. (2011).

I wonder what is the best way to package it, though. Brian2GeNN does of course depend on GeNN, but it would not make much sense to specify this dependence on the Python level, since GeNN is not a Python package (we won't upload it to pypi). In principle, we could create a conda package for GeNN, but given that it would not do much except for copying the files, I am not sure it is worth it.

One option I was considering: include the GeNN source code itself into Brian2GeNN via a submodule, and ship it alongside Brian2GeNN in pip/conda packages. I think this would be the most user-friendly option (a simple pip install brian2genn or conda install -c brian-team brian2genn would work), and it would also avoid potential conflicts with future versions of GeNN.

There is one problem with this solution, though: both pip and conda allow the installation of packages system-wide (even though this is uncommon for conda), so GeNN could end up in a location that is not writeable by users without administrator rights. During the compilation of GeNN projects, some files are compiled into the source directory, so this would fail. Could we pre-compile these files when we create the conda package (let's not worry too much about pip, this could remain "semi-automatic", i.e. you'd still have to install GeNN independently), or do any of these files have to be re-compiled in a project-dependent way?

GPU timeout doesn't sit well with randn implementation

The randn algorithm is non-deterministic in its runtime. If one works on a machine where the compute GPU is also used for display, there is a mandatory timeout watchdog for GPU calls. In practice this means that randn often runs into the timeout and GeNN fails ... not so nice but probably unfixable (unless we can find a better algorithm for normally distributed random numbers with deterministic, and short enough, runtime).

Better implementation of _timestep function

Hi @mstimberg, I notice you implemented the Brian 2 _timestep function to derive an integer timestep from the global variable t as

return (int64_t)((t + 1e-3*dt)/dt);

Is there a particular reason not to return GeNN's intrinsic integer timestep

return iT;

(Maybe you didn't know of its existence, but I thought I'd ask before changing your solution.)

Performance drop in `master` and `benchmarking` branch for networks with postsynaptic effects

I have recently updated brian2CUDA to be compatible with brian2 master, and in order to compare performance against brian2GeNN I updated brian2GeNN to the benchmarking branch and GeNN to tag 3.0.0, see brian-team/brian2cuda#144 (before, I used the brian2GeNN 1.0-benchmark branch with GeNN 2.2.2). I am observing a drastic performance decrease (up to a factor of 5) on some benchmarks between the two versions. This seems to apply only to models where presynaptic spikes modify postsynaptic variables. For models which only affect synaptic variables, the performance seems unaffected. And even the standard event-driven STDP example looks fine to me (even though it does apply effects to postsynaptic variables).

Below is a speed test result from our Brunel Hakim benchmark (which is basically the brian2 example). The x axis is the number of neurons, the y axis the time in seconds (only for the code objects which are run every time step; no compilation or synapse creation etc.). The versions used for the two measurements are:

green plot:
brian2genn: 8c6da48b3ae (currently tip of benchmarking branch, same results with current master branch)
genn-team/genn@3b794457b8195 (tag 3.0.0)
brian-team/brian2@320a5f6b3 (currently tip of master)

blue plot:
brian2genn: 0553caf (currently tip of 1.0-benchmark branch)
genn-team/genn@e01c85f183 (tag 2.2.2)
brian-team/brian2@fadc6a0ae (somewhere after version 2.1)

[Figure: genn_figures]
I observe the same performance drop for the CUBA and COBAHH examples.

brian2GeNN doesn't work for me with the newer GeNN versions. With both 3.1.0 and 3.1.1, I get the double-definition-of-isinf error:

.../GeNNworkspace/magicnetwork_model_CODE/support_code.h(40): error: more than one instance of
overloaded function "isinf" matches the argument list:
            function "isinf(double)"                  
            function "std::isinf(double)"             
            argument types are: (double)              

The solution you implemented in PR #61 did not work for me in brian2CUDA either and produces the same error (see brian-team/brian2cuda#125).

So currently the only version that works with brian2 master (which brian2CUDA works with now) has this performance problem.

Brian2GeNN not up to date with brian2 master

When using the brian2genn interface (current develop branch) together with genn (current development branch) there were several problems related to code changes in Brian2.

For example: the class AttributeVariable does not exist anymore, the main_queue sometimes has an additional parameter, there were some changes to the clock structure, ...

Since we (@moritzaugustin and @Kwartke) want to benchmark our CUDA implementation against the GeNN simulator, a working brian2genn would ease the creation and execution of test suites.

Right now, we are using the following combination of repos (with slight modifications) -- thanks to Esin Yavuz for reporting the specific commits:
brian2: commit ac06993ac83101edf3fac3cec984f7ef161ea1a0
brian2genn: commit 5efbcc7
GeNN: commit 0621048c8ef0f081032f39303ce8846788bae9ef

Expose more GeNN preferences in brian2genn

When running on a cluster, I need to make use of the GeNN preferences

    extern int autoChooseDevice; //!< Flag to signal whether the GPU device should be chosen automatically 
    extern int defaultDevice; //! default GPU device; used to determine which GPU to use if chooseDevice is 0 (off)

I would suggest just making them Brian preferences under devices.genn.XXX.
Does that sound like a good plan, @mstimberg?
They would then have to be translated into GeNN preferences through the model template, I suggest.
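For illustration, something along these lines (the preference names below are hypothetical, simply mirroring the GeNN globals above):

from brian2 import *
import brian2genn

set_device('genn')
# Hypothetical preferences, not yet implemented:
prefs.devices.genn.auto_choose_device = False  # would map to GeNN's autoChooseDevice
prefs.devices.genn.default_device = 1          # would map to GeNN's defaultDevice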

Bad performance on CUBAFixedConnectivity benchmark

In our CUBA benchmark with fixed connectivity (constant number of synapses per neuron), brian2GeNN performs surprisingly badly, see the plot below. In other benchmarks, brian2CUDA and brian2GeNN performance is comparable, but not in this one. This behaviour is not new either; I have similar plots for this benchmark from April this year (using older brian2, GeNN and brian2GeNN versions). Does anyone have an idea why this is the case?

[Figure: speed_test_cubafixedconnectivitynomonitor_absolute]

You can reproduce this behaviour by running the script below (with either dev = 'genn' or dev = 'cpp'), which runs the benchmark for N = 1e6 neurons and prints the device._last_run_time value (so no synapse creation and device memory initialisation included). The figure above also plots the _last_run_time values. You need to incorporate the changes from PR #65 in order to get the _last_run_time in brian2GeNN.

CUBAFixedConnectivity.py

import time
from brian2 import *

dev = 'genn'
#dev = 'cpp'

if dev == 'genn':
    import brian2genn
    set_device('genn', directory='CUBAFixedConnectivity_GeNN')
    prefs.devices.genn.benchmarking = True
elif dev == 'cpp':
    set_device('cpp_standalone', directory='CUBAFixedConnectivity_CPP')

taum = 20*ms
taue = 5*ms
taui = 10*ms
Vt = -50*mV
Vr = -60*mV
El = -49*mV

eqs = '''
dv/dt  = (ge+gi-(v-El))/taum : volt (unless refractory)
dge/dt = -ge/taue : volt (unless refractory)
dgi/dt = -gi/taui : volt (unless refractory)
'''
N = int(1e6)
Ne = int(0.8 * N)

P = NeuronGroup(N, eqs, threshold='v>Vt', reset='v = Vr', refractory=5*ms,
                method='exact')
P.v = 'Vr + rand() * (Vt - Vr)'
P.ge = 0*mV
P.gi = 0*mV

we = (60*0.27/10)*mV  # excitatory synaptic weight (voltage)
wi = (-20*4.5/10)*mV  # inhibitory synaptic weight
Ce = Synapses(P, P, on_pre='ge += we')
Ci = Synapses(P, P, on_pre='gi += wi')
Ce.connect('i<Ne', p=80. / N)
Ci.connect('i>=Ne', p=80. / N)

start = time.time()
run(1 * second, report='text')
print("Run took {:.2f} s".format(time.time() - start))
print("_last_run_time: {:.2f}".format(device._last_run_time))

I just ran these on our GeForce GTX TITAN Black (Kepler architecture) with brian2GeNN commit 8c6da48b3ae (benchmarking branch), brian-team/brian2@6c50e3a22d (master), genn-team/genn@3b794457b81 (3.0.0). I get for

dev == 'cpp':

_last_run_time: 119.47

dev == 'genn':

_last_run_time: 483.55

Could someone reproduce this? And is this something you would have expected? To me this benchmark looks like a standard example of pre spikes, post effects.

Interesting file conflict on linux

I just noticed an interesting problem when running examples of brian2genn on our Linux server "lancing". Because @mstimberg had run other examples before, ptxas was unable to overwrite his /tmp/runner.cubin file with my own, and failed. I am not sure why this temporary file ends up in /tmp without any modifier that would make it unique.

linked_var

The concept of a linked_var is currently not supported in brian2genn, but no warning is issued if this mechanism is employed either. At the very least, a NotImplementedError should be raised if this functionality is used.
See examples.reliability for an example of this problem.
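A minimal sketch of the mechanism in question (plain Brian 2, not taken from examples.reliability):

from brian2 import *

driver = NeuronGroup(10, 'dv/dt = -v/(10*ms) : 1')
follower = NeuronGroup(10, '''x : 1 (linked)
                              dy/dt = -(y - x)/(5*ms) : 1''')
# With the genn device this is currently ignored silently instead of raising
# a NotImplementedError.
follower.x = linked_var(driver, 'v')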

Modulo operator with integer arguments cannot be used from GPU code

It is of course not a real-world use case in this form, but the following code fails with brian2genn on GPU (but not on CPU):

from brian2 import *
import brian2genn
set_device('genn')
group = NeuronGroup(100, '', threshold='i % 25 == 0')
run(1*second)

Error:

..../magicnetwork_model_CODE/neuronKrnl.cc(45) (col. 18): error: calling a __host__ function("std::fmod<int, int> ") from a __global__ function("calcNeurons") is not allowed

It works fine for floating point arguments.

How to troubleshoot "NotImplementedError?"

Dear all,
When running a test script (from here) I encountered this error:

$ python simple_example.py

Traceback (most recent call last):
  File "simple_example.py", line 14, in <module>
    run(10*ms)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2/units/fundamentalunits.py", line 2375, in new_f
    result = f(*args, **kwds)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2/core/magic.py", line 371, in run
    namespace=namespace, profile=profile, level=2+level)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2/core/magic.py", line 231, in run
    namespace=namespace, profile=profile, level=level+1)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2/core/base.py", line 276, in device_override_decorated_function
    return getattr(curdev, name)(*args, **kwds)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2genn/device.py", line 1571, in network_run
    level=level + 1)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2/devices/cpp_standalone/device.py", line 1299, in network_run
    self.build(direct_call=False, **self.build_options)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2genn/device.py", line 711, in build
    self.process_neuron_groups(neuron_groups, objects)
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2genn/device.py", line 1137, in process_neuron_groups
    override_conditional_write=combined_override_conditional_write,
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2/devices/cpp_standalone/device.py", line 552, in code_object
    override_conditional_write=override_conditional_write,
  File "/home/jintao/anaconda2/lib/python2.7/site-packages/brian2/devices/device.py", line 302, in code_object
    '%s: %s') % (varname, ex))
NotImplementedError: Cannot use function timestep: 'No implementation available for target genn. Available implementations: cython, numpy, cpp'

I am running Arch Linux (kernel 4.14.56-1-lts),
have installed the nvidia-lts, opencl-nvidia and cuda packages,
have installed brian2 and brian2genn via conda from brian-team (https://anaconda.org/brian-team),
and have set $CUDA_PATH, $GENN_PATH and $PATH.

$ echo $CUDA_PATH
/opt/cuda
$ echo $GENN_PATH
/home/jintao/anaconda2/opt/genn
$ echo $PATH
/home/jintao/anaconda2/opt/genn/lib/bin:/opt/cuda/bin:/home/jintao/anaconda2/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/opt/cuda/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl

not sure what to troubleshoot :(

Sincerely thanks,
Jintao
