
brian-team / brian2

Brian is a free, open source simulator for spiking neural networks.

Home Page: http://briansimulator.org

License: Other

Python 92.30% C++ 2.78% Makefile 0.02% Shell 0.06% Jupyter Notebook 2.62% C 0.57% Cython 1.51% Dockerfile 0.13%
python neuroscience science differential-equations spiking-neural-networks biological-simulations code-generation simulation-framework brian computational-neuroscience

brian2's People

Contributors

aesagtekin, arashgmn, bdevans, charleesf, dabliss, deep-introspection, denisalevi, dokato, emollier, jangmarker, kapilkd13, kernfel, kjohnsen, kwartke, moritzaugustin, mstimberg, oleksii-leonov, paulbrodersen, phreeza, pjsph, rahuliitg, rcaze, rohithvarma3000, sanjayankur31, schmitts, thesamovar, vigneswaran-chandrasekaran, yger, ziaeemehr, zifeo


brian2's Issues

Boolean is_active variable not handled correctly in Python code

I stumbled across a quite nasty bug that led to wrong results when using refractoriness via the (active) flag -- it took me quite a while to realize what the problem was... We set the is_active variable via is_active = (t >= refractory_until); the result is an array of boolean values. Unfortunately, there's one major difference between a boolean value in Python and in numpy: the minus sign works as a logical negation in numpy, whereas Python just treats the boolean value as an integer:

>>> -True
-1
>>> -False
0
>>> print -np.array([True, False])
[False  True]

A nasty consequence of this behaviour is that symbol rearrangements done by sympy might change the value:

-is_active*(x - 1)   # -is_active will be logically inverted
is_active*(1 - x)  # is_active will be treated as a number

This is a Python-specific problem and we can circumvent it by explicitly multiplying is_active by 1 when it is calculated the first time. We should keep this issue in mind whenever we deal with boolean values.
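
A minimal sketch of that workaround (array values assumed for illustration):

import numpy as np

t = np.array([0.0, 1.0, 2.0])
refractory_until = np.array([0.5, 0.5, 2.5])
is_active = (t >= refractory_until) * 1  # 0/1 integers instead of booleans
print(-is_active)  # [ 0 -1  0] -- arithmetic negation, no logical inversion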

Regenerate code objects on each run

We want to allow parameters to change between runs; the easiest way to support this is to have the code objects (state updater, reset, threshold) regenerated on every run. This is also more robust than only making changes in the namespace, as a state updater might have replaced constant values with literals already.

I think the cleanest way to do this is to replace the current prepare method in Network and BrianObject with pre_run and post_run. In contrast to the prepare method, which was called at somewhat unspecified times and could be called multiple times, pre_run and post_run should be called exactly once before and after a run.

In NeuronGroup, pre_run would take care of building the code objects.

Efficient synapse creation

OK a thought on how to efficiently implement synapse creation. The problem to solve is that if you just compute cond(i,j) for all pairs (i, j) it is O(N^2) and in many cases this is highly inefficient. We should list some of the cases and propose solutions.

The two most important that I can think of are:

  • rand()<p should be replaced with binomial distribution sampling, and also compounds like rand()<p and cond(i,j) should be replaced by binomial distribution sampling followed by condition evaluation only on that subset.
  • i==j should be replaced with one-to-one

So there are various possible solutions here - it would be nice to come up with something quite general that can adapt and become more efficient as we add special cases, but that doesn't require us to rewrite all the templates each time.

The most obvious implementation of i==j is just to explicitly detect it and convert it to connect_one_to_one and require that devices implement this connectivity function. Similarly for rand()<p and connect_random. I don't really like this approach because it's not very general.

Another option is to have a set of flags that are passed to codegen and the template can make use of that extra information. For example, if the condition was rand()<p and f(i,j) we could pass to codegen sparseness=p and cond=f(i,j). The code would then look something like:

for i in xrange(_num_source_neurons):
  if sparseness<1:
    j = binomial_sampling_code_here
  else:
    j = arange(_num_target_neurons)
  _cond = f(i, j)
  ...
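
For the binomial_sampling_code_here placeholder above, a concrete numpy sketch could look like the following (names and sizes are assumptions): draw the number of synapses for each source neuron from a binomial distribution, then pick that many distinct targets.

import numpy as np

rng = np.random.default_rng()

def candidate_targets(num_target_neurons, p):
    # number of synapses for this source neuron: k ~ Binomial(N, p)
    k = rng.binomial(num_target_neurons, p)
    # choose k distinct target indices uniformly at random
    return rng.choice(num_target_neurons, size=k, replace=False)

for i in range(10):  # _num_source_neurons assumed to be 10
    j = candidate_targets(1000, 0.1)  # _num_target_neurons assumed to be 1000
    # _cond = f(i, j) would then only be evaluated on this subset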

A more general and flexible mechanism would be to handle all conditions of the form a and b and c, parse them into a set [a, b, c], decode this set a bit and then pass that to codegen. For example, a might be rand()<p, b might be abs(i-j)<=width and c might be i!=j. This would be parsed into:

a: ('rand()<p', 'binomial', p)
b: ('abs(i-j)<=width', 'abswidth', width)
c: ('i!=j', None, None)

The parsed form is (original_expr, specialisation_name, argument). The codegen template could then look through this list, check which ones it supported, implement them directly if they are supported and if not fall back on the default mechanism.

It might be the case though that it's only worth doing one specialisation, because you won't improve the order of the computation by doing multiple specialisations assuming that each one returns an array of candidate indices. In fact, you might even slow it down. So another option is that each specialisation has an estimate of the number of candidate synapses j it will generate for any given i, and we simply choose the best one. For example if there was rand()<0.5 and i==j then rand()<0.5 has an estimated 0.5*_num_target_neurons for each i, whereas i==j has precisely 1, and in most cases 1<0.5*_num_target_neurons so this will be selected.

Any thoughts on the above?

Design Device object and use this for standalone Brian

This issue supersedes #23 (although read the discussion thread for lots of interesting background).

We agreed to create a new framework for standalone code generation, which requires a redesign of all the Brian objects. The idea is to have a new object Device that handles allocation of memory and computational work. Core Brian objects like NeuronGroup will handle parsing arguments and deciding what memory to allocate and what computational work to do, and request this from the Device object. In runtime mode, this memory will actually be allocated and code compiled; in standalone mode, code will only be generated to do this. This way, the core Brian objects will be the same for both standalone and runtime modes, only the Device object will change. We will have a RuntimeDevice and a StandaloneDevice for this.

Soon, I'll write a more detailed document outlining the system.
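
To make the idea concrete, here is a minimal sketch (class and method names are assumptions, not a final API): the same allocation request either creates memory immediately or merely records code that will do so later.

import numpy as np

class Device(object):
    def allocate_array(self, name, size):
        raise NotImplementedError

class RuntimeDevice(Device):
    def allocate_array(self, name, size):
        return np.zeros(size)  # memory is actually allocated now

class StandaloneDevice(Device):
    def __init__(self):
        self.code = []
    def allocate_array(self, name, size):
        # only emit code that will allocate the array when the
        # generated program runs
        self.code.append('double %s[%d];' % (name, size))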

Some issues that immediately jump out are:

  • NeuronGroup is pretty much worked out, but how will Synapses work with this framework?
  • How should memory allocation work? We need to handle the allocation of statically sized and dynamically sized arrays.
  • How will monitors work in this framework? Will the computational work they do be handled by CodeObject or will we do something different for these objects?

How should we proceed in implementing this, given that it requires major changes to everything in Brian? I suggest we try to work out an initial design for Device at least (an API, say), then we will need a concentrated burst of work to rework the existing code to work with that.

At this point, there is a big danger of different branches of the repository going too far out of sync, so I think we need to coordinate when and how we do this.

Consider using a platform-dependent directory for preferences file (and perhaps log file)

We could use something like https://github.com/ActiveState/appdirs to get more adequate directory names, in particular for the user-specific configuration file. ~/.brian/... works but is not the recommended directory on any platform (for Windows one wouldn't use a dot in the name, on Linux it should rather be in .config/brian, etc.). Similarly, the temporary directory might not be the ideal place for log files (but we should make this configurable anyway).
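
For illustration, the usage would presumably look like this (assuming the appdirs package is installed):

from appdirs import user_config_dir, user_log_dir

config_dir = user_config_dir('brian')  # e.g. ~/.config/brian on Linux
log_dir = user_log_dir('brian')        # platform-appropriate log directory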

Conditional indices and resizeable arrays in codegen

A new mechanism for doing Synapses and Monitor using CodeObject. We allow resizeable arrays and if an index is accessed that is outside the range of the array, it is extended. In addition, we have conditional indexing, saying that we iterate only over the values where a certain condition holds.

Have a clear interface to and from sympy, deal with function calls in a language-independent way

We need to make clear which strings are interpreted by sympy and which are not. Most of the time using sympy and executing the string in Python/C gives the same result, but there are important differences, e.g. integer fractions (see #38 ).
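
A short illustration of the integer fraction pitfall (standalone snippet, not Brian code):

import sympy

expr = sympy.sympify('v/2 + 3/4')
print(expr)  # v/2 + 3/4 -- sympy keeps 3/4 as the exact rational
# Evaluating the same string with Python 2 integer division gives v/2 + 0,
# since 3/4 == 0 there; C integer division truncates in the same way.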

In principle we want to use sympy only for "mathematics" (differential equations, state updater descriptions), not for "programming" (reset statements, pre/post codes). On the other hand, using a sympy expression allows for using the CCode printer, which replaces some function calls and ** by pow. But we also already have a mechanism in place that deals with functions directly.

Proposed procedure:

  • Have functions that convert between "mathematical strings" and sympy objects. These functions should be always used, never use "sympify" or "str(sympy_object)" directly. This is mostly done already, see parse_to_sympy and sympy_to_str in brian2.codegen.parsing.
  • We don't use the Python/C code printer from sympy, instead functions are translated via the Function system in code generation (e.g. in C we would rather use #define abs fabs instead of using sympy to replace abs calls by fabs)
  • We need a way to add support for new languages, i.e. the system should be mostly language-agnostic. Python has a special role: we use the Python function for unit checking (even though we could avoid this and instead have an explicit mechanism for taking care of it).

The only thing I'm not clear about is the translation of ** -- maybe we simply require the use of a power function? Note that this is not as bad as it sounds: it would not affect equations (there we could even support ^) but only abstract code, where this will not be used often anyway.

Templating system

We should investigate templating packages and either use one or create a Brian specific one that we use consistently throughout Brian.

This issue supersedes #22.

numpy.float128 may not exist on all platforms

Some of the code in units assumes there is a numpy.float128 object which doesn't exist on all platforms. This should be checked for. It crashes on Windows 7, 32 bit with Python 2.7 and numpy 1.6.2 and 1.7.1.
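
A minimal guard might look like this (a sketch, not necessarily how it should end up in the units package):

import numpy as np

if hasattr(np, 'float128'):
    longdouble = np.float128
else:
    # fall back to the widest float type the platform offers
    longdouble = np.longdouble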

Refractoriness bug

The I-F_curve_LIF example behaves wrongly now that the refractoriness branch is merged into master.

Progress and notifications

It would be nice to have slightly more sophisticated progress reporting and notification when the simulation is complete.

For progress reporting:

  • Progress reporting that allows you to recursively subdivide tasks, i.e. a big simulation that runs 10 smaller simulations, which in turn each run 10 smaller simulations. Brian's ProgressReporter almost has this, but it's not recursive.
  • Improve the GUI option for progress reporting.
  • Integrate with the notification system (below)?

For notification, the idea would be to let the user know that a long-running simulation has finished without requiring them to monitor a console. For example, a pop-up notification in the system tray on Windows, or an email.

  • Cross-platform pop-up notification
  • Send an email when the task is finished

There might be standard apps that you can use for notification - so perhaps it's not worth adding it to Brian?

New language syntax translation module

Previously we have been using sympy for translation from Python syntax to C++, but this doesn't allow for Java and has some restrictions. It is also not difficult to implement this ourselves, allowing us more control over the process and also making it easier to do things like Java for the BrianDROID project.

This module should ideally be a relatively independent piece of code with enough options that it can do all the things we'd like it to. Here are some requirements, we might need more:

  • It should be able to convert to multiple languages, specifically Python, C++, Java to start with. The obvious way to handle this is with a class hierarchy like in Language, or even more simply just a function for each language with the same signature.
  • It should allow us to make function and variable name substitutions

Anything else?

If the above is enough, a nice minimal interface would be just to pass the expression as a string, and a dict of variable/function name replacements. I don't see any obvious reason to complicate it beyond this.
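
As a proof of concept, such an interface could be sketched with the ast module (this is an assumed design, not existing Brian code; ast.unparse requires Python >= 3.9):

import ast

class _Renamer(ast.NodeTransformer):
    def __init__(self, substitutions):
        self.substitutions = substitutions
    def visit_Name(self, node):
        # applies to both variables and function names
        node.id = self.substitutions.get(node.id, node.id)
        return node

def translate(expr, substitutions):
    tree = _Renamer(substitutions).visit(ast.parse(expr, mode='eval'))
    return ast.unparse(tree)

print(translate('abs(v)**2 + x', {'abs': 'fabs', 'x': 'x_post'}))
# fabs(v) ** 2 + x_post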

Allow recording/reproducing random numbers

To allow for reproducible simulations, we'll need a mechanism to reproduce random number generation. While setting the seed is sufficient in many cases, this is not guaranteed to give the same results across platforms, numpy versions etc. The problem gets even trickier when considering multiple processes or GPU code where we do not want the results to depend on the number of processes, for example.

Make equation parsing for state updaters more robust

Currently, the same sympy symbols are used for parsing of the state updaters in ExplicitStateUpdater and the equations. This has the unintended effect that the identifiers f and g in equations are interpreted as functions and not as variables.

More generally, identifiers in the namespace should be parsed according to their role (e.g. if f refers to a Function specifier, it should be interpreted as a function, if it refers to a variable, it should be interpreted as a symbol).
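
The clash in a nutshell (illustrative snippet):

import sympy

x = sympy.Symbol('x')
f_func = sympy.Function('f')  # how the state updater parsing currently sees f
f_var = sympy.Symbol('f')     # how an equation author typically means f
print(f_func(x))  # f(x) -- a function application
print(2 * f_var)  # 2*f -- an ordinary variable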

Improvements to Network

Two changes to Network:

  • Network should not use weakrefs by default (although there should be a flag to allow it), and MagicNetwork should.
  • We should enable object access in a Network using the object's name, e.g. net.objects['my_obj'], net['my_obj'] or net.my_obj (obviously I like the last one best).

Like this, users can construct and move around Network objects without having to pass lists or dicts of all their objects.

New get/analyse identifiers using ast parsing

The existing string tools get_identifiers and analyse_identifiers should be given a more rigorous implementation using ast parsing. We can still keep the current functions, though, for example for analysing C++ code.
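
A minimal ast-based version might look like this (a sketch; the real function would need to handle more node types):

import ast

def get_identifiers(expr):
    tree = ast.parse(expr, mode='eval')
    return set(node.id for node in ast.walk(tree)
               if isinstance(node, ast.Name))

print(sorted(get_identifiers('v > v_t and rand() < p')))
# ['p', 'rand', 'v', 'v_t']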

Multiple Synapse data structures?

The old Brian Connection class had multiple internal data structures possible (dense, sparse, dynamic). I wonder if Synapses could do with something similar? In particular, there might be some data structures that could be more efficient in certain cases?

Pros: possible efficiency gains
Cons: complicates implementation

Thoughts?

Allow specifying object names with a wildcard at the end

When specifying a name, you should be able to write something like myname* and this is interpreted as: if there is no object myname then name it myname, otherwise name it myname_2, myname_3, ... until you find one that isn't taken. This is already done with the default name for a class, but it would be nice to make it more widely accessible.
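
A minimal sketch of the resolution logic (function name assumed):

def resolve_name(pattern, existing_names):
    if not pattern.endswith('*'):
        return pattern
    base = pattern[:-1]
    if base not in existing_names:
        return base
    i = 2
    while base + '_%d' % i in existing_names:
        i += 1
    return base + '_%d' % i

print(resolve_name('myname*', set(['myname', 'myname_2'])))  # myname_3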

Decode argument/keyword lists

For the standalone framework, if we stick with something like the current system of overriding methods with standalone-generating methods, it would be useful to have an automatic way to take a function argument specification and turn it into a dictionary. For example, if a function is defined as f(x, y, z=1), then the user could write f(1,2,3) or f(y=2,x=1,z=3) or any combination. It would be nice if we could automatically construct a dictionary of the keyword arguments by name, removing positional arguments altogether, so that they could all be treated in the same way. This should be relatively simple to implement, but may not be necessary if we switch to the new Device way of implementing standalone.
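
The standard library already covers most of this; a sketch using inspect.getcallargs:

import inspect

def f(x, y, z=1):
    pass

print(inspect.getcallargs(f, 1, 2, 3))   # {'x': 1, 'y': 2, 'z': 3}
print(inspect.getcallargs(f, y=2, x=1))  # {'x': 1, 'y': 2, 'z': 1}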

Boolean state variables

We should be able to declare state variables as being of boolean type by writing bool in the units slot.

Preferences: Generate default file on install and document per-category preferences

This is a follow-up to the work done in #6:

  • automatic generation of the preferences file: Generate the file during the installation process, i.e. in setup.py
  • The current sphinxext directive only allows documenting all preferences. It should have an option to only document preferences with a given basename, so that e.g. the logging module can document the logging preferences.

Run automatic benchmarks

We should port Brian's mechanism for automatically running benchmarks and come up with a couple of benchmarks, both micro-benchmarks and realistic model simulations (e.g. the classical CUBA and COBA sims). This should include comparing the different code generation targets and maybe comparisons to Brian 1 as a reference.

It would also be good to include memory usage in addition to speed, but I'm not sure how feasible that is.

This issue is not a high priority, though, first we need to make the simulations run, then we can worry about performance :)

Convert simulation results to neo objects

[This is a nice-to-have and not high priority but a nice opportunity for external contributions]

Neo (http://neuralensemble.org/neo/) is becoming the standard for the representation of electrophysiological data in the Python/neuroscience community. Recently, the neurotools project (http://neuralensemble.org/NeuroTools/) has been restarted with the goal of bundling existing analysis functions, using neo as the data representation format. I think it would be useful if Brian results (e.g. from a StateMonitor and SpikeMonitor) could be easily converted into neo objects to make use of these tools.

neo objects are basically numpy arrays with additional metadata (such as physical units), so generating them shouldn't be too difficult. The conversion would have to include a conversion from Brian's unit system to the unit system provided by quantities (http://pythonhosted.org/quantities/) but that should be quite straightforward.
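
A back-of-the-envelope sketch of the unit conversion (assuming the quantities package is available; this is not existing Brian code):

import quantities as pq
from brian2 import mV

v = 10 * mV
v_neo = float(v / mV) * pq.mV  # strip Brian units, reattach quantities units
print(v_neo)  # 10.0 mV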

I guess having this would make the tools.statistic package in brian1 obsolete?

User-provided functions

Consistently deal with user-provided functions and make them work with code generation. Unit-checking for functions will be supported via the check_units decorator (which we'll make mandatory), quoting from an older issue:

So a function that returns time-dependent voltage would use @check_units(t=second, result=volt) and a simple square function would use @check_units(x=None, result=lambda x_unit: x_unit**2).
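
Put together, a user-provided function might then look like this (a sketch of the intended usage, not a finalized API):

from brian2 import check_units, second, volt

@check_units(t=second, result=volt)
def input_voltage(t):
    # hypothetical time-dependent input: a 10 mV/s ramp
    return (t / second) * 0.01 * volt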

One general question: If we make the code-generation language a preference and keep using the same language across objects, do we make it unnecessarily difficult to use user-provided functions in synapse creation or for setting state variables? The scenario I'm thinking of is using C++ code-generation, but using simple Python functions in the expressions for synapse creation or state variable setting. With a "one language" approach, one would be forced to provide C++ code for these functions.

Save and load network state

We should have methods to save and load network state that are more portable than pickle, for example for use in standalone C++ code. Two options occur to me:

  • New simple binary format of our own, the advantage being that it would be easy to read for Brian standalone C++ code (no external dependencies), the disadvantage being that nothing else could read it without writing something explicitly.
  • Existing format, probably HDF5, the advantage being that it could be read in multiple pieces of software, the disadvantage being it would make a large dependency for our standalone C++ code, and may be problematic on, for example, Android/Java.

Or we can do both, but then we have effectively three file formats (pickle, internal, HDF5) to support, which seems like a bad idea.

Finalize refractoriness system

After fixing #38 I thought a bit about the current refractoriness system and what kind of refractoriness we can implement with it. The docstring of NeuronGroup states about the refractory variable: "This value can be modified in the reset code. (TODO: more modifiability)" -- @thesamovar do you remember what you meant by this?

From my experience, there are three types of absolute refractoriness models (and combinations of them)

  1. Clamping a value, typically the membrane potential, during the refractory period: this we now support directly via the (active) flag
  2. Ignoring the threshold during refractoriness: this can be included in the threshold condition by using the is_active variable: threshold='(v > v_t) * is_active'
  3. Ignoring incoming spikes during refractoriness (I think this is a bit weird but I know of at least one paper that uses this): in most cases this should be easy to include in the pre code of a Synapses, e.g. pre='v += w*is_active' (where is_active refers to is_active_post).

Anything missing here that we don't yet support? Regardless of this, I wonder whether it would not make more sense to replace refractory_until by lastspike, i.e. do
is_active = (t - lastspike) >= refractory
instead of
is_active = (t >= refractory_until)

On the other hand, we'd need some dummy value for the initial lastspike, which would make things a bit ugly -- so probably it's better to stay with refractory_until (one can always do refractory_until - refractory to get the last spike, anyway). There would also be some subtle issues, e.g. the semantics of changing the refractory variable in the reset code would change.

Rewrite Language to use templates more consistently

I just started work on this in the language_rewrite branch and realised that it's not entirely trivial because the current ad-hoc templating system has support for some quite nice features like getting the indents right, etc. I wonder if we actually need to rewrite it after all? I think given the deadlines, it would be better not to unless there are specific missing features that we want to see. Are there? Another reason would be to avoid locking ourselves into an API that will make our life more difficult later on, but perhaps this can be put off until after the developer preview / 2.0alpha version?

Document a library of standard models

As we discussed before, we don't want to recreate the (very sparsely populated) model library from brian, e.g. the Izhikevich, HH, or alpha_current functions. Rather, we want to have a webpage (or maybe a wiki page) listing these. Ideally, all should be in a standard format, e.g. have a reference and the mathematical equations (autogenerated from the code :) ?). The code itself should be easily copy/pasteable. This is different from a full example script; it would really only focus on the neural or synaptic model. We should probably agree on some conventions beforehand, e.g. always call the membrane potential v or V, etc.

This would also be a nice way for users who are not willing/able/confident enough to work on the Brian code itself to contribute to Brian.

Preferences: proxies and docs

Two features for preferences that would be nice would be:

  • dotted name access to preferences, so for example writing brian_prefs.core.default_scalar_dtype=float64 rather than brian_prefs['core.default_scalar_dtype']=float64. This could be handled by creating a proxy object, e.g. brian_prefs.core, when a new base name is registered, that simply forwards attribute accesses to getitem accesses on brian_prefs (a minimal sketch follows below).
  • documentation that can be accessed via, e.g., IPython, so that you could type things like brian.core? and get a list of the brian.core.* preferences with their docs. Preferably they would be nicely formatted to look good in the console window (e.g. 80 character width limit).

Some issues are:

  • Do we want you to be able to write brian_prefs.newext.pref = something before newext has been defined? If so, everything gets a whole lot more complicated.
  • if we don't like the attribute access style for prefs, we could still do the documentation via a method (e.g. brian_prefs.docs('core.*')).
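
A minimal sketch of the proxy idea mentioned above (names assumed):

class PreferenceProxy(object):
    def __init__(self, prefs, base):
        object.__setattr__(self, '_prefs', prefs)
        object.__setattr__(self, '_base', base)
    def __getattr__(self, name):
        return self._prefs[self._base + '.' + name]
    def __setattr__(self, name, value):
        self._prefs[self._base + '.' + name] = value

prefs = {'core.default_scalar_dtype': 'float64'}
core = PreferenceProxy(prefs, 'core')
print(core.default_scalar_dtype)       # float64
core.default_scalar_dtype = 'float32'  # forwards to prefs['core.default_scalar_dtype']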

Implement stochastic integration

In Brian 2, we want to have specific state updaters for equations containing xi variables. We also want to allow more than one noise variable. To make this all work, we need the following things to be done (a minimal integration sketch follows the list):

  • Adapt Expression.split_stochastic so that it works with several random variables. Maybe this should move to Equations as the same variable can be used in several expressions?
  • Write a state updater for stochastic equations and integrate it with the system described in #11
  • Think about the implementation on the code generation side (random number generation should be directly on the GPU, for example)
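
For reference, a minimal Euler-Maruyama sketch for an equation of the form dv = f(v)*dt + g(v)*xi (illustration only, not the planned state updater interface):

import numpy as np

def euler_maruyama(f, g, v0, dt, steps, rng=np.random):
    # each step adds the deterministic drift plus N(0, 1)*sqrt(dt) noise
    v = v0
    for _ in range(steps):
        v = v + f(v) * dt + g(v) * np.sqrt(dt) * rng.standard_normal()
    return v

# Ornstein-Uhlenbeck example: dv/dt = -v/tau + sigma*xi
tau, sigma = 0.01, 0.5
v_end = euler_maruyama(lambda v: -v / tau, lambda v: sigma,
                       v0=1.0, dt=1e-4, steps=1000)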

Implement Synapses

This is of course a big chunk of work, but I think we should start it sooner rather than later because it is a core component and probably a lot of unforeseen problems will pop up during its implementation.

I had a quick look through the code in Brian 1 and will use this issue to jot down some observations:

  • The good news: a lot of code in Synapses works around restrictions in Brian 1 and is no longer necessary in Brian 2. For example, we don't need a specific SynapticEquations class (it was only introduced to allow for the (event-driven) keyword), and support for _pre and _post variables can be easily implemented using the specifiers, with no need to implement them via static equations, etc.
  • We need to put some thought into connecting the Synapses machinery to the specifier/language/devices system; the SynapticVariable class, for example, is in fact very similar to a specifier in the current system (this also entails that Synapses can use the standard __getattr__ mechanism of Group).
  • A major point is how the SpikeQueue fits into the current system: it could quite naturally be handled in a Devices system (see discussion in #23 ) because it is about storing data. On the other hand, we probably want to have a C version available in the standard runtime version for performance reasons, which would rather suggest connecting it to Language (i.e. automatically use the C version if we use C code generation).
  • Another major point is the synapse creation: Should this use code generation? If yes, if we use C++ code generation but synapses are stored in a dynamical array in Python, how would that work? Should we directly access the dynamical array in C or rather return an array of "synapses to create"?

Give access to number of synapses

To not clutter #42 too much, here's something of peripheral importance that occurred to me: We could easily provide access to the number of incoming and outgoing synapses per source/target neuron. That would be somewhat useful for normalizing a weight attribute (but in this case one can get the information easily by summing along an axis, etc.) but if it is provided as a specifier, then one can access it in code as well. That would provide enormous memory benefits in some simple models with weights that are constant up to a normalization.

An example of how I imagine it would work:

w = 1
S = Synapses(source, target, pre='v += w / _incoming_synapses')

Alternatively, one could probably implement this using the synaptic_sum mechanism and access to the number of synapses, e.g.:

G = NeuronGroup(N, '''dv/dt = (-v + I)/ tau : 1
                      dI/dt = -I / tau : 1
                      normalization : 1''')
S = Synapses(source, target, pre='I += 1')
G.normalization = synaptic_sum(S, 'num_synapses')

Nested subexpressions not evaluated in the correct order

In the boolean_parameter branch, I made some adjustments to handle the case of subexpressions that reference other subexpressions. Codegen does not handle this case correctly, though. I tried a model like this:

G = NeuronGroup(1, '''
        dv/dt = 100*Hz : 1 (unless-refractory)
        time_since_spike = t - lastspike : second
        ref_subexpression = time_since_spike < 5*ms : bool''',
                           threshold='v>1', reset='v=0',
                           refractory='ref_subexpression')

This now works in principle, but the generated code is wrong; it looks like:

...
ref_subexpression = time_since_spike < 5 * ms
time_since_spike = t - lastspike
not_refractory = (not_refractory) + (logical_not(ref_subexpression))
...

But ref_subexpression should be updated after time_since_spike. We already do the topological sorting in the Equations object, but this information is not propagated to codegen (I'm also not sure that, say, attaching a simple number to the Subexpression specifier, derived from its position in the Equations object, is enough; this might become a mess when we are including subexpressions from other groups in Synapses). Any good ideas? An easy solution would be to substitute all subexpressions, but I remember you arguing against this a while ago :)
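
For reference, a minimal depth-first topological sort over subexpression dependencies (a sketch without cycle detection):

def topologically_sorted(dependencies):
    # dependencies maps each subexpression name to the names it uses
    ordered, seen = [], set()
    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in dependencies.get(name, ()):
            visit(dep)
        ordered.append(name)
    for name in dependencies:
        visit(name)
    return ordered

deps = {'ref_subexpression': ['time_since_spike'], 'time_since_spike': []}
print(topologically_sorted(deps))
# ['time_since_spike', 'ref_subexpression']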

Parse and save the comments in model descriptions

Thinking about our "automatic model documentation" and looking a bit into NineML I realized that it would be a nice thing if we could attach textual descriptions to variable names (including parameters and static equations [did we agree on a new name for them? Subexpressions?]). Then I realized that we don't need a new syntax for that, we could simply parse the comments instead of discarding them. I'm thinking along the lines of:

>>> G = NeuronGroup(100, '''
              dv/dt = -v / tau : 1  # membrane potential
              v_t : 1  # threshold
              ''', threshold='v > v_t', reset='v=0')
>>> print G.description['v']
membrane potential
>>> print G.description['v_t']
threshold

or maybe, use something like G.specifiers['v'].description instead -- I don't think this feature has to be particularly accessible, it's mostly for automatic processing.
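
A rough sketch of harvesting such one-line descriptions during parsing (naive: it assumes one definition per line and strips the leading d of a differential equation):

descriptions = {}
equations = '''dv/dt = -v / tau : 1  # membrane potential
v_t : 1  # threshold'''
for line in equations.splitlines():
    code, _, comment = line.partition('#')
    if comment:
        name = code.split(':')[0].split('=')[0].split('/')[0].strip()
        if name.startswith('d') and '/' in code.split('=')[0]:
            name = name[1:]  # dv/dt -> v
        descriptions[name] = comment.strip()
print(descriptions)  # {'v': 'membrane potential', 'v_t': 'threshold'}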

The only problem I'm seeing is what to do with comments like:

G = NeuronGroup(100, '''
              dv/dt = (g_L*(E_L - v) + # leak current
                       g_e*(E_e - v) + # excitatory input
                       g_i*(E_i - v)) / tau : 1 # inhibitory input
                       v_t : 1  # threshold
                     ''')

but maybe one should use static equations/subexpressions in this case, anyway?

Many links not working in documentation

e.g. when writing Quantity you should get a link to the reference docs for Quantity, but often you don't. Similarly, if you have a class X with a method f then writing f in the docs for X should give you a link to that method.

excepthook blocks exceptions on Py2.6

At least on 32-bit Windows with Python 2.6, if a program raises an exception with Brian 2 installed, the exception is silently discarded and the program terminates.
