Comments (6)

dafrose avatar dafrose commented on May 25, 2024

Hi Nikola,

thank you very much for your feedback. Which version are you using? From current github or latest release on PyPI?

Could you please provide the full template for testing? GitHub allows uploading .txt files, so if .yml does not work, you could try that.

Best,
Daniel

from pyrates.

jajcayn avatar jajcayn commented on May 25, 2024

Dear Daniel,

thanks for the response! I am using the latest PyPI release, hence 0.8.0. The same problem I am experiencing also occurs in the PyRates tests (tests/test_compute_graph, in the test_2_3_edge function). When I extract only the delay test (pasted below), I get the same error:

ValueError: Invalid index shape. Operation has shape (7,), but received an index of length 2

I will also attach my network (a few YAML files in one zip file), but it's rather complicated - it's a full mass model of the thalamocortical loop. The minimal test code that reproduces the error is pasted here:

from pyrates.backend import ComputeGraph
from pyrates.frontend import CircuitTemplate
import numpy as np
import pytest


backends = ['numpy', 'tensorflow']
accuracy = 1e-4

def do_test():
    for b in backends:

        # set up edge in pyrates
        dt = 1e-1
        sim_time = 10.
        sim_steps = int(sim_time / dt)

        # define input
        inp = np.zeros((sim_steps, 1)) + 0.5

        # test correct numerical evaluation of graph with 2 bidirectionally delay-coupled nodes
        #######################################################################################

        net_config = CircuitTemplate.from_yaml("model_templates.test_resources.test_compute_graph.net10").apply()
        net = ComputeGraph(net_config=net_config, name='net2', vectorization=True, dt=dt, backend=b)
        results = net.run(sim_time, outputs={'a': 'pop0/op8/a',
                                             'b': 'pop1/op8/a'})

        # calculate edge behavior by hand
        delay0 = int(0.5 / dt)
        delay1 = int(1. / dt)
        targets = np.zeros((sim_steps + 1, 2), dtype=np.float32)
        update4 = lambda y: 2.0 + y
        for i in range(sim_steps):
            inp0 = 0. if i < delay0 else targets[i - delay0, 1]
            inp1 = 0. if i < delay1 else targets[i - delay1, 0]
            targets[i + 1, 0] = update4(inp0 * 0.5)
            targets[i + 1, 1] = update4(inp1 * 2.0)

        diff = np.mean(np.abs(results['a'].values[1:] - targets[:-1, 0])) + \
               np.mean(np.abs(results['b'].values[1:] - targets[:-1, 1]))
        assert diff == pytest.approx(0., rel=accuracy, abs=accuracy)


if __name__ == "__main__":
    do_test()

If needed, my circuit is attached here; I am also attaching a very simple Python script that just loads the circuit, applies it, and fails on compile with the same error:
Cona-et-al_thalamocortical_nmm.zip. In the circuit YAML I commented out some edges for testing, so the full network becomes available when all edges are uncommented.

P.S. When I forked the project, I also cycled through all tagged versions and ran tests/test_compute_graph.py; with every version the test_2_3_edge function fails, so I reckon the delay problem has persisted across versions.

Thanks a lot!
Best,
Nikola


jajcayn avatar jajcayn commented on May 25, 2024

So, after additional testing, I've found some very weird behaviour.
Even before I started testing delays, I noticed I was getting strange errors like pyrates.PyRatesException: Could not find object with key path "INS/InhibitorySlowCPO/V_out". So I dug deeper: while applying the CircuitTemplate, I printed all nodes and their operators and found that the @_cache_op_graph wrapper is caching operators that are actually different. Put simply, for each node I created three different operators (PRO, RCO, and CPO), but the caching function mixes these up, so some nodes end up with the wrong operators. At first I didn't know the caching was the culprit, but when I removed the @_cache_op_graph decorator from the OperatorGraph class, all nodes had the correct operators and the graph was built successfully.
However, I just found out that when I remove the @_cache_op_graph wrapper, I get errors on delayed edges. When caching is "on" (i.e. the OperatorGraph class is properly decorated with @_cache_op_graph), the edge delays work as expected. That being said, with caching I have the problem that it incorrectly associates operators with nodes and then complains about incorrect paths.

It truly seems that the culprit of my problems lies within the caching function, not the delays themselves. Do you have any idea what could be wrong with the caching?
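For what it's worth, the failure mode I suspect can be reproduced with a toy cache. This is a minimal sketch under my own assumptions: cache_by_hash, OpGraph, and the equations-only cache key are made up for illustration and are not PyRates internals.

```python
def cache_by_hash(cls):
    """Toy stand-in for a decorator like @_cache_op_graph: cache instances
    keyed on a hash of the constructor arguments. If the key ignores part
    of the input (here: the constants), distinct definitions collide and
    end up sharing one cached object."""
    cache = {}

    def factory(equations, constants):
        key = hash(equations)  # suspected bug: constants not part of the key
        if key not in cache:
            cache[key] = cls(equations, constants)
        return cache[key]

    return factory


@cache_by_hash
class OpGraph:
    def __init__(self, equations, constants):
        self.equations = equations
        self.constants = constants


exc = OpGraph(("V_out = k * I",), {"k": 1.8})  # e.g. ExcitatoryCPO
ins = OpGraph(("V_out = k * I",), {"k": 0.9})  # e.g. InhibitorySlowCPO
print(exc is ins)          # True: same equations -> cache collision
print(ins.constants["k"])  # 1.8, not 0.9 -- INS silently reuses EXC
```

If something like this is happening, it would explain exactly the node/operator mix-up I see in the console output below.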

To elaborate, I added a print statement to the add_nodes_from method of CircuitIR.
The following is written to my console:

PYR ['CorticalPRO', 'PyramidalRCO', 'PyramidalCPO']
EXC ['CorticalPRO', 'ExcitatoryRCO', 'ExcitatoryCPO']
INS ['CorticalPRO', 'ExcitatoryRCO', 'ExcitatoryCPO']
INF ['CorticalPRO', 'ExcitatoryRCO', 'ExcitatoryCPO']
TCR ['ThalamicPRO', 'ThalamoCorticalRCO', 'ThalamoCorticalCPO']
TRN ['ThalamicPRO', 'ThalamicReticularRCO', 'ThalamicReticularCPO']

however, through yaml I define nodes and operators as

...
ExcitatoryPopulation:
  description: Excitatory population of the thalamo-cortical model
  base: NeuralMass
  operators:
    - ./Cona_et_al_thalamocortical_PyRates/potential_to_rate_ops/CorticalPRO
    - ./Cona_et_al_thalamocortical_PyRates/rate_to_current_ops/ExcitatoryRCO
    - ./Cona_et_al_thalamocortical_PyRates/current_to_potential_ops/ExcitatoryCPO
  label: EXC

InhibitorySlowPopulation:
  description: "Slow inhibitory GABA-A mediated cortical population of the
    thalamo-cortical model"
  base: NeuralMass
  operators:
    - ./Cona_et_al_thalamocortical_PyRates/potential_to_rate_ops/CorticalPRO
    - ./Cona_et_al_thalamocortical_PyRates/rate_to_current_ops/InhibitorySlowRCO
    - ./Cona_et_al_thalamocortical_PyRates/current_to_potential_ops/InhibitorySlowCPO
  label: INS

InhibitoryFastPopulation:
  description: "Fast inhibitory GABA-A mediated cortical population of the
    thalamo-cortical model"
  base: NeuralMass
  operators:
    - ./Cona_et_al_thalamocortical_PyRates/potential_to_rate_ops/CorticalPRO
    - ./Cona_et_al_thalamocortical_PyRates/rate_to_current_ops/InhibitoryFastRCO
    - ./Cona_et_al_thalamocortical_PyRates/current_to_potential_ops/InhibitoryFastCPO
  label: INF
...

hence all nodes should have their own operators, but the INS and INF nodes (according to the console output) share operators with the EXC node. When I turn off caching (simply by removing the wrapper), I get the correct operator <-> node associations, but then there is again a problem with edge delays:

PYR ['CorticalPRO', 'PyramidalRCO', 'PyramidalCPO']
EXC ['CorticalPRO', 'ExcitatoryRCO', 'ExcitatoryCPO']
INS ['CorticalPRO', 'InhibitorySlowRCO', 'InhibitorySlowCPO']
INF ['CorticalPRO', 'InhibitoryFastRCO', 'InhibitoryFastCPO']
TCR ['ThalamicPRO', 'ThalamoCorticalRCO', 'ThalamoCorticalCPO']
TRN ['ThalamicPRO', 'ThalamicReticularRCO', 'ThalamicReticularCPO']

Ultimately, I fear that the caching (I guess you cache based on some hash function?), or ultimately the hashing itself, is wrong somewhere. The operators of different nodes have different constant values, so their hashes shouldn't be the same.


jajcayn avatar jajcayn commented on May 25, 2024

Final comment:
here is the output after I added print statements to the _cache_op_graph wrapper:

869794232570078598
CorticalPRO <OperatorIR(('r = r_max / (1. + exp(sigma*(V_thr - V_in)))',)), hash = -6391445111721950068 >
PyramidalRCO <OperatorIR(('d/dt * I = I_t', 'd/dt * I_t =  G * omega * r - omega^2 * I - 2. * omega * I_t')), hash = 1186711113592817207 >
PyramidalCPO <OperatorIR(('V_out = k * I + I_modulatory + I_driver',)), hash = 8410447827851721302 >

1626518669850294754
CorticalPRO <OperatorIR(('r = r_max / (1. + exp(sigma*(V_thr - V_in)))',)), hash = -6391445111721950068 >
ExcitatoryRCO <OperatorIR(('d/dt * I = I_t', 'd/dt * I_t =  G * omega * r - omega^2 * I - 2. * omega * I_t')), hash = 1186711113592817207 >
ExcitatoryCPO <OperatorIR(('V_out = k * I + I_modulatory + I_driver',)), hash = -5857010408256079830 >

1626518669850294754
CorticalPRO <OperatorIR(('r = r_max / (1. + exp(sigma*(V_thr - V_in)))',)), hash = -6391445111721950068 >
InhibitorySlowRCO <OperatorIR(('d/dt * I = I_t', 'd/dt * I_t =  G * omega * r - omega^2 * I - 2. * omega * I_t')), hash = 1186711113592817207 >
InhibitorySlowCPO <OperatorIR(('V_out = k * I + I_modulatory + I_driver',)), hash = -5857010408256079830 >

1626518669850294754
CorticalPRO <OperatorIR(('r = r_max / (1. + exp(sigma*(V_thr - V_in)))',)), hash = -6391445111721950068 >
InhibitoryFastRCO <OperatorIR(('d/dt * I = I_t', 'd/dt * I_t =  G * omega * r - omega^2 * I - 2. * omega * I_t')), hash = 1186711113592817207 >
InhibitoryFastCPO <OperatorIR(('V_out = k * I + I_modulatory + I_driver',)), hash = -5857010408256079830 >

The first line is the hash of a node, followed by its operators. The first node (PYR) has one hash value and its operators, and the second node (EXC) a different hash value and its operators, but the following two nodes have the same hash value as EXC despite being defined with different operators. I guess that is why the nodes then exhibit the wrong paths described above.

EDIT: the operators may have the same equations, but the constants defined in the YAML file are different - that is why each node has its own operators. Does the hashing take into account the predefined values of the constants in the operators' equations?
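To illustrate the question with plain Python hashing (purely hypothetical; I don't know how PyRates actually builds its cache key, and the constant values are made up): if the key covers only the equations, two operators that differ only in their constants collide; folding the constants into the key separates them.

```python
# Two operators with identical equations but different constants,
# mimicking ExcitatoryCPO vs. InhibitorySlowCPO.
eqs = ("V_out = k * I + I_modulatory + I_driver",)
exc_constants = {"k": 1.8}
ins_constants = {"k": 0.9}

# Key on equations only: both operators map to the same cache slot.
key_exc = hash(eqs)
key_ins = hash(eqs)
print(key_exc == key_ins)  # True -> collision

# Key on equations AND constants: the operators stay distinct.
key_exc = hash((eqs, tuple(sorted(exc_constants.items()))))
key_ins = hash((eqs, tuple(sorted(ins_constants.items()))))
print(key_exc == key_ins)  # False -> no collision
```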


dafrose avatar dafrose commented on May 25, 2024

Hey Nikola,
thanks a lot for your valuable feedback. I just wanted to let you know that we are working on it. Dev time turned out to be scarce during parental leave, but I'll be back in the office soon and hope to finally get these issues resolved.

Best,
Daniel


Richert avatar Richert commented on May 25, 2024

Closing this now. The bug with the edge delays has been solved in 0.9.0. The discretization of delay differential equations was stabilized and, additionally, the possibility was added to work with gamma-distributed delays instead, using a spread keyword:

- [PYR/PyramidalCPO/V_out, TCR/ThalamicPRO/V_in, *LE, {*c : 1., weight: 1., delay: 0.001, spread: 0.0005}]

This way, delay and spread represent the mean and variance of a gamma delay kernel that the source signal is convolved with. This prevents the discretization inaccuracies that may arise when working with scalar delays.
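For readers who want to reproduce the idea outside PyRates, such a kernel can be discretized and convolved with the source signal as follows. This is a sketch under my own assumptions: gamma_kernel is a made-up helper, and the interpretation of spread as the kernel's variance is taken from the description above, not from the PyRates source.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_kernel(dt, delay, spread, cutoff=5.0):
    """Discretized gamma density with mean `delay` and variance `spread`,
    using shape a and scale theta from mean = a*theta, var = a*theta**2."""
    theta = spread / delay              # scale
    a = delay / theta                   # shape
    t = np.arange(dt, cutoff * delay, dt)
    k = t ** (a - 1) * np.exp(-t / theta) / (gamma_fn(a) * theta ** a)
    return k / k.sum()                  # normalize the discrete kernel

dt = 1e-4
kernel = gamma_kernel(dt, delay=0.01, spread=2e-5)

# the delayed input is the source signal convolved with the kernel
signal = np.ones(1000)
delayed = np.convolve(signal, kernel)[: len(signal)]

t = np.arange(dt, 5.0 * 0.01, dt)
print(round(float((t * kernel).sum()), 3))  # kernel mean ~= delay
```

As spread shrinks, the shape parameter a = delay**2 / spread grows and the kernel approaches a sharp peak at delay, recovering the scalar-delay case.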

