
Comments (6)

lubin-liu commented on June 21, 2024

By design the user is not allowed to update the recurrent weight for SNNs. This is because the SNN is just an implementation of this paper. If you go to Eq. 13, the recurrent weight is broken down into a static component and a learned decoder (which is a function of the readout weight), so the recurrent weight is updated whenever the readout weight is updated. For SNNs in this package, the 'recurrent kernel' is just the static component of the weight matrix times G, which isn't supposed to be updated; the actual recurrent weight is updated indirectly when the readout weight is updated.
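Concretely, writing Eq. 13 out from memory (so the exact symbols may differ slightly from the paper):

$$\omega = G\,\omega^{0} + Q\,\eta\,\phi^{T}$$

where $\omega^{0}$ is the static sparse matrix, $\eta$ a fixed encoder, and $\phi$ the learned decoder. Only $\phi$ changes during training, so updating the readout weight implicitly updates the full recurrent weight $\omega$.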

Could you clarify what you mean by learning step?


gogoARS commented on June 21, 2024

Thank you very much for the quick reply!

> By design the user is not allowed to update the recurrent weight for SNNs. This is because the SNN is just an implementation of this paper. If you go to Eq. 13, the recurrent weight is broken down into a static component and a learned decoder (which is a function of the readout weight), so the recurrent weight is updated whenever the readout weight is updated. For SNNs in this package, the 'recurrent kernel' is just the static component of the weight matrix times G, which isn't supposed to be updated; the actual recurrent weight is updated indirectly when the readout weight is updated.

For SNNs, I wonder whether these indirectly updated recurrent weights are used to compute the neuron voltage at the next timestep in tension? I checked the original implementation of the Nature Communications paper, but only found that the static recurrent weight itself is used to compute the postsynaptic current at every timestep.

> Could you clarify what you mean by learning step?

Sorry for the possible misunderstanding here. By learning I mean the training process, like the `train_step` function of the `FullFORCEmodel`. Is it possible to define a customized `SNNModel` and write my own `train_step` to introduce updates of the recurrent weight for spiking neural networks, separately from the readout weight?


lubin-liu commented on June 21, 2024

> For SNNs, I wonder whether these indirectly updated recurrent weights are used to compute the neuron voltage at the next timestep in tension? I checked the original implementation of the Nature Communications paper, but only found that the static recurrent weight itself is used to compute the postsynaptic current at every timestep.

From Eq. 11 of the Nature paper, the synaptic current is updated using the recurrent weight, not just the static recurrent weight, which is how it's done in tension (see the `compute_current` method, noting that `self.recurrent_kernel` is the static recurrent weight * G).

> Sorry for the possible misunderstanding here. By learning I mean the training process, like the `train_step` function of the `FullFORCEmodel`. Is it possible to define a customized `SNNModel` and write my own `train_step` to introduce updates of the recurrent weight for spiking neural networks, separately from the readout weight?

Note that in the implementation here there's no explicit variable defining the recurrent weight from the paper; you would have to construct it from Eq. 13 using the other variables. If you wanted to update the static recurrent weight, you would first have to hack this line by changing `recurrent_kernel_trainable=True` (I hard-coded this since the static weight is not supposed to be trainable). You can then sub-class `SpikingNNModel` and tweak the `train_step` method here. In your case it seems like you would want to write your own custom `update_recurrent_kernel`, so the highlighted part below would have to be updated.

*(screenshot: the highlighted portion of `train_step` referenced above)*
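Something like the following untested sketch (the import path, the `train_step` signature, and the custom update hook are assumptions based on the class names above; the real code in tension may differ):

```python
from tension.models import SpikingNNModel  # import path is an assumption


class CustomSNNModel(SpikingNNModel):
    """Sketch: FORCE-train the readout as usual, then also apply a
    user-defined update to the static recurrent kernel."""

    def train_step(self, data):
        # Run the package's existing update (readout weights, and through
        # Eq. 13, the effective recurrent weight).
        metrics = super().train_step(data)

        # Hypothetical extra step: your own learning rule for the static
        # recurrent kernel. Requires the kernel to have been created with
        # recurrent_kernel_trainable=True as described above.
        self.my_update_recurrent_kernel()
        return metrics

    def my_update_recurrent_kernel(self):
        # Placeholder for a custom rule, e.g. an RLS-style update applied
        # directly to the static recurrent weights.
        pass
```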


gogoARS commented on June 21, 2024

> From Eq. 11 of the Nature paper, the synaptic current is updated using the recurrent weight, not just the static recurrent weight, which is how it's done in tension (see the `compute_current` method, noting that `self.recurrent_kernel` is the static recurrent weight * G).

Thanks for the explanations!

One more question: in `compute_current`, the current is computed as `self.I_bias + backend.dot(h, self.recurrent_kernel) + backend.dot(out, self.feedback_kernel) + backend.dot(inputs, self.input_kernel)`. I can see the bias term, the static-recurrent-weight term, the feedback term, and the input term here. Since the trained readout weight `self.output_kernel` also serves as the learned part of the synaptic weight $\omega$, I wonder how `self.output_kernel` is used here? I cannot find it in `compute_current`.

In addition, I found that `compute_current` is overridden in `OptimizedSpikingNN` as `self.I_bias + ipsc + backend.dot(out, self.feedback_kernel) + backend.dot(inputs, self.input_kernel)`. Here `ipsc` replaces `backend.dot(h, self.recurrent_kernel)` and is computed from `jd`, where `jd = tf.math.reduce_sum(self.recurrent_kernel[v_mask[0] == 1], axis=0, keepdims=True)`. I still cannot see how the learned recurrent weight contributed by `self.output_kernel` is applied here.

I'm confused by the computation here and would very much appreciate it if you could explain it further. Thank you!


lubin-liu commented on June 21, 2024

The output kernel is used in `backend.dot(out, self.feedback_kernel)`, where `out` is the output of the network, or equivalently the firing rate of each neuron times the readout weights, as computed on this line in the forward pass and retained as an SNN layer state. This is effectively the second part of Eq. 13 multiplied by the neuron firing rates; I re-used the previously computed network output to save on computation. The same applies to your second question regarding `OptimizedSpikingNN`.
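To spell out the algebra, here is a toy sketch (dense matrices and made-up sizes, not the package's actual code; variable names follow Eq. 13 as I wrote it above):

```python
import numpy as np

# Why the learned decoder never appears explicitly in compute_current:
# with omega = G*omega0 + Q*eta @ phi.T (Eq. 13), the recurrent current
# splits into a static term plus a feedback term that reuses the output.
rng = np.random.default_rng(0)
N, k = 200, 1                       # neurons, output dimensions (toy sizes)
omega0 = rng.normal(size=(N, N))    # static weight matrix (sparse in the paper)
eta = rng.normal(size=(N, k))       # fixed encoder (plays the feedback-kernel role)
phi = rng.normal(size=(N, k))       # learned decoder (the readout weights)
r = rng.normal(size=N)              # filtered firing rates
G, Q = 10.0, 1.0

full = (G * omega0 + Q * eta @ phi.T) @ r   # current from the full recurrent weight
out = phi.T @ r                             # network output, already computed
split = G * omega0 @ r + Q * eta @ out      # static term + feedback term
assert np.allclose(full, split)             # identical; the decoder enters via `out`
```

The same identity covers the `OptimizedSpikingNN` case: `ipsc` handles only the static $G\omega^{0}$ part (summing rows of the static kernel for neurons that spiked), while the learned part still arrives through `backend.dot(out, self.feedback_kernel)`.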


gogoARS commented on June 21, 2024

