tension's People

Contributors

computationandbrain · lubin-liu · zhenruiliao

tension's Issues

Possibility to perform simulations using weights learned from FORCE

Hi there,

Very interested in using your package for FORCE! I have a methodological question that wasn't clear from the paper or the GitHub docs. Let's say I use FORCE to learn the recurrent weights in the reservoir pool so that it reproduces neuronal activity traces (as in the zebrafish example). Once the weights have been learned, I want to run simulations with them, for example to see how changing the weights of certain neurons in the network changes the network activity. Is there a way in tension to simulate this without any learning occurring?

Thanks!
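One way to run a learned network forward without any weight updates is to treat it as a plain discrete-time rate model and integrate it yourself. A minimal NumPy sketch, assuming the usual rate dynamics dx/dt = (-x + W·tanh(x))/tau and a recurrent matrix W pulled out of the fitted layer (e.g. via the standard Keras `get_weights()`); the function name and defaults here are illustrative, not tension's API:

```python
import numpy as np

def simulate(W, x0, n_steps, dtdivtau=0.25 / 1.5, noise_std=0.0, seed=0):
    """Euler-integrate fixed-weight rate dynamics (no learning).

    Implements x <- x + (dt/tau) * (-x + W @ tanh(x)), optionally with
    white noise, and records the state at every step.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    traj = np.empty((n_steps, x.size))
    for t in range(n_steps):
        x = x + dtdivtau * (-x + W @ np.tanh(x))
        if noise_std > 0:
            x += noise_std * rng.standard_normal(x.size)
        traj[t] = x
    return traj
```

To probe a perturbation, copy W, zero out (or rescale) the rows/columns of the neurons of interest, and rerun `simulate` from the same initial state; the two trajectories can then be compared directly.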

KeyError: 'The optimizer cannot recognize ... '

With TensorFlow 2.11.0 (the version on Colab as of this writing), the following error occurs when fitting the model:

KeyError: 'The optimizer cannot recognize variable rnn_1/no_feedback_esn_1/output_kernel:0. This usually means you are trying to call the optimizer to update different parts of the model separately. Please call optimizer.build(variables) with the full list of trainable variables before the training loop or use legacy optimizer `tf.keras.optimizers.legacy.{self.class.name}.'

It's still not clear to me what's causing this, but the workaround for now is to install TensorFlow 2.8.
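For example, on Colab:

```shell
# Pin TensorFlow to a pre-2.9 release, before the Keras optimizer rewrite
pip install "tensorflow==2.8.*"
```

Alternatively, the error message itself points at the legacy optimizer namespace (`tf.keras.optimizers.legacy`), which still exists in TF 2.11; whether passing a legacy optimizer helps here depends on how tension builds its optimizer internally.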

Possibility to predict the model from a saved checkpoint, avoiding having to fit the model multiple times

Hey,

I'm playing around with the example datasets for ConstrainedNoFeedbackESN to understand how the toolbox works. One issue I'm having is that I have to re-fit the model (model.fit) each time I want to make predictions with model.predict. In other words, to simulate with the learned weights I always have to fit the model first, which is not practical if I want to train a large network with many parameters and then run simulations on it. Is there some way to avoid fitting the model every time I want to use model.predict?

I have tried saving the model.fit return value, but it isn't serialisable. I have also tried using tf.keras.callbacks.ModelCheckpoint to save the learned weights (see the code below). With that approach, I cannot load the weights into a brand-new model without first training it, and if I train the new model for just one time step and then load the learned weights, the predict output looks very different from the original model's, even though the weights and parameters should be the same.

Thanks so much for the help!

Code showing how I am using callbacks to save the weights:

import os

import numpy as np
import tensorflow as tf
from tension.constrained import ConstrainedNoFeedbackESN, BioFORCEModel  # adjust import path to your tension version

# target: (number of neurons, timesteps) array of activity traces to fit
target_transposed = np.transpose(target).astype(np.float32)  # shape (timestep, number of neurons)

u = 1                            # number of inputs; the forward pass ignores inputs by default, so this is a stand-in
n = target_transposed.shape[1]   # number of neurons
tau = 1.5                        # neuron time constant
dt = 0.25                        # time step
alpha = 1                        # gain on the P matrix at initialization
m = n                            # output dim equals the number of recurrent neurons
g = 1.25                         # gain parameter controlling network chaos
p_recurr = 0.1                   # (1 - p_recurr) of recurrent weights are randomly zeroed and left untrained
max_epoch = 10
structural_connectivity = np.ones((n, n))  # region connectivity matrix; all 1's since only one brain subsection is modeled
noise_param = (0, 0.001)         # mean and std of the white noise injected in the forward pass
x_t = np.zeros((target_transposed.shape[0], u)).astype(np.float32)  # stand-in input

checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)

tf.random.set_seed(123)
esn_layer = ConstrainedNoFeedbackESN(units=n,
                                     activation='tanh',
                                     dtdivtau=dt / tau,
                                     p_recurr=p_recurr,
                                     structural_connectivity=structural_connectivity,
                                     noise_param=noise_param,
                                     seed=123)

model = BioFORCEModel(force_layer=esn_layer, alpha_P=alpha)
model.compile(metrics=["mae"])

history = model.fit(x=x_t,
                    y=target_transposed,
                    epochs=max_epoch,
                    validation_data=(x_t, target_transposed),
                    callbacks=[cp_callback])
predictions = model.predict(x_t)
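A lighter-weight alternative to ModelCheckpoint, assuming tension's layers expose the standard Keras `get_weights()`/`set_weights()` methods (they should, as `tf.keras.layers.Layer` subclasses): pull the learned arrays out once after fitting, persist them with NumPy, and push them into a freshly built layer later. The filename, key names, and shapes below are illustrative stand-ins, not tension's internals:

```python
import numpy as np

# Stand-ins for the arrays that esn_layer.get_weights() would return after
# fitting; the real list's length and shapes depend on the layer.
rng = np.random.default_rng(0)
learned = {"recurrent_kernel": rng.standard_normal((50, 50)),
           "output_kernel": rng.standard_normal((50, 50))}

np.savez("esn_weights.npz", **learned)  # save once, right after model.fit

# In a later session: rebuild the layer and model, run one forward pass so the
# layer's variables exist, then push the saved arrays back in, e.g.
#   esn_layer.set_weights([...])  # same order as get_weights() returned them
loaded = np.load("esn_weights.npz")
restored = {k: loaded[k] for k in loaded.files}
```

Note that if the predict output still differs after restoring, it is worth checking whether every trained variable (not just the recurrent kernel) made it into the saved list, and whether any random initialization runs after the weights are set.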

Update recurrent weights for spiking neural networks

Thanks for such a nice FORCE learning library! It's very helpful!

I think tension currently supports learning only the readout weights for spiking neural networks. I was wondering if there is a way to also update the recurrent weights during SNN training, similar to what full-FORCE does. And where can I customize the learning steps if needed?
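For reference, the update that full-FORCE applies to recurrent weights is the same recursive least-squares step that FORCE uses for the readout, just aimed at a different error signal. A minimal NumPy sketch of that step (names and shapes are illustrative, not tension's internals):

```python
import numpy as np

def rls_step(P, w, r, e):
    """One recursive least-squares (FORCE) update.

    P : (N, N) running estimate of the inverse rate-correlation matrix
    w : (N, M) trained weights -- the readout, or a recurrent block if the
        same rule is pointed at the recurrent matrix (full-FORCE style)
    r : (N,)   firing rates at this timestep
    e : (M,)   error, output minus target
    Returns updated copies of P and w.
    """
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # Kalman-like gain vector
    P = P - np.outer(k, Pr)      # rank-1 downdate; keeps P symmetric
    w = w - np.outer(k, e)       # shrink the error along the gain direction
    return P, w
```

Where this hook would live in tension depends on how the spiking force layer organizes its update step; the math above is the piece that would be redirected at the recurrent weights.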
