zhenruiliao / tension
A Python package for FORCE training chaotic neural networks
License: MIT License
Hi there,
Very interested in using your package for FORCE! I had a methodological question that wasn't clear from the paper/GitHub. Let's say I am using FORCE to learn the recurrent weights in the reservoir pool to produce neuronal activity traces (as in the zebrafish example). Once the weights have been learned, I want to perform simulations, for example to see how changing the weights of certain neurons in the network changes the network activity. Is there a way in tension to simulate this (without any learning occurring)?
Thanks!
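One way to explore this kind of "lesion" experiment outside of tension is to run the trained weight matrix through the rate-network dynamics directly, with learning frozen. The sketch below is a generic leaky tanh rate network in plain numpy, not tension's API; the weight matrix `J` and all parameters are made up for illustration. It zeroes one neuron's outgoing weights and re-simulates with no weight updates:

```python
import numpy as np

def simulate(J, x0, g=1.25, dtdivtau=0.25 / 1.5, steps=200):
    """Forward-simulate a leaky tanh rate network with frozen weights J."""
    x = x0.copy()
    trace = []
    for _ in range(steps):
        r = np.tanh(x)
        x = x + dtdivtau * (-x + g * J @ r)  # pure dynamics, no learning rule
        trace.append(r.copy())
    return np.array(trace)

rng = np.random.default_rng(0)
n = 50
J = rng.normal(0, 1 / np.sqrt(n), (n, n))  # stand-in for learned recurrent weights
x0 = rng.normal(0, 0.5, n)

baseline = simulate(J, x0)

J_lesioned = J.copy()
J_lesioned[:, 0] = 0.0  # "lesion": remove neuron 0's outgoing connections
lesioned = simulate(J_lesioned, x0)

print(float(np.abs(baseline - lesioned).max()))  # nonzero: activity changed without any training
```

The same idea should carry over to a trained tension model by pulling out the learned recurrent kernel, editing it, and simulating, but the exact accessor for those weights would need to be checked against the package.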
With Tensorflow 2.11.0 (that is on Colab as of this writing), the following error occurs when fitting the model:
KeyError: 'The optimizer cannot recognize variable rnn_1/no_feedback_esn_1/output_kernel:0. This usually means you are trying to call the optimizer to update different parts of the model separately. Please call `optimizer.build(variables)` with the full list of trainable variables before the training loop or use legacy optimizer `tf.keras.optimizers.legacy.{self.__class__.__name__}`.'
It's still not clear to me what's causing this, but the workaround for now is to install TensorFlow 2.8.
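Until the root cause is found, a small guard can flag the incompatible version up front. This is only a sketch of the version check implied by the workaround above; the 2.11 cutoff comes from this report, not from an audit of the TensorFlow changelog:

```python
def hits_optimizer_keyerror(tf_version: str) -> bool:
    """Return True for TensorFlow versions (>= 2.11) reported to raise the
    optimizer KeyError above when fitting; 2.8 is known to work."""
    major, minor = (int(part) for part in tf_version.split(".")[:2])
    return (major, minor) >= (2, 11)

print(hits_optimizer_keyerror("2.11.0"))  # True
print(hits_optimizer_keyerror("2.8.0"))   # False
```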
Hey,
I'm playing around with the example datasets of the ConstrainedNoFeedbackESN to understand how the toolbox works. One issue I'm having is that I have to re-fit the model (model.fit) each time I want to make predictions with model.predict. In order to simulate with the learned weights, I always have to fit the model first, which is not ideal if I want to train a large network with many parameters and then run simulations on it. Is there a way to avoid fitting the model every time I want to use model.predict?
I have tried saving the model.fit object, but it's not serialisable. I have also tried using tf.keras.callbacks.ModelCheckpoint to save the learned weights (see code below). When I do this with a brand-new model, I am not able to load the weights without first training the model, and if I train the model for just one time step and then load the learned weights, the predict output looks very different from the original model's, even though the weights and parameters should be the same.
Thanks so much for the help!
import os
import numpy as np
import tensorflow as tf
# Import paths may vary with the tension version:
from tension.constrained import ConstrainedNoFeedbackESN, BioFORCEModel

# target: array of neuronal activity traces, shape (number of neurons, timestep)
target_transposed = np.transpose(target).astype(np.float32)  # convert to shape (timestep, number of neurons)
u = 1  # number of inputs; by default the forward pass does not use inputs, so this is a stand-in
n = target_transposed.shape[1]  # number of neurons
tau = 1.5  # neuron time constant
dt = 0.25  # time step
alpha = 1  # gain on P matrix at initialization
m = n  # output dim equals the number of recurrent neurons
g = 1.25  # gain parameter controlling network chaos
p_recurr = 0.1  # (1 - p_recurr) of recurrent weights are randomly set to 0 and not trained
max_epoch = 10
structural_connectivity = np.ones((n, n))  # region connectivity matrix; all 1's since only one subsection of the brain is used
noise_param = (0, 0.001)  # mean and std of white noise injected into the forward pass
x_t = np.zeros((target_transposed.shape[0], u)).astype(np.float32)  # stand-in input

checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)

tf.random.set_seed(123)
esn_layer = ConstrainedNoFeedbackESN(units=n,
                                     activation='tanh',
                                     dtdivtau=dt/tau,
                                     p_recurr=p_recurr,
                                     structural_connectivity=structural_connectivity,
                                     noise_param=noise_param,
                                     seed=123)
model = BioFORCEModel(force_layer=esn_layer, alpha_P=alpha)
model.compile(metrics=["mae"])

history = model.fit(x=x_t,
                    y=target_transposed,
                    epochs=max_epoch,
                    validation_data=(x_t, target_transposed),
                    callbacks=[cp_callback])
_1 = model.predict(x_t)
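Two things likely contribute here; both are educated guesses, not confirmed behaviour of tension. First, Keras can only load_weights into a model whose variables already exist, so a brand-new model must be built (for example by one forward pass via predict on a short input) before loading, rather than by a one-step fit that changes the weights. Second, ModelCheckpoint saves only the model's weights, while a FORCE network also carries state (the recurrent activations and the P matrix) that is not restored, and a chaotic network is extremely sensitive to that state. The toy simulation below, a plain numpy leaky tanh network unrelated to tension's internals, shows identical weights producing different trajectories from only slightly different initial states:

```python
import numpy as np

def run(J, x0, g=1.25, dtdivtau=0.25 / 1.5, steps=300):
    """Forward-simulate a leaky tanh rate network with frozen weights J."""
    x = x0.copy()
    out = []
    for _ in range(steps):
        x = x + dtdivtau * (-x + g * J @ np.tanh(x))
        out.append(np.tanh(x).copy())
    return np.array(out)

rng = np.random.default_rng(1)
n = 100
J = rng.normal(0, 1 / np.sqrt(n), (n, n))  # the SAME "learned" weights for both runs

x0 = rng.normal(0, 0.5, n)
a = run(J, x0)
b = run(J, x0 + 1e-3)  # slightly different hidden state at "load" time

print(float(np.abs(a[-1] - b[-1]).max()))  # a small state mismatch yields visibly different activity
```

If this explanation holds, matching the original predict output would require restoring the layer's internal state alongside the weights, not just the checkpointed kernels.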
One test fails when running test.py on Google Colab (the recurrent kernel check in TestConstrainedNoFBESNwithFORCE). Relaxing the np.allclose tolerances for that test to atol=1e-07 and rtol=1e-04 will alleviate this issue.
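For reference, np.allclose treats two arrays as equal when |actual - expected| <= atol + rtol * |expected| elementwise; the numbers below are made up just to show where the relaxed tolerances draw the line:

```python
import numpy as np

expected = np.array([0.5, 1.0])
within = expected + 5e-8   # inside atol=1e-07
outside = expected + 1e-3  # well outside atol + rtol * |expected|

print(np.allclose(within, expected, atol=1e-07, rtol=1e-04))   # True
print(np.allclose(outside, expected, atol=1e-07, rtol=1e-04))  # False
```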
Thanks for such a nice FORCE learning library! It's very helpful!
I think tension currently only supports learning the readout weights for spiking neural networks. I was wondering whether there's a way to also update the recurrent weights during SNN training, similar to what full-FORCE does? And where can I customize the learning steps if needed?
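For context, the recurrent update that full-FORCE applies is the same recursive least squares (RLS) step that FORCE uses for the readout, just aimed at the recurrent matrix. The sketch below is a generic numpy RLS update, not tension's implementation; wiring it into tension would presumably mean subclassing the spiking layer and overriding its training step, which would need checking against the package. Here it is demonstrated on a toy linear target so the error shrinks visibly:

```python
import numpy as np

def rls_step(w, P, r, error):
    """One FORCE-style RLS update; returns updated (w, P).
    r: activity vector, error: output minus target."""
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)      # gain vector
    P = P - np.outer(k, Pr)      # rank-1 update of inverse correlation estimate
    w = w - np.outer(error, k)   # move weights against the error along the gain
    return w, P

rng = np.random.default_rng(2)
n, m = 20, 3
w_true = rng.normal(0, 1, (m, n))  # hypothetical target mapping
w = np.zeros((m, n))
P = np.eye(n)                      # alpha = 1 initialization of P

for _ in range(1000):
    r = rng.normal(0, 1, n)
    error = w @ r - w_true @ r
    w, P = rls_step(w, P, r, error)

final_error = float(np.abs(w @ r - w_true @ r).max())
print(final_error)  # shrinks toward zero as updates accumulate
```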