
psychrnn's People

Contributors

abatanasov, dbehrlich, gkane26, isagarnreiter, johndmurray, paxtonfitzpatrick, plhijk, syncrostone


psychrnn's Issues

Add function to stop and start trial with perturbations

import numpy as np

def run_perturbation_trial(data, t_connectivity, N_rec=200, perturb_time=200,
                           s_perturb=1, r_perturb=0):
    # data[0]: trial inputs with shape [n_trials, n_timesteps, n_in].
    # Assumes a simulator `e_sim`, a state-to-output map `state_to_out`,
    # output weights `w`, and output dimension `n_out` exist in the enclosing scope.
    s = np.zeros([data[0].shape[1], data[0].shape[0], N_rec])

    for kk in range(data[0].shape[0]):
        # Pre-perturbation: run the trial up to perturb_time from the default init.
        s[:perturb_time, kk, :] = e_sim.run_trial(
            data[0][kk, :perturb_time, :], t_connectivity=t_connectivity,
            init=None)[1].reshape([perturb_time, N_rec])
        # Perturbation: scale the final pre-perturbation state and add noise
        # (use N_rec rather than a hard-coded 200 so other network sizes work).
        init = s[perturb_time - 1, kk, :].reshape([1, N_rec])
        init = init * s_perturb
        init += r_perturb * np.random.randn(1, N_rec) / np.sqrt(N_rec)
        # Post-perturbation: resume the trial from the perturbed state.
        s[perturb_time:, kk, :] = e_sim.run_trial(
            data[0][kk, perturb_time:, :], t_connectivity=t_connectivity,
            init=init)[1].reshape([data[0].shape[1] - perturb_time, N_rec])

    # Map recurrent states to network outputs for each trial.
    o = np.zeros([s.shape[0], s.shape[1], n_out])
    for ii in range(data[0].shape[0]):
        o[:, ii, :] = state_to_out(s[:, ii, :], w)

    return s, o
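
For example, a hypothetical call (parameter values are illustrative; `data` and `t_connectivity` come from an existing simulation setup):

states, outputs = run_perturbation_trial(data, t_connectivity, N_rec=200,
                                         perturb_time=200, s_perturb=2, r_perturb=0.1)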

How to implement an integration task

I'm interested in implementing a simple integration task, like the one in Mante, Sussillo, et al. (2013). I don't need the "context-dependent" part of their task for now, so we can ignore that for simplicity.

The problem is that I don't see an easy way to do a cumulative sum within the task.trial_function(...) API. What I need to compute is something like:

x_t = np.random.randn()
y_t = y_t_minus_one + x_t

The only way I see how to make such a calculation right now is to compute the full input trace and output trace inside task.generate_trial_params(...).

params['x'] = np.random.randn(num_timesteps)
params['y'] = np.concatenate([[0.0], np.cumsum(params['x'][:-1])])

But then trial_function is just iterating over these already defined arrays... which seems sub-optimal.

I think I might be trying to follow the example tasks too closely... Can I just override the task.generate_trial(...) function and leave task.trial_function(...) undefined? Or would that break something?
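
For concreteness, here is a minimal sketch of the precompute-in-params version described above (the Integration class name and the ms-to-index conversion are my assumptions; the generate_trial_params(batch, trial) and trial_function(time, params) signatures follow the PsychRNN Task docs, with N_in = 1):

import numpy as np
from psychrnn.tasks.task import Task

class Integration(Task):
    """Target the running sum of a white-noise input."""

    def generate_trial_params(self, batch, trial):
        # Precompute the full input and target traces here, since
        # trial_function has no state that persists across timesteps.
        n_steps = int(self.T / self.dt)
        x = np.random.randn(n_steps)
        return {'x': x,
                # y_t = y_{t-1} + x_t, with y_0 = 0
                'y': np.concatenate([[0.0], np.cumsum(x[:-1])])}

    def trial_function(self, time, params):
        t = int(time / self.dt)  # time is in ms; convert to a step index
        x_t = np.array([params['x'][t]])  # assumes N_in = 1
        y_t = np.array([params['y'][t]])
        mask_t = np.ones(self.N_out)      # train the output at every timestep
        return x_t, y_t, mask_t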

Minor plotting issue in Hello World! (Minimal_Example.ipynb)

First, thanks a lot for this awesome effort 🏆 🥇

I just found a minor typo that prevents Minimal_Example.ipynb from plotting the model output. Since it's the first step on your website for getting introduced to the package, it might be of interest to you.

Running this line in Jupyter or Colab

...
# ---------------------- Plot the results ---------------------------
plt.plot(model_output[0][0,:,:])

results in IndexError: too many indices for array: array is 2-dimensional, but 3 were indexed, and no plot is shown.

It should be like this:

plt.plot(model_output[0,:,:])  # or simply model_output[0]

batch number for task incorrect in training loop

The batch number is incremented once at the beginning of the training loop and then once more within each iteration (or twice per iteration when a curriculum is being used). Fix this by using different batch generators for each purpose.

task.get_trial_batch() doesn't increment batch number

get_trial_batch() makes a new generator each time it is called, so successive calls do not increment the batch number. A local generator is used within the training loop, so this is not an issue for training itself.
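
A toy illustration of the mechanism (standalone Python, not PsychRNN code): creating a fresh generator on every call restarts the batch count, while a cached generator advances it.

def batch_generator():
    batch = 0
    while True:
        batch += 1
        yield batch

class FreshEveryCall:
    def get_trial_batch(self):
        # A new generator on each call: the count always restarts at 1.
        return next(batch_generator())

class CachedGenerator:
    def __init__(self):
        self._gen = batch_generator()
    def get_trial_batch(self):
        # One shared generator: the count advances across calls.
        return next(self._gen)

fresh = FreshEveryCall()
print([fresh.get_trial_batch() for _ in range(3)])   # [1, 1, 1]
cached = CachedGenerator()
print([cached.get_trial_batch() for _ in range(3)])  # [1, 2, 3]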

Add PsychRNN to Open Neuroscience

Hello!

We are reaching out because we would love to have your project listed on Open Neuroscience, and also to share some information about our project:

Open Neuroscience is a community-run project where we curate and highlight open source projects related to the neurosciences!

Briefly, we have a website where short descriptions of projects are listed, with links to the projects themselves and their authors, together with images and other links.

Once a new entry is made, we do a quick check for spam and publish it.

Once published, we make people aware of the new entry via Twitter and a Facebook group.

To add information about their project, developers only need to fill out this form

In the form, people can add subfields and tags to their entries, so that projects are filterable and searchable on the website!

The reason we have the form system is that it keeps contributing to the website open to everyone and lets developers describe their own projects!

Also, there are so many amazing projects coming out in the neurosciences that it would be impossible for us to keep track of and log them all!

Open Neuroscience's tech stack leverages open source tools as much as possible:

  • The website is based on HUGO + the Academic theme
  • Everything is hosted on GitHub here
  • We use plausible.io to see visit stats on the website; it respects visitors' privacy and doesn't install cookies on your computer
    • You can check our visitor stats here

Please get in touch if you have any questions or would like to collaborate!

Setting the random seed

Thanks for writing a great and user-friendly package.

Sorry if I missed it, but I couldn't find any examples showing how to best set and store a unique random seed for each model. My use case is that I want to train multiple networks with identical hyperparameters but different random weights and training examples. Are there tools to help facilitate and keep track of this or should I just use tf.random.set_seed(...) and np.random.seed(...)?
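
For concreteness, a minimal sketch of the workaround mentioned above (the seed bookkeeping is hypothetical, and it is an assumption that seeding NumPy and TensorFlow globally before building each model determines all of PsychRNN's randomness):

import numpy as np
import tensorflow as tf

seeds = [0, 1, 2, 3]
for seed in seeds:
    # Seed both libraries before building each model, so weight
    # initialization and trial generation differ only via the seed.
    np.random.seed(seed)
    tf.random.set_seed(seed)
    # ... build the task and model with identical hyperparameters, train,
    # and record the seed alongside the saved weights (e.g. in the filename).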

Numerical stability is better when array sizes are powers of two

Problem: when running the same code multiple times with the same seeds, small numerical differences arise over the course of training. This goes away if array sizes are powers of two.

Suggestion: Use array sizes that are powers of two for now

Eventually I would like to implement a workaround (if TensorFlow doesn't have a way to activate a built-in one): when an array size is not a power of two, build a backing array with power-of-two dimensions in the background and set the unneeded entries to 0. If this is relevant to you and you want to work on that workaround, please do (and drop a comment here so people don't duplicate work).
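
A minimal sketch of the padding idea in NumPy (illustration only; next_pow2 and pad_to_pow2 are hypothetical helpers, and wiring this into PsychRNN's TensorFlow graph is the open part):

import numpy as np

def next_pow2(n):
    # Smallest power of two >= n.
    return 1 << (n - 1).bit_length()

def pad_to_pow2(w):
    # Embed a matrix in a zero matrix whose dimensions are powers of two;
    # the extra rows/columns stay zero so they do not affect the math.
    padded = np.zeros([next_pow2(d) for d in w.shape], dtype=w.dtype)
    padded[:w.shape[0], :w.shape[1]] = w
    return padded

w = np.random.randn(200, 200)
print(pad_to_pow2(w).shape)  # (256, 256)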

Fixed weights Issue

Hello,

I have been trying to use the fixed-weights functionality. I have a simple example with the PerceptualDiscrimination task, as you can see in the attached zip file.
Fixed_weights_example.ipynb.zip

If I understand the functionality correctly, the fixed weights should not change after training. Here are two plots of the weights of my network before and after training (reproducible with the attached code).
[Screenshots: recurrent weight matrix before and after training]

I have fixed the weights in the second and third quadrants of the weight matrix. However, after training (second plot), these weights seem to be set to zero instead of keeping their initial values (shown in the first plot).
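
For reference, a hypothetical reconstruction of the mask described above (the real size and parameter plumbing are in the attached notebook; with quadrants numbered as in the Cartesian plane, the second and third quadrants are the left half of W_rec):

import numpy as np

N_rec = 50  # placeholder; the attached notebook defines the real size
fixed = np.zeros((N_rec, N_rec))
fixed[:, :N_rec // 2] = 1  # 1 = keep this weight fixed during training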

Is there something wrong with my code or logic?
Thank you
