
prfpy's People

Contributors

marcoaqil, niklas-mueller, tknapen


prfpy's Issues

Refactor classes

The classes are not currently used optimally. The code needs to be refactored to fix this.

Grid fit GLM with slope only

Currently, the grid fit performs a GLM that finds the optimal slope and baseline for every prediction in the grid, in all cases. When the baseline is meant to be kept fixed, the GLM baseline is set and fixed to the chosen value only for the iterative fitting. Ideally, however, this should already happen at the GLM stage (i.e., finding only the optimal slope given the fixed baseline). This should improve the speed of the grid fit, the speed of the iterative fit, and likely the stability of the iterative fit as well. A sketch of a slope-only GLM step follows.
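As an illustrative sketch (the function and argument names are assumptions, not prfpy's actual API), the optimal slope for each unit-prediction pair has a closed form once the fixed baseline is subtracted from the data, so the slope-only GLM can be fully vectorized:

    import numpy as np

    def slope_only_grid_fit(predictions, data, fixed_baseline):
        # predictions: (n_predictions, n_timepoints) grid predictions
        # data:        (n_units, n_timepoints) measured timecourses
        # fixed_baseline: scalar baseline kept fixed instead of being fit

        # Remove the fixed baseline; only the slope remains to be estimated
        resid = data - fixed_baseline

        # Closed-form least-squares slope for every (unit, prediction) pair:
        # slope = <prediction, resid> / <prediction, prediction>
        num = resid @ predictions.T                  # (n_units, n_predictions)
        denom = np.sum(predictions ** 2, axis=-1)    # (n_predictions,)
        slopes = num / denom

        # Residual sum of squares from the normal equations:
        # rss = ||resid||^2 - slope^2 * ||prediction||^2
        rss = np.sum(resid ** 2, axis=-1, keepdims=True) - slopes ** 2 * denom

        best = np.argmin(rss, axis=-1)               # best grid point per unit
        units = np.arange(data.shape[0])
        return best, slopes[units, best]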

Enable selective iterative fitting of only certain parameters, in a unit-wise fashion

Currently, the iterative_fit functions fit all of the model parameters. However, in a variety of situations (for example, see here) it is desirable to keep certain parameters fixed while fitting others. Python optimizers have Bounds, which are designed explicitly for this purpose, so it is natural to use them here as well. A parameter can easily be kept fixed by specifying identical upper and lower bounds, as in the example below.
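With scipy.optimize, for instance, this works out of the box (toy objective, purely illustrative):

    import numpy as np
    from scipy.optimize import minimize

    def objective(p):
        return (p[0] - 3.0) ** 2 + (p[1] - 5.0) ** 2

    x0 = np.array([0.0, 2.0])
    bounds = [(-10.0, 10.0),  # free parameter
              (2.0, 2.0)]     # lower == upper pins p[1] at 2.0

    res = minimize(objective, x0, bounds=bounds, method='L-BFGS-B')
    print(res.x)  # p[0] converges to 3.0, p[1] stays fixed at 2.0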

The additional complication is that, in the code as it stands, Bounds can only be set identically for all units in the fit. However, we want to be able to specify Bounds unit-wise (per voxel or vertex). There are a couple of suitable ways of doing this; I will implement one ASAP.

The most minimal option would be to add code that lets the iterative_fit function check the shape of the user-provided Bounds: if they have shape (parameters, 2), the Bounds are applied identically to all units; if they have shape (units, parameters, 2), they are applied unit-wise. In this scenario, the user has to specify the correct Bounds "manually" in order to keep certain parameters fixed to specific values. A sketch of this dispatch follows.
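A minimal sketch of the shape dispatch, assuming NumPy arrays (names are illustrative, not prfpy's implementation):

    import numpy as np

    def resolve_bounds(bounds, n_units, n_params):
        # Broadcast user-provided bounds to unit-wise form:
        # (n_params, 2)          -> identical bounds for every unit
        # (n_units, n_params, 2) -> already unit-wise
        bounds = np.asarray(bounds)
        if bounds.shape == (n_params, 2):
            return np.broadcast_to(bounds, (n_units, n_params, 2)).copy()
        if bounds.shape == (n_units, n_params, 2):
            return bounds
        raise ValueError(
            f"bounds must have shape ({n_params}, 2) or "
            f"({n_units}, {n_params}, 2), got {bounds.shape}")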

The second option would be to make prfpy do the job of figuring out the bounds: the user specifies the indices of the parameters that must be kept fixed, and prfpy creates suitable bounds for those parameters from the user-provided initial parameter values (sketched below). This reduces the burden on the user, but it is less minimal for prfpy. I am inclined not to go this way, but it would still be stylistically acceptable.
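A sketch of this second option, reusing the output of the hypothetical resolve_bounds above:

    def fix_parameters(bounds, starting_params, fixed_indices):
        # bounds:          (n_units, n_params, 2) unit-wise bounds
        # starting_params: (n_units, n_params) initial parameter values
        # fixed_indices:   parameter indices to keep fixed per unit
        bounds = bounds.copy()
        for p in fixed_indices:
            # Collapsing lower and upper bound onto the starting value
            # pins the parameter at that value for each unit separately
            bounds[:, p, 0] = starting_params[:, p]
            bounds[:, p, 1] = starting_params[:, p]
        return bounds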

Tutorial notebooks

It would be good to have some tutorial notebooks that can be run in Binder or Colab.

Cleanup code

We need to make sure the code doesn't contain any hardcoded values, and that specific output types, array dimensions, etc. are maintained consistently.

Include HRF parameters at grid stage

Currently, when fitting the HRF, the grid stage for the Gaussian model is performed with the provided initial HRF parameters, and the grid stages for the other models with fixed HRF parameters. It could be better to include the HRF parameter(s) directly within the grid stages, so that the grid output is not biased by the provided initial HRF parameters.
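As an illustrative sketch (parameter names and ranges are invented, and this is not how prfpy actually builds its grids), an HRF parameter can simply become another dimension of the cartesian-product grid:

    import itertools
    import numpy as np

    xs = np.linspace(-8, 8, 5)          # candidate pRF x positions
    ys = np.linspace(-8, 8, 5)          # candidate pRF y positions
    sizes = np.linspace(0.5, 6, 4)      # candidate pRF sizes
    hrf_derivs = np.linspace(0, 2, 3)   # candidate HRF derivative weights

    # The HRF is now a grid dimension rather than a single fixed value
    grid = np.array(list(itertools.product(xs, ys, sizes, hrf_derivs)))
    print(grid.shape)  # (5 * 5 * 4 * 3, 4)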

Docstrings

Need to go through all docstrings and check their validity.

crossvalidate_fit needs to respect different filter parameters for train and test.

Here, you can see that a model takes the 'task_lengths' parameter from the stimulus class and puts it into the model's filter dictionary. It does this in __init__:

self.filter_params['task_lengths'] = self.stimulus.task_lengths

Then, in the crossvalidate_fit method, the stimulus attribute of the model is modified to be the test stimulus:

fit_stimulus = deepcopy(self.model.stimulus)

However, since we only modify the model's stimulus attribute and don't call its __init__, the model's task_lengths remain unchanged - still reflecting those of the training stimulus - because the model's filter dictionary is not updated.

This means that when generating predictions for the test dataset, the predictions will be filtered according to the task lengths of the training stimulus. This can cause havoc if the task lengths/data sizes differ between train and test.

The reason this probably hasn't been noticed before is that you would ordinarily want identical filter parameters between train and test. The problem arises here because the task lengths are also stored within the filter dictionary - and these can quite often vary between train and test.

Any solution would also need to restore the updated filter parameters to reflect the training set after the test predictions are generated (similar to how model.stimulus is replaced and then restored in crossvalidate_fit). A sketch of this swap-and-restore pattern is below.
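A sketch of the swap-and-restore pattern, with attribute names taken from the issue text (return_prediction is an assumption about the model interface, not verified against prfpy):

    from copy import deepcopy

    def predict_test_set(model, test_stimulus, params):
        # Stash the training-set state so it can be restored afterwards
        fit_stimulus = deepcopy(model.stimulus)
        fit_filter_params = deepcopy(model.filter_params)

        try:
            # Swap in the test stimulus AND update the dependent filter entry
            model.stimulus = test_stimulus
            model.filter_params['task_lengths'] = test_stimulus.task_lengths
            predictions = model.return_prediction(*params)
        finally:
            # Restore the training-set stimulus and filter parameters
            model.stimulus = fit_stimulus
            model.filter_params = fit_filter_params

        return predictions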
