vu-cog-sci / prfpy
prf fitting routines at Spinoza Centre for Neuroimaging
License: GNU General Public License v3.0
Until scipy 1.4, L-BFGS-B worked fine when equal upper/lower bounds were provided to keep a variable fixed; in fact, that was (and still is) the suggested way to keep variables fixed. It has now stopped working (it gives a nan gradient and stops the optimization). One workaround is to provide bounds with a small increment, e.g. (1-1e-6, 1+1e-6). Better: just use trust-constr (with constraints = []).
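The workaround can be sketched as follows, with a hypothetical two-parameter objective standing in for a real pRF loss; equal bounds pin the second parameter while trust-constr optimizes the first:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2-parameter objective (not prfpy's actual loss).
def objective(p):
    return (p[0] - 3.0) ** 2 + (p[1] - 1.0) ** 2

# Equal lower/upper bounds pin p[1] at 2.0. With L-BFGS-B on recent scipy
# this can produce nan gradients; trust-constr handles equal bounds.
bounds = [(-10.0, 10.0), (2.0, 2.0)]
result = minimize(objective, x0=np.array([0.0, 2.0]),
                  method="trust-constr", bounds=bounds, constraints=[])
# result.x[0] converges to 3.0 while result.x[1] stays fixed at 2.0.
```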
Classes are not used optimally now. Need to change the code to fix this.
The main method for HRF convolution uses fftconvolve for speed, with manual padding to avoid edge effects. The method used in grid prediction does not pad, potentially leading to slight differences between grid and iterative-fit predictions.
Solution: add the same padding to the grid-prediction HRF convolution.
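A minimal sketch of what a shared padded-convolution helper could look like (function name, padding length, and the pad-by-repeating-the-first-sample strategy are assumptions, not prfpy's actual implementation):

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_hrf_padded(timecourse, hrf, pad_length=20):
    """Convolve a timecourse with an HRF, padding the start by repeating
    the first sample to reduce onset edge effects (hypothetical sketch)."""
    padded = np.concatenate([np.full(pad_length, timecourse[0]), timecourse])
    conv = fftconvolve(padded, hrf, mode="full")[:padded.size]
    # Discard the padded portion so output length matches the input.
    return conv[pad_length:]
```

Using the same helper in both the grid and iterative paths would remove the discrepancy by construction.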
Currently, the grid fit performs a GLM to find the optimal slope and baseline for the grid predictions, in all cases. When the baseline is to be kept fixed, the GLM baseline is set and fixed to the chosen value only for iterative fitting. Ideally, however, this should already happen at the GLM stage (i.e. finding only the optimal slope given the fixed baseline). This should improve the speed of the grid fit, the speed of the iterative fit, and likely the stability of the iterative fit as well.
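With the baseline fixed, the slope has a closed form: subtract the baseline from the data and project onto the prediction. A sketch (function name is hypothetical):

```python
import numpy as np

def fit_slope_fixed_baseline(prediction, data, baseline):
    """Closed-form least-squares slope with a fixed baseline:
    minimize ||data - (baseline + slope * prediction)||^2 over slope alone."""
    resid = data - baseline
    return resid @ prediction / (prediction @ prediction)
```

This replaces the two-regressor GLM solve with a single dot-product ratio per grid prediction, which is where the speed gain would come from.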
Currently, the iterative_fit functions fit all the model parameters. However, in a variety of situations (for example, see here) it is desirable to keep certain parameters fixed while fitting others. Python optimizers have Bounds, which are designed explicitly for this purpose, so it is natural to use them here as well. A parameter can easily be kept fixed by specifying identical upper and lower bounds.
The additional complication is that in the code as it is now, Bounds can only be set identically for all units in the fitting. However, we would want to be able to specify unit (voxel or vertex)-wise Bounds. There are a couple of suitable ways of doing this. I will implement one ASAP.
The most minimal way would be simply adding code that lets the iterative_fit function check whether the user-provided Bounds have shape (parameters, 2), in which case the Bounds would be set identically for all units; or, if the Bounds have shape (units, parameters, 2), they would be set unit-wise. In this scenario, the user would have to specify the correct Bounds manually in order to keep certain parameters fixed to a specific value.
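The shape check in this minimal option could be sketched as below (helper name is hypothetical; it simply normalizes both accepted shapes to unit-wise bounds):

```python
import numpy as np

def expand_bounds(bounds, n_units, n_params):
    """Accept bounds of shape (n_params, 2), shared across all units,
    or (n_units, n_params, 2), unit-wise; return unit-wise bounds."""
    bounds = np.asarray(bounds, dtype=float)
    if bounds.shape == (n_params, 2):
        # Broadcast the shared bounds to every unit (voxel/vertex).
        return np.broadcast_to(bounds, (n_units, n_params, 2)).copy()
    if bounds.shape == (n_units, n_params, 2):
        return bounds
    raise ValueError(f"Unexpected bounds shape {bounds.shape}")
```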
The second option would be to make prfpy do the job of figuring out the bounds. The user could specify the indices of parameters to be kept fixed, and prfpy would then create suitable bounds for those parameters based on the user-provided initial parameter values. This reduces the burden on the user, but it is less minimal for prfpy. I am inclined not to go this way, but it would still be stylistically acceptable.
Simulations could come from PRF-Analyze, but in any case they should represent a ground truth that we can use as a target for fitting and testing.
Would be good to get the readthedocs integration up and running again
nistats has been deprecated and folded into nilearn. We should get rid of all nistats imports.
Would be good to have some tutorial notebooks that can be run in Binder or Colab
We need to make sure that the code doesn't contain any hardcoded values, and make sure that specific output types, array dimensions, etc. are maintained.
Need a coherent way of handling exceptions throughout prfpy
Currently, when fitting the HRF, the grid stage for gauss is performed with the provided HRF initial parameters, and the grid stages for other models with fixed HRF parameters. It could be better to include the HRF parameter(s) directly within the grid stages, so that the grid output is not biased by the provided HRF initial parameters.
Spatial dimensions of the design matrix should possibly be allowed to be rectangular. This would
One of the scipy subpackages has been renamed from fftpack to fft in a recent release. Ensure that the code and install requirements are consistent.
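One way to stay consistent across scipy versions is a compatibility import (a sketch; whether prfpy wants to support pre-1.4 scipy at all is a separate decision):

```python
# scipy.fftpack was superseded by scipy.fft in scipy 1.4.
try:
    from scipy import fft  # scipy >= 1.4
except ImportError:
    from scipy import fftpack as fft  # legacy fallback
```

Alternatively, pin `scipy>=1.4` in the install requirements and import `scipy.fft` unconditionally.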
Need to go through all docstrings and check their validity, and use them to go through the entire suite of published tests.
Here, you can see that a model takes the 'task_lengths' parameter from the stimulus class and puts it into the model's filter dictionary. It does this in __init__:
Line 206 in 2bb0e48
Then, in the cross_validate fit method, the stimulus attribute of the model is modified to be the test stimulus:
Line 309 in 2bb0e48
However, since we only modify the model's stimulus attribute and never call its __init__, the model's task_lengths remain unchanged, still reflecting those of the training stimulus, because the model's filter dictionary is not updated.
This means that when generating predictions for the test dataset, the predictions will be filtered according to the task lengths of the training stimulus. This can cause havoc if the task lengths or data are of different sizes between train and test.
The reason this probably hasn't been noticed before is that one would ordinarily want identical filter parameters between train and test. The problem arises because the task lengths are also stored within the filter dictionary, and these could quite often vary between train and test.
Any solution would need to respect the fact that the updated filter parameters would then need to be changed back to reflect the training set after the test predictions are generated (similar to how model.stimulus is replaced and then restored in crossvalidate_fit).
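The swap-and-restore pattern could be sketched as below. All attribute and method names here (`stimulus`, `filter_params`, `task_lengths`, `return_prediction`) are assumptions standing in for prfpy's actual API; the point is that the stimulus-derived filter entries are updated alongside the stimulus, and both are restored afterwards:

```python
from copy import deepcopy

def predict_with_test_stimulus(model, test_stimulus):
    """Swap in the test stimulus AND its stimulus-derived filter settings,
    generate predictions, then restore the training state (sketch only)."""
    saved_stimulus = model.stimulus
    saved_filter_params = deepcopy(model.filter_params)
    try:
        model.stimulus = test_stimulus
        # Re-derive filter settings that depend on the stimulus, e.g.
        # task_lengths, rather than leaving the stale training values.
        model.filter_params["task_lengths"] = test_stimulus.task_lengths
        predictions = model.return_prediction()
    finally:
        # Restore the training state even if prediction raises.
        model.stimulus = saved_stimulus
        model.filter_params = saved_filter_params
    return predictions
```

Wrapping the restoration in `try/finally` (or a context manager) also guards against the model being left in the test state if prediction fails midway.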