nicholas-roy / psytrack
License: MIT License
Hi,
Is there an easy way to extend the analysis to tasks with more than two choices?
Thank you in advance
There is an aux folder in the build directory that prevents cloning the repo on Windows, since aux is a reserved device name on Windows. Deleting the build folder from a non-Windows machine fixes the problem.
I'd like to give some suggested feedback on the example notebook:
https://github.com/nicholas-roy/psytrack/blob/master/psytrack/examples/ExampleNotebook.ipynb
Almost immediately, the tutorial uses the generateSim() and recoverSim() functions, but there are no comments detailing what these functions do or what their inputs and outputs are. This could be remedied by printing the types, shapes, etc. of the input and output objects.
Looking at the code for generateSim(), the docstring states that the function returns either:
save_path | (if savePath) : str, the name of the folder+file where
simulation data was saved in the local directory
save_dict | (if no SavePath) : dict, contains all relevant info
from the simulation
But unless I read the paper and the code, I have no idea what "all relevant info from the simulation" is.
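To make the suggestion above concrete, here is a minimal sketch of how the notebook could surface the contents of the returned dict. The keys below are hypothetical stand-ins, not the actual fields of generateSim()'s save_dict:

```python
import numpy as np

def describe_sim(sim_dict):
    """Report the type and shape of each entry in a simulation output dict."""
    lines = []
    for key, val in sim_dict.items():
        shape = getattr(val, "shape", None)
        lines.append(f"{key}: {type(val).__name__}, shape={shape}")
    return lines

# Hypothetical stand-in for the save_dict returned by generateSim() -- the real
# keys may differ; the point is only to make the contents visible in the notebook.
sim = {"y": np.ones(500), "W": np.zeros((500, 3)), "sigma": [0.1, 0.1, 0.1]}
for line in describe_sim(sim):
    print(line)
```

A cell like this after each generateSim()/recoverSim() call would let readers see the structure without digging into the source.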
If any of the feature inputs in inputs is 1-dimensional, the following error is thrown:

Exception: lc given in weights not in dataset inputs

Making the inputs 2-dimensional fixes the issue:

input_data = {"y": choices,
              "inputs": {"lc": left[:, np.newaxis],
                         "rc": right[:, np.newaxis]}}

It is in fact stated in the example that the inputs must be 2D, but the error message makes it seem like the key is not found at all.
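A defensive helper like the following sketch would sidestep the confusing error entirely; the regressor arrays and choice encoding here are toy data, not real task variables:

```python
import numpy as np

def as_2d(arr):
    """psytrack expects each input as (n_trials, n_features); promote 1-D arrays."""
    arr = np.asarray(arr)
    return arr[:, np.newaxis] if arr.ndim == 1 else arr

# Toy 1-D regressors standing in for real per-trial stimulus values
left = np.random.randn(100)
right = np.random.randn(100)

input_data = {
    "y": np.random.choice([1, 2], size=100),
    "inputs": {"lc": as_2d(left), "rc": as_2d(right)},
}
print(input_data["inputs"]["lc"].shape)  # (100, 1)
```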
Hi there,
When trying to plot performance using the same data format as in your example, I get a conversion error from boolean to float. Maybe something you want to check:
In [3]: fig_perf = psy.plot_performance(new_D)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-3-19d48a1a2ef1> in <module>
----> 1 fig_perf = psy.plot_performance(new_D)
~/miniconda3/lib/python3.7/site-packages/psytrack/plot/analysisFunctions.py in plot_performance(dat, xval_pL, sigma, figsize)
102
103 N = len(dat['y'])
--> 104 answerR = (dat['answer'] == 2).astype(float)
105
106 ### Plotting
AttributeError: 'bool' object has no attribute 'astype'
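A guess at the mechanism and a possible workaround (the data dict here is hypothetical): if dat['answer'] is a plain Python list or scalar rather than a NumPy array, the comparison returns a bare bool, which has no .astype. Coercing to an ndarray first restores the elementwise comparison:

```python
import numpy as np

# If dat['answer'] is a plain Python list (or a scalar), `dat['answer'] == 2`
# evaluates to a single bool with no .astype method -- hence the crash above.
dat = {"answer": [1, 2, 2, 1]}  # hypothetical data dict
answerR = (np.asarray(dat["answer"]) == 2).astype(float)
print(answerR)  # [0. 1. 1. 0.]
```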
Cheers
Hi, thanks for making this code available - it is very useful!
I was wondering whether you have any pointers on how to keep some weights constant across trials while allowing others to vary. I ask because in Figure 2C of your paper https://www.cell.com/neuron/fulltext/S0896-6273(20)30963-6?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627320309636%3Fshowall%3Dtrue you have done this, with one of the weights having sigma = 0.
In the current implementation, the inverting of the covariance matrix of the prior (invSigma) would not allow any of the sigmas in hyper to be 0 and I was hence wondering how you achieved this for the figure in the paper.
Thank you in advance!
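One common approximation (my suggestion, not the authors' documented method): since a sigma of exactly 0 would make the prior covariance singular, fix a very small sigma for the weight that should stay constant. A quick simulation of the random-walk prior illustrates why a tiny sigma is effectively constant; the sigma values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1000

def random_walk(sigma, n):
    """Weight trajectory under the random-walk prior: w_t = w_{t-1} + N(0, sigma^2)."""
    return np.cumsum(rng.normal(0.0, sigma, n))

varying = random_walk(2**-4, n_trials)    # typical sigma: the weight drifts
constant = random_walk(2**-20, n_trials)  # tiny sigma: effectively flat

print(np.ptp(varying), np.ptp(constant))
```

With sigma around 2**-20 the trajectory's total range is negligible, so the weight behaves as constant while invSigma remains well defined.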
Are you able to include some documentation describing how weights are ordered in wMode? I would have assumed that the ordering of weights in wMode and weights would be the same, but I don't think that's true in practice. Instead, I think the rows in wMode are the weights ordered alphabetically by weight name. Is this true? Thanks.
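If the rows of wMode do follow the alphabetically sorted weight names (the conjecture above; unverified here), an explicit name-to-row map removes the guesswork. The weight names and shapes below are hypothetical:

```python
import numpy as np

# Hypothetical weight spec; psytrack's `weights` dict maps name -> count.
weights = {"s_a": 1, "bias": 1, "h": 1}
n_trials = 10
wMode = np.zeros((sum(weights.values()), n_trials))  # stand-in for the fit output

# Build an explicit name -> row map under the alphabetical-ordering assumption
row_of = {name: i for i, name in enumerate(sorted(weights))}
print(row_of)  # {'bias': 0, 'h': 1, 's_a': 2}
```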
Hi,
I am trying to fit this model to our mice learning an auditory 2AFC task, aiming to identify the stimulus, action-bias, and history-effect components during learning. Our mice typically have a higher miss rate at the start of learning (on average about 30% misses); do you have any advice on how we should handle miss trials?
The first solution I think of is to put the miss trials as 'nan' in action and action history:
i.e. imagine we have the following trials
stimulus    --  1  -1  -1    1   -1  -1    1    1
action      --  1   1  -1  nan    1  -1  nan   -1
prev action --      1   1   -1  nan    1   -1  nan
Do you think this is the proper way to construct the regressors?
Another way is to directly discard all the miss trials from the action term, but I feel that could mess up the ordering of the action history.
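The two options above can be sketched side by side; the trial values are the toy example from this question, and the construction is my reading of the NaN scheme, not a documented psytrack recipe:

```python
import numpy as np

stimulus = np.array([1, -1, -1, 1, -1, -1, 1, 1], dtype=float)
action   = np.array([1, 1, -1, np.nan, 1, -1, np.nan, -1])

# Option A: previous-action regressor built by shifting; the first trial has no
# history, and each miss propagates one NaN forward into the next trial.
prev_action = np.concatenate(([np.nan], action[:-1]))

# Option B: dropping miss trials instead would misalign this one-back structure,
# since index t-1 in the filtered array is no longer the true previous trial.
valid = ~np.isnan(action)
print(prev_action, int(valid.sum()))
```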
Thanks in advance!
Hi there,
This may be a problem due to errors in my implementation, but I got some strange behavior regarding the bias and bias weights that I can't make sense of while fitting some of my rat 2AFC data. For context, the rats were previously trained on a visual task, but in these data they see blank screens, so the stimulus is uninformative. As you'll see below, the stimulus variable weights are 0 precisely because of this. The goal of using PsyTrack is to see what stimulus-independent strategies they might be implementing in order to make their choices.
In short, my doubts are:
1 - I am not sure how to interpret a negative bias weight:
- Is it L/R?
- Is it the presence/absence of bias? (In which case a negative value is confusing, as is a nonzero weight when there is no bias.)
- Is it bias interacting with the other variables? (This would also be confusing, especially since the paper and documentation note that bias weights are independent of other variable weights, and this seems to be the case when I run different models.)
2 - I am unsure why the bias weights have such large magnitude in some cases:
- I saw in the paper that large weights can be a red flag, but it is unclear to me what the potential issue could be here.
Here are some examples of the strange behavior:
Example 1
This animal appears to have a strong positive (rightwards) bias:
But, its bias weight is strongly negative:
From the paper, it seemed like the interpretation of +/- in the bias weights also corresponded to R/L, which makes this result confusing.
Example 2
This other animal appears to have a strong negative bias:
And here, its bias weights are also very strongly negative:
This would seem as expected, except that the magnitude on the bias weights is quite alarming.
Example 3
This animal has a strong negative bias:
And quite strangely, the shape of that bias trajectory seems to be tracked almost perfectly by a different variable:
Example 4
This animal has a slight negative bias that then switches to a positive bias:
The trajectory of its bias weights resembles its bias trajectory, but the weights remain negative:
Example 5
As a sanity check, I looked at bias and bias weights in the control sessions with visible stimuli beforehand, and there the results are not as strange, but still somewhat unexpected. For example, this animal seems to have a very slight bias, if any:
And its performance is reliably above chance:
And yet its bias weight is strongly negative and has the largest magnitude out of any variable, including the stimulus:
Example 6
This other animal during a control sessions viewing visible stimuli has the following bias:
Which actually seems to interact quite clearly with its performance:
And it appears to be tracked quite well by the bias weights (albeit with large error):
The fact that in this case the bias weights behave roughly as expected makes me think there may not necessarily be a problem in my implementation of the model.
Any help would be greatly appreciated! Thank you!
Javier
Hello,
thanks for the great toolbox!
I was wondering how easy it is to extend this toolbox to include an additional choice. Specifically, I'd like to include omission trials in a 2AFC task.
In the paper it says this can be done, with a reference to Bak & Pillow 2018. I guess if it were that simple you would already have included it in this toolbox?
I can think of workarounds using a binary response variable:
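One reading of such a workaround (my assumption of what was meant, in the spirit of the multinomial extension in Bak & Pillow 2018): decompose the three-way outcome into two nested binary decisions and fit each with the existing Bernoulli model. The choice encoding below is hypothetical:

```python
import numpy as np

# 0 = omission, 1 = left, 2 = right -- hypothetical encoding
choices = np.array([1, 2, 0, 1, 0, 2, 2, 1])

# Stage 1: respond (1) vs omit (0), fit on all trials
responded = (choices != 0).astype(int)

# Stage 2: right (1) vs left (0), fit only on trials with a response
resp_trials = choices != 0
went_right = (choices[resp_trials] == 2).astype(int)

print(responded.tolist(), went_right.tolist())
```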
Would that make sense?
Best wishes,
Daniel
If the first and last trial are both in the same test fold, then sum(missing_trials) is one less than len(test_trials). This leaves logli and gw from Kfold_crossVal_check one element short.

The actual error I get is in makeWeightPlot, at X = np.array([i['gw'] for i in prediction]).flatten(). It can be avoided by switching to np.hstack, but I think that may then mess up the indexing of the array.

More generally, whenever trial 0 is in the test fold, it wraps around and assigns its 'missing trial' to the last trial (instead of no trial). I'm not actually sure how the cross-validated prediction should be made for the very first trial; do you have any suggestions?
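A guess at the mechanism behind the wrap-around (the package's internals may differ; the arrays here are illustrative): if each test trial's "missing trial" is computed as t - 1, then t = 0 becomes -1, which NumPy's negative indexing silently maps to the last trial.

```python
import numpy as np

test_trials = np.array([0, 5, 9])   # hypothetical test fold containing trial 0
missing = test_trials - 1           # [-1, 4, 8]; -1 silently means "last trial"

# One possible fix: exclude the first trial from this bookkeeping rather than wrap
missing_fixed = missing[missing >= 0]
print(missing_fixed)  # [4 8]
```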
Thanks for the great package.
Hi!
I wanted to try PsyTrack for a dataset I am working on and have found a bug as far as I can tell. It happens both when using my data and when just using the example notebook.
The bug: when using optList = [], one gets an error when running hyp, evd, wMode, hess_info = psy.hyperOpt(new_D, hyper, weights, optList). Here is the stack trace:
ValueError Traceback (most recent call last)
<ipython-input-16-1364525e1808> in <module>
----> 1 hyp, evd, wMode, hess_info = psy.hyperOpt(new_D, hyper, weights, optList)
~/Code/masters-thesis/psytrack/psytrack/hyperOpt.py in hyperOpt(dat, hyper, weights, optList, method, showOpt, jump, hess_calc)
175 optVals += np.log2(current_hyper[val]).tolist()
176
--> 177 result = minimize(
178 hyperOpt_lossfun,
179 optVals,
~/miniconda3/envs/mastersthesis/lib/python3.9/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
612 return _minimize_cg(fun, x0, args, jac, callback, **options)
613 elif meth == 'bfgs':
--> 614 return _minimize_bfgs(fun, x0, args, jac, callback, **options)
615 elif meth == 'newton-cg':
616 return _minimize_newtoncg(fun, x0, args, jac, hess, hessp, callback,
~/miniconda3/envs/mastersthesis/lib/python3.9/site-packages/scipy/optimize/optimize.py in _minimize_bfgs(fun, x0, args, jac, callback, gtol, norm, eps, maxiter, disp, return_all, finite_diff_rel_step, **unknown_options)
1162 allvecs = [x0]
1163 warnflag = 0
-> 1164 gnorm = vecnorm(gfk, ord=norm)
1165 while (gnorm > gtol) and (k < maxiter):
1166 pk = -np.dot(Hk, gfk)
~/miniconda3/envs/mastersthesis/lib/python3.9/site-packages/scipy/optimize/optimize.py in vecnorm(x, ord)
164 def vecnorm(x, ord=2):
165 if ord == Inf:
--> 166 return np.amax(np.abs(x))
167 elif ord == -Inf:
168 return np.amin(np.abs(x))
<__array_function__ internals> in amax(*args, **kwargs)
~/miniconda3/envs/mastersthesis/lib/python3.9/site-packages/numpy/core/fromnumeric.py in amax(a, axis, out, keepdims, initial, where)
2731 5
2732 """
-> 2733 return _wrapreduction(a, np.maximum, 'max', axis, None, out,
2734 keepdims=keepdims, initial=initial, where=where)
2735
~/miniconda3/envs/mastersthesis/lib/python3.9/site-packages/numpy/core/fromnumeric.py in _wrapreduction(obj, ufunc, method, axis, dtype, out, **kwargs)
85 return reduction(axis=axis, out=out, **passkwargs)
86
---> 87 return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
88
89
ValueError: zero-size array to reduction operation maximum which has no identity
I'd be very grateful if you could let me know whether the error reproduces on your end and whether you have any ideas about the cause.
Thanks!
When running pip install psytrack in an Anaconda shell on a Windows system, I get the following error:
ERROR: Could not install packages due to an EnvironmentError: [WinError 267] The directory name is invalid: 'C:\\Users\\guido\\AppData\\Local\\Temp\\pip-install-sk1sle_r\\psytrack\\psytrack/aux'
Directly installing from git did not work either.
The default for the colors parameter in plot_weights is a dictionary COLORS with fixed keys related to the IBL datasets. To make it easier to quickly test out a model, maybe this should just be a list, and the code at lines 59-60:

plt.plot(W[i], lw=1.5, alpha=0.8, ls='-', c=colors[w],
         zorder=zorder[w], label=w)

could then index by i instead of w? Otherwise it isn't immediately obvious to someone using the code (like myself) that the colors dictionary is effectively a required parameter when running your own data through this plotting routine.
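A possible interim workaround until the signature changes: build colors (and zorder) dicts keyed by your own weight names before calling plot_weights. The names and hex values below are examples, not the IBL keys the default COLORS dict expects:

```python
weight_names = ["bias", "s_left", "s_right"]   # your own weight names
palette = ["#1f77b4", "#ff7f0e", "#2ca02c"]    # any list of matplotlib colors

# Cycle the palette over the weight names so every weight has an entry
colors = {w: palette[i % len(palette)] for i, w in enumerate(weight_names)}
zorder = {w: i + 1 for i, w in enumerate(weight_names)}
print(colors)
```

These dicts can then be passed to the plotting routine in place of the IBL-specific defaults.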
Line 241 in d0ee66f
Hello,
I'd like to use the 'constant' method to prevent the sigmas from taking very large values. Would this be possible?
Thanks for the help!
Hello! Thank you for developing psytrack, it's a pretty cool tool.
I have used it thus far to look individually at the behaviour of my animals, and I am wondering if you have the code available to analyse them as a group.
Thanks!
Eliana