
mriqc-learn's Issues

running the mriqc_clf on docker

Hi,
I am running the latest version of mriqc (21.0.0rc2), and am trying mriqc_clf on Docker.
I've tried the following command:

docker run -v $PWD:/scratch -w /scratch --entrypoint=mriqc_clf nipreps/mriqc:21.0.0rc2 --load-classifier -X group_T1w_copy.tsv

But it didn't work.

Error:
Traceback (most recent call last):
  File "/opt/conda/bin/mriqc_clf", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/mriqc/bin/mriqc_clf.py", line 303, in main
    cv_helper = CVHelper(
  File "/opt/conda/lib/python3.8/site-packages/mriqc/classifier/helper.py", line 200, in __init__
    self.load(load_clf)
  File "/opt/conda/lib/python3.8/site-packages/mriqc/classifier/helper.py", line 716, in load
    from sklearn.externals.joblib import load as loadpkl
ModuleNotFoundError: No module named 'sklearn.externals.joblib'

I also tried the command `from sklearn.externals import joblib` in Spyder, and didn't get any errors.
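For context, the missing module traces to scikit-learn 0.23, which removed the vendored `sklearn.externals.joblib` in favor of the standalone `joblib` package. A small compatibility shim could look like the sketch below (the `pickle` fallback is for illustration only, in case `joblib` is absent):

```python
def get_pickle_loader():
    """Return a pickle-loading function compatible with old and new scikit-learn.

    sklearn.externals.joblib was deprecated in scikit-learn 0.21 and removed
    in 0.23; modern code should import the standalone joblib package instead.
    """
    try:
        from sklearn.externals.joblib import load  # scikit-learn < 0.23
    except ImportError:
        try:
            from joblib import load  # standalone package (pip install joblib)
        except ImportError:
            from pickle import load  # stdlib fallback, for illustration only
    return load
```

In a container pinned to an older scikit-learn the first branch still works, so the shim is backward compatible.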
Please give me some advice.

Many Thanks!
Wenjun

Tutorial.py does not work because `pipe` is not defined

Dear experts, I am trying to run Tutorial.ipynb from the Jupyter notebooks (https://github.com/nipreps/mriqc-learn/tree/main/docs/notebooks) using Spyder, and I get two errors: GridSearchCV and pipe are not defined.

The first can be fixed by adding `from sklearn.model_selection import GridSearchCV`,
but I don't know how to fix the `pipe` error.

The script with errors:
clf = GridSearchCV(
    estimator=pipe,
    param_grid=p_grid,
    cv=inner_cv,
    n_jobs=30,
    scoring="roc_auc",
)
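The `NameError` for `pipe` typically appears because the notebook defines the pipeline in an earlier cell, and copying only this cell into Spyder (or running cells out of order) skips that definition. A minimal stand-in makes the snippet runnable — the pipeline steps and parameter grid below are illustrative assumptions, not the tutorial's actual ones:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for the pipeline the tutorial builds in an earlier cell.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("rfc", RandomForestClassifier(random_state=0)),
])

# Hypothetical parameter grid and inner CV loop.
p_grid = {"rfc__n_estimators": [10, 20]}
inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)

clf = GridSearchCV(
    estimator=pipe,
    param_grid=p_grid,
    cv=inner_cv,
    scoring="roc_auc",
)
```

Running the tutorial's earlier cells in order should produce the real `pipe` and `p_grid` objects instead.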

Also, I'm new to Python and MRIQC, so I'm not sure how to evaluate my own group_T1w.tsv using the tutorial.

Please give me some advice.

Many Thanks!

Best,

Wenjun

Improvements to the NoiseWinnow transformer

What would you like to see added in this software?

As Satra pointed out in nipreps/nireports@4ddb626#r102410149:

it was funny to see this pop up in my email. i would say this was a reasonable approach at the time, but one could parameterize the orthogonality/unrelatedness in more ways than corrcoef/auc. one can bring in information theoretic metrics, independence (via dcor), etc.,. it would also be useful to parameterize the margin (the bounds within which this noise vector is seen as useless) instead of fixing it as i did back then. finally given all the new feature importance methods (lofo, shap, etc.,.) that didn't exist back then, it may be useful to empirically evaluate that this works.
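The idea in the quote can be condensed as follows — a sketch, not the actual NoiseWinnow code, using |corrcoef| as the relatedness score with an explicit margin, which are exactly the two knobs Satra suggests parameterizing:

```python
import numpy as np

def winnow(X, y, margin=0.0, seed=0):
    """Keep features whose |correlation| with y beats a synthetic noise feature.

    The score (here np.corrcoef) and the margin are deliberate parameters:
    they could be swapped for dcor, mutual information, or SHAP/LOFO
    importances, and the margin widened into bounds, as discussed above.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(y))  # the injected "useless" feature
    scores = np.abs([np.corrcoef(col, y)[0, 1] for col in X.T])
    noise_score = abs(np.corrcoef(noise, y)[0, 1])
    return scores > (noise_score + margin)  # boolean keep-mask over features
```

Swapping the scoring function in and out would make it straightforward to empirically compare the metrics mentioned above.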

Do you have any interest in helping implement the feature?

Yes

Additional information / screenshots

No response

mriqc_clf does not work locally

Dear experts, I am trying to run mriqc_clf locally on my Mac and I get the following error:

mriqc_clf --load-classifier -X group_T1w.tsv
Traceback (most recent call last):
  File "/Users/Narlon/opt/miniconda3/bin/mriqc_clf", line 8, in <module>
    sys.exit(main())
  File "/Users/Narlon/opt/miniconda3/lib/python3.9/site-packages/mriqc/bin/mriqc_clf.py", line 156, in main
    from ..classifier.helper import CVHelper
  File "/Users/Narlon/opt/miniconda3/lib/python3.9/site-packages/mriqc/classifier/helper.py", line 20, in <module>
    from sklearn.metrics.scorer import check_scoring
ModuleNotFoundError: No module named 'sklearn.metrics.scorer'

Any ideas how to fix this issue, if possible at all?

Thank you!

mriqc --version
MRIQC v0.16.1

MRIQC classifier can't recognize MRI volumes with missing parts of the brain

Hi,
I have some volumes that were ruined by a defacing algorithm (run before MRIQC), like this one:
[image]
I hoped that mriqc + mriqc_clf could help me identify the failed ones, but unfortunately that is not the case: for example, the volume in the image is classified as 0 with a probability of 0.47. Is it because the analysis performed by MRIQC only takes into account the parts that are not considered background? Do you think there is something I can do to detect the ruined volumes, e.g., some MRIQC metrics you would expect to be unusual in these cases? Honestly, just looking at the graphs, I couldn't find any big differences from the values of other datasets that don't have this issue.
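While the classifier may miss this failure mode, a crude screen over the IQM table sometimes helps: flag volumes whose metrics deviate strongly from the group median (a robust z-score in median-absolute-deviation units). Which MRIQC metrics actually respond to defacing damage is an empirical question, so the column choice below is only an assumption:

```python
import pandas as pd

def flag_outliers(df, cols, z=3.0):
    """Return rows whose values in `cols` lie more than `z` robust z-scores
    (median-absolute-deviation units) away from the group median."""
    sub = df[cols]
    med = sub.median()
    mad = (sub - med).abs().median() + 1e-9  # avoid division by zero
    scores = (sub - med).abs() / mad
    return df[(scores > z).any(axis=1)]

# Toy example with a made-up column; on real data, load group_T1w.tsv and
# try IQM columns such as snr_total, cnr, or fber.
iqms = pd.DataFrame({"snr_total": [10.0, 11.0, 9.0, 10.0, 100.0]})
suspects = flag_outliers(iqms, ["snr_total"])
```

If the damaged volumes do not stand out on any single metric, a multivariate screen (e.g., on several columns at once) may still separate them.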

mriqc classification is complaining about the dimensions

Hi @oesteban @dbirman,
I have finished running MRIQC on my small dataset (64 subjects). When I run
mriqc_clf --load-classifier -X /proj/Abbas_dataset_raw_BIDS_MRIQC/group_T1w.tsv
the data is 65 rows (64 + a default header row) × 69 columns.
Using the most recent Docker image (singularity develop folder), it complains about the dimensions. Do you have any idea why this may happen? The error message is below.
/usr/local/miniconda/lib/python3.7/site-packages/sklearn/utils/__init__.py:4: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Sequence
/usr/local/miniconda/lib/python3.7/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
  from numpy.core.umath_tests import inner1d
Traceback (most recent call last):
  File "/usr/local/miniconda/bin/mriqc_clf", line 10, in <module>
    sys.exit(main())
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/bin/mriqc_clf.py", line 199, in main
    thres=opts.threshold)
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/classifier/helper.py", line 517, in predict_dataset
    prob_y, pred_y = self.predict(_xeval[columns])
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/classifier/helper.py", line 487, in predict
    proba = np.array(self._estimator.predict_proba(X))
  File "/usr/local/miniconda/lib/python3.7/site-packages/sklearn/utils/metaestimators.py", line 115, in <lambda>
    out = lambda *args, **kwargs: self.fn(obj, *args, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/sklearn/pipeline.py", line 356, in predict_proba
    Xt = transform.transform(Xt)
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/classifier/sklearn/preprocessing.py", line 212, in transform
    new_x[new_x.columns[self.ftmask_]], y)
  File "/usr/local/miniconda/lib/python3.7/site-packages/mriqc/classifier/sklearn/preprocessing.py", line 165, in transform
    X.ix[:, colmask])
  File "/usr/local/miniconda/lib/python3.7/site-packages/pandas/core/indexing.py", line 139, in __getitem__
    return self._getitem_tuple(key)
  File "/usr/local/miniconda/lib/python3.7/site-packages/pandas/core/indexing.py", line 808, in _getitem_tuple
    retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
  File "/usr/local/miniconda/lib/python3.7/site-packages/pandas/core/indexing.py", line 1008, in _getitem_axis
    return self._getitem_iterable(key, axis=axis)
  File "/usr/local/miniconda/lib/python3.7/site-packages/pandas/core/indexing.py", line 1114, in _getitem_iterable
    key = check_bool_indexer(labels, key)
  File "/usr/local/miniconda/lib/python3.7/site-packages/pandas/core/indexing.py", line 2399, in check_bool_indexer
    "Item wrong length {} instead of {}.".format(len(result), len(index))
IndexError: Item wrong length 36 instead of 37.
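The `Item wrong length 36 instead of 37` error suggests the evaluation table does not carry exactly the feature columns the trained model expects. A quick diagnostic is to diff the column sets before predicting — the `expected` list below is a hypothetical placeholder, not the real feature set baked into the classifier:

```python
import pandas as pd

def diff_columns(df, expected, ignore=("bids_name",)):
    """Report feature columns missing from, or extra in, an IQM table."""
    missing = [c for c in expected if c not in df.columns]
    extra = [c for c in df.columns if c not in expected and c not in ignore]
    return missing, extra

# Toy example; in practice: df = pd.read_csv("group_T1w.tsv", sep="\t")
df = pd.DataFrame({"bids_name": ["sub-01"], "cjv": [0.4], "snrd_total": [1.0]})
expected = ["cjv", "cnr"]  # hypothetical expected feature list
missing, extra = diff_columns(df, expected)
```

A mismatch of exactly one column (36 vs. 37) often points to a single renamed, added, or dropped IQM between the MRIQC version that produced the table and the one that trained the classifier.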

`mriqc_clf` T1w classifier not working in mriqc latest release on docker

Hello,

I am interested in the T1w classifier function mriqc_clf, which seems to be no longer supported in the latest release.
On my system, container images for both the latest and second-latest (0.16.1) versions of MRIQC were downloaded from Docker Hub.

When I ran the classifier on the latest version
singularity exec -H ~/mengjia_space -B ~/mengjia_space:/output ~/mriqc-21.0.0.sif mriqc_clf --load-classifier -X group_T1w.tsv -v
I got the error

WARNING: skipping mount of /local/path/to/output: stat /local/path/to/output: no such file or directory
FATAL:   container creation failed: mount /local/path/to/output->/output error: while mounting /local/path/to/output: mount source /local/path/to/output doesn't exist
(base) [rbc@cubic-login5 mengjia_space]$ singularity exec -H ~/mengjia_space -B ~/mengjia_space:/output ~/mriqc-21.0.0.sif mriqc_clf --load-classifier -X group_T1w.tsv -v
Traceback (most recent call last):
  File "/opt/conda/bin/mriqc_clf", line 8, in <module>
    sys.exit(main())
  File "/opt/conda/lib/python3.8/site-packages/mriqc/bin/mriqc_clf.py", line 190, in main
    from ..classifier.helper import CVHelper
  File "/opt/conda/lib/python3.8/site-packages/mriqc/classifier/helper.py", line 19, in <module>
    from sklearn.metrics.scorer import check_scoring
ModuleNotFoundError: No module named 'sklearn.metrics.scorer'

However, when I ran the same command with the second-latest (0.16.1) version,
singularity exec -H ~/mengjia_space -B ~/mengjia_space:/output ~/mriqc_0.16.1.sif mriqc_clf --load-classifier -X group_T1w.tsv -v, I was able to obtain the output file with prediction labels:

/usr/local/miniconda/lib/python3.7/site-packages/sklearn/utils/__init__.py:4: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Sequence
/usr/local/miniconda/lib/python3.7/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
  from numpy.core.umath_tests import inner1d
211112-09:45:37 mriqc.classifier:INFO Results saved as /cbica/projects/RBC/mengjia_space/mclf_run-20211112-094536*

Is it true that mriqc_clf is no longer supported? Thank you so much for your help and feedback!!

How to save output .csv file from mriqc_clf in directory of choice (using Singularity)?

I would like to use MRIQC's classifier to classify my T1 raw data. For that I used the following lines of code:

singularity exec -H /local/path/to/dir/of/my/choice \
-B /local/path/to/output/:/output \
/path/to/singularity_file/mriqc_latest.sif mriqc_clf --load-classifier -X /output/group_T1w.tsv -v

This works fine and I get my .csv file; however, the output .csv file is stored in the working directory of the shell from which this code was executed. If I leave out -H /path/to/dir/of/my/choice, the .csv file is simply stored in my local home directory (/home/johannes.wiesner/mclf_run-20210310-150418*).

However, I would like to be able to specify exactly where to store it. Is there such an option? It seems like in earlier days there was a -o flag that was supposed to handle this? I cannot find any documentation on how to do this, or maybe there is simply no solution for it?
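Judging by the behavior described, `mriqc_clf` writes its `mclf_run-*` output into the current working directory. One possible workaround (a sketch; the paths are placeholders) is to make the bound output path the working directory inside the container using Singularity's `--pwd` option:

```shell
# Set the container's working directory to the bound output path so that
# the mclf_run-* files land there (all paths below are placeholders).
singularity exec \
    -B /local/path/to/output/:/output \
    --pwd /output \
    /path/to/singularity_file/mriqc_latest.sif \
    mriqc_clf --load-classifier -X /output/group_T1w.tsv -v
```

Alternatively, `cd` into the target directory on the host before invoking the container, since the working directory is inherited by default.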

This issue seems to be related to nipreps/mriqc#699 ?

Tutorial notebook out of date

What happened?

Tried to run the tutorial (docs/notebooks/Tutorial.ipynb) using the environment defined in setup.py and hit an issue in cell 7. It seems to be a scikit-learn version issue (see the linked GitHub issue); I'm running 1.4.2.

shankarpandala/lazypredict#442

What command did you use?

cv_score = cross_val_score(
    init_pipeline(),
    X=train_x,
    y=train_y,
    cv=outer_cv,
    scoring="roc_auc",
    n_jobs=16,
    error_score='raise'
)


What version of the software are you running?

Latest

How are you running this software?

Local installation ("bare-metal")

Is your data BIDS valid?

Yes

Are you reusing any previously computed results?

No

Please copy and paste any relevant log output.

```shell
TypeError: OneHotEncoder.__init__() got an unexpected keyword argument 'sparse'
```

Additional information / screenshots

No response
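The `sparse` keyword was renamed to `sparse_output` in scikit-learn 1.2 and removed in 1.4, which matches the 1.4.2 environment reported above. A version-tolerant construction (a sketch, not the notebook's actual fix) looks like:

```python
import sklearn
from sklearn.preprocessing import OneHotEncoder

# scikit-learn 1.2 renamed OneHotEncoder(sparse=...) to sparse_output=...,
# and 1.4 removed the old keyword entirely; pick the right name at runtime.
major, minor = (int(p) for p in sklearn.__version__.split(".")[:2])
kwarg = "sparse_output" if (major, minor) >= (1, 2) else "sparse"
encoder = OneHotEncoder(**{kwarg: False})
```

If the environment is pinned to scikit-learn >= 1.2 anyway, simply writing `OneHotEncoder(sparse_output=False)` is the cleaner fix.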

ABIDE manual ratings: what is the ground truth?

Hello,

Thank you for providing these nice tools, and sorry if this is not the right place to ask.

I am trying to replicate the learning on the ABIDE dataset, and I wonder how to use the manual ratings.

First, I do not know which file to choose:
y_abide.csv or labels_abide_allraters.csv (in the archive subdir).

I tried the first one and found that the raters disagree on half of the lines (which is quite a lot!);
with the second one, I get 764 consistent ratings out of 1100.

So which one should I use, and what should I do in case of disagreement? Which label should I set?

Since mriqc performs a binary classification, what should be done with the "doubtful" label? Is it treated as noise?

In short, I do not see how to deduce a ground-truth label (0/1) for all ABIDE T1w scans.
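One common (though not the only) way to turn multiple raters into a binary ground truth is to keep only the samples where the raters agree and set the disagreements aside for re-review or exclusion. The column names below are illustrative, not the actual schema of the mriqc-learn label files:

```python
import pandas as pd

# Hypothetical two-rater label table; in labels_abide_allraters.csv the
# actual column names will differ.
labels = pd.DataFrame({
    "subject": ["s1", "s2", "s3", "s4"],
    "rater_1": [0, 1, 1, 0],
    "rater_2": [0, 1, 0, 0],
})

# Keep consensus samples only; disagreements (here s3) are dropped from
# training or flagged for re-review.
agree = labels[labels["rater_1"] == labels["rater_2"]].copy()
agree["label"] = agree["rater_1"]
```

Whether disagreements and "doubtful" ratings should be dropped, merged into one class, or modeled explicitly is a design decision; consensus filtering is just the simplest option.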

Many thanks for your help, and sorry if I missed the explanation in one of the articles.

Romain

PS: what about abide_MS.csv and abide_DB.csv? They seem to contain ratings from a single rater, and only for a subset of the data.
