
pennlinc / qsiprep


Preprocessing and reconstruction of diffusion MRI

Home Page: http://qsiprep.readthedocs.io

License: BSD 3-Clause "New" or "Revised" License

Dockerfile 0.03% Python 33.20% Shell 1.20% TeX 0.57% HTML 64.85% Smarty 0.16%
python diffusion-mri tractography connectomics pipelines

qsiprep's Introduction

QSIprep: Preprocessing and analysis of q-space images


Full documentation at https://qsiprep.readthedocs.io

About

qsiprep configures pipelines for processing diffusion-weighted MRI (dMRI) data. The main features of this software are:

  1. A BIDS-app approach to preprocessing nearly all kinds of modern diffusion MRI data.
  2. Automatically generated preprocessing pipelines that correctly group, distortion correct, motion correct, denoise, coregister and resample your scans, producing visual reports and QC metrics.
  3. A system for running state-of-the-art reconstruction pipelines that include algorithms from DIPY, MRtrix, DSI Studio and others.
  4. A novel motion correction algorithm that works on DSI and random q-space sampling schemes.

Workflow overview figure: https://github.com/PennBBL/qsiprep/raw/master/docs/_static/workflow_full.png

Preprocessing

The preprocessing pipelines are built based on the available BIDS inputs, ensuring that fieldmaps are handled correctly. The preprocessing workflow performs head motion correction, susceptibility distortion correction, MP-PCA denoising, coregistration to T1w images, spatial normalization using ANTs and tissue segmentation.

Reconstruction

The outputs from the preprocessing pipelines can be reconstructed in many other software packages. We provide a curated set of reconstruction workflows in qsiprep that can run ODF/FOD reconstruction, tractography, fixel estimation and regional connectivity.

Note

The qsiprep pipeline uses much of the code from FMRIPREP. It is critical to note that the similarities in the code do not imply that the authors of FMRIPREP in any way endorse or support this code or its pipelines.

qsiprep's People

Contributors

36000, araikes, arokem, arovai, cookpa, dependabot[bot], fredrmag, hamsiradhakrishnan, j-bourque, jbh1091, jhlegarreta, kjamison, mattcieslak, nseider, octomike, pcamach2, pierre-nedelec, psadil, richford, scovitz, shreyasfadnavis, smeisler, tinashemtapera, tsalo, valeriejill, willforan


qsiprep's Issues

[BUG] N4 inputs do not occupy the same physical space

 caught: 
itk::ExceptionObject (0x7fe35de01fc0)
Location: "unknown" 
File: /Users/mcieslak/projects/ANTs/build/ITKv5-install/include/ITK-5.0/itkImageToImageFilter.hxx
Line: 240
Description: itk::ERROR: N4BiasFieldCorrectionImageFilter(0x7fe35e9c34f0): Inputs do not occupy the same physical space! 
InputImage Direction: 9.9939081e-01 -1.6381450e-05 -3.4900067e-02
2.4508465e-03 9.9756403e-01 6.9713677e-02
3.4813909e-02 -6.9756745e-02 9.9695636e-01
, InputImage_2 Direction: 9.9939081e-01 -1.5328408e-05 -3.4900037e-02
2.4497946e-03 9.9756403e-01 6.9713707e-02
3.4813953e-02 -6.9756738e-02 9.9695636e-01

	Tolerance: 1.0000000e-06

To Do:

  • Copy the header from the b=0 image into the mask image (see the sketch below)
  • Turn on verbose mode for all N4 nodes
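
A minimal sketch of the first to-do item, assuming nibabel is available; the file names are placeholders, not qsiprep outputs:

# Copy the b=0 geometry onto the mask so that ITK's physical-space check
# compares identical direction cosines and origins. Placeholder file names.
import nibabel as nib

b0_img = nib.load("b0_ref.nii.gz")
mask_img = nib.load("b0_mask.nii.gz")

fixed_mask = nib.Nifti1Image(mask_img.get_fdata(), b0_img.affine, header=b0_img.header)
fixed_mask.set_data_dtype("uint8")
fixed_mask.to_filename("b0_mask_fixedhdr.nii.gz")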

Error with N4

Hi @mattcieslak,
Ran the preprocessing steps and got the following error. It looks like it kept running after the error and wrote all the files except the figures and report.

 Node: qsiprep_wf.single_subject_omega033_wf.dwi_preproc_ses_bline_dir_AP_run_001_wf.hmc_sdc_wf.pre_topup_enhance.n4_correct
Working directory: /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/n4_correct

Node inputs:

args = <undefined>
bias_image = <undefined>
bspline_fitting_distance = 150.0
bspline_order = 3
convergence_threshold = 1e-06
copy_header = True
dimension = 3
environ = {'NSLOTS': '1'}
input_image = /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/rescale_image/vol0000_LPS_TruncateImageIntensity_RescaleImage.nii.gz
mask_image = <undefined>
n_iterations = [200, 200]
num_threads = 1
output_image = <undefined>
save_bias = False
shrink_factor = <undefined>
weight_image = /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/smooth_mask/vol0000_LPS_TruncateImageIntensity_RescaleImage_mask_FillHoles_MD_G.nii.gz

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 375, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/ants/segmentation.py", line 438, in _run_interface
    runtime, correct_return_codes)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 758, in _run_interface
    self.raise_exception(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 695, in raise_exception
    ).format(**runtime.dictcopy()))
RuntimeError: Command:
N4BiasFieldCorrection --bspline-fitting [ 150, 3 ] -d 3 --input-image /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/rescale_image/vol0000_LPS_TruncateImageIntensity_RescaleImage.nii.gz --convergence [ 200x200, 1e-06 ] --output vol0000_LPS_TruncateImageIntensity_RescaleImage_corrected.nii.gz --weight-image /data/omega/derivatives/qsiprep-0.6.4/baseline_2mm-iso_dedicated-fmap/scratch/qsiprep_wf/single_subject_omega033_wf/dwi_preproc_ses_bline_dir_AP_run_001_wf/hmc_sdc_wf/pre_topup_enhance/smooth_mask/vol0000_LPS_TruncateImageIntensity_RescaleImage_mask_FillHoles_MD_G.nii.gz
Standard output:

Standard error:

Return code: 1

Problem or no?

[ENH] Adding optional atlases

During reconstruction, for the pre-defined pipelines, it would be helpful if users could choose the atlas(es) of interest.
In addition to the existing Schaefer 17-RSN version, I would recommend adding:
(1) the Schaefer 7-RSN version;
(2) AICHA (a functionally defined symmetric atlas);
(3) the Human Brainnetome Atlas (a multimodally defined atlas);
(4) maybe Lausanne, for what it's worth.

And in the future perhaps Glasser's, when it's feasible.

C3D dependency?

Should C3D technically be listed as a dependency of qsiprep since c3d_affine_tool is used?

[ENH] Improve eddy speed by resetting "num_threads"

The default eddy parameter file has "num_threads" set to 1. In cases where multiple cores are available, it would be faster if this value could be set consistently with --omp-nthreads and/or --nthreads.
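
As a user-side workaround in the meantime, a custom config with a higher thread count could be passed via --eddy-config; a minimal sketch (the value 8 is just an example):

# Write a custom eddy config to pass with --eddy-config; only keys that
# appear in a typical qsiprep eddy_params.json are set here.
import json

eddy_params = {
    "num_threads": 8,        # e.g., match the value given to --omp-nthreads
    "repol": True,
    "output_type": "NIFTI_GZ",
}

with open("eddy_params.json", "w") as f:
    json.dump(eddy_params, f, indent=2)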

Error in combine_all_dwis

Thank you for sharing this great app!
I tried to apply qsiprep to my NODDI data (two runs with different PE directions) and ran into the following errors:

Have you seen this error before?

qsiprep-docker --bids_dir /home/test/OCEAN/sourcedata --output_dir /home/test/OCEAN/derivatives/Preprocessed --analysis_level participant -w /home/test/OCEAN/derivatives/Preprocessed --fs-license-file /usr/local/freesurfer/license.txt --skip_bids_validation --output-space T1w --output-resolution 1.3 --force-spatial-normalization --write-graph --participant_label 00250201 --b0-threshold 100 --hmc-model eddy --denoise-before-combining --combine_all_dwis
['/home/test/.pyenv/versions/3.6.3/bin/qsiprep-docker', '--bids_dir', '/home/test/OCEAN/sourcedata', '--output_dir', '/home/test/OCEAN/derivatives/Preprocessed', '--analysis_level', 'participant', '-w', '/home/test/OCEAN/derivatives/Preprocessed', '--fs-license-file', '/usr/local/freesurfer/license.txt', '--skip_bids_validation', '--output-space', 'T1w', '--output-resolution', '1.3', '--force-spatial-normalization', '--write-graph', '--participant_label', '00250201', '--b0-threshold', '100', '--hmc-model', 'eddy', '--denoise-before-combining', '--combine_all_dwis']
RUNNING: docker run --rm -it -v /usr/local/freesurfer/license.txt:/opt/freesurfer/license.txt:ro -v /home/test/OCEAN/sourcedata:/data:ro -v /home/test/OCEAN/derivatives/Preprocessed:/out -v /home/test/OCEAN/derivatives/Preprocessed:/scratch pennbbl/qsiprep:0.6.4 --bids-dir /data --output-dir /out --analysis-level participant --skip_bids_validation --output-space T1w --output-resolution 1.3 --force-spatial-normalization --write-graph --participant_label 00250201 --b0-threshold 100 --hmc-model eddy --denoise-before-combining --combine_all_dwis -w /scratch
191108-01:56:31,124 nipype.workflow IMPORTANT:

Running qsiprep version 0.6.4:
  * BIDS dataset path: /data.
  * Participant list: ['00250201'].
  * Run identifier: 20191108-015629_524b7887-dacd-4cc8-a31b-e8c77ee57486.

191108-01:56:34,307 nipype.workflow INFO:
Combining all 2 dwis within the single available session
191108-01:56:34,307 nipype.workflow INFO:
[['/data/sub-00250201/dwi/sub-00250201_dir-AP_dwi.nii.gz', '/data/sub-00250201/dwi/sub-00250201_dir-PA_dwi.nii.gz']]
191108-01:56:35,641 nipype.workflow INFO:
The following DWI files were found with the following fieldmaps:

  • /data/sub-00250201/dwi/sub-00250201_dir-AP_dwi.nii.gz (PEDir j-): /data/sub-00250201/fmap/sub-00250201_dir-PA_epi.nii.gz (Type epi)
  • /data/sub-00250201/dwi/sub-00250201_dir-PA_dwi.nii.gz (PEDir j): /data/sub-00250201/fmap/sub-00250201_dir-AP_epi.nii.gz (Type epi)

The following warped spaces were defined to contain:

  • PEDir: j-, fmap: /data/sub-00250201/fmap/sub-00250201_dir-PA_epi.nii.gz
    -> /data/sub-00250201/dwi/sub-00250201_dir-AP_dwi.nii.gz
  • PEDir: j, fmap: /data/sub-00250201/fmap/sub-00250201_dir-AP_epi.nii.gz
    -> /data/sub-00250201/dwi/sub-00250201_dir-PA_dwi.nii.gz

The following warp groups were used to SDC each other

  • A single group containing j/j- PE dir groups was sent to TOPUP

191108-01:56:35,725 nipype.workflow IMPORTANT:
Creating dwi processing workflow "dwi_preproc_wf" to produce output sub-00250201 (1.20 GB / 147 DWIs). Memory resampled/largemem=1.79/1.99 GB.
Process Process-2:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/cli/run.py", line 839, in build_qsiprep_workflow
    force_syn=opts.force_syn
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/base.py", line 241, in init_qsiprep_wf
    force_syn=force_syn)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/base.py", line 624, in init_single_subject_wf
    sloppy=debug
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/dwi/base.py", line 308, in init_dwi_preproc_wf
    omp_nthreads=omp_nthreads)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/dwi/pre_hmc.py", line 145, in init_dwi_pre_hmc_wf
    ('b0_indices', 'b0_indices')])
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/workflows.py", line 214, in connect
    '\n'.join(['Some connections were not found'] + infostr))
Exception: Some connections were not found
Module concat_rpe_splits has no input called original_files_plus

Module concat_rpe_splits has no input called original_files_minus

Error running qsiprep with singularity: AttributeError: 'NoneType' object has no attribute 'resolve'

Really great job on the app; I really look forward to trying it out!
However, I'm having an issue getting it going with our existing Singularity setup. Have you seen this error before?

[akhanf@gra603 test_qsiprep]$ singularity run -e -B /project:/project -B /scratch:/scratch -B /localscratch:/localscratch /project/6007967/akhanf/singularity/bids-apps/pennbbl_qsiprep_0.6.4.sif --bids-dir /home/akhanf/cfmm-bids/Khan/TestDatasets/dwi_multishell_blipped/ --output-dir test_out --analysis-level participant --fs-license-file /project/6007967/akhanf/opt/freesurfer/.license
Making sure the input data is BIDS compliant (warnings can be ignored in most cases).
	1: [WARN] The recommended file /README is missing. See Section 03 (Modality agnostic files) of the BIDS specification. (code: 101 - README_FILE_MISSING)

	Please visit https://neurostars.org/search?q=README_FILE_MISSING for existing conversations about this issue.


        Summary:                  Available Tasks:        Available Modalities:
        11 Files, 314.66MB                                T1w
        1 - Subject                                       dwi
        1 - Session


	If you have any questions, please post on https://neurostars.org/tags/bids.

Process Process-2:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/cli/run.py", line 671, in build_qsiprep_workflow
    work_dir = opts.work_dir.resolve()
AttributeError: 'NoneType' object has no attribute 'resolve'

Empty CNR outputs

Hi @mattcieslak,
I haven't seen this issue mentioned yet. I've tested it with qsiprep:0.6.4 on both cross-sectional and longitudinal data with single and multishell acquisitions.

Everyone gets a desc-eddy_cnr.nii.gz file; however, it is an empty file. The dimensions appear right in mrinfo (97x116x98), but it's just a blank image with a mean value of 0 (fslstats *eddy_cnr.nii.gz -m).
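
For reference, a quick nibabel check (hypothetical path) to confirm the map really is all zeros:

# Sanity-check sketch: report shape, mean, and number of non-zero voxels.
import nibabel as nib
import numpy as np

cnr = nib.load("sub-XX_desc-eddy_cnr.nii.gz").get_fdata()  # hypothetical path
print(cnr.shape, float(cnr.mean()), int(np.count_nonzero(cnr)))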

Working as intended or bug?

[BUG] N4BiasFieldCorrection: Inputs do not occupy the same physical space!

During the early step in which the b0 image is corrected for bias field, the b0 image and its corresponding mask are sometimes not considered to occupy the same physical space, probably due to a precision issue (they fail the 1e-6 tolerance test; some suggest loosening the tolerance, e.g. to 1e-2, to work around this). A possible solution is to omit the mask during this step.

Example script where error emerges:
N4BiasFieldCorrection -d 3 --input-image /data/qsiprep_wf/single_subject_XXX_wf/dwi_preproc_wf/hmc_sdc_wf/sdc_wf/pepolar_unwarp_wf/prepare_epi_opposite_wf/split/mapflow/_split0/vol0000.nii.gz --mask-image /data/qsiprep_wf/single_subject_XXX_wf/dwi_preproc_wf/hmc_sdc_wf/sdc_wf/pepolar_unwarp_wf/prepare_epi_opposite_wf/enhance_and_skullstrip_dwi_wf/initial_mask/vol0000_mask.nii.gz --output vol0000_corrected.nii.gz

Example suggestion:
N4BiasFieldCorrection -d 3 --input-image /data/qsiprep_wf/single_subject_XXX_wf/dwi_preproc_wf/hmc_sdc_wf/sdc_wf/pepolar_unwarp_wf/prepare_epi_opposite_wf/split/mapflow/_split0/vol0000.nii.gz --output vol0000_corrected.nii.gz
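
To see whether a given image/mask pair would pass the tolerance test, the affines can be compared directly; a minimal diagnostic sketch (hypothetical paths):

# Roughly mirrors ITK's tolerance check (ITK compares origin, spacing and
# direction separately); here the affines are compared element-wise.
import nibabel as nib
import numpy as np

img = nib.load("vol0000.nii.gz")            # hypothetical b=0 volume
mask = nib.load("vol0000_mask.nii.gz")      # hypothetical mask

diff = np.abs(img.affine - mask.affine)
print("max affine difference:", diff.max())
print("within 1e-6:", bool((diff <= 1e-6).all()))
print("within 1e-2:", bool((diff <= 1e-2).all()))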

Eddy branch: the not-too-infrequent case where key metadata is missing

e.g., older acquisitions like HNU1 do not have associated BIDS .json files (i.e., with readout time, bpppe, or effective echo spacing), nor raw DICOMs available to infer this information, nor a scan protocol with anything useful (see: http://fcon_1000.projects.nitrc.org/indi/CoRR/html/_static/scan_parameters/HNU_1_scantable.pdf). Could I PR a trigger in the eddy interface to fall back to a dummy value for the acquisition parameters file for TOPUP and EDDY in the case where no reverse phase encoding was performed (and thus susceptibility correction will be skipped anyway)?
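
A minimal sketch of what such a fallback could write (an assumed approach, not an existing qsiprep option); FSL's acquisition-parameters file has one row per PE setting: three phase-encoding direction components plus the total readout time:

# Write a one-row dummy acqparams.txt. With no reverse-PE data, the readout
# value is a placeholder; susceptibility correction is skipped anyway.
pe_dir = (0, 1, 0)       # assumed j / A-P phase encoding
dummy_readout = 0.05     # placeholder TotalReadoutTime in seconds

with open("acqparams.txt", "w") as f:
    f.write("{:d} {:d} {:d} {:.6f}\n".format(*pe_dir, dummy_readout))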

@dPys

[BUG] TOPUP incorrectly interprets directions from LPS+ inputs

After conforming everything to LPS+ before running topup/eddy, we can see that the fieldmaps (fieldcoef) are written with incorrect headers:


This will also be an issue for data that is in RAS+ orientation (nipreps/sdcflows#37).

Things to try:

  • flip A/P voxel data in the fieldcoef file (see the sketch after this list)
  • flip A/P voxel data in fieldcoef file and negate the field
  • convert everything to LAS+ before using eddy tools, then convert back :(

Also for good measure:

  • check that the fieldcoef coming from the GRE fieldmaps is being applied correctly
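
A minimal sketch of the first item in the list above (hypothetical file name); it only flips the voxel array and leaves topup's fieldcoef-specific header conventions untouched:

# Flip the voxel data along the A/P (second) axis, keeping affine and header.
import nibabel as nib
import numpy as np

img = nib.load("topup_fieldcoef.nii.gz")                 # hypothetical name
flipped = np.flip(np.asanyarray(img.dataobj), axis=1)

nib.Nifti1Image(flipped, img.affine, img.header).to_filename(
    "topup_fieldcoef_apflip.nii.gz"
)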

[ENH] Use alternate brain masking approach for DWI masks

The brain masks currently output by qsiprep ($sub/$ses/dwi/*_desc-brain_mask.nii.gz) tend to be overly inclusive, which can affect the performance of downstream post-processing steps (e.g., bias correction with N4BiasFieldCorrection, registration, etc.).

Alternate solutions to generating final DWI brain masks that seem to produce higher quality outputs include:
1a. Use MRtrix's dwi2mask
1b. Use MRtrix's dwi2mask after running dwibiascorrect; this has been noted to enhance dwi2mask performance for most datasets
2. Resample the T1 mask in $sub/anat/*desc-brain_mask.nii.gz onto the DWI grid (see the sketch after this list)
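
A minimal sketch of option 2, assuming nilearn is available (all paths, including the DWI reference image name, are placeholders):

# Resample the anatomical brain mask onto the DWI grid with nearest-neighbour
# interpolation so the result stays binary.
from nilearn.image import resample_to_img
import nibabel as nib

t1_mask = nib.load("sub-XX/anat/sub-XX_desc-brain_mask.nii.gz")
dwi_ref = nib.load("sub-XX/dwi/sub-XX_space-T1w_dwiref.nii.gz")   # hypothetical reference

dwi_mask = resample_to_img(t1_mask, dwi_ref, interpolation="nearest")
dwi_mask.to_filename("sub-XX_space-T1w_desc-brain_mask.nii.gz")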

[ENH] Make combine-all-dwis default

It is much more likely that a sampling scheme was separated into multiple scans than that two separate sampling schemes were acquired within a session.

To Do:

  • Deprecate --combine-all-dwis and make it True by default
  • Add a --separate-dwi-runs or something like that to make a different pipeline for each series.

Reusing outputs

Hi @mattcieslak,
I'm running qsiprep and I want to test out a couple of options for our lab (namely, RPE vs. Syn). I'm curious if there is a way to either:

  1. Reuse anat outputs from fmriprep -OR-
  2. Reuse anat outputs from qsiprep runs

It would help streamline my processing if I could just change the flags for how SDC is handled without having to fully re-run the anat workflow.

Thanks

[BUG] Local gradient rotation workflow

Looks like an unset input/output path

local_grad_rotation: TypeError: stat: path should be string, bytes, os.PathLike or integer, not _Undefined


Node _t1_conform0 failed to run

_t1_conform0 and _t1_conform1 throw errors that don't seem to be fatal. qsiprep continues, running eddy.

The offending assert checks that orig_img.affine.dot(transform) is np.allclose to target_affine.

For those two we have the matrices:

(array([[   1.        ,    0.        ,    0.        ,  -87.5       ],
       [   0.        ,   -1.        ,    0.        ,  146.6443634 ],
       [   0.        ,    0.        ,    1.        , -156.40016174],
       [   0.        ,    0.        ,    0.        ,    1.        ]]), array([[  -1.        ,    0.        ,    0.        ,   87.5       ],
       [   0.        ,   -1.        ,    0.        ,  146.6443634 ],
       [   0.        ,    0.        ,    1.        , -156.40016174],
       [   0.        ,    0.        ,    0.        ,    1.        ]]))

(array([[   1.        ,    0.        ,    0.        ,  -85.32082367],
       [   0.        ,   -1.        ,    0.        ,  150.50846863],
       [   0.        ,    0.        ,    1.        , -125.54721069],
       [   0.        ,    0.        ,    0.        ,    1.        ]]), array([[  -1.        ,    0.        ,    0.        ,   89.67917633],
       [   0.        ,   -1.        ,    0.        ,  150.50846863],
       [   0.        ,    0.        ,    1.        , -125.54721069],
       [   0.        ,    0.        ,    0.        ,    1.        ]]))

But I'm not sure what these are, how bad it is that they are not close, or how we'd better align them.
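
For what it's worth, reproducing the check with the first pair above shows the mismatch is a sign flip and shift along the x axis rather than a small numerical difference:

# Reproduce the failing comparison using the first pair of matrices above.
import numpy as np

computed = np.array([[ 1., 0., 0.,  -87.5       ],
                     [ 0., -1., 0., 146.6443634 ],
                     [ 0., 0., 1., -156.40016174],
                     [ 0., 0., 0.,    1.        ]])
target = np.array([[-1., 0., 0.,   87.5       ],
                   [ 0., -1., 0., 146.6443634 ],
                   [ 0., 0., 1., -156.40016174],
                   [ 0., 0., 0.,    1.        ]])

print(np.allclose(computed, target))       # False
print(np.abs(computed - target).max())     # 175.0: the x axis is flipped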

out/qsiprep/sub-11550/log/20191024-201223_a65f2775-c6ef-4ed5-8de5-6a0e4df8ef53/crash-20191024-201310-root-_t1_conform0-3ad0eb2e-6400-45cf-a238-baad0d2048b3.txt

Node: _t1_conform0
Working directory: /work/qsiprep_wf/single_subject_11550_wf/anat_preproc_wf/anat_template_wf/t1_conform/mapflow/_t1_conform0

Node inputs:

in_file = /data/sub-11550/ses-20160617/anat/sub-11550_ses-20160617_acq-1ADNIG2_run-2_T1w.nii.gz
target_shape = (176, 240, 256)
target_zooms = (1.0, 1.0, 1.0)

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 375, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/interfaces/images.py", line 330, in _run_interface
    assert np.allclose(orig_img.affine.dot(transform), target_affine)
AssertionError

The command

docker run --rm -it \
   -v $FREESURFER_HOME:$FREESURFER_HOME:ro \
   -v /Volumes:/Volumes:ro \
   -v /Volumes/Hera/Projects/mMR_PETDA/qsi/BIDS:/data:ro \
   -v /Volumes/Hera/Projects/mMR_PETDA/qsi/out:/out \
   -v /Volumes/Hera/Projects/mMR_PETDA/qsi/work:/work \
   pennbbl/qsiprep:0.6.1 \
     --bids-dir /data --output-dir /out -w /work \
     --analysis-level participant \
     --fs-license-file  $FREESURFER_HOME/license.txt \
     --output-resolution 2.3

reports the error like this:

191024-20:12:57,848 nipype.workflow INFO:
[Node] Finished "qsiprep_wf.single_subject_11550_wf.bidssrc".
191024-20:13:00,990 nipype.workflow INFO:
[Node] Setting-up "qsiprep_wf.single_subject_11550_wf.anat_preproc_wf.anat_template_wf.t1_template_dimensions" in "/work/qsiprep_wf/single_subject_11550_wf/anat_preproc_wf/anat_template_wf/t1_template_dimensions".
191024-20:13:00,991 nipype.workflow INFO:
[Node] Outdated cache found for "qsiprep_wf.single_subject_11550_wf.anat_preproc_wf.anat_template_wf.t1_template_dimensions".
191024-20:13:01,220 nipype.workflow INFO:
[Node] Running "t1_template_dimensions" ("qsiprep.niworkflows.interfaces.images.TemplateDimensions")
191024-20:13:01,829 nipype.workflow INFO:
[Node] Finished "qsiprep_wf.single_subject_11550_wf.anat_preproc_wf.anat_template_wf.t1_template_dimensions".
191024-20:13:04,672 nipype.workflow INFO:
[Node] Setting-up "_t1_conform0" in "/work/qsiprep_wf/single_subject_11605_wf/anat_preproc_wf/anat_template_wf/t1_conform/mapflow/_t1_conform0".
191024-20:13:04,687 nipype.workflow INFO:
[Node] Setting-up "qsiprep_wf.single_subject_11550_wf.anat_preproc_wf.anat_derivatives_wf.t1_name" in "/work/qsiprep_wf/single_subject_11550_wf/anat_preproc_wf/anat_derivatives_wf/t1_name".
191024-20:13:04,687 nipype.workflow INFO:
[Node] Outdated cache found for "qsiprep_wf.single_subject_11550_wf.anat_preproc_wf.anat_derivatives_wf.t1_name".
191024-20:13:06,60 nipype.workflow INFO:
[Node] Running "_t1_conform0" ("qsiprep.interfaces.images.Conform")
191024-20:13:06,81 nipype.workflow INFO:
[Node] Running "t1_name" ("nipype.interfaces.utility.wrappers.Function")
191024-20:13:06,214 nipype.workflow INFO:
[Node] Finished "qsiprep_wf.single_subject_11550_wf.anat_preproc_wf.anat_derivatives_wf.t1_name".
191024-20:13:07,151 nipype.workflow WARNING:
[Node] Error on "_t1_conform0" (/work/qsiprep_wf/single_subject_11605_wf/anat_preproc_wf/anat_template_wf/t1_conform/mapflow/_t1_conform0)
191024-20:13:08,354 nipype.workflow ERROR:
Node _t1_conform0 failed to run on host b817b64e3492.
191024-20:13:08,359 nipype.workflow ERROR:
Saving crash info to /out/qsiprep/sub-11605/log/20191024-201223_a65f2775-c6ef-4ed5-8de5-6a0e4df8ef53/crash-20191024-201308-root-_t1_conform0-a47a2103-a57f-4c98-88f0-6bfd0e65d8b2.txt

ENH: Dealing with gradient non-linearities

For gradient systems with significant non-linearities (e.g. connectome scanner, some 7T head-only systems), it is important to account for these as they not only lead to spatial distortion, but also spatially-dependent changes in b-vector encoding. A recent paper quantifies this nicely: https://www.sciencedirect.com/science/article/pii/S1053811919307189
It is described in the HCP pre-processing pipeline as well: https://github.com/Washington-University/HCPpipelines/wiki/FAQ

There are a number of different features that could be related to this:

  1. Allowing the user to input a gradient table that describes the non-linearities (since the tables are proprietary); the app could use this to generate the warp, concatenate it with any other relevant transforms (e.g. to reference T1 or template space), and generate a gradient deviations file (grad_dev nifti) that encodes the Jacobian of the warp.
  2. Have the user supply already unwarped data, along with grad_dev files (e.g. public HCP data) instead of performing the unwarping inside the app
  3. Incorporating the gradient deviation files (grad_dev) in the app for tensor fitting and other reconstructions (a small sketch follows this list) -- I am not sure whether the existing tools underlying qsiprep currently support the use of grad_dev files (e.g. FSL and MDT are two other software packages that do).
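
To make item 3 concrete, here is a small sketch assuming the HCP-style grad_dev convention (9 volumes storing the deviation of the gradient-coil tensor from identity); the file names and the convention itself are assumptions, not something qsiprep currently implements:

# Per-voxel effective b-vectors/b-values from an HCP-style grad_dev image:
# effective direction = (I + G(v)) @ bvec, b-value scaled by its squared norm.
import nibabel as nib
import numpy as np

grad_dev = nib.load("grad_dev.nii.gz").get_fdata()         # shape (X, Y, Z, 9)
bvec = np.array([0.0, 1.0, 0.0])                           # one nominal direction
bval = 1000.0

G = grad_dev.reshape(*grad_dev.shape[:3], 3, 3)            # assumes row-major tensor order
v_eff = (np.eye(3) + G) @ bvec                             # (X, Y, Z, 3)
norm = np.clip(np.linalg.norm(v_eff, axis=-1), 1e-12, None)

bval_eff = bval * norm ** 2                                # per-voxel effective b-value
bvec_eff = v_eff / norm[..., None]                         # per-voxel unit directions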

I am hopeful that there would be interest from existing qsiprep users in incorporating these features, as I admit they are not required for the majority of datasets. Our in-house pipeline/BIDS-app (https://github.com/khanlab/prepdwi) actually implements these general steps, however that pipeline was not designed for longevity and I really would like to shift over to qsiprep since it is so much more feature-rich and is actually well-engineered! Happy to help out with this if I can; I don't have a lot of time to code and am still a newbie in nipype, but there are some other developers here that I could engage if needed.

One potential issue might be that the underlying fitting/reconstruction in Dipy may not yet support gradient deviations. We are currently using FSL (dtifit & bedpost) and MDT (https://github.com/cbclab/MDT) for that since they support using gradient deviation files for fitting.

PS - great job on putting qsiprep together, really keen to see it become the standard dMRI processing BIDS App in the community, excellent start!

anatomical ResampleInputSpecs voxel_size is 3x(None)

I'm not sure where to start looking for the error. I've already resolved some symlink issues that might be surfacing again.

Running qsiprep version 0+unknown:

  * BIDS dataset path: /data.
  * Participant list: ['10195', '10843', '10880', '10985', '10990', '10997', '11048', '11228', '11248', '11270', '11272', '11275', '11289', '11299', '11314', '11338', '11346', '11370', '11389', '11393'].
  * Run identifier: 20191021-210100_cbf64372-5fcb-4e65-892b-001a40cc4631.

Process Process-2:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/cli/run.py", line 794, in build_qsiprep_workflow
    force_syn=opts.force_syn
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/base.py", line 241, in init_qsiprep_wf
    force_syn=force_syn)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/base.py", line 507, in init_single_subject_wf
    num_t1w=len(subject_data['t1w']))
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/anatomical.py", line 276, in init_anat_preproc_wf
    template_image=ref_img)
  File "/usr/local/miniconda/lib/python3.7/site-packages/qsiprep/workflows/anatomical.py", line 928, in init_output_grid_wf
    resample_to_voxel_size.inputs.voxel_size = (voxel_size, voxel_size, voxel_size)
  File "/usr/local/miniconda/lib/python3.7/site-packages/traits/trait_handlers.py", line 172, in error
    value )
traits.trait_errors.TraitError: The 'voxel_size' trait of a ResampleInputSpec instance must be a tuple of the form: (a float, a float, a float), but a value of (None, None, None) <class 'tuple'> was specified.

Command is:

docker run --rm -it -v $FREESURFER_HOME:$FREESURFER_HOME:ro -v /Volumes:/Volumes:ro -v /Volumes/Hera/Raw/BIDS/mMRDA-dev:/data:ro -v /Volumes/Hera/Projects/mMR_PETDA/qsi_test:/out pennbbl/qsiprep:0.6.1 --bids-dir /data --output-dir /out --analysis-level participant --fs-license-file  $FREESURFER_HOME/license.txt -w /out/workdir
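
One thing that stands out: the command above does not pass --output-resolution, and the traceback shows voxel_size arriving as None. A minimal sketch of that suspected failure mode (the link between the flag and voxel_size is an assumption here):

# Suspected failure mode: voxel_size stays None when --output-resolution is
# omitted, so the tuple handed to ResampleInputSpec is (None, None, None).
voxel_size = None   # stands in for opts.output_resolution without the flag

requested = (voxel_size, voxel_size, voxel_size)
print(requested)    # (None, None, None) -> rejected by the traits validation

if voxel_size is None:
    raise SystemExit("--output-resolution must be provided to build the output grid")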

An example t1.json

jq < sub-10195/ses-20160317/anat/sub-10195_ses-20160317_acq-1ADNIG2_run-2_T1w.json
{
  "Modality": "MR",
  "MagneticFieldStrength": 3,
  "ImagingFrequency": 123.227,
  "Manufacturer": "Siemens",
  "ManufacturersModelName": "Biograph_mMR",
  "InstitutionName": "UPMC_Presbyterian_University_Hospital",
  "InstitutionalDepartmentName": "MR_Research_Center",
  "InstitutionAddress": "Lothrop_Street_200_Pittsburgh_Pennsylvania_Pittsburgh_-_Columbus_US_15213",
  "DeviceSerialNumber": "51021",
  "StationName": "MRC51021",
  "PatientPosition": "HFS",
  "ProcedureStepDescription": "BRAIN_lun-neuro",
  "SoftwareVersions": "syngo_MR_B20P",
  "MRAcquisitionType": "3D",
  "SeriesDescription": "Sagittal_MPRAGE_ADNI_G2",
  "ProtocolName": "Sagittal_MPRAGE_ADNI_G2",
  "ScanningSequence": "GR_IR",
  "SequenceVariant": "SP_MP",
  "ScanOptions": "IR",
  "SequenceName": "_tfl3d1_ns",
  "ImageType": [
    "ORIGINAL",
    "PRIMARY",
    "M",
    "ND",
    "NORM"
  ],
  "SeriesNumber": 10,
  "AcquisitionTime": "09:20:41.865000",
  "AcquisitionNumber": 1,
  "SliceThickness": 1,
  "SAR": 0.0627606,
  "EchoTime": 0.00298,
  "RepetitionTime": 2.3,
  "InversionTime": 0.9,
  "FlipAngle": 9,
  "PartialFourier": 1,
  "BaseResolution": 256,
  "ShimSetting": [
    -5752,
    -7306,
    1610,
    -109,
    -375,
    -845,
    -591,
    293
  ],
  "TxRefAmp": 294.031,
  "PhaseResolution": 1,
  "ReceiveCoilName": "HeadNeck_MRPET",
  "CoilString": "t:HEA;HEP;NEA;NEP",
  "PulseSequenceDetails": "%SiemensSeq%_tfl",
  "RefLinesPE": 24,
  "PercentPhaseFOV": 93.75,
  "PhaseEncodingSteps": 239,
  "AcquisitionMatrixPE": 240,
  "ReconMatrixPE": 240,
  "PixelBandwidth": 238,
  "DwellTime": 8.2e-06,
  "ImageOrientationPatientDICOM": [
    0,
    1,
    0,
    0,
    0,
    -1
  ],
  "InPlanePhaseEncodingDirectionDICOM": "ROW",
  "ConversionSoftware": "dcm2niix",
  "ConversionSoftwareVersion": "v1.0.20190410  (JP2:OpenJPEG) GCC8.3.0"
}

Deep-learner for classifying bad DWI volumes?

Since fMRI folks not trained in the nuances of dMRI who plan to apply qsiprep to big datasets might be tempted not to look at their data, would it be worth training a little deep learner (e.g. using https://github.com/satra/nobrainer) to classify the raw DWI inputs volume by volume and check for obvious anomalies (e.g. gross signal dropout, venetian blind artifact)? That way such bad volumes could at least be flagged, if not removed entirely, depending on the coverage of the sampling scheme. This would be along the lines of MRIQC but a bit more DWI-specific.

@dPys

bidsmap/json setup

Hello,
I am getting data set up to process with QSIprep and was wondering if there are any requirements/recommendations for setting up the JSON files for DTI and the accompanying field maps, i.e., necessary parameters, customLabels, etc., to include in the dcm2bids config file?
Thanks!
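
Not an official requirements list, but a sketch of the sidecar fields the distortion-correction and eddy workflows typically rely on (all values and paths below are placeholders):

# Write minimal DWI and EPI-fieldmap sidecars with the fields SDC needs.
import json

dwi_json = {
    "PhaseEncodingDirection": "j-",   # BIDS PE direction of the DWI series
    "TotalReadoutTime": 0.05,         # placeholder; derive from your protocol
}
fmap_json = {
    "PhaseEncodingDirection": "j",
    "TotalReadoutTime": 0.05,
    "IntendedFor": "ses-01/dwi/sub-01_ses-01_dir-AP_dwi.nii.gz",  # placeholder path
}

with open("sub-01_ses-01_dir-AP_dwi.json", "w") as f:
    json.dump(dwi_json, f, indent=2)
with open("sub-01_ses-01_dir-PA_epi.json", "w") as f:
    json.dump(fmap_json, f, indent=2)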

[BUG] Sentry fails to upload crash data

Here it looks like output_dir is a string, not a PosixPath

        if not opts.notrack:
            from ..utils.sentry import process_crashfile
            crashfolders = [output_dir / 'qsiprep' / 'sub-{}'.format(s) / 'log' / run_uuid
                            for s in subject_list]
            for crashfolder in crashfolders:
                for crashfile in crashfolder.glob('crash*.*'):
                    process_crashfile(crashfile)
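
A minimal sketch of one possible fix, assuming the only problem is the type of output_dir (the placeholder values stand in for what cli/run.py has in scope):

# Coerce output_dir to a Path before joining with '/'.
from pathlib import Path

output_dir = "/out"                         # currently a plain string
subject_list = ["01"]                       # placeholder
run_uuid = "20191024-201223_example"        # placeholder

output_dir = Path(output_dir)               # the proposed one-line coercion
crashfolders = [output_dir / 'qsiprep' / 'sub-{}'.format(s) / 'log' / run_uuid
                for s in subject_list]
print(crashfolders)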

Longitudinal reports don't have DWI svgs

I ran qsiprep:0.6.4 on my longitudinal dataset.

The HTML reports have the T1w SVGs embedded but the DWI SVGs are not included. There's a hyperlink for them and I can load them in a separate browser tab that way. However, they aren't embedded in the report even though the paths exist.

Is eddy interface missing in_mask input spec?

It looks like the mask variable is being passed correctly in https://github.com/PennBBL/qsiprep/blob/69d530ddf6573320609ab043e34135a904c5e167/qsiprep/workflows/dwi/fsl.py

but I keep getting:

ValueError: ExtendedEddy requires a value for input 'in_mask'. For a list of required inputs, see ExtendedEddy.help()

Command:

qsiprep --bids_dir /data --output_dir /out/HNU1_output --analysis_level participant -w /tmp --output_resolution 2 --participant_label 0025427 --mem_mb 3200 --hmc-model eddy --eddy-config /root/eddy_params.json

where eddy_params.json consists of:

{
  "flm": "linear",
  "slm": "linear",
  "fep": false,
  "interp": "spline",
  "nvoxhp": 1000,
  "fudge_factor": 10,
  "dont_sep_offs_move": false,
  "dont_peas": true,
  "niter": 5,
  "method": "jac",
  "repol": true,
  "num_threads": 1,
  "is_shelled": false,
  "use_cuda": false,
  "cnr_maps": true,
  "residuals": true,
  "output_type": "NIFTI_GZ",
  "args": ""
}

Thoughts?

@dPys

FYI: have you played with eddy_quad yet? If so, what are your thoughts? It doesn't seem to provide a ton of useful info beyond what other QC already offers, but it plays nicely with the default eddy outputs.
