fMRIPrep is a robust and easy-to-use pipeline for preprocessing of diverse fMRI data. The transparent workflow dispenses with manual intervention, thereby ensuring the reproducibility of the results.

Home Page: https://fmriprep.org

License: Apache License 2.0

fmri fmri-preprocessing brain-imaging neuroimaging bids image-processing

fMRIPrep: A Robust Preprocessing Pipeline for fMRI Data

fMRIPrep is a NiPreps (NeuroImaging PREProcessing toolS) application (www.nipreps.org) for the preprocessing of task-based and resting-state functional MRI (fMRI).

Available as a Docker image and in CodeOcean. Published in Nature Methods. RRID: SCR_016216

About

fMRIPrep is a functional magnetic resonance imaging (fMRI) data preprocessing pipeline designed to provide an easily accessible, state-of-the-art interface that is robust to variations in scan acquisition protocols, requires minimal user input, and provides easily interpretable, comprehensive error and output reporting. It performs basic processing steps (coregistration, normalization, unwarping, noise component extraction, segmentation, skull-stripping, etc.), producing outputs that can be easily submitted to a variety of group-level analyses, including task-based or resting-state fMRI, graph-theory measures, and surface- or volume-based statistics.

Note

fMRIPrep performs minimal preprocessing. Here we define 'minimal preprocessing' as motion correction, field unwarping, normalization, bias field correction, and brain extraction. See the workflows section of our documentation for more details.

The fMRIPrep pipeline uses a combination of tools from well-known software packages, including FSL, ANTs, FreeSurfer, and AFNI. The pipeline was designed to provide the best software implementation for each stage of preprocessing, and will be updated as newer and better neuroimaging software becomes available.

This tool allows you to easily do the following:

  • Take fMRI data from raw to fully preprocessed form.
  • Implement tools from different software packages.
  • Achieve optimal data processing quality by using the best tools available.
  • Generate preprocessing quality reports, with which the user can easily identify outliers.
  • Receive verbose output concerning the stage of preprocessing for each subject, including meaningful errors.
  • Automate and parallelize processing steps, which provides a significant speed-up over manual processing or shell-scripted pipelines.

More information and documentation can be found at https://fmriprep.readthedocs.io/

Principles

fMRIPrep is built around three principles:

  1. Robustness - The pipeline adapts the preprocessing steps to the input dataset, and should provide results as good as possible independently of scanner make, scanning parameters, or the presence of additional correction scans (such as fieldmaps).
  2. Ease of use - Thanks to its reliance on the BIDS standard, manual parameter input is reduced to a minimum, allowing the pipeline to run automatically.
  3. "Glass box" philosophy - Automation should not mean that one should not visually inspect the results or understand the methods. Thus, fMRIPrep provides visual reports for each subject, detailing the accuracy of the most important processing steps. This, combined with the documentation, can help researchers understand the process and decide which subjects should be kept for the group-level analysis.

Citation

Citation boilerplate. Please acknowledge this work using the citation boilerplate that fMRIPrep includes in the visual report generated for every subject processed. For a more detailed description of the citation boilerplate and its relevance, please check out the NiPreps documentation.

Plagiarism disclaimer. The boilerplate text is public domain, distributed under the CC0 license, and we recommend that fMRIPrep users reproduce it verbatim in their works. Therefore, if reviewers and/or editors raise concerns because the text is flagged by automated plagiarism detection, please refer them to the NiPreps community and/or the note to this effect on the boilerplate documentation page.

Papers. fMRIPrep contributors have published two relevant papers: Esteban et al. (2019) [preprint], and Esteban et al. (2020) [preprint].

Other. Other materials that have been generated over time include the OHBM 2018 software demonstration and some conference posters:

  • Organization for Human Brain Mapping 2018 (Abstract; PDF)
  • Organization for Human Brain Mapping 2017 (Abstract; PDF)

License information

fMRIPrep adheres to the general licensing guidelines of the NiPreps framework.

License

Copyright (c) the NiPreps Developers.

As of the 21.0.x pre-release and release series, fMRIPrep is licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Acknowledgements

This work is steered and maintained by the NiPreps Community. It was supported by the Laura and John Arnold Foundation, the NIH (NIBIB grant R01EB020740, PI: Ghosh), and the NIMH (grants R24MH114705, R24MH117179, and R01MH121867, PI: Poldrack).

fmriprep's People

Contributors

adelavega, anibalsolon, bbfrederick, bpinsard, chrisgorgo, craigmoodie, danlurie, dimitripapadopoulos, effigies, emdupre, erramuzpe, feilong, frontiers-qc-sops, hippocampusgirl, jdkent, jsmentch, kfinc, madisoth, markushs, mgxd, nirjacoby, oesteban, rciric, romainvala, rwblair, theoschaefer, tsalo, utooley, yarikoptic, zhifangy

fmriprep's Issues

Making the workflow robust to differences in acquisition protocols

[i] Right now the preprocessing workflow can only handle inputs from studies that follow the HCP scanning protocol. However, most studies, and especially older ones, will have used a traditional double-echo fieldmap approach. Hence, the workflow needs to be able to handle both approaches.

[ii] It would also be good to test different types of fieldmaps, such as GRE, SE and spiral fieldmaps.

[iii] Lastly, it would be good to build in a contingency for when a protocol has no fieldmaps, or when fieldmaps are missing for some subjects.

Missing output nodes

Nodes still to be hooked up to the datasink:

  • EPI skull-stripped mask
  • T1 skull-strip
  • T1 transformations to MNI (forward and inverse)
  • Motion-corrected, unwarped EPI in SBRef space
  • Affine SBRef-to-T1 transformation
  • Estimated motion parameters (a text file containing translations and rotations)
  • Bias-field-corrected T1
  • Segmentation of T1

New output: EPI 4D volumes in MNI space (but using original resolution)

I had some discussions with NeuroSpin people and came to the conclusion that, in addition to saving the unwarped and motion-corrected 4D EPI volumes, it would also be useful to save the normalized (transformed into MNI space) versions.

The main thing to keep in mind is that even though those outputs should be in MNI space, they should keep the resolution (voxel sizes) of the original EPI data (instead of the 2x2x2 or 1x1x1 mm of the MNI template). This way we would not waste much space.

I think there should be a relatively easy way to apply transforms (the affine going from sbref to T1 and the nonlinear going from T1 to MNI) to the EPI data that would force the output resolution. It would be good if we could avoid creating an intermediate file in the MNI template resolution that would be then downsampled.
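A minimal sketch of how that single-shot resampling might be invoked with ANTs' antsApplyTransforms (the file paths are hypothetical; the reference image is assumed to be an MNI-aligned grid pre-built at the EPI's native voxel size, which is what fixes the output resolution without an intermediate file):

```python
def mni_native_res_cmd(epi_4d, ref_mni_native_res, affine_sbref_to_t1,
                       warp_t1_to_mni, out_file):
    """Build an antsApplyTransforms call that concatenates the SBRef-to-T1
    affine and the T1-to-MNI nonlinear warp, so the EPI is interpolated
    exactly once onto the grid defined by the reference image."""
    return [
        "antsApplyTransforms",
        "-e", "3",                  # input-image-type 3 = time series (4D)
        "-i", epi_4d,
        "-r", ref_mni_native_res,   # MNI-aligned grid at native EPI resolution
        "-t", warp_t1_to_mni,       # transform stack is applied in reverse:
        "-t", affine_sbref_to_t1,   # affine first, then the nonlinear warp
        "-o", out_file,
        "-n", "LanczosWindowedSinc",
    ]
```

The returned list can be passed straight to subprocess.run; the essential trick is that the reference image (not the transforms) determines the output grid.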

Extracting the fieldmap preparation workflows

  • Create a new fmriprep.workflows.fieldmap module
  • Create a new function that returns a workflow to calculate the undistorted fieldmap from two SE fieldmap acquisitions
  • Create a new function that computes the distortion and the VSM (voxel shift map) from an input fieldmap. This is probably doable just by importing one of the existing nipype workflows.
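For the second function, the core computation is small: the displacement along the phase-encoding axis, in voxels, is the fieldmap offset in Hz times the total readout time (the convention used, e.g., by FSL's topup). A numpy sketch, assuming total_readout_time comes from the BIDS sidecar:

```python
import numpy as np

def voxel_shift_map(fieldmap_hz, total_readout_time):
    """Convert a fieldmap (in Hz) into a voxel shift map (VSM) along the
    phase-encoding axis: shift_vox = delta_f[Hz] * TotalReadoutTime[s]."""
    return np.asarray(fieldmap_hz, dtype=np.float64) * total_readout_time
```

For example, a 100 Hz offset with a 30 ms total readout time displaces a voxel by 3 voxels along the PE axis.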

Make all imports absolute

@shoshber has started on this, and I see that the recommendation is to always use absolute imports. I apologize, @rwblair, because at some point you started using them and I rolled all those changes back. I should have checked the recommendations first.

Extracting anatomical workflow

  • Create a new module fmriprep.workflows.anatomical
  • Within that new module, create a new function t1w_preprocessing that returns a nipype workflow corresponding to the current anatomical (T1w) preprocessing. This function:
    • is documented, mentioning the principal processing nodes in the workflow
    • has input/output nodes to be connected to the main workflow
    • has a corresponding test with a subsampled T1w image (@oesteban will provide this) that runs the workflow in CircleCI (#5)

The last step of this issue is removing the corresponding nodes from the original workflow and connecting this workflow in-place.

Can't generate graph image with Docker image

Graphviz may not be installed:

Traceback (most recent call last):
  File "/usr/local/miniconda2/envs/crnenv/bin/fmriprep", line 9, in <module>
    load_entry_point('fmriprep', 'console_scripts', 'fmriprep')()
  File "/root/src/preprocessing-workflow/fmriprep/run_workflow.py", line 162, in main
    workflow.write_graph()
  File "/usr/local/miniconda2/envs/crnenv/lib/python2.7/site-packages/nipype/pipeline/engine/workflows.py", line 444, in write_graph
    format_dot(dotfilename, format=format)
  File "/usr/local/miniconda2/envs/crnenv/lib/python2.7/site-packages/nipype/pipeline/engine/utils.py", line 1044, in format_dot
    raise IOError("Cannot draw directed graph; executable 'dot' is unavailable")
IOError: Cannot draw directed graph; executable 'dot' is unavailable
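The root cause is that nipype shells out to Graphviz's dot binary, which is missing from the image. Until it is installed, a small guard lets the pipeline skip graph generation instead of crashing (write_graph is nipype's real API; the guard itself is an illustrative sketch):

```python
import shutil

def can_write_graph():
    """Return True only if Graphviz's `dot` executable is on the PATH,
    so workflow.write_graph() can be skipped gracefully when it is not."""
    return shutil.which("dot") is not None
```

Usage would be: if can_write_graph(): workflow.write_graph().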

Parcellation // Supplementary data

For now, there is only one data resource (a brain parcellation) that does not belong to the FSL package and is used along the workflow.

I assume (@craigmoodie can correct me if I'm wrong) that the user should be able to change this file, but generally the default one will be used.

@chrisfilo: what do you think about having a data/ or resources/ folder where we keep and distribute these files? Right now we would place there a parcellation that is publicly available through NeuroVault. We would need a LICENSE file in that folder, indicating the appropriate licensing for each of the data files distributed.

Otherwise, a systematic solution to these data requirements should be defined.

Once this decision is made, we would close this issue when @craigmoodie places this file where we have decided.

BIDS reader function

Write a function that takes a subject ID and returns a dict with the full BIDS information about that subject (not only files, but also parameters).
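A stdlib-only sketch of such a reader (a production version would use pybids' BIDSLayout instead; the dict layout here is an assumption). It collects the subject's files and merges in the acquisition parameters found in JSON sidecars:

```python
import json
from pathlib import Path

def read_subject(bids_root, subject_id):
    """Return {'files': [...], 'parameters': {sidecar name: dict}} for one
    subject, by walking sub-<id>/ and loading any JSON sidecars found."""
    info = {"files": [], "parameters": {}}
    subj_dir = Path(bids_root) / f"sub-{subject_id}"
    for path in sorted(subj_dir.rglob("*")):
        if path.is_dir():
            continue
        info["files"].append(str(path))
        if path.suffix == ".json":
            # sidecar: acquisition parameters (TR, echo times, PE direction, ...)
            info["parameters"][path.name] = json.loads(path.read_text())
    return info
```

Note this deliberately skips BIDS's inheritance principle (dataset-level sidecars), which a real implementation would also have to resolve.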

Change workflow name: se_pair_workflow

Use a more appropriate name.

The name se_pair_workflow happens to be incorrect: in the AA database, the fieldmap is actually computed from 8 images (four pairs of SE acquisitions). We should think of a different name, because pair is particularly misleading: it is easily confused with computing the fieldmap from a pair of GRE acquisitions (#20) and looking at the phase difference. The latter case is a much better fit for the word pair, since it is always computed with two images (you could use more, but I don't think anybody does).

In the other case (our current se_pair_workflow) it is appropriate to have more than two images; results improve significantly.

Generate fieldcoef image from fieldmap

Calculate the B-spline coefficients image from the fieldmap image in the GRE-phasediff workflows (required by #20). There is a corresponding thread on FSL's mailing list.

The intended header is like:

sizeof_hdr      : 348
data_type       : 
db_name         : 
extents         : 0
session_error   : 0
regular         : r
dim_info        : 0
dim             : [ 3 46 53 35  1  1  1  1]
intent_p1       : 2.10000014305
intent_p2       : 2.10000014305
intent_p3       : 2.10000014305
intent_code     : <unknown code 2016>
datatype        : float32
bitpix          : 32
slice_start     : 0
pixdim          : [ 1.  2.  2.  2.  1.  0.  0.  0.]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : FSL5.0
aux_file        : 
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 0.0
quatern_d       : 0.0
qoffset_x       : 86.0
qoffset_y       : 100.0
qoffset_z       : 64.0
srow_x          : [  1.   0.   0.  86.]
srow_y          : [   0.    1.    0.  100.]
srow_z          : [  0.   0.   1.  64.]
intent_name     : 
magic           : n+1

When the field image is:

sizeof_hdr      : 348
data_type       : 
db_name         : 
extents         : 0
session_error   : 0
regular         : r
dim_info        : 0
dim             : [  3  86 100  64   1   1   1   1]
intent_p1       : 0.0
intent_p2       : 0.0
intent_p3       : 0.0
intent_code     : <unknown code 2018>
datatype        : float32
bitpix          : 32
slice_start     : 0
pixdim          : [-1.          2.10000014  2.10000014  2.10000014  1.          0.          0.
  0.        ]
vox_offset      : 0.0
scl_slope       : nan
scl_inter       : nan
slice_end       : 0
slice_code      : unknown
xyzt_units      : 10
cal_max         : 0.0
cal_min         : 0.0
slice_duration  : 0.0
toffset         : 0.0
glmax           : 0
glmin           : 0
descrip         : FSL5.0
aux_file        : 
qform_code      : scanner
sform_code      : scanner
quatern_b       : 0.0
quatern_c       : 0.999048233032
quatern_d       : 0.0436193868518
qoffset_x       : 90.2999801636
qoffset_y       : -108.217712402
qoffset_z       : -57.9707260132
srow_x          : [ -2.10000014   0.           0.          90.29998016]
srow_y          : [   0.            2.09200907   -0.18302707 -108.2177124 ]
srow_z          : [  0.           0.18302707   2.09200907 -57.97072601]
intent_name     : 
magic           : n+1
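The two headers are mutually consistent with a cubic B-spline grid at a 4.2 mm knot spacing (two fieldmap voxels; cf. intent_p1..3): each coefficient-grid dimension equals the field extent divided by the knot spacing, plus 3 boundary control points. This relationship is inferred from the headers above rather than from FSL documentation, so treat the sketch as a hypothesis to verify:

```python
import math

def coef_grid_dims(field_dims, voxel_size_mm, knot_spacing_mm):
    """Predict the B-spline coefficient grid shape for a fieldmap grid:
    ceil(extent / knot spacing) + 3 control points per axis (cubic spline,
    as inferred from the fieldcoef vs. field headers)."""
    return [int(math.ceil(n * voxel_size_mm / knot_spacing_mm)) + 3
            for n in field_dims]
```

With the values above, coef_grid_dims([86, 100, 64], 2.1, 4.2) reproduces the fieldcoef dims [46, 53, 35].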

Naming scheme of the outputs should follow BIDS Derivatives

See here for details: https://docs.google.com/document/d/1Wwc4A6Mow4ZPPszDIWfCUCRNstn7d_zzaWPcfcHmgI4/edit

EDIT (@oesteban): The final product of this issue comprises two new tests: one applies to the ds005-type workflow and the other to the ds054-type workflow. The tests will hard-code the desired outputs and their names, and after running the one-subject smoke tests in CircleCI, we will check that all the prescribed outputs are present with names fulfilling BIDS-Derivatives.

For instance, the prescribed outputs for ds005 are:

ds005-out/
   derivatives/
       sub-01/
            anat/
                sub-01_T1w_brainmask.nii.gz
                sub-01_T1w_space-MNI152lin_brainmask.nii.gz
                sub-01_T1w_inu.nii.gz
                sub-01_T1w_dtissue.nii.gz
                sub-01_T1w_space-MNI152lin.nii.gz
                sub-01_T1w_target-MNI152lin_affine.mat
                sub-01_T1w_space-MNI152lin_class-CSF_probtissue.nii.gz
                sub-01_T1w_space-MNI152lin_class-GM_probtissue.nii.gz
                sub-01_T1w_space-MNI152lin_class-WM_probtissue.nii.gz
            func/
                sub-01_task-mixedgamblestask_run-01_bold_confounds.tsv
                sub-01_task-mixedgamblestask_run-01_bold_target-T1w_affine.txt
                sub-01_task-mixedgamblestask_run-01_bold_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_space-MNI152lin.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_space-MNI152lin_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_space-T1w.nii.gz
                sub-01_task-mixedgamblestask_run-01_bold_T1w-target-meanBOLD_affine.txt
                sub-01_task-mixedgamblestask_run-01_bold_hmc.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_confounds.tsv
                sub-01_task-mixedgamblestask_run-02_bold_target-T1w_affine.txt
                sub-01_task-mixedgamblestask_run-02_bold_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_space-MNI152lin.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_space-MNI152lin_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_space-T1w.nii.gz
                sub-01_task-mixedgamblestask_run-02_bold_T1w-target-meanBOLD_affine.txt
                sub-01_task-mixedgamblestask_run-02_bold_hmc.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_confounds.tsv
                sub-01_task-mixedgamblestask_run-03_bold_target-T1w_affine.txt
                sub-01_task-mixedgamblestask_run-03_bold_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_space-MNI152lin.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_space-MNI152lin_brainmask.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_space-T1w.nii.gz
                sub-01_task-mixedgamblestask_run-03_bold_T1w-target-meanBOLD_affine.txt
                sub-01_task-mixedgamblestask_run-03_bold_hmc.nii.gz

This issue includes generating the prescribed outputs for ds054 as well, for the corresponding test.
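The naming check itself can be smoke-tested without running the pipeline. A sketch of a regex that the prescribed outputs above satisfy (a deliberately simplified approximation of the draft BIDS-Derivatives grammar, not the official pattern):

```python
import re

# Simplified grammar: "sub-<label>" followed by one or more "_token" segments,
# where a token is either a bare label (T1w, bold, brainmask) or a key-value
# entity (space-MNI152lin, run-01), then a known extension.
DERIV_PATTERN = re.compile(
    r"^sub-[a-zA-Z0-9]+"
    r"(_[a-zA-Z0-9]+(-[a-zA-Z0-9]+)*)+"
    r"\.(nii\.gz|nii|tsv|mat|txt)$"
)

def is_bids_derivative(filename):
    """Return True if the filename matches the simplified derivatives grammar."""
    return DERIV_PATTERN.match(filename) is not None
```

A CI test could then assert is_bids_derivative(...) for every file name emitted under ds005-out/derivatives/.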

PDF Reports (HCP style)

PDF reports must include:

  • Skull-strippings (overlay of translucent mask over the designated image)
    • T1
    • SBRef
    • Fieldmap magnitude image
  • Image registration outputs:
    • EPI-to-SBRef (overlay of SBRef contours over EPI)
    • T1-to-SBRef (overlay of T1 segmentation contours over SBRef)
    • T1-to-MNI (overlay of T1 segmentation contours over MNI template)
  • Unwarping / Fieldmaps
    • T1 contours over the unwarped-EPI
    • SE Pairs only: contours of the magnitude image in the two PE directions over EPI
  • Head motion
    • Framewise displacement plot
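The skull-strip panels boil down to alpha-blending a colored mask over a grayscale slice. A numpy sketch of that compositing step (the report itself would render this per slice with a plotting library; the color and alpha values here are arbitrary choices):

```python
import numpy as np

def overlay_mask(slice_gray, mask, color=(1.0, 0.0, 0.0), alpha=0.35):
    """Alpha-blend a binary mask, as a translucent color, over a 2D grayscale
    slice; returns an (H, W, 3) RGB array with values in [0, 1]."""
    g = np.asarray(slice_gray, dtype=np.float64)
    g = (g - g.min()) / (np.ptp(g) or 1.0)   # normalize intensities to [0, 1]
    rgb = np.stack([g, g, g], axis=-1)        # grayscale -> RGB
    m = np.asarray(mask, dtype=bool)
    for c in range(3):                        # blend only where the mask is set
        rgb[..., c][m] = (1 - alpha) * rgb[..., c][m] + alpha * color[c]
    return rgb
```

The same blend, with contour extraction instead of a filled mask, would cover the registration panels.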

Revisit EPI preprocessing workflow

Should be closed with #79

Implement the following procedures:

  • Coregister the corrected SBRef to the mean EPI: Current co-registration using MCFLIRT seems to be rather inaccurate.
  • Perform EPI unwarping before HMC (head-motion-correction)
