
pennlinc / xcp_d


Post-processing of fMRIPrep, NiBabies, and HCP outputs

Home Page: https://xcp-d.readthedocs.io

License: BSD 3-Clause "New" or "Revised" License

Python 90.93% TeX 4.86% Dockerfile 0.09% Makefile 0.04% Jinja 2.15% JavaScript 1.93%

xcp_d's Introduction

XCP-D: A Robust Postprocessing Pipeline for fMRI Data

GitHub Repository

Documentation

Docker

Test Status

Codecov

DOI

License

This fMRI post-processing and noise regression pipeline is developed by the Satterthwaite lab at the University of Pennsylvania (XCP; eXtensible Connectivity Pipeline) and the Developmental Cognition and Neuroimaging lab at the University of Minnesota (-DCAN) for open-source software distribution.

About

XCP-D paves the final section of the reproducible and scalable route from the MRI scanner to functional connectivity data in the hands of neuroscientists. We developed XCP-D to extend the BIDS and NiPreps apparatus to the point where data are most commonly consumed and analyzed by neuroscientists studying functional connectivity. Thus, with the development of XCP-D, data can be automatically preprocessed and analyzed in BIDS format, using NiPreps-style containerized code, all the way from the scanner to functional connectivity matrices.

XCP-D picks up right where fMRIPrep ends, directly consuming the outputs of fMRIPrep. XCP-D leverages the BIDS and NiPreps frameworks to automatically generate denoised BOLD images, parcellated time series, functional connectivity matrices, and quality assessment reports. XCP-D can also process outputs from NiBabies, ABCD-BIDS, minimally preprocessed HCP (https://www.humanconnectome.org/study/hcp-lifespan-development/data-releases), and UK Biobank data.

Please note that, at the moment, XCP-D is only compatible with HCP-YA versions downloaded circa February 2023.


See the documentation for more details.

Why you should use XCP-D

XCP-D produces the following commonly used outputs: functional connectivity matrices, parcellated time series, dense time series, and additional QC measures.

XCP-D is designed for resting-state or pseudo-resting-state functional connectivity analyses. XCP-D derivatives may be useful for seed-to-voxel and ROI-to-ROI functional connectivity analyses, as well as decomposition-based methods, such as ICA or NMF.
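As a concrete illustration of how XCP-D's parcellated time series feed ROI-to-ROI analyses, a functional connectivity matrix is just the parcel-by-parcel correlation of those time series. This is a hedged sketch: the array shapes and the loading step are assumptions, not XCP-D's API.

```python
import numpy as np

def connectivity_matrix(timeseries):
    """Pearson correlation between parcel time series.

    timeseries: (n_volumes, n_parcels) array, e.g. loaded from an
    XCP-D parcellated-timeseries TSV with numpy or pandas.
    """
    # np.corrcoef expects variables in rows, so transpose
    return np.corrcoef(timeseries.T)

# Toy example: 100 volumes, 4 parcels
rng = np.random.default_rng(0)
fc = connectivity_matrix(rng.standard_normal((100, 4)))
assert fc.shape == (4, 4)
```

The resulting symmetric matrix (ones on the diagonal) is what downstream seed-based or graph analyses typically consume.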

When you should not use XCP-D

XCP-D is not designed as a general-purpose postprocessing pipeline. It is appropriate only for certain analyses; other postprocessing/analysis tools are better suited to many types of data and analysis.

XCP-D derivatives are not particularly useful for task-dependent functional connectivity analyses, such as psychophysiological interactions (PPIs) or beta series analyses. XCP-D is also not suitable for general task-based analyses, such as standard task GLMs, because we recommend including nuisance regressors in the GLM step rather than denoising the data prior to the GLM.

Citing XCP-D

If you use XCP-D in your research, please use the boilerplate generated by the workflow. If you need an immediate citation, please cite the following preprint:

Mehta, K., Salo, T., Madison, T., Adebimpe, A., Bassett, D. S., Bertolero, M., ... & Satterthwaite, T. D. (2023). XCP-D: A Robust Pipeline for the post-processing of fMRI data. bioRxiv. doi:10.1101/2023.11.20.567926.

Please also cite the Zenodo DOI for the version you're referencing.

xcp_d's People

Contributors

a3sha2, dependabot[bot], kahinimehta, krmurtha, madisoth, mattcieslak, mb3152, psychelzh, scovitz, smeisler, tsalo, valeriejill


xcp_d's Issues

analysis level

For analysis_level: currently the only option is participant.
What does analysis_level refer to? Is it BIDS structure? Maybe a link could be added.
Why is participant the only possible choice?

qc

  • registration quality: coreg, norm
  • FreeSurfer parameters (if present)
  • QC to JSON

task

do task ingression inside
generate a first-level GLM with Nilearn or FSL

workflow/run

workflow/run.py:
419-421: commented out code - does this need to be here still?
427-428: commented out code - does this need to be here still?

error message IndexError: list index out of range

Hello, Many thanks for developing this tool.

I would like to run the postprocessing using xcp_abcd.

The OS is Ubuntu 18.

The xcp_abcd version is

docker run -it pennlinc/xcp_abcd --version
xcp_abcd v0.0.5+0.g35d429d.dirty

The following is my command:

export path_input=*/fmriprep
export path_output=directory_output
sub_ID=sub-01
docker run -it -v ${path_input}:/data:ro -v ${path_output}:/out pennlinc/xcp_abcd /data /out \
    participant --participant_label ${sub_ID} --cifti

The error message is:

211028-02:42:27,713 nipype.workflow IMPORTANT:

Running xcp_abcd version 0.0.5+0.g35d429d.dirty:
  * fMRIPrep directory path: /data.
  * Participant list: ['02R4928'].
  * Run identifier: 20211028-024137_f0590b68-ae0e-45e2-9a04-cce29d2210e5.

211028-02:43:10,845 nipype.utils WARNING:
A newer version (1.7.0) of nipy/nipype is available. You are using 1.6.1
Process Process-2:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/miniconda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/cli/run.py", line 456, in build_workflow
retval['workflow'] = init_xcpabcd_wf (
File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/workflow/base.py", line 146, in init_xcpabcd_wf
single_subj_wf = init_subject_wf(
File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/workflow/base.py", line 343, in init_subject_wf
DerivativesDataSink(base_directory=output_dir,source_file=subject_data[0][0],desc='summary', datatype="figures"),
IndexError: list index out of range

I do not know how to deal with this error. Many thanks.

Xiuyi

default call

There are no default calls / values listed in the arguments section under Usage Notes.
Also note: can the input be a file, a list, etc?

[BUG] Reho not performed due to templateflow not finding file

Run info

XCP_ABCD 0.0.8 on Linux HCP via Singularity

Command

Basically, a standard CIFTI run on some resting-state data, just adding despiking:
singularity run -e -B ${scratch}:/workdir -B ${bids_dir} $IMG ${fmriprep_dir} ${output_dir} participant --participant_label ${subject} -w /workdir -s -t rest --despike --mem_gb 24

Error

Everything proceeds fine until reaching REHO nodes. Below is the log beginning with REHO execution:

	 [Node] Executing "reho_lh" <xcp_abcd.interfaces.resting_state.surfaceReho>
Downloading https://templateflow.s3.amazonaws.com/tpl-fsLR/tpl-fsLR_hemi-L_den-32k_midthickness.surf.gii

0.00B [00:00, ?B/s]
1.00B [00:00, 767B/s]211115-18:42:55,886 nipype.workflow INFO:
	 [Node] Finished "reho_lh", elapsed time 34.481254s.
211115-18:42:55,886 nipype.workflow WARNING:
	 Storing result file without outputs
211115-18:42:55,890 nipype.workflow WARNING:
	 [Node] Error on "xcpabcd_wf.single_subject_NDARAA306NT2_wf.cifti_postprocess_1_wf.surface_reho_wf.reho_lh" (/workdir/xcpabcd_wf/single_subject_NDARAA306NT2_wf/cifti_postprocess_1_wf/surface_reho_wf/reho_lh)
211115-18:42:57,212 nipype.workflow ERROR:
	 Node reho_lh failed to run on host node021.
211115-18:42:57,274 nipype.workflow ERROR:
	 Saving crash info to /om4/group/gablab/data/HBN/derivatives/xcp_abcd/sub-NDARAA306NT2/log/crash-20211115-184257-smeisler-reho_lh-c39954b0-7f60-4c65-83e9-16374125986d.txt
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 521, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
    raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node reho_lh.

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nibabel/loadsave.py", line 42, in load
    stat_result = os.stat(filename)
FileNotFoundError: [Errno 2] No such file or directory: "[PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-L_den-32k_desc-vaavg_midthickness.shape.gii'), PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-L_den-32k_midthickness.surf.gii')]"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/interfaces/resting_state.py", line 69, in _run_interface
    write_gii(datat=reho_surf,template=self.inputs.surf_bold,
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/utils/write_save.py", line 133, in write_gii
    template = nb.load(template)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nibabel/loadsave.py", line 44, in load
    raise FileNotFoundError(f"No such file or no access: '{filename}'")
FileNotFoundError: No such file or no access: '[PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-L_den-32k_desc-vaavg_midthickness.shape.gii'), PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-L_den-32k_midthickness.surf.gii')]'


211115-18:42:57,365 nipype.workflow INFO:
	 [Node] Setting-up "xcpabcd_wf.single_subject_NDARAA306NT2_wf.cifti_postprocess_1_wf.surface_reho_wf.reho_rh" in "/workdir/xcpabcd_wf/single_subject_NDARAA306NT2_wf/cifti_postprocess_1_wf/surface_reho_wf/reho_rh".
211115-18:42:57,394 nipype.workflow INFO:
	 [Node] Executing "reho_rh" <xcp_abcd.interfaces.resting_state.surfaceReho>
Downloading https://templateflow.s3.amazonaws.com/tpl-fsLR/tpl-fsLR_hemi-R_den-32k_midthickness.surf.gii

0.00B [00:00, ?B/s]
1.00B [00:00, 649B/s]211115-18:43:31,806 nipype.workflow INFO:
	 [Node] Finished "reho_rh", elapsed time 34.404558s.
211115-18:43:31,806 nipype.workflow WARNING:
	 Storing result file without outputs
211115-18:43:31,832 nipype.workflow WARNING:
	 [Node] Error on "xcpabcd_wf.single_subject_NDARAA306NT2_wf.cifti_postprocess_1_wf.surface_reho_wf.reho_rh" (/workdir/xcpabcd_wf/single_subject_NDARAA306NT2_wf/cifti_postprocess_1_wf/surface_reho_wf/reho_rh)
211115-18:43:33,250 nipype.workflow ERROR:
	 Node reho_rh failed to run on host node021.
211115-18:43:33,260 nipype.workflow ERROR:
	 Saving crash info to /om4/group/gablab/data/HBN/derivatives/xcp_abcd/sub-NDARAA306NT2/log/crash-20211115-184333-smeisler-reho_rh-02a80c06-02a0-4fd4-9cdc-757d25e8240e.txt
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 521, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 639, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/pipeline/engine/nodes.py", line 750, in _run_command
    raise NodeExecutionError(
nipype.pipeline.engine.nodes.NodeExecutionError: Exception raised while executing Node reho_rh.

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nibabel/loadsave.py", line 42, in load
    stat_result = os.stat(filename)
FileNotFoundError: [Errno 2] No such file or directory: "[PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-R_den-32k_desc-vaavg_midthickness.shape.gii'), PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-R_den-32k_midthickness.surf.gii')]"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/site-packages/nipype/interfaces/base/core.py", line 398, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/interfaces/resting_state.py", line 69, in _run_interface
    write_gii(datat=reho_surf,template=self.inputs.surf_bold,
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/utils/write_save.py", line 133, in write_gii
    template = nb.load(template)
  File "/usr/local/miniconda/lib/python3.8/site-packages/nibabel/loadsave.py", line 44, in load
    raise FileNotFoundError(f"No such file or no access: '{filename}'")
FileNotFoundError: No such file or no access: '[PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-R_den-32k_desc-vaavg_midthickness.shape.gii'), PosixPath('/home/smeisler/.cache/templateflow/tpl-fsLR/tpl-fsLR_hemi-R_den-32k_midthickness.surf.gii')]'


211115-18:43:35,248 nipype.workflow ERROR:
	 could not run node: xcpabcd_wf.single_subject_NDARAA306NT2_wf.cifti_postprocess_1_wf.surface_reho_wf.reho_lh
211115-18:43:35,255 nipype.workflow ERROR:
	 could not run node: xcpabcd_wf.single_subject_NDARAA306NT2_wf.cifti_postprocess_1_wf.surface_reho_wf.reho_rh

Looking at my templateflow cache now, the file appears to be there, so I'm not sure why this error is occurring. I do not get errors like this when using other BIDS Apps that use templateflow.

All non-ReHo outputs are present, but I'm a little confused as to why the subject's HTML reports show no error.

I am currently running the same command but using volumetric outputs, and will update below if I get a similar error.
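A possible clue in the traceback above: the `FileNotFoundError` shows the string representation of a two-element list being passed to `nibabel.load`, and templateflow's `get()` returns a list when more than one file matches a query. A hedged workaround sketch, where the function and parameter names are purely illustrative (not part of the xcp_abcd codebase), is to select the single desired file before loading:

```python
from pathlib import Path

def pick_surface(candidates, suffix="midthickness.surf.gii"):
    """Select a single surface file from a templateflow query result.

    templateflow's get() returns a list when several files match;
    passing that list straight to nibabel.load raises the
    FileNotFoundError seen in the log above. `candidates` and
    `suffix` are illustrative names, not part of the xcp_abcd API.
    """
    if isinstance(candidates, (list, tuple)):
        matches = [p for p in candidates if str(p).endswith(suffix)]
        if len(matches) != 1:
            raise ValueError(f"expected exactly one match, got {matches}")
        return Path(matches[0])
    return Path(candidates)

files = [
    Path("tpl-fsLR/tpl-fsLR_hemi-L_den-32k_desc-vaavg_midthickness.shape.gii"),
    Path("tpl-fsLR/tpl-fsLR_hemi-L_den-32k_midthickness.surf.gii"),
]
print(pick_surface(files).name)  # tpl-fsLR_hemi-L_den-32k_midthickness.surf.gii
```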

detrend

Hello developer,

I am writing to confirm whether detrending is performed by xcp_abcd by default. It seems to be, based on the source code, but I did not find this info in the methods section of the sub-ID.html file.

The following is the code to do the detrend if I understand correctly.

xcp_abcd/interfaces/regression.py

line 78 ~ 80

orderx = np.floor(1 + data_matrix.shape[1] * self.inputs.tr / 150)
dd_data = demean_detrend_data(data=data_matrix, TR=self.inputs.tr, order=orderx)
confound = demean_detrend_data(data=confound, TR=self.inputs.tr, order=orderx)
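If I read those lines right, the polynomial order scales with scan duration, adding roughly one order per 150 s of data. A self-contained sketch of such a demean/detrend step follows; the function name and the (n_voxels, n_volumes) array layout are my assumptions, not the actual xcp_abcd implementation:

```python
import numpy as np

def demean_detrend(data, tr):
    """Remove the mean and polynomial trends from each row of data.

    data: (n_voxels, n_volumes); tr: repetition time in seconds.
    The order grows with scan length, as in the quoted source:
    order = floor(1 + n_volumes * TR / 150).
    """
    n_vols = data.shape[1]
    order = int(np.floor(1 + n_vols * tr / 150))
    t = np.arange(n_vols)
    # Fit a polynomial of that order to each row, then subtract it
    coefs = np.polynomial.polynomial.polyfit(t, data.T, deg=order)
    trend = np.polynomial.polynomial.polyval(t, coefs)
    return data - trend

# A pure linear trend (plus offset) is removed entirely
data = np.vstack([np.arange(100.0), 2 * np.arange(100.0) + 5])
assert np.allclose(demean_detrend(data, tr=2.0), 0.0, atol=1e-8)
```

Note that the polynomial fit subsumes demeaning (the order-0 term), which may be why no separate "demean" step appears in the report.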

The following is the method section in the sub-ID.html file.
Post-processing of fMRIPrep outputs

For each of the nine CIFTI runs found per subject (across all tasks and sessions), the following post-processing was performed: before nuisance regression and filtering, any volumes with framewise displacement greater than 10.0 mm (Power et al. 2014; Satterthwaite et al. 2013) were flagged as outliers and excluded from nuisance regression. In total, 36 nuisance regressors were selected from the nuisance confound matrices of the fMRIPrep output. These nuisance regressors included six motion parameters, the global signal, the mean white matter signal, and the mean CSF signal with their temporal derivatives, plus the quadratic expansion of the six motion parameters, tissue signals, and their temporal derivatives (Ciric et al. 2018, 2017; Satterthwaite et al. 2013). These nuisance regressors were regressed from the BOLD data using linear regression, as implemented in scikit-learn 0.24.2 (Pedregosa et al. 2011). Residual time series from this regression were then band-pass filtered to retain signals within the 0.01-0.08 Hz frequency band. The processed BOLD was smoothed using Connectome Workbench with a Gaussian kernel size of 6.0 mm (FWHM).

Many thanks for your confirmation.

Xiuyi
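The methods paragraph quoted above describes two steps: least-squares removal of the 36 nuisance regressors, then a 0.01-0.08 Hz band-pass. A minimal numpy/scipy sketch of that sequence follows; the real pipeline uses scikit-learn for the regression, and the simple second-order Butterworth design here is my assumption, not xcp_abcd's actual filter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def regress_and_bandpass(bold, confounds, tr, low=0.01, high=0.08):
    """Nuisance regression followed by band-pass filtering (sketch).

    bold: (n_volumes, n_voxels); confounds: (n_volumes, n_regressors).
    Illustrates the two steps in the methods text, not the pipeline's
    exact implementation.
    """
    # Add an intercept and regress the confounds out of the BOLD data
    design = np.column_stack([np.ones(len(confounds)), confounds])
    beta, *_ = np.linalg.lstsq(design, bold, rcond=None)
    residuals = bold - design @ beta
    # Band-pass the residuals with a zero-phase Butterworth filter
    nyquist = 0.5 / tr
    b, a = butter(2, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, residuals, axis=0)
```

Running both steps on the residuals (rather than the raw data) matches the order stated in the report: censor, regress, then filter.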

"mem_mb" argument

Hi xcp_abcd team,

Just a quick report that the documentation page says the CLI argument is "mem_mb" when in reality the code accepts "mem_gb".

Thanks,
Steven

Grayplots - Concatenate Across Runs

As discussed at 11/19/2021 meeting, the executive summary from DCAN Lab historically included concatenated grayplots across runs. Is it possible to add this feature to the XCP executive summary output page?

ReadTheDocs on front page of github

Could you add a comprehensive description of what this codebase does? Maybe it could go at the top of the ReadTheDocs and on GitHub?
We felt like one would have to be a neuroimaging expert or collaborating with the UPenn team to use this codebase.
It might be helpful to have a tree with different things you can run to input into xcp_abcd and also where you could use the outputs.
What does the future look like in terms of incorporating xcp_abcd with XCP Engine? https://xcpengine.readthedocs.io/
Is this an extension of XCP Engine? If so, it would be good to show where we are in the workflow pipeline in the XCP ecosphere.

run.py

Looked over run.py.
Line 5 - should this say post-processing? Otherwise, clarity on what the terminology means would be helpful.
Line 197 - an explanation of the numbers or a variable definition is needed.
Line 223 - should we log an error if this happens?
Lines 281/294 - could we put more information about the actual error?
Lines 285-298 - looks nearly identical to the preceding block (lines 269-284). Since only a handful of things differ, it would make more sense to extract a function and call it twice with different parameters.

qc report

  • include GM, WM, CSF label in carpet plot for nifti
  • cortical and subcortical label in cifti
  • brainsprite to view reho/alff @mattcieslak
  • functional connectivity plot in seaborn @mb3152

Grayplot scaling on Executive Summary.

Discussed at meeting on 11/19/2021 - The executive summary that is currently functioning does not have scaled gray plots. DCAN Lab generally scaled at +/- 600. We also need to report what the scaling represents on the html output file so anybody else who views it is not confused. Reach out to Eric Feczko if you need help.

input dir and template

inputdir output participant [--input-type] and [--brain-template]. Could both nibabies and dcan runs be under the same flag (e.g. --input-type)?
Could we code it so that --input-type nibabies automatically sets --brain-template to MNI152NLin6Asym?
--brain-template inputs need a bit more description. Also, do we need extensions specified in the documentation (e.g. .nii.gz)? Can we feed in a different brain template?
--brain-template option: are you using TemplateFlow somewhere in this? Is the name of the template used to resolve the appropriate TemplateFlow files?

template space

process different templates, native and anat spaces

  • anat
  • native
  • MNI2009 - default fmriprep
  • MNI2006 FSL
  • OASIS
  • NKI
  • PNC

notebooks

/notebooks
preprocessing.ipynb --> is this truly preprocessing, post-processing, or something else?
Is it running?
Maybe mention the notebooks/use cases in the README.

workflow/base.py:

workflow/base.py:
a lot of arguments to be fed in - maybe use a structure, such as a dictionary or class?
323-325: add to --help for assistance
362-434: a lot of arguments, not very commented, perhaps redundant code
the rest is also not very commented
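One way to tame a long argument list, as suggested above, is a small config object. Everything in this sketch (class name, fields, defaults) is hypothetical and only illustrates the pattern; it does not reflect xcp_abcd's actual signatures:

```python
from dataclasses import dataclass

@dataclass
class WorkflowConfig:
    """Bundle init_*_wf arguments into one object (illustrative).

    Field names are examples based on flags seen elsewhere in these
    issues, not the real xcp_abcd parameters.
    """
    lower_bpf: float = 0.01
    upper_bpf: float = 0.08
    smoothing: float = 6.0
    despike: bool = False
    params: str = "36P"

def init_postprocess_wf(config: WorkflowConfig):
    # One parameter instead of a long positional list
    return f"bandpass {config.lower_bpf}-{config.upper_bpf} Hz, {config.params}"

print(init_postprocess_wf(WorkflowConfig(despike=True)))  # bandpass 0.01-0.08 Hz, 36P
```

A dataclass also gives free documentation (field names and defaults appear in `--help`-style introspection and reprs), addressing the "not very commented" concern.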

respiratory filter

Add the bandstop/respiratory filter tables to the Usage Notes, since they are mentioned on the General Workflow page.

framewise displacement threshold (-f FD_THRESH flag)

Do we include a framewise displacement threshold (-f FD_THRESH flag)?
Does it allow us to set thresholds?
Perhaps indicate exactly how the fd_threshold is being used.
Do these operate the same way as DCAN bold proc?

head_radius

If we include FD, how do we use the --head_radius flag? - Documentation only says: "head radius for computing FD, it is 35mm for baby".
Do we input "35"? Or "35 mm"? What is the exact call?
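For reference on both questions above: framewise displacement is conventionally computed as in Power et al. (2014), and --head_radius presumably takes a plain number in mm (i.e. 35, not "35 mm"), though that should be confirmed in the docs. A sketch of the standard definition, not necessarily xcp_abcd's exact code path:

```python
import numpy as np

def framewise_displacement(motion_params, head_radius=50.0):
    """Power et al. (2014) framewise displacement (sketch).

    motion_params: (n_volumes, 6) array of [trans_x, trans_y, trans_z,
    rot_x, rot_y, rot_z], translations in mm and rotations in radians.
    head_radius (mm) converts rotation angles to arc length; the docs
    cite 35 mm for infants. Illustrative, not the xcp_abcd source.
    """
    diffs = np.abs(np.diff(motion_params, axis=0))
    fd = diffs[:, :3].sum(axis=1) + head_radius * diffs[:, 3:].sum(axis=1)
    # FD is undefined for the first volume; conventionally set to 0
    return np.concatenate([[0.0], fd])
```

Under this definition, the -f FD_THRESH flag would flag any volume whose FD exceeds the threshold as an outlier for censoring.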

report

label the report
executive summary

respiratory filter

Is the respiratory filter part of the --band-stop flag? Can we include our own?

Default smoothing 5 or 6 mm FWHM?

When using the default smoothing option, the HTML output says a 6 mm kernel was used, but the online documentation says 5 mm. Thought I should point out the discrepancy.

Thanks,
Steven
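Whichever value turns out to be the intended default, the 1 mm difference is meaningful: a Gaussian kernel's FWHM relates to its standard deviation by FWHM = 2 * sqrt(2 ln 2) * sigma ≈ 2.355 * sigma, so 5 mm vs 6 mm shifts sigma by about 0.4 mm. A quick conversion:

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian kernel's FWHM to its standard deviation.

    FWHM = 2 * sqrt(2 * ln 2) * sigma, so a 6 mm FWHM kernel has
    sigma of about 2.548 mm and a 5 mm kernel about 2.123 mm.
    """
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

print(round(fwhm_to_sigma(6.0), 3))  # 2.548
```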

Font size increased?

As discussed at meeting on 11/19/2021 - Damien suggested that the font size be increased on the XCP executive summary html page.

Using other MNI Spaces

Hello,

I am trying to use this software with fMRIPrep outputs in MNIPediatricAsym_cohort-1 space (T1w-aligned outputs are also available but not of interest). Here is how I run XCP_ABCD on HPC:
singularity run -e -B ${scratch}:/workdir -B ${bids_dir} $IMG ${fmriprep_dir} ${output_dir} participant --participant_label ${subject:4} -w /workdir -t Phon --lower-bpf 0.01 --upper-bpf 0.08 -p 36P -f 1 --smoothing 5

I get the following error after the pybids layout is made:

space not supported
Process Process-2:
Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/local/miniconda/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/cli/run.py", line 455, in build_workflow
    retval['workflow'] = init_xcpabcd_wf (
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/workflow/base.py", line 147, in init_xcpabcd_wf
    single_bold_wf = init_single_bold_wf(
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/workflow/base.py", line 380, in init_single_bold_wf
    bold_postproc_wf = init_boldpostprocess_wf(bold_file=bold_file,
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/workflow/bold.py", line 243, in init_boldpostprocess_wf
    fcon_ts_wf = init_fcon_ts_wf(mem_gb=mem_gbx['timeseries'],mni_to_t1w=mni_to_t1w,
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/workflow/connectivity.py", line 119, in init_fcon_ts_wf
    transformfile = get_transformfile(bold_file=bold_file, mni_to_t1w=mni_to_t1w,
  File "/usr/local/miniconda/lib/python3.8/site-packages/xcp_abcd/utils/utils.py", line 110, in get_transformfile
    return transformfile
UnboundLocalError: local variable 'transformfile' referenced before assignment

Do you have any guidance for how to proceed?

Thanks,
Steven
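The traceback above ("space not supported" followed by an UnboundLocalError) suggests that get_transformfile assigns `transformfile` only for recognized spaces, so an unrecognized one (here MNIPediatricAsym) falls through to the bare return. Raising explicitly would give users a clearer message. The names and mapping below are hypothetical, sketching only the pattern, not the real xcp_abcd API:

```python
def get_transform_for_space(space):
    """Map an output space to a transform (illustrative sketch).

    Raising a ValueError for unknown spaces replaces the confusing
    UnboundLocalError with an actionable message. The dict contents
    are placeholders, not xcp_abcd's actual transform files.
    """
    transforms = {
        "MNI152NLin2009cAsym": "identity",
        "MNI152NLin6Asym": "MNI6-to-MNI2009 transform",
    }
    try:
        return transforms[space]
    except KeyError:
        raise ValueError(f"Output space '{space}' is not supported") from None
```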

module nifti_smoothing has no output called out_file

Hi experts,
I ran an analysis with xcpabcd (version 0.0.1), and the log file threw this error at me. Then I changed to the latest version on Docker Hub, and everything seems OK. So this may be a bug in version 0.0.1, right?

Here is my code to run the analysis:

#!/bin/bash
#User inputs:
bids_root_dir=/media/jinbo/datapool/lab/HBN_Base/ds_01_CUNY/BIDS/rsfMRI
bids_root_dir_output_wd4singularity=/media/jinbo/datapool/lab/HBN_Base/ds_01_CUNY_wd/rsfMRI
subj=$1
nthreads=4

#Run xcp_abcd
echo ""
echo "Running xcp_abcd on participant: sub-$subj"
echo ""

#Make xcpabcd directory and participant directory in derivatives folder
if [ ! -d $bids_root_dir/derivatives/xcp_abcd ]; then
    mkdir $bids_root_dir/derivatives/xcp_abcd
fi

if [ ! -d $bids_root_dir/derivatives/xcp_abcd/sub-${subj} ]; then
    mkdir $bids_root_dir/derivatives/xcp_abcd/sub-${subj}
fi
if [ ! -d $bids_root_dir_output_wd4singularity/derivatives/xcp_abcd ]; then
    mkdir $bids_root_dir_output_wd4singularity/derivatives/xcp_abcd
fi

if [ ! -d $bids_root_dir_output_wd4singularity/derivatives/xcp_abcd/sub-${subj} ]; then
    mkdir $bids_root_dir_output_wd4singularity/derivatives/xcp_abcd/sub-${subj}
fi

#Run XCP-ABCD
export SINGULARITYENV_TEMPLATEFLOW_HOME=/home/jinbo/cache_fmriprep/templateflow
unset PYTHONPATH; singularity run --cleanenv --bind $bids_root_dir \
    /media/jinbo/codepool/sigularity_build_wd/xcp_abcd.simg \
    -w $bids_root_dir_output_wd4singularity/derivatives/xcp_abcd/sub-${subj} \
    --resource-monitor \
    --participant_label ${subj} \
    --nthreads $nthreads \
    --omp-nthreads $nthreads \
    --cifti \
    --smoothing 6 \
    -p 36P \
    --despike \
    --lower-bpf 0.01 --upper-bpf 0.08 \
    $bids_root_dir/derivatives/fmriprep $bids_root_dir/derivatives/xcp_abcd/sub-${subj} \
    participant

singularity and docker

Could the "Running a Singularity Image" section link to the appropriate spot in the Usage Notes for flags?

General workflow

In DCAN BOLD Proc, when we use outlier detection we actually use the despiking listed in step 3. The language is a bit confusing: does temporal censoring happen before despiking?
Are steps 3 and 4 the detection, and step 5 the removal?
We may need to have a conversation about how xcp-abcd is similar to and different from dcan_bold_proc.
