
fmriflows is a consortium of many (dependent) fMRI analysis pipelines, including anatomical and functional pre-processing, univariate 1st and 2nd-level analysis, as well as multivariate pattern analysis.

License: BSD 3-Clause "New" or "Revised" License


fmriflows's Introduction


fMRIflows

fMRIflows is a collection of fully autonomous uni- & multivariate fMRI processing pipelines. This comprises anatomical and functional preprocessing, estimation of signal confounds, as well as univariate and multivariate analysis on the subject and group level.

fMRIflows was greatly inspired by fmriprep and other open-source projects and borrows strongly from their ideas and solutions. But while fmriprep can be described as a "glass box" software, fMRIflows is more like a shoebox: simple to open, easy to understand what's inside, and uncomplicated to replace and change components. This is because all of fMRIflows' source code is stored within the notebooks.

If you are using fMRIflows in your publication, please contact the author Michael Notter to get further information about how to cite this toolbox, as the publication is currently in preparation for submission.

Installation and Usage

Using Containers

The best way to use fMRIflows is to run it directly within the corresponding container (Docker or Singularity). The docker image for this can be downloaded from Docker Hub with the command docker pull miykael/fmriflows, and can be run with the command:

docker run -it --rm -p 9999:8888 -v /home/user/ds001:/data miykael/fmriflows
  • The -it flag runs docker in interactive mode.
  • The --rm flag removes the docker container (not the image!) from your system once you close the container.
  • The -p flag specifies which port is used to access the jupyter notebook inside the docker container. The first port number can be changed (we recommend 9999), but the second half needs to stay :8888.
  • The -v flag tells docker which folder to mount within the docker container. The path /home/user/ds001 should therefore point to your BIDS-conform dataset.

Once this command has been executed (note: the terminal will not return to a new input prompt, but stays on this command), open the link http://127.0.0.1:9999/?token=fmriflows or http://localhost:9999/?token=fmriflows in your web browser to access the jupyter notebooks required to run fMRIflows.

Note: Should you encounter a web page that asks for "Password or token", use the token fmriflows to log in.

Using your native environment

It is also possible to run fMRIflows outside of a container; all the important code is stored within the notebooks. For this, you need to make sure that all necessary neuroimaging and python dependencies are installed on your system. For a list of what needs to be done, see the generating file.

fMRIflows pipelines

fMRIflows contains many different pipelines. The following is a short summary and explanation of each individual notebook.

01_spec_preparation.ipynb

The notebook 01_spec_preparation.ipynb helps you to create the JSON files that contain the specification to run fMRIflows. This is the most direct way to specify the processing parameters for fMRIflows. Open the notebook, run it, change a few parameters as you like, and you're good to go. Alternatively, you can also directly change the JSON specification file within your dataset folder, as shown below (but you need to run 01_spec_preparation.ipynb at least once for this to work).
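For example, a minimal sketch of editing the specification file directly from Python (the file path and key layout here are assumptions for illustration; the parameter names follow the lists in the sections below):

import json

# Hypothetical path: the spec file created by 01_spec_preparation.ipynb
# inside your BIDS dataset folder (mounted at /data in the container)
spec_file = '/data/fmriflows_spec_preproc.json'

with open(spec_file) as f:
    spec = json.load(f)

# Example: request a different voxel resolution after normalization
spec['res_norm'] = [1.0, 1.0, 1.0]

with open(spec_file, 'w') as f:
    json.dump(spec, f, indent=4)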

02_preproc_anat.ipynb

The notebook 02_preproc_anat.ipynb performs anatomical preprocessing. Parameters that can be specified in the fmriflows_spec_preproc.json file are the following (see the example below the list):

  • subject_list_anat: List of subject identifiers to preprocess
  • session_list_anat: List of session identifiers to preprocess (set to [] if the dataset doesn't contain session identifiers)
  • T1w_id: T1-image identifier, usually T1w. For an MP2RAGE acquisition, this can for example be changed to T1w_UNI-DEN
  • res_norm: Requested voxel resolution after normalization
  • norm_accuracy: Precision of the normalization; either precise or fast
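For orientation, a hypothetical excerpt of the anatomical part of fmriflows_spec_preproc.json, written as a Python dict (all values are illustrative, not toolbox defaults):

# Illustrative values only; key names follow the list above
anat_spec = {
    'subject_list_anat': ['01', '02'],   # subject identifiers
    'session_list_anat': [],             # [] if no session identifiers
    'T1w_id': 'T1w',                     # or e.g. 'T1w_UNI-DEN' for MP2RAGE
    'res_norm': [1.0, 1.0, 1.0],         # voxel resolution after normalization
    'norm_accuracy': 'precise',          # 'precise' or 'fast'
}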

The simplified workflow of the anatomical preprocessing looks as follows:

After anatomical preprocessing the following output figures are generated:

Tissue Segmentation of T1w

This panel shows the quality of the T1-weighted image segmentation, with brain tissues highlighted in green for Gray Matter (GM), in beige for White Matter (WM) and in blue for Cerebrospinal Fluid (CSF).

Brain Extraction of T1w

This panel shows the quality of the brain extraction. The brain highlighted in red is overlaid on the subject-specific T1-weighted image in gray.

T1w to MNI registration

This panel shows the quality of the nonlinear template registration. The normalized brain highlighted in yellow is overlaid on the ICBM T1-weighted template image in gray. Regions in red and blue show negative and positive deformation discrepancies between the normalized subject image and the template.

03_preproc_func.ipynb

The notebook 03_preproc_func.ipynb performs functional preprocessing. Parameters that can be specified in the fmriflows_spec_preproc.json file are the following (see the example below the list):

  • subject_list_func: List of subject identifiers to preprocess
  • session_list_func: List of session identifiers to preprocess (set to [] if the dataset doesn't contain session identifiers)
  • task_list: List of task identifiers to preprocess
  • run_list: List of run identifiers (int) to preprocess (set to [] if the dataset doesn't contain run identifiers)
  • ref_timepoint: Reference timepoint for slice-time correction, in milliseconds
  • res_func: Isometric voxel resolution to resample to after image coregistration
  • filters_spatial: List of spatial filters to apply. Use LP for lowpass, HP for highpass and BP for bandpass spatial filtering. For example, [[LP, 6.0], [HP, 2.0]] will run the analysis once with a spatial lowpass filter of FWHM=6mm and once with a spatial highpass filter of FWHM=2mm.
  • filters_temporal: List of temporal filters to apply. The first element defines the lowpass, the second the highpass cutoff; use null to skip a given filter. For example, [[5.0, 100.0], [null, 100.0]] will run the analysis once with a temporal lowpass filter of 5Hz and a highpass filter of 100Hz, and once with only a temporal highpass filter of 100Hz.
  • n_compcor_confounds: Number of anatomical and temporal CompCor components to compute. Default is 5.
  • outlier_thresholds: Thresholds used to perform outlier detection on FD, DVARS, TV, GM, WM and CSF. Use null if a parameter shouldn't be used for outlier detection.
  • n_independent_components: Number of ICA components that should be estimated. Default is 10.
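Correspondingly, a hypothetical excerpt of the functional part of the spec file as a Python dict (illustrative values only; note that json.dump writes Python's None as JSON null):

# Illustrative values only; key names follow the list above
func_spec = {
    'subject_list_func': ['01', '02'],
    'session_list_func': [],
    'task_list': ['rest'],
    'run_list': [],
    'ref_timepoint': 1000,                          # milliseconds
    'res_func': 2.0,                                # isometric resolution
    'filters_spatial': [['LP', 6.0], ['HP', 2.0]],
    'filters_temporal': [[5.0, 100.0], [None, 100.0]],
    'n_compcor_confounds': 5,
    'outlier_thresholds': [0.5, 1.5, None, None, None, None],  # FD, DVARS, TV, GM, WM, CSF
    'n_independent_components': 10,
}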

The simplified workflow of the functional preprocessing looks as follows:

In addition to preprocessing, the following signal confounds are computed:

  • Friston's 24-parameter model for motion parameters
  • Framewise Displacement (FD) and DVARS (a minimal FD sketch follows this list)
  • Average signal in the total volume (TV), in GM, in WM and in CSF
  • Anatomical and temporal CompCor components
  • Independent components in the image before smoothing
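For illustration, a minimal numpy sketch of Power-style FD (a generic textbook formulation, not necessarily fMRIflows' exact implementation; it assumes the first three columns are rotations in radians and uses the common 50 mm head radius):

import numpy as np

def framewise_displacement(motion, radius=50.0):
    # FD as in Power et al. (2012): sum of absolute backward differences
    # of the six motion parameters; rotations (radians) are converted to
    # arc length (mm) on a sphere of the given radius.
    # motion: array of shape (n_volumes, 6); assumed column order:
    # 3 rotations first, then 3 translations.
    diff = np.abs(np.diff(np.asarray(motion, dtype=float), axis=0))
    diff[:, :3] *= radius                     # radians -> mm
    return np.concatenate([[0.0], diff.sum(axis=1)])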

After functional preprocessing the following output figures are generated:

Mean Image and CompCor Masks

This panel shows the mean functional image in gray, the brainmask (red) that was used to compute DVARS, and the temporal (green) and anatomical (blue) CompCor masks.

Carpet plot of the temporally filtered functional image

This panel shows the signal of (almost) every voxel (y-axis) over time in volumes (x-axis). The panel shows voxels in the gray matter (top part), white matter (between the blue and red lines) and CSF (bottom section). The data were standardized to the global signal and ordered within a given region according to their correlation with the average signal.
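For readers who want a comparable figure outside of fMRIflows, nilearn ships a carpet-plot helper; a minimal sketch with placeholder paths (this is nilearn's plot_carpet, not fMRIflows' own plotting code):

from nilearn.plotting import plot_carpet

# Placeholder paths to a 4D functional image and a matching brain mask
plot_carpet('func_preproc.nii.gz', mask_img='brainmask.nii.gz',
            title='Carpet plot', output_file='carpet_plot.svg')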

Main confounds with highlighted outliers

This panel shows the main confounds: FD (Framewise Displacement) and DVARS, as well as the average signal in TV (total brain volume), GM (gray matter), WM (white matter) and CSF (cerebrospinal fluid). Vertical lines in black indicate outliers, as defined by the threshold values.

Motion Parameters

This panel shows the motion parameters (3 rotation and 3 translation parameters). Colored lines are motion parameters after the application of a temporal low-pass filter at 5Hz. Gray lines indicate motion parameters before the temporal filter was applied.

Anatomical CompCor Components

This panel shows the anatomical CompCor components.

Temporal CompCor Components

This panel shows the temporal CompCor components.

ICA Components

This panel shows the ICA components. The left side of the figure shows the correlation between each component and the functional image over time. The right side shows the power spectral density of the component, with values in Hz on the x-axis.

This panel shows the loading of the ICA components within the brain volume.

Feedback, Help & Support

If you want to help with this project or have any questions, feel free to fork the repo, send us a pull request or open a new issue on GitHub. Any help is highly appreciated; every feature request or bug report is very welcome.

Thanks and Acknowledgment

We would like to thank everybody who is developing code for Nipype, fmriprep, Nilearn, PyMVPA and many other related open-source toolboxes. We were able to integrate their knowledge, code and approaches into fMRIflows because their code is open.

Thanks also to all the contributors, testers, bug reporters and feedback givers to fMRIflows. And a special thanks to Faruk, who created the great fMRIflows logo.

fmriflows's People

Contributors

ilkayisik, miykael, ofgulban, peerherholz, yarikoptic


fmriflows's Issues

html reports dependency

The generation of html summary reports currently depends on the preproc_anat workflow; reports are absent if only the functional preprocessing or any other workflow is run.

We should change this behavior so that the html summary reports are generated independently of which workflow is run.

How to do group analysis with SpaceNet?

As mentioned in this neurostars post: it is possible to do a standard 2nd-level analysis on the whole-brain weights, using FDR to correct for multiple comparisons. But if I understand correctly, this means that SpaceNet would need to be run on each subject individually?

But I'm not 100% sure how to do the FDR correction. Does it imply bootstrapping the process (with randomized labels) and then estimating cluster extent with FDR correction, i.e. as done in Stelzer et al. (2013)?

High-Pass value in CompCor estimation

CompCor accepts a high_pass_cutoff value (see here) that is by default 128s. This should be adjusted to the value given by the user (if specified).

But it seems that this only takes effect if one uses pre_filter = 'cosine'; by default, pre_filter is 'polynomial'.

This is also related to issue #17.
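A hedged sketch of what the fix could look like with nipype's ACompCor interface (interface and input names as in nipype; file paths are placeholders, and the actual node wiring in fMRIflows may differ):

from nipype.algorithms.confounds import ACompCor

acc = ACompCor()
acc.inputs.realigned_file = 'func_realigned.nii.gz'   # placeholder
acc.inputs.mask_files = ['acompcor_mask.nii.gz']      # placeholder
acc.inputs.num_components = 5
# The user-specified cutoff only takes effect with the cosine pre-filter;
# the default pre_filter='polynomial' ignores high_pass_cutoff
acc.inputs.pre_filter = 'cosine'
acc.inputs.high_pass_cutoff = 128                     # seconds
acc.inputs.repetition_time = 2.0                      # needed for the cosine filter
result = acc.run()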

Error running workflow in 01_preproc_anat

I'm exploring nipype as a means to implement our rsfMRI preprocessing in the near future, and am testing fmriflows with a small BIDS dataset to get a sense of how it works before I go about customizing containers. Unfortunately the workflow keeps failing at the antsRegistration stage, and I'm unsure of how to address the issue. Here is the error I'm seeing in the console:

190128-17:14:28,154 nipype.workflow INFO:
Workflow preproc_anat settings: ['check', 'execution', 'logging', 'monitoring']
190128-17:14:28,175 nipype.workflow INFO:
Running serially.
190128-17:14:28,176 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.select_files" in "/workingdir/preproc_anat/_session_id_01_subject_id_102/select_files".
190128-17:14:28,179 nipype.workflow INFO:
[Node] Running "select_files" ("nipype.interfaces.utility.wrappers.Function")
190128-17:14:28,274 nipype.workflow INFO:
[Node] Finished "preproc_anat.select_files".
190128-17:14:28,275 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.reorient" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/reorient".
190128-17:14:28,277 nipype.workflow INFO:
[Node] Running "reorient" ("nipype.interfaces.image.Reorient")
190128-17:14:28,310 nipype.workflow INFO:
[Node] Finished "preproc_anat.mainflow.reorient".
190128-17:14:28,311 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.crop_FOV" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/crop_FOV".
190128-17:14:28,314 nipype.workflow INFO:
[Node] Running "crop_FOV" ("nipype.interfaces.fsl.utils.RobustFOV"), a CommandLine Interface with command:
robustfov -i /data/sub-102/ses-01/anat/sub-102_ses-01_highres-T1w.nii.gz -r sub-102_ses-01_highres-T1w_ROI.nii.gz -m sub-102_ses-01_highres-T1w_to_ROI.nii.gz
/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/grabbit/core.py:436: ResourceWarning: unclosed file <_io.TextIOWrapper name='/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/bids/layout/config/bids.json' mode='r' encoding='UTF-8'>
domain = json.load(open(domain, 'r'))
190128-17:14:32,426 nipype.interface INFO:
stdout 2019-01-28T17:14:32.426646:Final FOV is:
190128-17:14:32,427 nipype.interface INFO:
stdout 2019-01-28T17:14:32.426646:0.000000 176.000000 0.000000 256.000000 61.000000 170.000000
190128-17:14:32,428 nipype.interface INFO:
stdout 2019-01-28T17:14:32.426646:
190128-17:14:32,484 nipype.workflow INFO:
[Node] Finished "preproc_anat.mainflow.crop_FOV".
190128-17:14:32,485 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.n4" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/n4".
190128-17:14:32,492 nipype.workflow INFO:
[Node] Running "n4" ("nipype.interfaces.ants.segmentation.N4BiasFieldCorrection"), a CommandLine Interface with command:
N4BiasFieldCorrection -d 3 --input-image /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/crop_FOV/sub-102_ses-01_highres-T1w_ROI.nii.gz --output sub-102_ses-01_highres-T1w_ROI_corrected.nii.gz
190128-17:15:57,189 nipype.workflow INFO:
[Node] Finished "preproc_anat.mainflow.n4".
190128-17:15:57,190 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.gunzip" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/gunzip".
190128-17:15:57,196 nipype.workflow INFO:
[Node] Running "gunzip" ("nipype.algorithms.misc.Gunzip")
190128-17:15:57,473 nipype.workflow INFO:
[Node] Finished "preproc_anat.mainflow.gunzip".
190128-17:15:57,474 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.segment" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/segment".
190128-17:15:57,480 nipype.workflow INFO:
[Node] Running "segment" ("nipype.interfaces.spm.preprocess.NewSegment")
190128-17:20:30,601 nipype.workflow INFO:
[Node] Finished "preproc_anat.mainflow.segment".
190128-17:20:30,602 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.compressor" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/compressor".
190128-17:20:30,614 nipype.workflow INFO:
[Node] Running "compressor" ("nipype.interfaces.utility.wrappers.Function")
190128-17:20:32,192 nipype.workflow INFO:
[Node] Finished "preproc_anat.mainflow.compressor".
190128-17:20:32,193 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.extract_brain" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain".
190128-17:20:32,199 nipype.workflow INFO:
[Node] Running "extract_brain" ("nipype.interfaces.utility.wrappers.Function")
190128-17:20:34,660 nipype.workflow INFO:
[Node] Finished "preproc_anat.mainflow.extract_brain".
190128-17:20:34,661 nipype.workflow INFO:
[Node] Setting-up "preproc_anat.mainflow.antsreg" in "/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/antsreg".
190128-17:20:34,675 nipype.workflow INFO:
[Node] Running "antsreg" ("nipype.interfaces.ants.registration.Registration"), a CommandLine Interface with command:
antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1 ] --initialize-transforms-per-stage 0 --interpolation BSpline --output [ transform, transform_Warped.nii.gz, transform_InverseWarped.nii.gz ] --transform Rigid[ 0.01 ] --metric Mattes[ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1, 32, Random, 0.15 ] --convergence [ 1000, 1e-06, 20 ] --smoothing-sigmas 4.0vox --shrink-factors 4 --use-estimate-learning-rate-once 1 --use-histogram-matching 1 --transform Affine[ 0.08 ] --metric Mattes[ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1, 32, Regular, 0.15 ] --convergence [ 500x250x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x0.0vox --shrink-factors 4x2x1 --use-estimate-learning-rate-once 1 --use-histogram-matching 1 --transform SyN[ 0.1, 3.0, 0.0 ] --metric Mattes[ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 50x20, 1e-06, 10 ] --smoothing-sigmas 1.0x0.0vox --shrink-factors 2x1 --use-estimate-learning-rate-once 1 --use-histogram-matching 1 -v --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1
190128-17:28:28,514 nipype.workflow WARNING:
[Node] Error on "preproc_anat.mainflow.antsreg" (/workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/antsreg)
190128-17:28:28,550 nipype.workflow ERROR:
Node antsreg.a0 failed to run on host a7e1d17d7382.
190128-17:28:28,634 nipype.workflow ERROR:
Saving crash info to /home/neuro/notebooks/crash-20190128-172828-neuro-antsreg.a0-885c3770-57a8-41b0-8a77-ae2c5b771e3c.pklz
Traceback (most recent call last):
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/pipeline/plugins/linear.py", line 48, in run
node.run(updatehash=updatehash)
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
result = self._run_interface(execute=True)
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
return self._run_command(execute)
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
result = self._interface.run(cwd=outdir)
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 369, in run
runtime = self._run_interface(runtime)
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/interfaces/ants/registration.py", line 940, in _run_interface
runtime = super(Registration, self)._run_interface(runtime)
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 752, in _run_interface
self.raise_exception(runtime)
File "/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 689, in raise_exception
).format(**runtime.dictcopy()))
RuntimeError: Command:
antsRegistration --collapse-output-transforms 1 --dimensionality 3 --float 0 --initial-moving-transform [ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1 ] --initialize-transforms-per-stage 0 --interpolation BSpline --output [ transform, transform_Warped.nii.gz, transform_InverseWarped.nii.gz ] --transform Rigid[ 0.01 ] --metric Mattes[ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1, 32, Random, 0.15 ] --convergence [ 1000, 1e-06, 20 ] --smoothing-sigmas 4.0vox --shrink-factors 4 --use-estimate-learning-rate-once 1 --use-histogram-matching 1 --transform Affine[ 0.08 ] --metric Mattes[ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1, 32, Regular, 0.15 ] --convergence [ 500x250x100, 1e-06, 20 ] --smoothing-sigmas 4.0x2.0x0.0vox --shrink-factors 4x2x1 --use-estimate-learning-rate-once 1 --use-histogram-matching 1 --transform SyN[ 0.1, 3.0, 0.0 ] --metric Mattes[ /templates/mni_icbm152_nlin_asym_09c/template_brain_1.0_1.0_1.0.nii.gz, /workingdir/preproc_anat/mainflow/_session_id_01_subject_id_102/extract_brain/sub-102_ses-01_highres-T1w_ROI_corrected_brain.nii.gz, 1, 56, Regular, 0.25 ] --convergence [ 50x20, 1e-06, 10 ] --smoothing-sigmas 1.0x0.0vox --shrink-factors 2x1 --use-estimate-learning-rate-once 1 --use-histogram-matching 1 -v --winsorize-image-intensities [ 0.005, 0.995 ] --write-composite-transform 1
Standard output:

Standard error:

Return code: 137

190128-17:28:28,640 nipype.workflow INFO:


190128-17:28:28,641 nipype.workflow ERROR:
could not run node: preproc_anat.mainflow.antsreg.a0
190128-17:28:28,643 nipype.workflow INFO:
crashfile: /home/neuro/notebooks/crash-20190128-172828-neuro-antsreg.a0-885c3770-57a8-41b0-8a77-ae2c5b771e3c.pklz
190128-17:28:28,643 nipype.workflow INFO:



RuntimeError Traceback (most recent call last)
in
1 # Run the workflow in sequential mode
----> 2 res = preproc_anat.run('Linear')

/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/pipeline/engine/workflows.py in run(self, plugin, plugin_args, updatehash)
597 if str2bool(self.config['execution']['create_report']):
598 self._write_report_info(self.base_dir, self.name, execgraph)
--> 599 runner.run(execgraph, updatehash=updatehash, config=self.config)
600 datestr = datetime.utcnow().strftime('%Y%m%dT%H%M%S')
601 if str2bool(self.config['execution']['write_provenance']):

/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/pipeline/plugins/linear.py in run(self, graph, config, updatehash)
69
70 os.chdir(old_wd) # Return wherever we were before
---> 71 report_nodes_not_run(notrun)

/opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/nipype/pipeline/plugins/tools.py in report_nodes_not_run(notrun)
80 logger.debug(subnode._id)
81 logger.info("***********************************")
---> 82 raise RuntimeError(('Workflow did not execute cleanly. '
83 'Check log for details'))
84

RuntimeError: Workflow did not execute cleanly. Check log for details

The log has the following error message:

Error! /home/neuro/notebooks/crash-20190128-172828-neuro-antsreg.a0-885c3770-57a8-41b0-8a77-ae2c5b771e3c.pklz is not UTF-8 encoded
Saving disabled.
See Console for more details.

I'm also a new python user, and I'd greatly appreciate any help or guidance, thank you!

antsApplyTransforms error in 03_preproc_func part

WARNING message

230209-11:56:53,552 nipype.interface INFO:
stderr 2023-02-09T11:56:53.551912:WARNING: In /usr/include/ITK-4.10/itkTransformFactoryBase.h, line 81
230209-11:56:53,554 nipype.interface INFO:
stderr 2023-02-09T11:56:53.551912:TransformFactoryBase (0x56294b0c0d70): Refusing to register transform "MatrixOffsetTransformBase_float_3_3" again!
[Two screenshots of the error output (2023-02-09) were attached here.]

Finally, this causes the web browser to crash.

Platform details:

running in fmriflows docker,
computer: ubuntu 22.04 LTS

Execution environment

  • Container [miykael/fmriflows]
  • My python environment inside container [Python 3.9.5 ]
  • My python environment outside container [Python 3.9.13]

Fail to use docker

Summary

Failed to open the container.

Actual behavior

Using the provided run command returns a "site cannot be reached" error. If I use 8888:8888, it shows the jupyter notebook website, but it needs a token which I didn't find.

Expected behavior

Successfully open the container.

How to replicate the behavior

Just use the docker run -it --rm -p 9999:8888 -v /home/user/ds001:/data miykael/fmriflows command.
I just changed my local path.

Script/Workflow details

[I 16:20:30.797 NotebookApp] [nb_conda_kernels] enabled, 2 kernels found
[I 16:20:30.801 NotebookApp] Writing notebook server cookie secret to /home/neuro/.local/share/jupyter/runtime/notebook_cookie_secret
[I 16:20:30.922 NotebookApp] [jupyter_nbextensions_configurator] enabled 0.4.1
[W 2023-07-11 16:20:30.990 LabApp] Config option `kernel_spec_manager_class` not recognized by `LabApp`.
[W 2023-07-11 16:20:30.992 LabApp] 'port' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2023-07-11 16:20:30.992 LabApp] 'ip' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2023-07-11 16:20:30.992 LabApp] 'token' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2023-07-11 16:20:30.992 LabApp] 'token' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2023-07-11 16:20:30.992 LabApp] 'token' has moved from NotebookApp to ServerApp. This config will be passed to ServerApp. Be sure to update your config before our next release.
[W 2023-07-11 16:20:30.992 LabApp] Config option `kernel_spec_manager_class` not recognized by `LabApp`.
[I 2023-07-11 16:20:30.996 LabApp] JupyterLab extension loaded from /opt/miniconda-latest/envs/neuro/lib/python3.7/site-packages/jupyterlab
[I 2023-07-11 16:20:30.996 LabApp] JupyterLab application directory is /opt/miniconda-latest/envs/neuro/share/jupyter/lab
[I 16:20:31.072 NotebookApp] [nb_conda] enabled
[I 16:20:31.073 NotebookApp] Serving notebooks from local directory: /home/neuro/notebooks
[I 16:20:31.073 NotebookApp] Jupyter Notebook 6.2.0 is running at:
[I 16:20:31.073 NotebookApp] http://f2f6d97f67d6:8888/?token=...
[I 16:20:31.073 NotebookApp]  or http://127.0.0.1:8888/?token=...
[I 16:20:31.073 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).

Platform details:

Ubuntu 22.

Execution environment

Python 3.7.

Should there be an option to shorten normalization time?

Should there be an option for "quick" normalization? The current normalization with ANTs will take about 1-2h per subject on 4 parallel jobs.

By reducing the number of iterations, the computation time could probably be divided by 4. The results might not be as superb, but nonetheless probably comparable to SPM's or FSL's normalization.
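As a rough sketch, a "quick" mode would mainly shrink the iteration schedule of the nipype ANTs Registration node; the numbers below are illustrative (roughly a quarter of the "precise" schedule seen in the antsRegistration calls logged above), not benchmarked values:

from nipype.interfaces.ants import Registration

antsreg = Registration()
# ... metrics, transforms etc. unchanged from the 'precise' mode ...
# 'precise' schedule: [[1000], [500, 250, 100], [50, 20]]
antsreg.inputs.number_of_iterations = [[250], [125, 60, 25], [12, 5]]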

Rewriting of Function Nodes, concerning file paths.

Because of not knowing better, some of the Function nodes actually store their temporary results in other nodes' folders. This is because of the way it is written:

in_file = 'node_01/func.nii.gz'
# do some fancy plotting
out_file = in_file.replace('.nii.gz', '.svg')

The problem with this is that certain nodes always have to be rerun during re-execution. Using something like basename and abspath should help to keep the files within each node's own folder, as sketched below.
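A minimal sketch of the suggested fix; with basename and abspath, the output lands in the current node's own working directory:

import os

in_file = 'node_01/func.nii.gz'
# do some fancy plotting
# Write into the current working directory (the node's own folder)
# instead of next to the input file owned by another node:
out_file = os.path.abspath(os.path.basename(in_file).replace('.nii.gz', '.svg'))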

Error after pulling

Hi there

After pulling the fmriflows docker and the command:
docker run -it --rm -p 9999:8888 -v /persistent/home//myHome/example:/data miykael/fmriflows

I get the error message:
/neurodocker/startup.sh: line 9: jupyter notebook --port=8888 --no-browser --ip=0.0.0.0 --NotebookApp.token=fmriflows: command not found

thanks for help!
mike

summary html [enh]

Would you be interested in a summary html comparable to the fmriprep one, including the visualizations you're currently plotting/viewing in the notebooks? Not as a replacement for those, but rather as an easy-to-check complementary add-on or function? If so, I could work on that.

Workflow and node sub-segmentation

  1. The 1st-level workflow currently has only one hierarchical level. It might make more sense to always have, per notebook, a main, a report and an additional workflow (e.g. confounds or normalize), so that the whole thing looks a bit more modular.

  2. Also, some of the nodes that use the Function interface do a lot at once, e.g. the create_report node in preproc_anat. They should be split into separate sub-nodes, while also not fragmenting things too much.

Filters in functional preprocessing should be orthogonal

The Lindquist paper "Modular preprocessing pipelines can reintroduce artifacts into fMRI data" shows that subsequent filters in fMRI analysis should be orthogonal.

The problem is that preprocessing steps performed at a later stage of the pipeline can potentially reintroduce artifacts that had previously been removed from the data in an earlier step.

We note that the issues discussed in this paper can be circumvented in two ways. The first approach would be to abandon the modular approach to preprocessing the data, and instead use a joint approach that simultaneously performs the different preprocessing steps within an omnibus framework. For example, it would be relatively straightforward to formulate a single linear model that simultaneously performs motion regression, nuisance regression, and temporal filtering. [...]
The second approach is to formulate the design matrices used in each preprocessing step in such a manner that when applied sequentially they are constrained to project onto orthogonal subspaces. For example, if motion correction is performed after temporal filtering, the columns of the design matrix used in the motion regression should also be temporally filtered in a similar manner to ensure the data is projected onto an orthogonal subspace. The general rule-of-thumb is that one must orthogonalize both the data, and all subsequent projections to maintain data orthogonality with the current projection.

The current version of fmriflows doesn't orthogonalize filters:

  • Motion correction is followed by temporal filtering (without filtering motion regressors before they are applied)
  • Also, CompCor components, FD, DVARS, and Friston24 components are estimated without considering temporal filters or each other.

This is problematic, but Lindquist also mentions that the second approach is often already implemented in task fMRI studies, as their GLMs include motion regressors and high-pass filters. But fmriflows also wants to give the option of low-pass filtering, which would need to be considered in the later GLM.

I'm not sure yet how to best tackle this issue, but it's clear that it needs to be addressed. The smoothest approach would be to put everything into a GLM, but this would imply not performing motion correction. It would require that users understand this issue and set up the GLM correctly.
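For illustration, a generic numpy sketch of the second (orthogonalization) approach: the motion regressors pass through the same temporal filter as the data before being regressed out. Here temporal_filter is a stand-in for whichever filter the pipeline applied; this is not fMRIflows' current code.

import numpy as np

def regress_out_motion(data_filtered, motion, temporal_filter):
    # data_filtered:   (n_volumes, n_voxels) temporally filtered data
    # motion:          (n_volumes, n_regressors) raw motion regressors
    # temporal_filter: callable applying the identical filter column-wise
    X = temporal_filter(motion)                 # filter the design matrix too
    X = np.column_stack([X, np.ones(len(X))])   # add an intercept
    beta, *_ = np.linalg.lstsq(X, data_filtered, rcond=None)
    return data_filtered - X @ beta             # orthogonal projection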

proposals for fmriflows logo

I have this nice image from the parcellation_fragmenter project:
fmriflows_logo

And I thought I could use this image (or something similar) as a logo for fmriflows, together with a nice "flowing" font.

@ofgulban - You're a crafty designer! Would you have an idea, and perhaps even the pleasure, to create a logo? Because when I do it, you get something like this.

06_analysis_multivariate “ImportError: cannot import name analyze”

Summary

It seems to be a problem with "nibabel", because "ImportError: cannot import name analyze" appears when importing nibabel. But I still have this problem after deleting and reinstalling "nibabel". I installed "nibabel==2.5.1" with Python 2.7 on my laptop, and there "import nibabel" raises no error.


Script/Workflow details

from mvpa2.suite import *


ImportError Traceback (most recent call last)
in ()
2 import os
3 from os.path import basename
----> 4 from mvpa2.suite import *

/opt/miniconda-latest/envs/mvpa/lib/python2.7/site-packages/mvpa2/suite.py in ()
225 from mvpa2.misc.plot.base import *
226 from mvpa2.misc.plot.erp import *
--> 227 from mvpa2.misc.plot.scatter import *
228 if externals.exists(['griddata', 'scipy']):
229 from mvpa2.misc.plot.topo import *

/opt/miniconda-latest/envs/mvpa/lib/python2.7/site-packages/mvpa2/misc/plot/scatter.py in ()
14 import sys, os
15 import pylab as pl
---> 16 import nibabel as nb
17 import numpy as np
18

/opt/miniconda-latest/envs/mvpa/lib/python2.7/site-packages/nibabel/__init__.py in ()
60
61 # module imports
---> 62 from . import analyze as ana
63 from . import spm99analyze as spm99
64 from . import spm2analyze as spm2

ImportError: cannot import name analyze



Execution environment

Ubuntu 20.04

  • Container [Tag: latest]
  • My python environment inside container [Base Tag: latest]

Creation of `00_parameter_specification.ipynb` notebook

Might be an idea to create a notebook that initializes the JSON parameters and then executes the notebooks one after the other.

That way, this would be the only notebook that needs to be executed to run the whole group analysis. It could even have a switch to choose univariate or multivariate analysis.

Having everything in one informative notebook would also make it possible to quickly create a BIDS app out of it.
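A minimal sketch of such a driver, here using papermill to inject parameters and execute the notebooks in order (papermill and the parameter name spec_file are assumptions, not current fMRIflows dependencies):

import papermill as pm

# Hypothetical parameter: where the JSON specification lives
params = {'spec_file': '/data/fmriflows_spec_preproc.json'}

for nb in ['01_spec_preparation.ipynb',
           '02_preproc_anat.ipynb',
           '03_preproc_func.ipynb']:
    pm.execute_notebook(nb, nb.replace('.ipynb', '_run.ipynb'),
                        parameters=params)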
