
freesurfer's Introduction


Freesurfer recon-all BIDS App

Description

This app implements surface reconstruction using Freesurfer. It reconstructs the surface for each subject individually and then creates a study-specific template. If there are multiple sessions, the Freesurfer longitudinal pipeline is used (creating subject-specific templates), unless instructed to combine data across sessions. This app is available for both Freesurfer 6 and 7.

The current Freesurfer version for Freesurfer 6 is based on:

  • freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.1.tar.gz

The current Freesurfer version for Freesurfer 7 is based on:

  • freesurfer-linux-centos7_x86_64-7.4.1.tar.gz

We plan to support only one version each of Freesurfer 6 and Freesurfer 7 at a time.

The output of the pipeline consists of the SUBJECTS_DIR created during the analysis.

How to get it

Freesurfer 6 will remain the default image until 2024, at which point Freesurfer 7 will become the default.

You can get the default version with docker pull bids/freesurfer.

Freesurfer 7 is available at docker pull bids/freesurfer:7.

Freesurfer 6 is available at docker pull bids/freesurfer:6.

Documentation

How to report errors

https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferSupport

Acknowledgements

https://surfer.nmr.mgh.harvard.edu/fswiki/FreeSurferMethodsCitation

Usage

This App has the following command line arguments:

    $ docker run -ti --rm bids/freesurfer --help
    usage: run.py [-h]
                  [--participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]]
                  [--session_label SESSION_LABEL [SESSION_LABEL ...]]
                  [--n_cpus N_CPUS]
                  [--stages {autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon-pial,autorecon3,autorecon-all,all}
                            [{autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon-pial,autorecon3,autorecon-all,all} ...]]
                  [--steps {cross-sectional,template,longitudinal}
                           [{cross-sectional,template,longitudinal} ...]]
                  [--template_name TEMPLATE_NAME] --license_file LICENSE_FILE
                  [--acquisition_label ACQUISITION_LABEL]
                  [--refine_pial_acquisition_label REFINE_PIAL_ACQUISITION_LABEL]
                  [--multiple_sessions {longitudinal,multiday}]
                  [--refine_pial {T2,FLAIR,None,T1only}]
                  [--hires_mode {auto,enable,disable}]
                  [--parcellations {aparc,aparc.a2009s} [{aparc,aparc.a2009s} ...]]
                  [--measurements {area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind}
                                  [{area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind} ...]]
                  [-v] [--bids_validator_config BIDS_VALIDATOR_CONFIG]
                  [--skip_bids_validator] [--3T {true,false}]
                  bids_dir output_dir {participant,group1,group2}

    FreeSurfer recon-all + custom template generation.

    positional arguments:
      bids_dir              The directory with the input dataset formatted
                            according to the BIDS standard.
      output_dir            The directory where the output files should be stored.
                            If you are running group level analysis this folder
                            should be prepopulated with the results of
                            the participant level analysis.
      {participant,group1,group2}
                            Level of the analysis that will be performed. Multiple
                            participant level analyses can be run independently
                            (in parallel) using the same output_dir. "group1"
                            creates a study-specific group template. "group2"
                            exports group stats tables for cortical parcellation
                            and subcortical segmentation, and a table with Euler
                            numbers.

    optional arguments:
      -h, --help            show this help message and exit
      --participant_label PARTICIPANT_LABEL [PARTICIPANT_LABEL ...]
                            The label of the participant that should be analyzed.
                            The label corresponds to sub-<participant_label> from
                            the BIDS spec (so it does not include "sub-"). If this
                            parameter is not provided all subjects should be
                            analyzed. Multiple participants can be specified with
                            a space separated list.
      --session_label SESSION_LABEL [SESSION_LABEL ...]
                            The label of the session that should be analyzed. The
                            label corresponds to ses-<session_label> from the BIDS
                            spec (so it does not include "ses-"). If this
                            parameter is not provided all sessions should be
                            analyzed. Multiple sessions can be specified with a
                            space separated list.
      --n_cpus N_CPUS       Number of CPUs/cores available to use.
      --stages {autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon-pial,autorecon3,autorecon-all,all}
                            [{autorecon1,autorecon2,autorecon2-cp,autorecon2-wm,autorecon-pial,autorecon3,autorecon-all,all} ...]
                            Autorecon stages to run.
      --steps {cross-sectional,template,longitudinal} [{cross-sectional,template,longitudinal} ...]
                            Longitudinal pipeline steps to run.
      --template_name TEMPLATE_NAME
                            Name for the custom group level template generated for
                            this dataset
      --license_file LICENSE_FILE
                            Path to FreeSurfer license key file. To obtain it you
                            need to register (for free) at
                            https://surfer.nmr.mgh.harvard.edu/registration.html
      --acquisition_label ACQUISITION_LABEL
                            If the dataset contains multiple T1 weighted images
                            from different acquisitions which one should be used?
                            Corresponds to "acq-<acquisition_label>"
      --refine_pial_acquisition_label REFINE_PIAL_ACQUISITION_LABEL
                            If the dataset contains multiple T2 or FLAIR weighted
                            images from different acquisitions which one should be
                            used? Corresponds to "acq-<acquisition_label>"
      --multiple_sessions {longitudinal,multiday}
                            For datasets with multiday sessions where you do not
                            want to use the longitudinal pipeline, i.e., sessions
                            were back-to-back, set this to multiday, otherwise
                            sessions with T1w data will be considered independent
                            sessions for longitudinal analysis.
      --refine_pial {T2,FLAIR,None,T1only}
                            If the dataset contains 3D T2 or T2 FLAIR weighted
                            images (~1x1x1), these can be used to refine the pial
                            surface. If you want to ignore these, specify None or
                            T1only to base surfaces on the T1 alone.
      --hires_mode {auto,enable,disable}
                            Submillimeter (high resolution) processing. 'auto' -
                            use only if <1.0mm data detected, 'enable' - force on,
                            'disable' - force off
      --parcellations {aparc,aparc.a2009s} [{aparc,aparc.a2009s} ...]
                            Group2 option: cortical parcellation(s) to extract
                            stats from.
      --measurements {area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind}
                            [{area,volume,thickness,thicknessstd,meancurv,gauscurv,foldind,curvind} ...]
                            Group2 option: cortical measurements to extract stats
                            for.
      -v, --version         show program's version number and exit
      --bids_validator_config BIDS_VALIDATOR_CONFIG
                            JSON file specifying configuration of bids-validator:
                            See https://github.com/INCF/bids-validator for more
                            info
      --skip_bids_validator
                            skips bids validation
      --3T {true,false}     enables the two 3T specific options that recon-all
                            supports: nu intensity correction params, and the
                            special schwartz atlas

Participant level

To run it in participant level mode (for one participant):

    docker run -ti --rm \
      -v /Users/filo/data/ds005:/bids_dataset:ro \
      -v /Users/filo/outputs:/outputs \
      -v /Users/filo/freesurfer_license.txt:/license.txt \
      bids/freesurfer \
      /bids_dataset /outputs participant --participant_label 01 \
      --license_file "/license.txt"

Group level

After doing this for all subjects (potentially in parallel), the group level analyses can be run.

Template creation

To create a study specific template run:

    docker run -ti --rm \
      -v /Users/filo/data/ds005:/bids_dataset:ro \
      -v /Users/filo/outputs:/outputs \
      -v /Users/filo/freesurfer_license.txt:/license.txt \
      bids/freesurfer \
      /bids_dataset /outputs group1 \
      --license_file "/license.txt"

Stats and quality tables export

To export tables with measurements aggregated within regions of the cortical parcellation and subcortical segmentation, plus a table with Euler numbers (a quality metric; see Rosen et al., 2017), run:

    docker run -ti --rm \
      -v /Users/filo/data/ds005:/bids_dataset:ro \
      -v /Users/filo/outputs:/outputs \
      -v /Users/filo/freesurfer_license.txt:/license.txt \
      bids/freesurfer \
      /bids_dataset /outputs group2 \
      --license_file "/license.txt"

Also see the --parcellations and --measurements arguments.
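
For example, to restrict the export to thickness and surface area from the Destrieux parcellation (a hypothetical invocation combining the flags documented in the usage above):

    docker run -ti --rm \
      -v /Users/filo/data/ds005:/bids_dataset:ro \
      -v /Users/filo/outputs:/outputs \
      -v /Users/filo/freesurfer_license.txt:/license.txt \
      bids/freesurfer \
      /bids_dataset /outputs group2 \
      --license_file "/license.txt" \
      --parcellations aparc.a2009s \
      --measurements thickness area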

This step writes output into <output_dir>/00_group2_stats_tables/. E.g.:

  • lh.aparc.thickness.tsv contains cortical thickness values for the left hemisphere, extracted via the aparc parcellation.
  • aseg.tsv contains subcortical information from the aseg segmentation.
  • euler.tsv contains the Euler numbers.
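
These are plain tab-separated files; a minimal sketch for inspecting one with pandas (an illustration based on the layout described above, not part of the app):

    # Minimal sketch, assuming pandas is installed and the group2 step has run.
    # The exact name of the first (subject) column can vary across versions.
    import pandas as pd

    df = pd.read_csv("outputs/00_group2_stats_tables/lh.aparc.thickness.tsv",
                     sep="\t", index_col=0)
    print(df.head())        # one row per subject, one column per aparc region
    print(df.mean(axis=0))  # mean thickness per region across subjects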

freesurfer's People

Contributors

alexlicohen, armaneshaghi, chrisgorgo, fliem, jdkent, niniko1997, ntraut, peerherholz, pre-commit-ci[bot], remi-gau, shotgunosine


freesurfer's Issues

Step decomposition in longitudinal studies

Hello,
In longitudinal studies we do not have the possibility to decompose the execution into steps, sessions, and stages. This can make the execution take a very long time, which is a problem when running on a cluster with jobs limited to 24 hours.
It would be very nice if you let me make a pull request to allow this decomposition...

Introduce automated image builds

While we have tests running, we could/should also include automated builds through the DockerHub-GitHub integration, following specific build rules for e.g. latest vs. specific versions. WDYT @Shotgunosine?

Group tables

How about renaming group -> group1, and adding group-level stats tables (aparcstats2table, asegstats2table) as group2? I could work on that.

QA

Is there anything I can do to revive the QA branch?

T2 and T2 failure modes with multiple input

If there are multiple T2 or FLAIR images in the directory structure, the T2pial or FLAIRpial stages do not seem to be run.

Additionally, if multiple T1s are provided but at different resolutions, freesurfer crashes.

I'm thinking of writing an additional switch to select the BIDS -rec_xyz- tags to designate a particular image to use, and/or users can generate their own averages and assign them a -rec_avg- tag,

i.e., labeling files as sub-003_ses-01_acq-mprage_rec-forFS_run-01_T1w.nii.gz
or
sub-003_ses-01_acq-mprage_rec-avg_T1w.nii.gz

Option to allow lower resolution T2s

Currently T2s and FLAIRs with any single voxel dimension greater than 1.2 mm are not used. Could we add an option to allow a user to bypass that limit? I've got some 1x1x2mm T2s that I'd like to try to use to refine the surfaces before I have to resort to hand editing the surfaces.

deploy failure

@Shotgunosine as suggested, I did a new release and "luckily" that resulted in an error that points to further updates that should be done. Within the deploy part, the following is failing:

#!/bin/bash -eo pipefail
if [[ -n "$DOCKER_PASS" ]]; then docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS && docker push bids/${CIRCLE_PROJECT_REPONAME}:latest; fi 
if [[ -n "$DOCKER_PASS" ]]; then docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS && docker tag bids/${CIRCLE_PROJECT_REPONAME} bids/${CIRCLE_PROJECT_REPONAME}:$CIRCLE_TAG && docker push bids/${CIRCLE_PROJECT_REPONAME}:$CIRCLE_TAG; fi 

with the error unknown shorthand flag: 'e' in -e, which is due to Docker removing this argument since the last time a new version was released. We'll need to adapt these parts accordingly.

force recomputation

  • At the moment we check if there is a {fsid}/scripts/IsRunning.lh+rh file. If it is there, we remove the folder and start from scratch.
  • Else, we check if there is a {fsid} folder. If that's the case, we run the resume_cmd. If recon-all has already completed, that seems to recompute everything.

How about adding an input argument that needs to be set to recompute everything (like --force-recompute)? That way

  • completed subjects that have been submitted to the analysis by mistake won't be computed
  • we could probably run test cases for longitudinal data on circleci by providing the cross-sectional recons and only testing the base and long step (#23)

rmtree removal

Currently, this does not allow for sequentially running autorecon1 then 2, etc... because of the deletion of the output directory if it exists:

    if os.path.exists(os.path.join(output_dir, fsid)):
        rmtree(os.path.join(output_dir, fsid))
    run(cmd)

The default should be to use the existing output directory, per standard freesurfer convention. You could add an optional flag/option such as --clean to switch the rmtree lines back on...
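
A minimal sketch of that suggestion (hypothetical; neither prepare_subject_dir nor a --clean flag exists in the current run.py):

    # Hypothetical sketch: reuse an existing subject directory by default,
    # and wipe it only when the user explicitly asks for a clean run.
    import os
    from shutil import rmtree

    def prepare_subject_dir(output_dir, fsid, clean=False):
        subject_dir = os.path.join(output_dir, fsid)
        if clean and os.path.exists(subject_dir):
            rmtree(subject_dir)  # start from scratch only on request
        return subject_dir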

Question about .annot file

Hello. After I updated the vertex labels of rh.aparc.a2009s.annot with an algorithm, I now want to write the updated vertex labels back into the .annot file, mainly to see the effect of the segmentation. Do you know how to do this?

Build with local freesurfer

I want to build the docker image manually with a previously downloaded freesurfer tarball to save time.

I copied freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz and license.txt into the BIDS-Apps/freesurfer clone folder, then modified the head of the Dockerfile:

FROM ubuntu:16.04

RUN tar --no-same-owner -C /opt \
    --exclude='freesurfer/trctrain' \
    --exclude='freesurfer/subjects/fsaverage_sym' \
    --exclude='freesurfer/subjects/fsaverage3' \
    --exclude='freesurfer/subjects/fsaverage4' \
    --exclude='freesurfer/subjects/fsaverage5' \
    --exclude='freesurfer/subjects/fsaverage6' \
    --exclude='freesurfer/subjects/cvs_avg35' \
    --exclude='freesurfer/subjects/cvs_avg35_inMNI152' \
    --exclude='freesurfer/subjects/bert' \
    --exclude='freesurfer/subjects/V1_average' \
    --exclude='freesurfer/average/mult-comp-cor' \
    --exclude='freesurfer/lib/cuda' \
    --exclude='freesurfer/lib/qt'   \
    -zxvf freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz

COPY license.txt /opt/freesurfer/

RUN apt-get update
RUN apt-get install -y python3
...

I pulled ubuntu first:
docker pull ubuntu:16.04

Then built:
docker build -t fresurfer:6.0.0 .

And got this error:

Sending build context to Docker daemon  4.918GB
Step 1/48 : FROM ubuntu:16.04
 ---> 5e13f8dd4c1a
Step 2/48 : RUN tar --no-same-owner -C /opt     --exclude='freesurfer/trctrain'     --exclude='freesurfer/subjects/fsaverage_sym'     --exclude='freesurfer/subjects/fsaverage3'     --exclude='freesurfer/subjects/fsaverage4'     --exclude='freesurfer/subjects/fsaverage5'     --exclude='freesurfer/subjects/fsaverage6'     --exclude='freesurfer/subjects/cvs_avg35'     --exclude='freesurfer/subjects/cvs_avg35_inMNI152'     --exclude='freesurfer/subjects/bert'     --exclude='freesurfer/subjects/V1_average'     --exclude='freesurfer/average/mult-comp-cor'     --exclude='freesurfer/lib/cuda'     --exclude='freesurfer/lib/qt'       -zxvf freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz
 ---> Running in 6428bc61f682
tar (child): freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
The command '/bin/sh -c tar --no-same-owner -C /opt     --exclude='freesurfer/trctrain'     --exclude='freesurfer/subjects/fsaverage_sym'     --exclude='freesurfer/subjects/fsaverage3'     --exclude='freesurfer/subjects/fsaverage4'     --exclude='freesurfer/subjects/fsaverage5'     --exclude='freesurfer/subjects/fsaverage6'     --exclude='freesurfer/subjects/cvs_avg35'     --exclude='freesurfer/subjects/cvs_avg35_inMNI152'     --exclude='freesurfer/subjects/bert'     --exclude='freesurfer/subjects/V1_average'     --exclude='freesurfer/average/mult-comp-cor'     --exclude='freesurfer/lib/cuda'     --exclude='freesurfer/lib/qt'       -zxvf freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz' returned a non-zero code: 2
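
A plausible cause (an assumption based on how Docker builds work, not a reply from this thread): files in the build context are not visible to RUN until they have been copied into the image. A minimal sketch of a fix, with the exclude list abbreviated:

    FROM ubuntu:16.04

    # Copy the tarball into the image first; RUN cannot read the build context.
    COPY freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz /opt/
    RUN tar --no-same-owner -C /opt \
        --exclude='freesurfer/trctrain' \
        -zxvf /opt/freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz \
        && rm /opt/freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz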

[BUG] - docker tags are wrong in the README.md and run.py

What version of the bids app were you using?

Version 6 and Version 7 have the same problem.

Describe your problem in detail.

docker pull bids/freesurfer:6
docker pull bids/freesurfer:7

These 2 commands from the README.md won't work, as the tags have changed to another format such as 7-202309, 7.4.1-202309, etc.

Same with the warning message shown when v6 is used.

What command did you run?

See above.

Describe what you expected.

I expected docker/apptainer to pull the container but it didn't.

community standards

Hi gang,

I think we should add a CoC and contributor guidelines to make it easier for new folks to become active contributors and to ensure community standards. Looking around a bit, fmriprep's version(s) would be great to adapt from. WDYT @Shotgunosine?

Minimal freesurfer image

Hello everyone!

Some time ago I created a minimal freesurfer image that is based on freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0 and is able to run recon-all.
It is only 732.6 MB compressed and 1.93 GB when pulled.

I figured you might be interested in using it as a base in order to make your image a little smaller.

Here it is: https://hub.docker.com/r/alerokhin/freesurfer6

longitudinal tests

At the moment longitudinal test cases cannot be run as they take >2h.
Different handling of already processed data, or pass-through parameters (#18) might shorten run time.

How to map to fsaverage5?

Hi, I have a question: I want to map rh.white to fsaverage5.
I know I can use mri_surf2surf, but I don't know how to set the parameters (srcsurfval, src_type, trg_surf):
mri_surf2surf --srcsubject bert --srcsurfval thickness --src_type curv --trg_type curv --trgsubject fsaverage5 --trgsurfval white --hemi rh

So, how should I set these parameters for rh.white?

Bypass bids_validator

Even at the participant level, the bids_validator does not allow freesurfer to run if there are errors, even in 'func' files, such as '(code: 13 - SLICE_TIMING_NOT_DEFINED)'.

Is there any way to bypass this for the participant level?
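
Note: the --skip_bids_validator flag documented in the usage section above appears to do exactly this; a minimal sketch (paths hypothetical):

    docker run -ti --rm \
      -v /path/to/data:/bids_dataset:ro \
      -v /path/to/outputs:/outputs \
      -v /path/to/license.txt:/license.txt \
      bids/freesurfer \
      /bids_dataset /outputs participant \
      --skip_bids_validator --license_file /license.txt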

License not being recognized...

I was trying to test drive the BIDS freesurfer app, but couldn't get my license to be recognized.

My license exists as a file and is the new version (I'm using it for fmriprep... so I know it works/is valid).
    jamielh@pfc:~/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives$ ls /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/license.txt
    /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/license.txt

But this syntax:

    docker run -ti --rm \
      -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1:/bids_dataset:ro \
      -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/freesurfer:/outputs \
      bids/freesurfer /bids_dataset /outputs participant \
      --participant_label NDARAA075AMK --n_cpus 2 --stages autorecon1 \
      --steps cross-sectional --skip_bids_validator \
      --license_file "/home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/license.txt"

Leads to this error output:
    Traceback (most recent call last):
      File "/run.py", line 167, in <module>
        raise Exception("Provided license file does not exist")
    Exception: Provided license file does not exist

What am I doing wrong with that command line call? I tried with and without quotation marks, etc., but no luck. Thoughts?
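
The likely cause (a fact about how Docker works, not an answer taken from this thread): --license_file is resolved inside the container, and the host path above was never mounted. Following the participant-level example in the README, mounting the license and passing the container path should work:

    docker run -ti --rm \
      -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1:/bids_dataset:ro \
      -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/freesurfer:/outputs \
      -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/license.txt:/license.txt \
      bids/freesurfer \
      /bids_dataset /outputs participant --participant_label NDARAA075AMK \
      --skip_bids_validator --license_file /license.txt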

-FLAIR and -FLAIRpial options should be added

Presently, if T2s are present, they are automatically added in (run.py lines 102-107 and 154-158). This should be made optional (what if the T2 is odd, or not all subjects in a study have T2s?).

Additionally, this section should be replicated for -FLAIR and -FLAIRpial, scraping files with _FLAIR per the BIDS spec.

Possible to skip participants without aparc.stats files?

Hi,

I have recently been using this container to run FreeSurfer 6.0, using Singularity (v3.8) on an HPC cluster, on a large number of subjects. I was able to run recon-all just fine except for a few subjects (14) whose MRIs could not finish processing despite being given a lot of time.

For now, we have decided to exclude these participants, so I tried generating the statistics spreadsheet (the equivalent of what aparcstats2table would output, I am guessing?) with the group2 argument, but the processing fails once it can't find a .stats file:

skipping bids-validator...
Writing stats tables to /output/00_group2_stats_tables.
Creating cortical stats table for lh aparc thickness
SUBJECTS_DIR : /output
Parsing the .stats files

ERROR: Cannot find stats file /output/sub-XXXXXX/stats/lh.aparc.stats


Traceback (most recent call last):
  File "/run.py", line 533, in <module>
    run(cmd, env={"SUBJECTS_DIR": output_dir, 'FS_LICENSE': args.license_file})
  File "/run.py", line 28, in run
    raise Exception("Non zero return code: %d" % process.returncode)
Exception: Non zero return code: 1

I know that the original aparcstats2table has a --skip option, since the default behavior is to throw an error, so I was wondering whether the same was true for the container. So far, adding the --skip option to my command either did not change anything at all or told me that it is not a recognized argument:

SINGULARITY_CMD="singularity run --cleanenv \
-B ${BIDS_DIR}:/bids_dirs:ro \
-B ${FS_OUTPUT}:/output \
-B ${FS_LISENCE}/license.txt:/license.txt \
${SINGULARITY_IMG} \
/bids_dirs /output group2 \
--license_file /license.txt \
--skip \
--skip_bids_validator"

Is there a way to switch this behavior in the container? And if so, could it be added to the documentation?

Thank you for your time and for creating this container!
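
A hypothetical pass-through (not in the current run.py) could forward aparcstats2table's own --skip option, which FreeSurfer documents as skipping subjects that lack the stats file:

    # Hypothetical sketch: build the aparcstats2table command with an
    # app-level option that forwards --skip. Names are illustrative only.
    def build_table_cmd(hemi, parc, meas, subjects, skip_missing=False):
        cmd = ["aparcstats2table", "--hemi", hemi, "--parc", parc,
               "--meas", meas, "--subjects"] + subjects
        if skip_missing:
            cmd.append("--skip")  # skip subjects without a stats file
        return cmd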

Add a flag to ignore specific runs for some subjects

Your idea

I have a couple of subjects with several runs of T1w or FLAIR, and I naively thought that .bidsignore was an (okay) place to indicate which runs to ignore. Little did I know, .bidsignore is meant for the validator only and doesn't do anything (#131). Unless I missed something in the BIDS-App documentation, it appears there is no simple way to ignore certain runs. The problem at the moment is that a robust template is created from all the runs available for a participant, and it won't tell you unless you skim the logs.

The rogue way is to simply delete the undesired scans, but I hope there is a way to deal with this problem without deleting files.

Would a --bids-filter-file à la smriprep #104 solve this problem? I have never used one, so I don't know. Or maybe a simple hack to ignore specific runs for some subjects already exists?

Am I right in thinking that the most straightforward approach would likely involve passing a file that contains a list of scans to be ignored/removed when listing the T1w, T2, FLAIR, etc.?
Cheers!
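
A minimal sketch of that idea (hypothetical; neither the ignore-file format nor filter_scans exists in run.py):

    import os

    # Hypothetical: drop scans whose file names appear in a user-supplied
    # ignore file before building the recon-all input list.
    def filter_scans(scans, ignore_file):
        with open(ignore_file) as f:
            ignored = {line.strip() for line in f if line.strip()}
        return [s for s in scans if os.path.basename(s) not in ignored]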

Support for CUDA

Hi!

Thank you for the Dockerfile. I was wondering whether you got freesurfer to work with CUDA in a docker image. It has been a nightmare for me so far, but the gain in computation time is promising.

Best,
Victor

Docker build currently failing

Looks like a build on DockerHub with the current Dockerfile is failing with the following error:

Running setup.py (path:/tmp/pip_build_root/pandas/setup.py) egg_info for package pandas
Traceback (most recent call last):
  File "<string>", line 17, in <module>
  File "/tmp/pip_build_root/pandas/setup.py", line 840, in <module>
    **setuptools_kwargs
  File "/usr/lib/python3.4/distutils/core.py", line 108, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 239, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 264, in fetch_build_eggs
    replace_conflicting=True
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 620, in resolve
    dist = best[req.key] = env.best_match(req, ws, installer)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 858, in best_match
    return self.obtain(req, installer) # try and download/install
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 870, in obtain
    return installer(requirement)
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 314, in fetch_build_egg
    return cmd.easy_install(req)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 616, in easy_install
    return self.install_item(spec, dist.location, tmpdir, deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 646, in install_item
    dists = self.install_eggs(spec, download, tmpdir)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 834, in install_eggs
    return self.build_and_install(setup_script, setup_base)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1040, in build_and_install
    self.run_setup(setup_script, setup_base, args)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1025, in run_setup
    run_setup(setup_script, args)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 50, in run_setup
    lambda: execfile(
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 100, in run
    return func()
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 52, in <lambda>
    {'__file__':setup_script, '__name__':'__main__'}
  File "/usr/lib/python3/dist-packages/setuptools/compat.py", line 78, in execfile
    exec(compile(source, fn, 'exec'), globs, locs)
  File "setup.py", line 31, in <module>
    return sys.platform == "darwin"
RuntimeError: Python version >= 3.5 required.
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 17, in <module>
  File "/tmp/pip_build_root/pandas/setup.py", line 840, in <module>
    **setuptools_kwargs
  File "/usr/lib/python3.4/distutils/core.py", line 108, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 239, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 264, in fetch_build_eggs
    replace_conflicting=True
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 620, in resolve
    dist = best[req.key] = env.best_match(req, ws, installer)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 858, in best_match
    return self.obtain(req, installer) # try and download/install
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 870, in obtain
    return installer(requirement)
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 314, in fetch_build_egg
    return cmd.easy_install(req)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 616, in easy_install
    return self.install_item(spec, dist.location, tmpdir, deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 646, in install_item
    dists = self.install_eggs(spec, download, tmpdir)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 834, in install_eggs
    return self.build_and_install(setup_script, setup_base)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1040, in build_and_install
    self.run_setup(setup_script, setup_base, args)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1025, in run_setup
    run_setup(setup_script, args)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 50, in run_setup
    lambda: execfile(
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 100, in run
    return func()
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 52, in <lambda>
    {'__file__':setup_script, '__name__':'__main__'}
  File "/usr/lib/python3/dist-packages/setuptools/compat.py", line 78, in execfile
    exec(compile(source, fn, 'exec'), globs, locs)
  File "setup.py", line 31, in <module>
    return sys.platform == "darwin"
RuntimeError: Python version >= 3.5 required.
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pandas
Storing debug log for failure in /root/.pip/pip.log
Removing intermediate container 14f102f30eae
The command '/bin/sh -c pip3 install nibabel pandas' returned a non-zero code: 1

Adding the -notal-check flag

Hi,
I need to skip the automatic failure detection of Talairach alignment for a few subjects (the -notal-check flag). Perhaps I missed something in the documentation, but I couldn't find a way to do it. Is there a way to pass extra flags to the execution? Thanks.

fix deploy dependencies


It would also be good to deploy the head of the master branch on every push, but with an "unstable" tag, as is done for most other bids-apps.

Longitudinal vs multi-day 'sessions'

Presently, the code assumes that the longitudinal pipeline should be run if multiple sessions are present; however, it is sometimes the case (often in clinical research) that the two sessions are only a few days apart. What would be the most appropriate way to handle this?

  • a switch to reset longitudinal_study back to False?
  • or something else?
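
Note: the --multiple_sessions {longitudinal,multiday} option in the usage section above appears to implement exactly such a switch, e.g. (paths hypothetical):

    docker run -ti --rm \
      -v /path/to/data:/bids_dataset:ro \
      -v /path/to/outputs:/outputs \
      -v /path/to/license.txt:/license.txt \
      bids/freesurfer \
      /bids_dataset /outputs participant \
      --multiple_sessions multiday --license_file /license.txt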

question about running in parallel

Hi all,

Thanks for the great app! I am trying to run a large number of participants, but I am having trouble running them in parallel. I am running on an HPC with Singularity. I have tried specifying multiple participants with the participant label and changing the number of CPUs, but it is not running in parallel. Thanks for your help!

Best,
Alex
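
Note: per the usage section above, participant-level analyses parallelize across separate invocations rather than within one container; a minimal sketch (paths and labels hypothetical) launching one container per subject:

    # One container per subject; -ti is dropped because the runs are backgrounded.
    for s in 01 02 03; do
      docker run --rm \
        -v /path/to/data:/bids_dataset:ro \
        -v /path/to/outputs:/outputs \
        -v /path/to/license.txt:/license.txt \
        bids/freesurfer /bids_dataset /outputs participant \
        --participant_label $s --license_file /license.txt &
    done
    wait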
