freesurfer's Issues

Question about .annot file

Hello. I updated the vertex labels of rh.aparc.a2009s.annot with an algorithm, and now I want to write the updated labels back into the .annot file, mainly to see the effect of the segmentation. Do you know how to do this?

Minimal freesurfer image

Hello everyone!

Some time ago I've created a minimal freesurfer image that is based on freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0 and is able to run recon-all.
It is only 732.6 MB compressed and 1.93GB when pulled.

I figured you might be interested in using it as a base in order to make your image a little smaller.

Here it is: https://hub.docker.com/r/alerokhin/freesurfer6

Option to allow lower resolution T2s

Currently T2s and FLAIRs with any single voxel dimension greater than 1.2 mm are not used. Could we add an option to allow a user to bypass that limit? I've got some 1x1x2mm T2s that I'd like to try to use to refine the surfaces before I have to resort to hand editing the surfaces.

How to map rh.white to fsaverage5?

Hi, I have a question. I want to map rh.white onto the fsaverage5 template.
I know I can use mri_surf2surf, but I don't know how to set the parameters (srcsurfval, src_type, trg_surf):
mri_surf2surf --srcsubject bert --srcsurfval thickness --src_type curv --trg_type curv --trgsubject fsaverage5 --trgsurfval white --hemi rh

So, how should these parameters be set for rh.white?

deploy failure

@Shotgunosine as suggested, I did a new release, and "luckily" that resulted in an error pointing to further updates that should be done. Within the deploy part, the following is failing:

#!/bin/bash -eo pipefail
if [[ -n "$DOCKER_PASS" ]]; then docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS && docker push bids/${CIRCLE_PROJECT_REPONAME}:latest; fi 
if [[ -n "$DOCKER_PASS" ]]; then docker login -e $DOCKER_EMAIL -u $DOCKER_USER -p $DOCKER_PASS && docker tag bids/${CIRCLE_PROJECT_REPONAME} bids/${CIRCLE_PROJECT_REPONAME}:$CIRCLE_TAG && docker push bids/${CIRCLE_PROJECT_REPONAME}:$CIRCLE_TAG; fi 

with the error unknown shorthand flag: 'e' in -e, which is due to Docker having removed this argument since the last time a new version was released. We'll need to adapt these parts accordingly.
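A sketch of the likely fix, keeping the CI variable names from the snippet above: recent Docker CLIs removed the -e/--email flag from docker login, and piping the password via --password-stdin is the currently recommended form.

```shell
# Sketch of the updated deploy step (same CI variables as above).
# `docker login -e` no longer exists; pass the password on stdin instead.
if [[ -n "${DOCKER_PASS:-}" ]]; then
  echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
  docker push "bids/${CIRCLE_PROJECT_REPONAME}:latest"
fi
```

The tagging line would need the same treatment; with no DOCKER_PASS set, the block is a no-op, which also keeps fork builds from failing.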

Longitudinal vs multi-day 'sessions'

Presently, the code assumes that the longitudinal pipeline should be run if multiple sessions are present; however, it is sometimes the case (often in clinical research) that the two sessions are only a few days apart. What would be the most appropriate way to handle this?

  • a switch to reset longitudinal_study back to False?
    or something else?
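One possible shape for the first option, sketched with an assumed flag name (this switch is not part of the current CLI):

```python
# Hypothetical sketch: an opt-out switch that forces cross-sectional
# processing even when multiple sessions are found. Flag name is assumed.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--no-longitudinal", action="store_true",
                    help="treat each session as an independent cross-sectional run")

sessions = ["ses-01", "ses-02"]   # e.g. two scans only a few days apart
args = parser.parse_args(["--no-longitudinal"])
longitudinal_study = len(sessions) > 1 and not args.no_longitudinal
```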

QA

Is there anything I can do to revive the QA branch?

force recomputation

  • At the moment we check if there is a {fsid}/scripts/IsRunning.lh+rh file. If it is there, we remove the folder and start from scratch.
  • Otherwise, we check if there is a {fsid} folder. If that's the case, we run the resume_cmd. If the recon-all has already completed, that seems to recompute everything.

How about adding an input argument that needs to be set to recompute everything (like --force-recompute)? That way

  • completed subjects that have been submitted to the analysis by mistake won't be recomputed
  • we could probably run test cases for longitudinal data on circleci by providing the cross-sectional recons and only testing the base and long step (#23)
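A hypothetical sketch of that logic (function and flag names are assumptions, not the app's actual API):

```python
# Sketch of the proposed --force-recompute decision logic. Names assumed.
import os
from shutil import rmtree

def decide_action(output_dir, fsid, force_recompute=False):
    subj = os.path.join(output_dir, fsid)
    if force_recompute and os.path.exists(subj):
        rmtree(subj)                  # explicit request: start over
        return "fresh"
    if os.path.exists(os.path.join(subj, "scripts", "IsRunning.lh+rh")):
        rmtree(subj)                  # stale lock from an interrupted run
        return "fresh"
    if os.path.exists(subj):
        return "resume"               # existing recon: resume, don't redo
    return "fresh"
```

With this shape, an accidentally resubmitted completed subject falls into the "resume" branch and is left alone unless --force-recompute is given.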

Possible to skip participants without aparc.stats files?

Hi,

I have recently been using this container to run FreeSurfer 6.0 with Singularity (v3.8) on an HPC cluster on a large number of subjects. I was able to run recon-all just fine except for a few subjects (14) whose MRIs did not finish processing despite a lot of time given to the task.

For now, we have decided to exclude these participants, so I tried generating the statistics spreadsheet (the equivalent of what aparcstats2table would output, I am guessing?) with the group2 argument, but the processing fails once it can't find a .stats file:

skipping bids-validator...
Writing stats tables to /output/00_group2_stats_tables.
Creating cortical stats table for lh aparc thickness
SUBJECTS_DIR : /output
Parsing the .stats files

ERROR: Cannot find stats file /output/sub-XXXXXX/stats/lh.aparc.stats


Traceback (most recent call last):
  File "/run.py", line 533, in <module>
    run(cmd, env={"SUBJECTS_DIR": output_dir, 'FS_LICENSE': args.license_file})
  File "/run.py", line 28, in run
    raise Exception("Non zero return code: %d" % process.returncode)
Exception: Non zero return code: 1

I know that the original aparcstats2table has a --skip option, since the default behavior is to throw an error, so I was wondering whether the same was true for the container. So far, adding the --skip option to my command either changed nothing at all or produced an unrecognized-argument error:

SINGULARITY_CMD="singularity run --cleanenv \
-B ${BIDS_DIR}:/bids_dirs:ro \
-B ${FS_OUTPUT}:/output \
-B ${FS_LISENCE}/license.txt:/license.txt \
${SINGULARITY_IMG} \
/bids_dirs /output group2 \
--license_file /license.txt \
--skip \
--skip_bids_validator"

Is there a way to switch this behavior in the container? And if so, could it be added to the documentation?
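As a workaround outside the container, a small pre-filter along these lines could drop subjects whose stats file is missing before the table step runs (the function name is made up for illustration):

```python
# Hypothetical pre-filter: keep only subjects that have the expected
# stats file, e.g. lh.aparc.stats, under SUBJECTS_DIR/<sub>/stats/.
import os

def subjects_with_stats(subjects_dir, subjects, hemi="lh", parc="aparc"):
    fname = f"{hemi}.{parc}.stats"
    return [s for s in subjects
            if os.path.isfile(os.path.join(subjects_dir, s, "stats", fname))]
```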

Thank you for your time and for creating this container!

rmtree removal

Currently, this does not allow sequentially running autorecon1, then autorecon2, etc., because the output directory is deleted if it exists:

if os.path.exists(os.path.join(output_dir, fsid)):
    rmtree(os.path.join(output_dir, fsid))
run(cmd)

The default should be to use the existing output directory, per standard FreeSurfer convention. You could add an optional --clean flag/option to switch the rmtree lines back on.
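A minimal sketch of that suggestion, assuming --clean is the chosen option name:

```python
# Hypothetical --clean handling: by default the existing subject directory
# is kept so staged runs (autorecon1, then autorecon2, ...) can resume.
import os
from shutil import rmtree

def prepare_subject_dir(output_dir, fsid, clean=False):
    subj = os.path.join(output_dir, fsid)
    if clean and os.path.exists(subj):
        rmtree(subj)   # wipe only on explicit request
    return subj
```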

community standards

Hi gang,

I think we should add a CoC and contributor guidelines to make it easier for new folks to become active contributors and to ensure community standards. Looking around a bit, fmriprep's version(s) would be a great starting point to adapt. WDYT @Shotgunosine?

Build with local freesurfer

I want to build the docker image manually with a previously downloaded FreeSurfer tarball to save time.

I copied freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz and license.txt into the BIDS-Apps/freesurfer clone folder, then modified the head of the Dockerfile:

FROM ubuntu:16.04

RUN tar --no-same-owner -C /opt \
    --exclude='freesurfer/trctrain' \
    --exclude='freesurfer/subjects/fsaverage_sym' \
    --exclude='freesurfer/subjects/fsaverage3' \
    --exclude='freesurfer/subjects/fsaverage4' \
    --exclude='freesurfer/subjects/fsaverage5' \
    --exclude='freesurfer/subjects/fsaverage6' \
    --exclude='freesurfer/subjects/cvs_avg35' \
    --exclude='freesurfer/subjects/cvs_avg35_inMNI152' \
    --exclude='freesurfer/subjects/bert' \
    --exclude='freesurfer/subjects/V1_average' \
    --exclude='freesurfer/average/mult-comp-cor' \
    --exclude='freesurfer/lib/cuda' \
    --exclude='freesurfer/lib/qt'   \
    -zxvf freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz

COPY license.txt /opt/freesurfer/

RUN apt-get update
RUN apt-get install -y python3
...

I pulled ubuntu first:
docker pull ubuntu:16.04

Then build:
docker build -t freesurfer:6.0.0 .

And get this error:

Sending build context to Docker daemon  4.918GB
Step 1/48 : FROM ubuntu:16.04
 ---> 5e13f8dd4c1a
Step 2/48 : RUN tar --no-same-owner -C /opt     --exclude='freesurfer/trctrain'     --exclude='freesurfer/subjects/fsaverage_sym'     --exclude='freesurfer/subjects/fsaverage3'     --exclude='freesurfer/subjects/fsaverage4'     --exclude='freesurfer/subjects/fsaverage5'     --exclude='freesurfer/subjects/fsaverage6'     --exclude='freesurfer/subjects/cvs_avg35'     --exclude='freesurfer/subjects/cvs_avg35_inMNI152'     --exclude='freesurfer/subjects/bert'     --exclude='freesurfer/subjects/V1_average'     --exclude='freesurfer/average/mult-comp-cor'     --exclude='freesurfer/lib/cuda'     --exclude='freesurfer/lib/qt'       -zxvf freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz
 ---> Running in 6428bc61f682
tar (child): freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
The command '/bin/sh -c tar --no-same-owner -C /opt     --exclude='freesurfer/trctrain'     --exclude='freesurfer/subjects/fsaverage_sym'     --exclude='freesurfer/subjects/fsaverage3'     --exclude='freesurfer/subjects/fsaverage4'     --exclude='freesurfer/subjects/fsaverage5'     --exclude='freesurfer/subjects/fsaverage6'     --exclude='freesurfer/subjects/cvs_avg35'     --exclude='freesurfer/subjects/cvs_avg35_inMNI152'     --exclude='freesurfer/subjects/bert'     --exclude='freesurfer/subjects/V1_average'     --exclude='freesurfer/average/mult-comp-cor'     --exclude='freesurfer/lib/cuda'     --exclude='freesurfer/lib/qt'       -zxvf freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz' returned a non-zero code: 2
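The tar error happens because RUN executes inside the image filesystem, which does not yet contain the tarball from the build context; it has to be copied in first. A sketch of a fix, assuming the tarball sits next to the Dockerfile:

```dockerfile
FROM ubuntu:16.04

# Copy the tarball from the build context into the image before extracting.
COPY freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz /tmp/
RUN tar --no-same-owner -C /opt \
    -zxvf /tmp/freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz \
 && rm /tmp/freesurfer-Linux-centos6_x86_64-stable-pub-v6.0.0.tar.gz
```

(The --exclude lines from the original Dockerfile would go back into the tar command; they are elided here for brevity. ADD can also auto-extract local tarballs, but COPY plus an explicit tar keeps the exclude list.)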

Add a flag to ignore specific runs for some subjects

Your idea

I have a couple of subjects with several runs of T1w or FLAIR, and I naively thought that .bidsignore was an (okay) place to indicate which runs to ignore. Little did I know, .bidsignore is meant for the validator only and doesn't do anything here (#131). Unless I missed something in the BIDS-App documentation, there appears to be no simple way to ignore certain runs. The problem at the moment is that the app creates a robust template from all the runs available for a participant, and won't tell you unless you skim the logs.

The rogue way is to simply delete the undesired scans, but I hope there is a way to deal with this problem without deleting files.

Would a --bids-filter-file à la smriprep #104 solve this problem? I have never used one, so I don't know. Maybe a simple hack to ignore specific runs for some subjects already exists?

Am I right in thinking that the most straightforward approach would likely involve passing a file containing a list of scans to be ignored/removed when listing the T1w, T2, FLAIR, etc.?
Cheers!

Support for CUDA

Hi!

Thank you for the Dockerfile. I was wondering whether you have gotten FreeSurfer to work with CUDA in a docker image. It has been a nightmare for me so far, but the gain in computation time is promising.

Best,
Victor

[BUG] - docker tags are wrong in the README.md and run.py

What version of the bids app were you using?

Version 6 and Version 7 have the same problem.

Describe your problem in detail.

docker pull bids/freesurfer:6
docker pull bids/freesurfer:7

These two commands from the README.md won't work, as the tags have changed to another format such as 7-202309, 7.4.1-202309, etc.

The same applies to the warning message shown when v6 is used.

What command did you run?

See above.

Describe what you expected.

I expected docker/apptainer to pull the container but it didn't.

question about running in parallel

Hi all,

Thanks for the great app! I am trying to run a large number of participants, but I am having trouble running in parallel. I am running it on an HPC with Singularity. I have tried specifying multiple participants with the participant label and changing the number of CPUs, but it is not running in parallel. Thanks for your help!

Best,
Alex
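Not an existing app feature, just a common HPC pattern: BIDS-Apps usually parallelize across subjects at the scheduler level, i.e. one single-subject container run per job. The image name and paths below are placeholders:

```shell
# Emit one single-subject command per participant; in practice each line
# would be submitted as its own scheduler job (sbatch/qsub) rather than
# echoed. Image name and bind paths are illustrative only.
n=0
for sub in 01 02 03; do
  echo "singularity run --cleanenv my_image.simg /bids /out participant --participant_label ${sub}"
  n=$((n+1))
done
```

The --n_cpus option only affects within-subject processing; cross-subject parallelism comes from launching multiple jobs.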

Introduce automated image builds

While we have tests running, we could/should also include automated builds through the DockerHub-GitHub integration, following specific build rules for e.g. latest vs. specific versions. WDYT @Shotgunosine?

fix deploy dependencies

[screenshot of the failing deploy dependencies]

It would also be good to deploy the head of the master branch on every push, but with an "unstable" tag, as is done for most other BIDS-Apps.

License not being recognized...

I was trying to test drive the BIDS freesurfer app, but couldn't get my license to be recognized.

My license exists as a file and is the new version (I'm using it for fmriprep... so I know it works/is valid).
jamielh@pfc:~/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives$ ls /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/license.txt
/home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/license.txt

But this syntax:
docker run -ti --rm -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1:/bids_dataset:ro -v /home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/freesurfer:/outputs bids/freesurfer /bids_dataset /outputs participant --participant_label NDARAA075AMK --n_cpus 2 --stages autorecon1 --steps cross-sectional --skip_bids_validator --license_file "/home/jamielh/Volumes/Hanson/NKI_HealthyBrainNetwork/SI/R1/derivatives/license.txt"

Leads to this error output:
Traceback (most recent call last):
  File "/run.py", line 167, in <module>
    raise Exception("Provided license file does not exist")
Exception: Provided license file does not exist

What am I doing wrong with that command line call? I tried with and without quotation marks, etc., but no luck. Thoughts?

longitudinal tests

At the moment longitudinal test cases cannot be run, as they take more than two hours.
Different handling of already-processed data, or pass-through parameters (#18), might shorten the run time.

T2 and T2 failure modes with multiple input

If there are multiple T2 or FLAIR images in the directory structure, the T2pial or FLAIRpial stages do not seem to be run.

Additionally, if multiple T1s are provided but at different resolutions, FreeSurfer crashes.

I'm thinking of adding a switch to select the BIDS rec-xyz tags to designate a particular image to use, and/or users could generate their own averages and assign them a rec-avg tag,

i.e., labeling files as sub-003_ses-01_acq-mprage_rec-forFS_run-01_T1w.nii.gz
or
sub-003_ses-01_acq-mprage_rec-avg_T1w.nii.gz

Bypass bids_validator

Even at the participant level, the bids_validator does not allow FreeSurfer to run if there are errors even in 'func' files, such as '(code: 13 - SLICE_TIMING_NOT_DEFINED)'.

Is there any way to bypass this at the participant level?

Group tables

How about renaming group -> group1, and adding group-level stats tables (aparcstats2table, asegstats2table) as group2? I could work on that.

-FLAIR and -FLAIRpial options should be added

Presently, if T2s are present, they are automatically added in (run.py lines 102-107 and 154-158). This should be made optional (what if the T2 is odd, or not all subjects in a study have T2s?).

Additionally, this section should be replicated for -FLAIR and -FLAIRpial, scraping files with _FLAIR per the BIDS spec.
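A hypothetical sketch of the FLAIR scraping, mirroring the T2 section but gated behind an opt-in flag (the function name and flag are assumptions; filenames follow BIDS *_FLAIR naming):

```python
# Hypothetical helper: collect a subject's FLAIR images from a BIDS tree,
# opt-in only, since not every study has FLAIRs for all subjects.
import glob
import os

def find_flairs(bids_dir, subject, use_flair=False):
    if not use_flair:
        return []
    pattern = os.path.join(bids_dir, subject, "**", f"{subject}_*FLAIR.nii*")
    return sorted(glob.glob(pattern, recursive=True))
```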

Docker build currently failing

Looks like a build on DockerHub with the current Dockerfile is failing with the following error:

Running setup.py (path:/tmp/pip_build_root/pandas/setup.py) egg_info for package pandas
Traceback (most recent call last):
  File "<string>", line 17, in <module>
  File "/tmp/pip_build_root/pandas/setup.py", line 840, in <module>
    **setuptools_kwargs
  File "/usr/lib/python3.4/distutils/core.py", line 108, in setup
    _setup_distribution = dist = klass(attrs)
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 239, in __init__
    self.fetch_build_eggs(attrs.pop('setup_requires'))
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 264, in fetch_build_eggs
    replace_conflicting=True
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 620, in resolve
    dist = best[req.key] = env.best_match(req, ws, installer)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 858, in best_match
    return self.obtain(req, installer) # try and download/install
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 870, in obtain
    return installer(requirement)
  File "/usr/lib/python3/dist-packages/setuptools/dist.py", line 314, in fetch_build_egg
    return cmd.easy_install(req)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 616, in easy_install
    return self.install_item(spec, dist.location, tmpdir, deps)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 646, in install_item
    dists = self.install_eggs(spec, download, tmpdir)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 834, in install_eggs
    return self.build_and_install(setup_script, setup_base)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1040, in build_and_install
    self.run_setup(setup_script, setup_base, args)
  File "/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 1025, in run_setup
    run_setup(setup_script, args)
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 50, in run_setup
    lambda: execfile(
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 100, in run
    return func()
  File "/usr/lib/python3/dist-packages/setuptools/sandbox.py", line 52, in <lambda>
    {'__file__':setup_script, '__name__':'__main__'}
  File "/usr/lib/python3/dist-packages/setuptools/compat.py", line 78, in execfile
    exec(compile(source, fn, 'exec'), globs, locs)
  File "setup.py", line 31, in <module>
    return sys.platform == "darwin"
RuntimeError: Python version >= 3.5 required.
Complete output from command python setup.py egg_info:
(identical traceback repeated by pip, omitted)
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/pandas
Storing debug log for failure in /root/.pip/pip.log
Removing intermediate container 14f102f30eae
The command '/bin/sh -c pip3 install nibabel pandas' returned a non-zero code: 1

Adding the -notal-check flag

Hi,
I need to skip the automatic failure detection of the Talairach alignment for a few subjects (the -notal-check flag). Perhaps I missed something in the documentation, but I couldn't find a way to do it. Is there a way to add flags to the execution? Thanks.

Step decomposition in longitudinal studies

Hello,
In longitudinal studies we do not have the possibility to decompose the execution into steps, sessions, and stages. This can make the execution take a very long time, which is a problem if we are running on a cluster with jobs limited to 24 hours.
It would be very nice if you would let me make a pull request to allow this decomposition.
