
nipy / mindboggle


Automated anatomical brain label/shape analysis software (+ website)

Home Page: http://mindboggle.info

License: Other

Python 87.48% Shell 0.74% CMake 0.34% C++ 10.62% Dockerfile 0.82%
brain-imaging image-analysis image-processing surfaces mesh-processing shape-descriptor shape-analysis python python3

mindboggle's Introduction

Software

Mindboggle is an open source brain morphometry platform that takes in preprocessed T1-weighted MRI data and outputs volume, surface, and tabular data containing label, feature, and shape information for further analysis. Mindboggle can be run on the command line as "mindboggle" and is also distributed as a cross-platform Docker container for convenience and reproducibility of results. The software runs on Linux and is written in Python 3 and Python-wrapped C++ code called within a Nipype pipeline framework. We have tested the software most extensively with Python 3.5.1 on Ubuntu Linux 14.04.


_Reference

A Klein, SS Ghosh, FS Bao, J Giard, Y Hame, E Stavsky, N Lee, B Rossa, M Reuter, EC Neto, A Keshavan. 2017. Mindboggling morphometry of human brains. PLoS Computational Biology 13(3): e1005350. doi:10.1371/journal.pcbi.1005350

_Help

General questions about Mindboggle, or difficulty getting started? Please search for relevant mindboggle posts on NeuroStars, or post your own message with the tag "mindboggle".

Found a bug, big or small? Please submit an issue on GitHub.

_Installation

We recommend installing Mindboggle and its dependencies as a cross-platform Docker container for greater convenience and reproducibility of results. All the examples below assume you are using this Docker container, with the path /home/jovyan/work/ pointing to your host machine. (Alternatively, one can create a Singularity image.)

1. Install and run Docker on your (macOS, Linux, or Windows) host machine.

2. Download the Mindboggle Docker container (copy/paste the following in a terminal window):

docker pull nipy/mindboggle

Note 1: This image contains FreeSurfer, ANTs, and Mindboggle, so it is currently over 6 GB.

Note 2: You may need to increase the memory allocated to Docker to at least 5 GB; for example, Docker for Mac is set to use 2 GB of runtime memory by default.

3. Recommended: download sample data. To try out the mindboggle examples below, download and unzip the directory of example input data mindboggle_input_example.zip (455 MB). For example MRI data to preprocess with FreeSurfer and ANTs software, download and unzip example_mri_data.zip (29 MB).

4. Recommended: set environment variables for clarity in the commands below (modify accordingly, except for DOCK -- careful, this step is tricky!):

HOST=/Users/binarybottle  # path on your local host machine to access/save data
DOCK=/home/jovyan/work  # path to HOST from Docker container (DO NOT CHANGE)
IMAGE=$DOCK/example_mri_data/T1.nii.gz  # brain image in $HOST to process
ID=arno  # ID for brain image
OUT=$DOCK/mindboggle123_output # output path ('--out $OUT' below is optional)

_Tutorial

To run the Mindboggle jupyter notebook tutorial, first install the Mindboggle Docker container (above) and run the notebook in a web browser as follows (replacing $HOST with the absolute path where you want to access/save data):

docker run --rm -ti -v $HOST:/home/jovyan/work -p 8888:8888 nipy/mindboggle jupyter notebook /opt/mindboggle/docs/mindboggle_tutorial.ipynb --ip=0.0.0.0 --allow-root

In the output on the command line you'll see something like:

[I 20:47:38.209 NotebookApp] The Jupyter Notebook is running at:
[I 20:47:38.210 NotebookApp] http://(057a72e00d63 or 127.0.0.1):8888/?token=62853787e0d6e180856eb22a51609b25e

You would then copy and paste the corresponding address into your web browser (in this case, http://127.0.0.1:8888/?token=62853787e0d6e180856eb22a51609b25e), and click on "mindboggle_tutorial.ipynb".

_Run one command

The Mindboggle Docker container can be run as a single command to process a T1-weighted MR brain image through FreeSurfer, ANTs, and Mindboggle. Skip to the next section if you wish to run recon-all, antsCorticalThickness.sh, and mindboggle separately:

docker run --rm -ti -v $HOST:$DOCK nipy/mindboggle mindboggle123 $IMAGE --id $ID

Outputs are stored in $DOCK/mindboggle123_output/ by default, but you can set a different output path with --out $OUT.

_Run separate commands

If finer control is needed over the software in the Docker container, the following instructions outline how to run each command separately. Mindboggle currently takes output from FreeSurfer and, optionally, from ANTs. FreeSurfer version 6 or higher is recommended, because by default it uses Mindboggle's DKT-100 surface-based atlas to generate corresponding labels on the cortical surfaces and in the cortical and non-cortical volumes (v5.3 generates these surface labels by default; older versions require "-gcs DKTatlas40.gcs" to generate them).

1. Enter the Docker container's bash shell to run the recon-all, antsCorticalThickness.sh, and mindboggle commands:

docker run --rm -ti -v $HOST:$DOCK -p 5000:5000 nipy/mindboggle

2. Recommended: reset the environment variables as above within the Docker container:

DOCK=/home/jovyan/work  # path to HOST from Docker container
IMAGE=$DOCK/example_mri_data/T1.nii.gz  # input image on HOST
ID=arno  # ID for brain image

3. FreeSurfer generates labeled cortical surfaces, and labeled cortical and noncortical volumes. Run recon-all on a T1-weighted IMAGE file (and optionally a T2-weighted image), and set the output ID name as well as the $FREESURFER_OUT output directory:

FREESURFER_OUT=$DOCK/freesurfer_subjects

recon-all -all -i $IMAGE -s $ID -sd $FREESURFER_OUT

4. ANTs provides brain volume extraction, segmentation, and registration-based labeling. antsCorticalThickness.sh generates transforms and segmentation files used by Mindboggle, and is run on the same IMAGE file and ID as above, with $ANTS_OUT output directory. TEMPLATE points to the OASIS-30_Atropos_template folder already installed in the Docker container (backslashes split the command for readability):

ANTS_OUT=$DOCK/ants_subjects
TEMPLATE=/opt/data/OASIS-30_Atropos_template

antsCorticalThickness.sh -d 3 -a $IMAGE -o $ANTS_OUT/$ID/ants \
  -e $TEMPLATE/T_template0.nii.gz \
  -t $TEMPLATE/T_template0_BrainCerebellum.nii.gz \
  -m $TEMPLATE/T_template0_BrainCerebellumProbabilityMask.nii.gz \
  -f $TEMPLATE/T_template0_BrainCerebellumExtractionMask.nii.gz \
  -p $TEMPLATE/Priors2/priors%d.nii.gz \
  -u 0

5. Mindboggle can be run on data preprocessed by recon-all and antsCorticalThickness.sh as above by setting:

FREESURFER_SUBJECT=$FREESURFER_OUT/$ID
ANTS_SUBJECT=$ANTS_OUT/$ID
OUT=$DOCK/mindboggled  # output folder

Or it can be run on the mindboggle_input_example preprocessed data by setting:

EXAMPLE=$DOCK/mindboggle_input_example
FREESURFER_SUBJECT=$EXAMPLE/freesurfer/subjects/arno
ANTS_SUBJECT=$EXAMPLE/ants/subjects/arno
OUT=$DOCK/mindboggled  # output folder

Example Mindboggle commands:

To learn about Mindboggle's command options, type this in a terminal window:

mindboggle -h

Example 1: Run Mindboggle on data processed by FreeSurfer but not ANTs:

mindboggle $FREESURFER_SUBJECT --out $OUT

Example 2: Same as Example 1 with output to visualize surface data with roygbiv:

mindboggle $FREESURFER_SUBJECT --out $OUT --roygbiv

Example 3: Take advantage of ANTs output as well ("\" splits for readability):

mindboggle $FREESURFER_SUBJECT --out $OUT --roygbiv \
    --ants $ANTS_SUBJECT/antsBrainSegmentation.nii.gz

Example 4: Generate only volume (no surface) labels and shapes:

mindboggle $FREESURFER_SUBJECT --out $OUT \
    --ants $ANTS_SUBJECT/antsBrainSegmentation.nii.gz \
    --no_surfaces

_Visualize output

To visualize Mindboggle output with roygbiv, start the Docker image (#1 above), then run roygbiv on an output directory:

roygbiv $OUT/$ID

and open a browser to localhost:5000.

Currently roygbiv only shows summarized data, but one of our goals is to support by-vertex visualizations (in the meantime, try Paraview for those).

_Appendix: processing

The following steps are performed by Mindboggle (with links to code on GitHub):

  1. Create hybrid gray/white segmentation from FreeSurfer and ANTs output (combine_2labels_in_2volumes).
  2. Fill hybrid segmentation with FreeSurfer- or ANTs-registered labels.
  3. Compute volume shape measures for each labeled region:

  4. Compute surface shape measures for every cortical mesh vertex:

  5. Extract cortical surface features:

  6. For each cortical surface label/sulcus, compute:

  7. Compute statistics (stats_per_label in compute.py) for each shape measure in #4 for each label/feature:

    • median
    • median absolute deviation
    • mean
    • standard deviation
    • skew
    • kurtosis
    • lower quartile
    • upper quartile
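As a rough sketch of how these statistics can be computed (the production code is stats_per_label in compute.py; this standalone NumPy version uses population formulas for skew and excess kurtosis, whose conventions may differ from Mindboggle's):

```python
import numpy as np

def shape_statistics(values):
    """Summary statistics like those listed above, per label/feature.

    Illustrative sketch only; not Mindboggle's stats_per_label().
    """
    v = np.asarray(values, dtype=float)
    mean = v.mean()
    std = v.std()                      # population standard deviation
    z = (v - mean) / std               # standardized values
    return {
        "median": np.median(v),
        "median absolute deviation": np.median(np.abs(v - np.median(v))),
        "mean": mean,
        "standard deviation": std,
        "skew": np.mean(z ** 3),            # third standardized moment
        "kurtosis": np.mean(z ** 4) - 3.0,  # excess kurtosis
        "lower quartile": np.percentile(v, 25),
        "upper quartile": np.percentile(v, 75),
    }

stats = shape_statistics([1.0, 2.0, 2.0, 3.0, 10.0])
```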

_Appendix: output

Example output data can be found on Mindboggle's examples site on osf.io. By default, output files are saved in $HOME/mindboggled/SUBJECT, where $HOME is the home directory and SUBJECT is a name identifying the scanned brain. Volume files are in NIfTI format, surface meshes are in VTK format, and tables are comma-delimited. Each file contains integers that correspond to anatomical labels or features (0-24 for sulci). All output data are in the original subject's space. The following include outputs from most, but not all, optional arguments.

Folder     Contents                                              Format
labels/    number-labeled surfaces and volumes                   .vtk, .nii.gz
features/  surfaces with features: sulci, fundi                  .vtk
shapes/    surfaces with shape measures (per vertex)             .vtk
tables/    tables of shape measures (per label/feature/vertex)   .csv

mindboggled/$SUBJECT/

  labels/
    freesurfer_wmparc_labels_in_hybrid_graywhite.nii.gz: hybrid segmentation filled with FS labels
    ants_labels_in_hybrid_graywhite.nii.gz: hybrid segmentation filled with ANTs + FS cerebellar labels
    [left,right]_cortical_surface/
      freesurfer_cortex_labels.vtk: DKT cortical surface labels

  features/[left,right]_cortical_surface/
    folds.vtk: (unidentified) depth-based folds
    sulci.vtk: sulci defined by DKT label pairs in depth-based folds
    fundus_per_sulcus.vtk: fundus curve per sulcus -- UNDER EVALUATION --
    cortex_in_MNI152_space.vtk: cortical surfaces aligned to an MNI152 template

  shapes/[left,right]_cortical_surface/
    area.vtk: per-vertex surface area
    mean_curvature.vtk: per-vertex mean curvature
    geodesic_depth.vtk: per-vertex geodesic depth
    travel_depth.vtk: per-vertex travel depth
    freesurfer_curvature.vtk: FS curvature files converted to VTK
    freesurfer_sulc.vtk: FS sulc (convexity) files converted to VTK
    freesurfer_thickness.vtk: FS thickness files converted to VTK

  tables/
    volume_per_freesurfer_label.csv: volume per FS label
    volumes_per_ants_label.csv: volume per ANTs label
    [left,right]_cortical_surface/
      label_shapes.csv: per-label surface shape statistics
      sulcus_shapes.csv: per-sulcus surface shape statistics
      fundus_shapes.csv: per-fundus surface shape statistics -- UNDER EVALUATION --
      vertices.csv: per-vertex surface shape statistics

mindboggle's People

Contributors

akeshavan, binarybottle, brianthelion, djarecka, eliezerstavsky, forrestbao, gdevenyi, halcanary, joachimgiard, katrinleinweber, lorensen, m-reuter, mgxd, mih, nicholsn, ohinds, peerherholz, razlighi, satra, shnizzedy, skrish13, tgbugs, yhame


mindboggle's Issues

Fundus extraction by skeletonizing and smoothing

Arno Klein and Yrjo Hame (5/1/2013):

Arno:
i revised the skeletonize() function to try to extract fundi from your likelihood values, and it looks like it's doing a pretty good job. at each iteration it now removes all (non-region-boundary) endpoints, then removes only a fraction of threshold edge vertices, which ensures that high-value vertices are retained.

Yrjo:
Looks good, it seems to me that the skeletonize finds better paths overall for the 'irregular' folds, whereas the connect_points() maybe produces smoother results? Could we try to combine the two approaches?

The skeletonize could be run first as a prior step. Then, we could take a 'band' around the skeletonization result, and run the connect_points() as usual, but ignoring regions outside the band. This would produce the same overall path, but increase smoothness of the fundi. What do you think?

Arno:
i wrote functions to dilate the eroded skeleton and tried running your hmmf code, but the results are still not smooth.
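The "band" proposed above could be computed as the set of mesh vertices within a fixed number of hops of the skeleton, e.g. with a breadth-first search over vertex neighbor lists (a hypothetical helper, not an existing Mindboggle function):

```python
from collections import deque

def band_around_skeleton(skeleton_vertices, neighbor_lists, width):
    """Vertices within `width` mesh hops of the skeleton vertices.

    neighbor_lists[i] lists the neighbors of vertex i. The returned set
    could restrict connect_points() to a band around the skeleton.
    """
    band = set(skeleton_vertices)
    frontier = deque((v, 0) for v in skeleton_vertices)
    while frontier:
        vertex, hops = frontier.popleft()
        if hops == width:
            continue
        for neighbor in neighbor_lists[vertex]:
            if neighbor not in band:
                band.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return band

# Toy path mesh 0-1-2-3-4 with skeleton vertex 2 and a 1-hop band:
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
band = band_around_skeleton([2], neighbors, width=1)
```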

antsApplyTransforms inputs

I have added antsRegistration to the pipeline, but am having trouble with antsApplyTransforms:

https://github.com/binarybottle/mindboggle/blob/master/mindboggle/mindboggler.py#L1767

The crash file ends with:

Node inputs:

args =
default_value = 0.0
dimension = 3
environ = {}
ignore_exception = False
input_image =
input_image_type =
interpolation = NearestNeighbor
invert_transform_flags =
num_threads = -1
output_image = labels.nii.gz
print_out_composite_warp_file =
reference_image =
terminal_output = stream
transforms =

Traceback:
Traceback (most recent call last):
File "/Users/arno/anaconda/lib/python2.7/site-packages/nipype/pipeline/plugins/base.py", line 349, in _send_procs_to_workers
jobid].hash_exists()
File "/Users/arno/anaconda/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1210, in hash_exists
hashed_inputs, hashvalue = self._get_hashval()
File "/Users/arno/anaconda/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1345, in _get_hashval
self._get_inputs()
File "/Users/arno/anaconda/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1405, in _get_inputs
self.set_input(key, deepcopy(output_value))
File "/Users/arno/anaconda/lib/python2.7/site-packages/nipype/pipeline/engine.py", line 1189, in set_input
setattr(self.inputs, parameter, deepcopy(val))
File "/Users/arno/anaconda/lib/python2.7/site-packages/nipype/interfaces/traits_extension.py", line 74, in validate
validated_value = super( BaseFile, self ).validate( object, name, value )
File "/Users/arno/anaconda/lib/python2.7/site-packages/traits/trait_types.py", line 320, in validate
self.error( object, name, value )
File "/Users/arno/anaconda/lib/python2.7/site-packages/traits/trait_handlers.py", line 170, in error
value )
TraitError: The 'input_image' trait of an ApplyTransformsInputSpec instance must be an existing file name, but a value of None <type 'NoneType'> was specified.
Error setting node input:
Node: antsApplyTransform
input: input_image
results_file: /Users/arno/mindboggle_cache/workspace/Mindboggle/Volume_labels/_atlas_OASIS-TRT-20_DKT31_CMA_jointfusion_labels_in_MNI152.nii.gz/Retrieve_volume_atlas/result_Retrieve_volume_atlas.pklz
value: None

provenance crashing

If I don't comment out the following line, mindboggle crashes:

config.enable_provenance()

Fundus vertices sometimes have more than two neighbors that are also fundus line vertices

Seen at b46ffe3, which I know is a couple weeks old. Let me know if there have been relevant changes since then and I'll try again.

I ran the mindboggle pipeline on Afterthought-1 (in /mnt/nfs-share/Mindboggle/subjects). The resulting left hemisphere fundi.vtk file indicates that the following vertices are all value '4':
75171, 75181, 75182

These vertices are also part of the same face.
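A check for the reported condition (fundus vertices with more than two fundus-vertex neighbors) might look like the following; this is an illustrative diagnostic, not code from the pipeline:

```python
def fundus_vertices_with_extra_neighbors(fundus_vertices, neighbor_lists):
    """Return fundus vertices that have more than two neighbors
    which are also fundus vertices (the situation reported above)."""
    fundus = set(fundus_vertices)
    return [v for v in sorted(fundus)
            if sum(1 for n in neighbor_lists[v] if n in fundus) > 2]

# Toy mesh: vertex 0 has three fundus neighbors (1, 2, 3).
neighbors = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
bad = fundus_vertices_with_extra_neighbors([0, 1, 2, 3], neighbors)
```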

Decimating surface mesh patches

To speed up the Zernike moments (and perhaps other) calculations, we need simpler surface mesh patches than the thousands of nodes in our label and sulcus meshes. One way is to decimate the meshes (without resorting to random deletion of vertices). Smoothing seems like a reasonable option. Is there Apache, BSD, or MIT-licensed code we could use, or how might we best implement this?

--atlases only resulting in one DataSink output

i have updated the mindboggle software, the mindboggle outputs in dropbox, and the mindboggle.info/users/README.html website with the new outputs. i am happy with the output naming except for one thing: when i run mindboggle with --atlases, where the list of atlases is appended to mindboggle's default atlas, this should result in an iterable and corresponding folders distinguishing outputs from each atlas, but doesn't. only one atlas's results make it from the working directory to the output directory.

(see lines 273 and 1554 in mindboggle)

spectrum_per_label() results are not consistent with other laplace_beltrami.py functions

for label 22, rather than getting:

([[6.3469513010430304e-18,
   0.0005178862383467463,
   0.0017434911095630772,
   0.003667561767487686,
   0.005429017880363784,
   0.006309346984678924]],
 [22])

as in the other functions, i'm getting:

eigsh() failed. Now try lobpcg.
Warning: lobpcg can produce different results from Reuter (2006) shapeDNA-tria software.
Compute linear FEM Laplace-Beltrami spectrum
Out[4]:
([[9.9512415855027014e-06,
0.037430186764067855,
0.040538097886371673,
0.044689929817993072,
0.051117433202677504,
0.060568241779909528]],
[22])

Zernike KeyError

Brian -- I don't remember this happening before, but most of the zernike_moments_per_label() calls are failing for labeled regions, and some are failing for sulcus regions. For example, when I try to run the function on the following vtk file, I get a "KeyError":

>>> vtk_file = '/home/arno/mindboggle_working/OASIS-TRT-20-3/Mindboggle/Surface_features/_hemi_lh/Sulci/sulci.vtk'

>>> zernike_moments_per_label(vtk_file, order=3, exclude_labels=[-1,0], scale_input=True, decimate_fraction=0, decimate_smooth=25)

Load "sulci" scalars from sulci.vtk
  5504 vertices for label 1
Reduced 131011 to 10516 triangular faces
Trinomial
D_CV_orig
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-7-869574256d1a> in <module>()
     40                                           order, scale_input,
     41                                           decimate_fraction,
---> 42                                           decimate_smooth)
     43 
     44             #-------------------------------------------------------------

/home/arno/anaconda/lib/python2.7/site-packages/mindboggle/shapes/zernike/zernike.pyc in zernike_moments(points,decimate_fraction, decimate_smooth)
    210     # Geometric moments:
    211     #-------------------------------------------------------------------------
--> 212     G = pl.geometric_moments_orig(points, faces, order, n_faces, n_V)
    213 
    214     #

/home/arno/anaconda/lib/python2.7/site-packages/mindboggle/shapes/zernike/multiproc/geometric_moments_orig_m.pycfacets, num_vertices)
     97 
     98     for facet in cfg.rng(1,num_facets) :
---> 99         C1[facet-1,:,:,:] = C_temp_lookup[facet][0,:,:,:] #!                                #    C1(face
    100         Vol[facet-1,0] = Vol_temp_lookup[facet] #!                                          #    Vol(fac
    101         #if cfg.mod(facet,1000) == 0 :                                         #    if mod(facet,1000) =

KeyError: 10485

How many surface labels?

I left the following message on NeuroStars (http://neurostars.org/p/2680/):

"I am preparing to release a new version of the Mindboggle software (http://mindboggle.info) and need to make a decision about output labels and am requesting feedback from the community.

As is described in Mindboggle's labeling FAQ, the volume outputs (labels, volumes, thicknesses) use the 31 cortical labels per hemisphere in the DKT31 protocol + MGH (wmparc.mgz) noncortical labels.

Currently, the surface outputs use 25 cortical labels per hemisphere, which take the 31 cortical labels above and combine the cingulate regions together, the middle frontal regions together, and the inferior frontal regions together (see "Combined / eliminated regions" in the above FAQ for the rationale). These 25 labels are intended to be consistent with the definitions of the 25 sulci and fundi generated by Mindboggle, but are inconsistent with / aggregates of the DKT31 volume labels.

I will continue to generate 25 sulci and 25 fundi per hemisphere, but do you think I should output 31 or 25 surface labels (and their shape measures) per hemisphere? I could generate both, but feel that this may confuse people. What do you think?"

mindboggle.labels.labels.extract_boundaries() ignores vertices adjacent to more than 2 labels.

The condition that

len(set(x)) == 2

when building boundary_indices ensures that vertices on the boundary of more than 2 labels will not be labelled as on a boundary. This doesn't seem like desired behavior, and contradicts the documentation for the function, which says: "Label boundaries are the set of all vertices whose neighbors do not share the same label."

One way to fix this would be to change the condition to

len(set(x)) >= 2

but I'm not sure if this would break assumptions in the rest of MB about each boundary_label_pair really being of length 2.
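A minimal sketch of the proposed fix, using the >= 2 condition over the labels in each vertex's neighborhood (illustrative only; the real function is mindboggle.labels.labels.extract_boundaries()):

```python
def label_boundary_vertices(labels, neighbor_lists):
    """Vertices whose neighborhood contains two or more distinct labels.

    With >= 2 (rather than == 2), vertices where three or more labels
    meet are also marked as boundary vertices.
    """
    boundary = []
    for vertex, label in enumerate(labels):
        neighborhood = {label} | {labels[n] for n in neighbor_lists[vertex]}
        if len(neighborhood) >= 2:
            boundary.append(vertex)
    return boundary

# Vertex 1 sits where labels 10, 20, and 30 meet; vertex 4 is interior.
labels = [10, 20, 30, 30, 30]
neighbors = [[1], [0, 2, 3], [1, 3], [1, 2, 4], [3]]
boundary = label_boundary_vertices(labels, neighbors)
```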

TraitError: Cannot set the undefined 'invert_transform_flags'

Traceback (most recent call last):
File "/usr/local/bin/mindboggle", line 1850, in
xfm.inputs.invert_transform_flags = warp_inverse_Booleans
traits.trait_errors.TraitError: Cannot set the undefined 'invert_transform_flags' attribute of a 'ApplyTransformsInputSpec' object.

This happens on my very first run with: mindboggle template --ants_segments template/outputSegmentation.nii.gz --thickness --no_surfaces -n 8

antsCorticalThickness.sh as input?

My colleague Mohammad and I found that we get a better segmentation (and thickness measure) by combining FreeSurfer (FS) and ANTs (Atropos) outputs (see below and the new functions combine_whites_over_grays() and thickinthehead()). Given that segmentation will affect mindboggle labeling, I am considering scrapping the nipype antsRegistration call in mindboggle and simply requiring recon-all & antsCorticalThickness.sh outputs as inputs to mindboggle (and perhaps give the option to call both from within mindboggle). what do you think about this?

SUMMARY OF FINDINGS:

Segmentation:
ANTs does a better job at capturing gray matter than FS, including regions we care about for the EMBARC study (medial orbitofrontal), but sometimes extends gray matter into extra-brain tissue. FS does a better job at capturing white matter than ANTs, when its surfaces don't stop short. We can obtain best segmentation results by taking the union of FS and ANTs white matter and the union of FS and ANTs gray matter, replacing gray with white where they intersect. The prospect of painting some white voxels and erasing some gray voxels with ITK-Snap is far more attractive than trying to correct surface meshes with the repositioning tool in Freeview.
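The combination rule described above (union of the white matter masks, union of the gray matter masks, white wins where they overlap) could be sketched with boolean masks as follows; this is only an illustration of the idea behind combine_whites_over_grays(), not the function itself:

```python
import numpy as np

def hybrid_graywhite(fs_gray, fs_white, ants_gray, ants_white):
    """Combine FS and ANTs segmentations: 0=background, 1=gray, 2=white."""
    white = fs_white | ants_white
    gray = (fs_gray | ants_gray) & ~white   # white wins where they intersect
    out = np.zeros(white.shape, dtype=np.uint8)
    out[gray] = 1
    out[white] = 2
    return out

# Four example voxels (boolean masks per tissue class and method):
fs_gray = np.array([True, False, True, False])
fs_white = np.array([False, True, False, False])
ants_gray = np.array([True, True, False, False])
ants_white = np.array([False, False, True, False])
seg = hybrid_graywhite(fs_gray, fs_white, ants_gray, ants_white)
```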

FS vs. ANTs:
I tested scan-rescan reliability of thickness measures for all 62 labels in all 40 EMBARC healthy controls. FS has very high reliability across scans for a given subject, as well as across subjects per label. ANTs has lower reliability. Replacing the segmentation that ANTs uses with the FS segmentation does not completely remedy this. We are currently analyzing how much our best hybrid segmentation approach (above) improves ANTs thickness reliability.

Simple check:
Since we are interested in average thickness values per labeled region, as a sanity check I wrote a very simple program which computes an embarrassingly simple thickness measure -- thickness is defined intuitively as volume / area (label volume divided by the average of the gray/white and gray/CSF border voxels after rescaling). Surprisingly, this simple program has a reliability comparable to FS, and may have a more accurate range of values (see below). If we try to do the same within FS, by dividing FS label volume by FS label area, we get a similar distribution, but some regions result in outliers (poles, entorhinal). FS values are also prone to surface failures, of course.
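Numerically, the simple measure amounts to the following (a sketch of the idea behind thickinthehead(); the actual voxel counting and rescaling are omitted, and the parameter names are made up):

```python
def simple_thickness(label_volume, graywhite_border_area, graycsf_border_area):
    """Thickness as volume / area: label volume divided by the average
    of the gray/white and gray/CSF border areas."""
    mean_border_area = 0.5 * (graywhite_border_area + graycsf_border_area)
    return label_volume / mean_border_area

# e.g. a 3000 mm^3 label with border areas of 1000 and 1400 mm^2:
thickness = simple_thickness(3000.0, 1000.0, 1400.0)
```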

Accuracy:
I read quite a few articles about cortical thickness measures, and the closest set of regions to ours accompanied by manual measurements of MRI thickness that I came across was in Kabani, 2001. If we consider the 16 regions that map to ours (i.e., disregard the cingulate), then for the 640 labels (16 x 40 subjects), FS-generated average thickness values are within Kabani's ranges for about 40% of the 640 labels, whereas my simple program is within Kabani's ranges for almost 90% of the 640 labels.

Example output

It would be great if example output were provided (e.g., from FreeSurfer's "Bert"), so I can see what the output would look like before setting up Mindboggle.

Content hashing for renamed working directory

When I rename the working directory from "/Users/arno/Desktop/working" to "/Users/arno/Desktop/working2" and try to rerun the command:

mindboggle OASIS-TRT-20-1 --ants_data ~/Data/antsCorticalThickness tmp -n 6 --freesurfer_data /desk/subjects --working /desk/working2 --out /desk/output2 --no_surfaces --thickness

I get an error:

"No such file or directory: '/Users/arno/Desktop/working/Mindboggle/_subject_OASIS-TRT-20-1/labels_mgh_to_nifti/wmparc.nii.gz"

related to the following line in the mindboggle file:

https://github.com/binarybottle/mindboggle/blob/master/mindboggle/mindboggle#L1771

Why is it looking in the old directory for this file?

Implementation of Koehl's method for calculating geometric moments

The current method for calculating Zernike moments depends on first calculating the geometric moments for the mesh. This is the slowest step in the algorithm, by far.

The current implementation depends on a method due to Pozo that is O(N^6). Koehl proposes a recursive method that is O(N^3).

How to combine surfaces?

I need to propagate surface labels through cortical volumes, but because mindboggle now combines ants and freesurfer segmentations and will soon accept modifications to surface labels, I don't have separate left/right cortical volume masks to fill with ants label propagation and can't use fs's label propagation. Should I create a separate workflow for each hemisphere, and combine the altered, labeled surfaces in a separate workflow prior to propagation?

mindboggle table output issues

  1. ants thickinthehead has no output (just the header row).
  2. frontomarginal sulcus is 0 for everybody
  3. the csv files are all strings (i.e. requires me to look and figure out what's a number, a label, and a list). if you are going to store numbers and lists, this should be json
  4. some of the csv files are not formatted consistently or properly, they have spaces with numbers

Clean up fundus/pits extraction

General

  • many of your functions don’t list and adequately explain all of your Parameters.
  • name consistently and according to PEP8 naming conventions!
  • replace all xrange with range (for forward compatibility with python3,
    and no need for the “0,” in range(0, blah))

libbasin.py

def fcNbrLst(FaceDB, Hemi):

you can replace this with mesh.py’s new find_adjacent_faces()

def vrtxNbrLst(VrtxNo, FaceDB, Hemi):

you can replace this with mesh.py’s find_neighbors() or find_neighbors_vertex()

def compnent(FaceDB, Basin, NbrLst, PathHeader):

i don’t understand what this function does.
rename Parameters consistently and according to PEP8 naming conventions:

  • faces, basin_faces, neighbor_list, path_header
  • (copy definitions from other functions)

rename function to something clearer:

  • find_basin_neighbors()?
  • find_basin_connected_faces()?

def judgeFace1(FaceID, FaceDB, CurvatureDB, Threshold = 0):

rename function to something clearer:

  • test_face_curvature()?
  • test_face_zero_order_curvature()?

rename Parameters consistently and according to PEP8 naming conventions:

  • face_index, faces, curvatures, threshold

def basin(FaceDB, CurvatureDB, Threshold = 0):
return Basin, Left

rename Parameters consistently and according to PEP8 naming conventions:

  • faces, curvatures, threshold
    

rename Returns:

  • basin_faces, nonbasin_faces
    

rename function to something clearer:

  • extract_basin_faces()?
    

def allTrue(List):

superfluous function -- remove

def dfsSeed(Visited, Basin):

too little to warrant a function; remove and include line of code where the function is called

def dfs(Seed, Basin, Visited, NbrLst, FaceDB):
return Visited, FcMbr, VrtxMbr

rename Parameters consistently and according to PEP8 naming conventions:

  • seed_vertex, basin_vertices?, visited_vertices?, neighbor_list

rename Returns:

  • visited_vertices?, face_member, vertex_member
    

rename function to something clearer:

  • basin_depth_first_search()?

the following appends vertices, not faces, so what is going on?:
FcMbr = [] # members that are faces of this connected component

def pmtx(Adj):

superfluous function -- remove

def all_same(items):

too little to warrant a function; remove and include line of code where the function is called

def univariate_pits(CurvDB, VrtxNbrLst, VrtxCmpnt, Thld):
return B, C, Child

rename Parameters consistently and according to PEP8 naming conventions:

  • curvatures, vertex_neighbor_list, ?, threshold
    

rename Returns to something clearer (i don’t know what they mean), and rename function to something clearer:

  • extract_univariate_pits()?
    

def clouchoux(MCurv, GCurv):

this function is called only within clouchoux_pits(), so remove this function and replace “if clouchoux(MCurv[i], GCurv[i]):” with “if (MCurv[i] > 0.2) and (GCurv[i] < 0):”, and include 0.2 as a Parameter.
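The Clouchoux test above can also be vectorized over all vertices with NumPy; this is an illustrative sketch, not the libbasin.py implementation:

```python
import numpy as np

def clouchoux_pit_indices(mean_curvatures, gaussian_curvatures,
                          mcurv_threshold=0.2):
    """Indices of vertices satisfying the Clouchoux pit criterion:
    mean curvature above a threshold and Gaussian curvature below zero."""
    mcurv = np.asarray(mean_curvatures)
    gcurv = np.asarray(gaussian_curvatures)
    return np.where((mcurv > mcurv_threshold) & (gcurv < 0))[0]

pits = clouchoux_pit_indices([0.5, 0.1, 0.3, 0.5],
                             [-1.0, -1.0, 0.5, -0.2])
```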

def clouchoux_pits(Vertexes, MCurv, GCurv):

rename Parameters consistently and according to PEP8 naming conventions:

  • vertices, mean_curvatures, gaussian_curvatures

rename Returns:

  • pits

rename function to something clearer:

  • extract_clouchoux_pits()?
    

def getBasin_and_Pits(Maps, Mesh, SulciVTK, PitsVTK, SulciThld = 0, PitsThld = 0, Quick=False, Clouchoux=False, SulciMap='depth'):

remove extraneous Parameters and code, rename Parameters consistently and according to PEP8 naming conventions, and rename function to something clearer:

  • extract_basins_pits()?
    

libfundi.py

All of the general comments for libbasin.py above apply.

Re: Prim and mst -- how do these differ from the x/min_span_tree.py code?

Minimum spanning tree is general-purpose enough that it should be in utils/paths.py

Best to remove the random from downsample() -- we want everything to be replicable.
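A general-purpose minimum spanning tree routine of the kind suggested for utils/paths.py could look like this Prim's-algorithm sketch (not Mindboggle's actual Prim/mst code):

```python
import heapq

def prim_mst(n_vertices, weighted_edges):
    """Prim's minimum spanning tree of an undirected weighted graph.

    weighted_edges: iterable of (u, v, weight) tuples over vertices
    0..n_vertices-1 (the graph is assumed connected).
    Returns a list of (u, v, weight) MST edges.
    """
    adjacency = {v: [] for v in range(n_vertices)}
    for u, v, w in weighted_edges:
        adjacency[u].append((w, u, v))
        adjacency[v].append((w, v, u))
    mst, visited = [], {0}
    heap = list(adjacency[0])
    heapq.heapify(heap)
    while heap and len(visited) < n_vertices:
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        for edge in adjacency[v]:
            if edge[2] not in visited:
                heapq.heappush(heap, edge)
    return mst

edges = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 4.0), (2, 3, 1.0)]
tree = prim_mst(4, edges)
```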

Mindboggle installation file

The setup_mindboggle.sh bash script sometimes works and sometimes does not because one or another thing would not install unless I rerun those particular lines of the script.

Zernike refactor

  1. Legacy code should be moved into the test suite.
  2. Test suite should be updated.
  3. File proliferation should be reduced or eliminated.
  4. Code should be PEP8 compliant.
  5. Remaining MATLAB idioms should be replaced with Numpy idioms.
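For point 5, a typical translation (toy snippet, not taken from the Zernike code) replaces repmat-plus-loop constructs with NumPy broadcasting:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
M = np.arange(6, dtype=float).reshape(2, 3)

# MATLAB idiom:   D = repmat(v, 2, 1) .* M
D_matlab_style = np.tile(v, (2, 1)) * M

# NumPy idiom: broadcasting makes the tiling implicit
D = v * M
```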

Anything else?

Standard space for data and measures

i have a question for all of you regarding the space in which our mindboggle data are analyzed and from which shape measures are generated.

thanks to oliver, we are now able to move our mindboggle vtk surface meshes from the original space the brains were acquired in to a "standard space" where the size and orientation of brains are taken into account for an easier comparison across individuals. an example of such a space is one of the MNI152 templates -- we have affine transforms to take our 101 brains into an MNI152 space. after transforming our brains to this space, however, not only do the average positions of our features change, but so do their surface areas and all other shape measures (i think an affine registration will even affect the lb spectra).

here is my question -- we are now generating shape tables for vtk meshes in the original space. if we wish to generate shape tables for meshes in MNI152 space as well, we should consider whether all of our processing, from depth, curvature and other shape measures to feature extraction, which will impact labeling, etc., should be performed on vtk meshes transformed to the MNI152 space. i see two options:

  1. run mindboggle on the original vtk meshes and report original shape measures + transformed coordinates
  2. run mindboggle on the MNI152-transformed meshes and report transformed shape measures and coordinates.
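For concreteness, applying an affine to mesh vertices (option 2) is a matrix product in homogeneous coordinates, and the scaling part is exactly what changes surface areas and the other shape measures. A sketch (the affine here is made up; the real one comes from registration to MNI152):

```python
import numpy as np

def transform_points(points, affine):
    """Apply a 4x4 affine to an (n, 3) array of vertex coordinates."""
    points = np.asarray(points, dtype=float)
    homogeneous = np.hstack([points, np.ones((len(points), 1))])  # (n, 4)
    return (homogeneous @ affine.T)[:, :3]

# illustrative affine: scale by 2, translate by (10, 0, 0);
# note that scaling coordinates by 2 multiplies surface areas by 4
affine = np.array([[2.0, 0.0, 0.0, 10.0],
                   [0.0, 2.0, 0.0, 0.0],
                   [0.0, 0.0, 2.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0]])
```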

Installation and dependencies

Oliver Hinds (8/9/2013):

I'm installing mindboggle in a virtualenv on the cluster, which is desirable for my dev testing. Seems like the installation instructions in INSTALL are a little stale.

In addition to nibabel, mayavi, and nipype, i had to pip install numpy and networkx.

However, even after this I get:
Traceback (most recent call last):
File "setup.py", line 27, in
from nisext.py3builder import build_py
ImportError: No module named nisext.py3builder

apt and pip know nothing of nisext. Seems like we shouldn't be requiring that users get things from github to install mindboggle? Am I doing something wrong?

Unit and regression testing

All,

I am now at a point in my efforts where I need to build unit tests. It would seem, though, that the mindboggle package does not implement a comprehensive testing framework. In issue #7 there is some peripheral discussion of this topic but I thought we should consolidate it in a new thread. How would we like to proceed w.r.t. testing?

Questions:
Q1) Which testing framework should we use?
Q2) How should we organize the test code?
Q3) How do we want to store test data?
(Please add others...)

My answers:
A1) I am partial to nose.
A2) See A3.
A3) Here I want to make a clear distinction between test data for regression and unit testing, and test data for the edification of mindboggle end-users. For the moment I'll refer to these as "devtest data" and "dist" (as in "distributed") data, respectively. I completely agree with @satra that dist data should be delivered in its own packages and should use the sys.prefix/share standard. However, devtest data is structured much differently than dist data. For example, devtest data for unit tests will be stored on a per-unit basis. This makes it mostly worthless to our naive end-users, and therefore it probably doesn't need to be packaged and delivered to them. My impulse is to either (a) have a separate repo for tests and devtest data or (b) put the tests in the mindboggle repo while having the testing framework download the devtest data from an FTP server.
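To make A1 concrete, a nose-style unit test is just a module-level function whose name starts with test_ (the function under test here is a toy stand-in, not real mindboggle code):

```python
# test_example.py -- nose collects any module-level function named test_*
def vector_norm(v):
    """Toy stand-in for a mindboggle utility under test."""
    return sum(x * x for x in v) ** 0.5

def test_vector_norm_unit():
    assert vector_norm([3, 4]) == 5.0

def test_vector_norm_zero():
    assert vector_norm([0, 0, 0]) == 0.0
```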

Cheers!

Transform VTK coordinates to voxel index coordinates

The new function transform_to_volume() in io_vtk.py is intended to transform vtk coordinates to a target (RAS) volume. I tried applying an affine transform based on the pixel dimensions and qform offsets as per:
http://nifti.nimh.nih.gov/nifti-1/documentation/nifti1fields/nifti1fields_pages/qsform.html

However the surface and volume are not aligned. If you run the Example, you will be able to view the surface scalar values atop its target image volume with fslview.
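For reference, the qform encodes a voxel-to-world affine (pixel dimensions on the diagonal, qoffsets in the last column), so mapping VTK RAS coordinates back to voxel indices means inverting it. A sketch with an illustrative matrix (not the actual transform from this issue):

```python
import numpy as np

# illustrative voxel-to-world affine: pixel dimensions on the diagonal,
# qoffset translation in the last column
vox2world = np.array([[1.0, 0.0, 0.0, -90.0],
                      [0.0, 1.0, 0.0, -126.0],
                      [0.0, 0.0, 1.2, -72.0],
                      [0.0, 0.0, 0.0, 1.0]])
world2vox = np.linalg.inv(vox2world)

def ras_to_voxel(xyz):
    """Map one RAS world coordinate to (i, j, k) voxel indices."""
    i, j, k, _ = world2vox @ np.append(xyz, 1.0)
    return np.round([i, j, k]).astype(int)
```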

ants Registration parameters

I would like to obtain an affine (as well as nonlinear) transform from each image volume to a template (yet to be supplied; currently set to the MNI152 template). For this I have included nipype's antsRegistration wrapper:
http://www.mit.edu/~satra/nipype-nightly/interfaces/generated/nipype.interfaces.ants.registration.html#examples,

but am having trouble setting the parameters according to:
https://github.com/stnava/ANTs/blob/master/Scripts/newAntsExample.sh

in particular, i don't know how to set multiple metrics for syn:
https://github.com/binarybottle/mindboggle/blob/master/mindboggle/pipeline.py#L637

and i have not been successful saving the affine transform.

nipype pipeline repeats steps when re-running mindboggle

Arno Klein:
after running the mindboggle nipype pipeline on the 101 brains, i found that when i try to re-run the nipype pipeline, it takes a long time. it skips a lot of steps, as it should, but then it gets to labeling and table-generating steps and then slows down as it repeats steps it already ran earlier.

Satrajit Ghosh:
set: workflow.config['execution']['stop_on_first_rerun'] = True
and then do workflow.run()
the node that reruns will have a diff.
this is used to debug your situation. one of the likely causes is that the output of some node is overwriting the input to another node. it creates a diff of the json hash file of a node to tell you what aspect changed. normally the contents of the node's working directory are obliterated before a run.

Defining surface patch boundaries

Martin Reuter (6/12/2013):

I think you can define a boundary vertex as one where any of its edges is on the boundary. A boundary edge is one that is in only a single triangle.

The other way around does not work: an edge with both vertices on the boundary is not necessarily a boundary edge!
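Martin's definition translates directly into counting triangle memberships per edge (a sketch, assuming faces are triples of vertex indices):

```python
from collections import Counter

def boundary_vertices(faces):
    """Vertices incident to an edge that appears in exactly one triangle."""
    edge_count = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (a, c)):
            edge_count[frozenset((u, v))] += 1
    boundary = set()
    for edge, n in edge_count.items():
        if n == 1:  # boundary edge: belongs to a single triangle
            boundary.update(edge)
    return boundary
```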

Zernike descriptors are HUGE

Email exchange between Brian Rossa, Arthur Mikhno, and Arno Klein (8/5-7/2013):

Brian Rossa (8/5/2013):
The Python results agree completely with the Octave results on my side, but my results do not agree with your results. The problem is that the function D_CV_orig expects "faces" to be 1-indexed -- think MATLAB -- rather than 0-indexed. We can add an "assert" to the code to check for this error mode in the future.
The size of the descriptor that you are seeing, 6, appears to be correct insofar as Arthur's code also gives a descriptor of size 6.
The descriptor for label22.vtk has some very large values. While this may seem dubious, Arthur's code gives extremely high values as well.
Here are the "right" answers for your two examples according to Arthur and Octave:
[ 0.03978874 0.07597287 0.08322411 0.20233902 0.11730369 0.14579667]
[ 7.56275148e+03 1.43262240e+08 1.10767079e+06 2.84879089e+10 1.12922387e+08 1.02507341e+10]
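The assert Brian mentions could be as simple as this (hypothetical guard; D_CV_orig itself lives in the Zernike port):

```python
import numpy as np

def check_one_indexed(faces):
    """Fail fast if 0-indexed faces reach code expecting MATLAB-style 1-indexing."""
    assert np.asarray(faces).min() >= 1, "faces must be 1-indexed (MATLAB convention)"
```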

Arthur Mikhno (8/7/2013):
The length of the descriptors increases exponentially with order. I don't recall the formula for calculating the number of descriptors from the order but can look into that. For example, order 20 yields 121 descriptors while order 35 yields 342.

Generally speaking, values should be < ~1, with values >> 1 a sign of instability in calculating descriptors. Instability could be due to the way the mesh is created or by trying to calculate at an order too high given the resolution/size of the object.

BTW. One way to tell if your descriptor values are reasonable is by reconstructing using zernike moments to see if the estimated object resembles the true object. I have recon. code ready if you want to give that a try.

Arno:
thank you for the explanation, arthur. i like the idea of reconstructing the mesh to see how well it estimated the moments. please do share your code for this so we may give it a try.

the surface patches i am running the code on have hundreds to thousands of vertices, and when i run the code with order 10 i get huge descriptor values. do you think that simplifying the mesh would lead to greater stability for the zernike code? is there any theoretical rationale for choosing a given order; for example, would a given mesh size demand a minimum order?

Terms

i put mindboggle turtle terms in a wiki, but all of the indentation disappeared, so i am including mbterms.ttl here. in addition to some basic terms at the beginning, i include all of the (cortical) features and shape measures at the bottom. the rdfs label for each term is used in mindboggle's current output shape tables, but the tables include extra words. for example, one column of label_shapes.csv builds on the label 'position in standard space' to give "mean position in standard space", and another column in the same table combines the labels 'label' and 'mean curvature' to give "label: mean curvature: median (weighted)".

@prefix rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns# .
@prefix owl: http://www.w3.org/2002/07/owl# .
@prefix skos: http://www.w3.org/2004/02/skos/core# .
@prefix rdfs: http://www.w3.org/2000/01/rdf-schema# .
@prefix mbterms: http://mindboggle.info/terms/ .
@prefix dcam: http://mindboggle.info/dc/dcam/ .

http://mindboggle.info/terms/
mbterms:modified "2013-07-14"^^http://www.w3.org/2001/XMLSchema#date ;
mbterms:publisher http://mindboggle.info/ ;
mbterms:title "Mindboggle Metadata Terms"@en .

mbterms:StandardSpace
mbterms:description "A standard space is most often accompanied by a template used for image registration purposes. The preeminent standard space used in brain imaging is the 'MNI space', also called 'Talairach space' for historical reasons."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A conventional ('common') space consisting of fiducial markers defining a coordinate frame, used for comparing, presenting, or reporting brain image data."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "standard space"@en .

mbterms:Template
mbterms:description "A template is often used for image registration purposes. Examples include any of the Mindboggle-101 templates on http://mindboggle.info/data/"@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A canonical or reference brain image."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "template"@en .
rdfs:seeAlso http://mindboggle.info/data/ .

mbterms:AverageTemplate
mbterms:description "The preeminent example of an average template is the MNI152 template, the result of co-registering 152 brain images."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Intensity average of multiple, co-registered individual brain images. This average may be accompanied by a corresponding set of probability values."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "average template"@en .
rdfs:seeAlso http://mindboggle.info/data/ .
rdfs:subClassOf mbterms:Template .

mbterms:OptimalAverageTemplate
mbterms:description "Examples include any of the Mindboggle-101 optimal average templates on http://mindboggle.info/data/."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "An average template constructed by multiple iterative co-registration steps."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "optimal average template"@en .
rdfs:seeAlso http://mindboggle.info/data/ .
rdfs:subClassOf mbterms:AverageTemplate .

mbterms:Atlas
mbterms:description "Examples include any of the Mindboggle-101 manually labeled brain images on http://mindboggle.info/data/"@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Annotations of a template."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "atlas"@en .
rdfs:seeAlso http://mindboggle.info/data/ .

mbterms:IndividualAtlas
mbterms:description "Examples include any of the Mindboggle-101 manually labeled brain images on http://mindboggle.info/data/"@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Manual annotations of a single individual’s brain image."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "individual atlas"@en .
rdfs:seeAlso http://mindboggle.info/data/ .
rdfs:subClassOf mbterms:Atlas .

mbterms:ProbabilisticAtlas
mbterms:description "Probability values are usually assigned to each vertex or voxel of a probabilistic atlas after registering a set of individual atlases to a template. An example is a maximum probability atlas accompanied by an image volume of probability values. See the Mindboggle-101 probabilistic atlases on http://mindboggle.info/data/"@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Aggregate of multiple atlases, with multiple labels assigned to each vertex or voxel, or the probabilistic assignment of labels to a template."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "probabilistic atlas"@en .
rdfs:seeAlso http://mindboggle.info/data/ .
rdfs:subClassOf mbterms:Atlas .

mbterms:MaximumProbabilityAtlas
mbterms:description "Examples include any of the Mindboggle-101 maximum probability atlases on http://mindboggle.info/data/"@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Aggregate of multiple atlases, with a single label assigned to each vertex or voxel."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "maximum probability atlas"@en .
rdfs:seeAlso http://mindboggle.info/data/ .
rdfs:subClassOf mbterms:ProbabilisticAtlas .

mbterms:Voxel
mbterms:description "A voxel may be isometric (e.g., a 1x1x1mm cube) or anisometric."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Voxels ('volume elements') are volumetric pixels - that is, they are values in a regular grid in three dimensional space."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "voxel"@en .

mbterms:Vertex
mbterms:description "Conceptually, a vertex in a mesh is equivalent to a node in a graph."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A point/node of a surface mesh."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "vertex"@en .

mbterms:Label
mbterms:description "Labeling is distinct from 'segmentation', which usually refers to breaking up dissimilar or clustering similar, contiguous points, for example tissue class segmentation into gray matter."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "An annotation of a region of the brain, such as when delineating the anatomical boundaries of a gyrus or sulcus. This process is often called 'parcellation' when labeling a surface such as the cortical surface."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "label"@en .

mbterms:CorticalSurface
mbterms:description "Pial surface."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "The outer surface of the cerebrum, sometimes referred to as the 'pial surface' or 'cerebral exterior'."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "cortical surface"@en .

mbterms:CorticalFeature
mbterms:description "Examples of cortical features, or simply 'features', include a gyrus, sulcus, and cortical fold."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Any feature or structure derived from a brain's cortical surface."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "cortical feature"@en .

mbterms:Fold
mbterms:description "A fold is sometimes loosely referred to as, but should be distinguished from, a 'sulcus'."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A crease, infolding, fissure, crevice, or intrusion of the cortical surface."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "fold"@en .
rdfs:subClassOf mbterms:CorticalFeature .

mbterms:Sulcus
mbterms:description "A sulcus is sometimes loosely referred to as, but should be distinguished from, a cortical 'fold'. It is defined by a labeling protocol and is based on the folding pattern (topography) of the cortical surface."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A sulcus (pl.: sulci; adj.: sulcal) is a whole or a portion of one or more cortical folds."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "sulcus"@en .
rdfs:subClassOf mbterms:CorticalFeature .

mbterms:Gyrus
mbterms:description "Examples include the superior, middle, and inferior frontal and temporal gyri."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A gyrus (pl: gyri; adj.: gyral) is the exterior portion of a fold of the cortical surface. The outermost protrusions are sometimes referred to as a 'gyral crest' or 'gyral crown'."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "gyrus"@en .
rdfs:subClassOf mbterms:CorticalFeature .

mbterms:Fundus
mbterms:description "A fundus is delimited by the extent of the sulcus it runs through, even if the curve continues along a fold."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A fundus (pl: fundi; adj.: fundal) is a curve traversing the bottom of a sulcus."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "fundus"@en .
rdfs:subClassOf mbterms:CorticalFeature .

mbterms:Pit
mbterms:description "A sulcal pit is closely related to the 'sulcal root', and may arise during cortical development from a buried or annectant gyrus (a 'plis de passage')."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Local, deep, pit-like intrusion in a fold."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "pit"@en .
rdfs:subClassOf mbterms:CorticalFeature .

mbterms:MedialSurface
mbterms:description "The medial surface for a sulcus fold lies between the banks of the sulcus without intersecting the cortical surface."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A surface running midway between two structures."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "medial surface"@en .
rdfs:subClassOf mbterms:CorticalFeature .

mbterms:ShapeMeasure
mbterms:description "A shape measure can be computed at a single vertex/voxel, or for a cortical feature."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A measure of shape."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "shape measure"@en .

mbterms:Area
mbterms:description "The surface area for a patch of a triangular surface mesh is the sum of the areas of triangles within the patch. For a single vertex, area is measured based on the vertex's contribution to the area of each of the faces containing it."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Surface area."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "area"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:MeanCurvature
mbterms:description "A neighborhood size is applied to the Laplacian filtering."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "The mean curvature based on the direction of the displacement vectors during a Laplacian filtering."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "mean curvature"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:TravelDepth
mbterms:description "Travel depth was developed by Joachim Giard as part of his doctoral dissertation, and was first applied to protein molecules. It was adapted for use with cortical surfaces as part of the Mindboggle software."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "A measure of depth from an outer surface or hull that combines Euclidean distances (straight line-of-sight) and geodesic distances (along the surface)."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "travel depth"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:GeodesicDepth
mbterms:description "Approximations to geodesic depth run along edges of a surface mesh."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "Distance between two points along a surface."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "geodesic depth"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:Convexity
mbterms:description "FreeSurfer's convexity measure is used as a proxy for depth, and is a function of the distance traveled by a vertex as the surface containing it is inflated."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "FreeSurfer convexity measure."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "convexity"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:Thickness
mbterms:description "FreeSurfer's thickness measure is computed within a segmented gray matter volume ('cortical ribbon')."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "FreeSurfer cortical thickness measure."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "thickness"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:LaplaceBeltramiSpectra
mbterms:description "Martin Reuter first applied Laplace-Beltrami spectra to study the 'shape-DNA' of whole cortical surfaces [Reuter, 2006]."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "The first coefficients of the Laplace-Beltrami spectra."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "Laplace-Beltrami spectra"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:Position
mbterms:description "[x,y,z] coordinates."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "3-D coordinates."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "position"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

mbterms:StandardPosition
mbterms:description "[x,y,z] coordinates in standard (MNI152) space."@en ;
mbterms:issued "2013-07-04"^^http://www.w3.org/2001/XMLSchema#date ;
a mbterms:AgentClass, rdfs:Class ;
rdfs:comment "3-D coordinates in standard space."@en ;
rdfs:isDefinedBy http://mindboggle.info/terms/ ;
rdfs:label "position in standard space"@en .
rdfs:subClassOf mbterms:ShapeMeasure .

Joint fusion atlas construction bug

There was misalignment between the subcortical and cortical labels in the individual OASIS-TRT-20 brains (http://mindboggle.info/data.html). It seems that some of the brain images corresponding to the subcortical label files were assigned to the wrong cortical files. This may affect the accuracy of the joint fusion atlas constructed from these combined label volumes.

Colormap in vtkviewer?

Hal Canary and Arno Klein (8/25/2013):

Arno: vtkviewer.py works great. is there a way to change the colormap?

Hal: Look at the line:
colorMap = vtk.vtkColorTransferFunction()
You can change those next few lines to anything you like.

Hal:
Check out the most recent version of vtkviewer.py from github/master. I've modified the VTKViewer.AddPolyData() method to accept an optional colorMap argument.
If you use vtkviewer as a module, you should be able to do this: (I haven't tested this).

############

import vtk
import vtkviewer

my_poly_data = vtk.vtkPolyData()

# ... do something to generate your meshes and scalars ...

my_color_map = vtk.vtkColorTransferFunction()
my_color_map.SetColorSpaceToLab()
my_color_map.AddRGBPoint(-1, 0, 0, 0)  # background is black
colors = [
    [0.6706, 0.2196, 0.2196],
    [0.4471, 0.6706, 0.2196],
    [0.2196, 0.6706, 0.6706],
    [0.4471, 0.2196, 0.6706],
    [1.0, 0.8706, 0.6078],
    [0.5843, 0.6667, 0.6235],
    [1.0, 0.9843, 0.9020],
    [0.9020, 0.9020, 1.0],
    [0.5, 0.5, 0.95],
    [0.7451, 0.7843, 0.7098]]
# use whatever colors you like. But be aware that some are
# better than others. Or read Colin Ware's book.
for i, color in enumerate(colors):
    my_color_map.AddRGBPoint(i, *color)
my_color_map.Build()

viewer = vtkviewer.VTKViewer()  # renamed so it doesn't shadow the vtkviewer module
viewer.AddPolyData(my_poly_data, my_color_map)
viewer.Start()

mapnode trouble

pipeline.py lines 189-190:

flo1.connect([(atlas_reg, majority_vote,
[('output_file', 'annot_files')])])

atlas_reg's register_atlas() function and majority_vote's multilabel() function aren't connecting properly here. multilabel() is printing the annot_files list as ['U','U'] instead of the annot file names.

(commit 4fbef6e)

Laplace-Beltrami spectra of fragments?

Arno Klein, Forrest Bao, and Martin Reuter

Arno (5/31/2013):
can you run the lbo code on disconnected patches? if not, let's simply choose the largest patch.

Forrest (5/31/2013):
Yes, I run LBO code on disconnected patches (if a patch is not connected), as long as all of their vertices share the same label. If there are n disconnected patches, the LBS will have n near-zero eigenvalues.
I was wondering whether it is still meaningful to run LBO on disconnected patches, especially when matching folds from different subjects. For example, my fold X has 2 components while the other guy's fold X is a whole piece.

Arno (5/31/2013):
it sounds like they would not be comparable, if the number of patches affects the number of near-zero eigenvalues. you might want to run this by martin to be sure, to see if there is a reasonable way of combining the patches or eigenvalues of the patches, and if not, we'll compute on just the largest patch.

Forrest to Martin (6/7/2013):
I am thinking about applying LBO onto 3D structures, where each structure may have disconnected components.
In our study, the cortical region bearing the same label may break into several disconnected components on some subjects. The attachment is one example where the 5 discrete components bear the same label.
Is there any relationship between the LB spectra of discrete components (i.e., applying LBO on each component individually) and the LB spectrum of the components altogether (i.e., feeding them as a whole to LBO)? At least I know that the latter will have 5 zeros in the example case.

nki-trt-20-16_lh_fold_5

Martin (6/7/2013):
yes, it is simply the union of the spectra (sorted). But shape comparison of a bunch of connected components together is not meaningful (a single small component will add lots of values to the spectra and make comparison difficult). Better work on individual components at a time.
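Martin's point is easy to check numerically: the spectrum of a disconnected mesh is the sorted union of the component spectra, so n components contribute n (near-)zero eigenvalues (toy numbers below, not real LB eigenvalues):

```python
import numpy as np

def combined_spectrum(component_spectra):
    """Spectrum of a disconnected surface = sorted union of its components' spectra."""
    return np.sort(np.concatenate([np.asarray(s) for s in component_spectra]))

# two components, each contributing its own zero eigenvalue
spectrum = combined_spectrum([[0.0, 1.5, 3.2], [0.0, 0.8]])
```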

"Segmentation fault: 11" when trying to view VTK file created by write_vtk()

i am able to write_vtk() for a toy example:

>>> import random, os
>>> from mindboggle.utils.io_vtk import write_vtk
>>> from mindboggle.utils.plots import plot_surfaces
>>> points = [[random.random() for i in [1,2,3]] for j in range(4)]
>>> indices = [1,2,3,0]
>>> lines = [[1,2],[3,0]]
>>> faces = [[1,2,3],[0,1,3]]
>>> scalars = [[random.random() for i in range(4)] for j in [1,2]]
>>> scalar_names = ['curv','depth']
>>> output_vtk = 'write_vtk.vtk'
>>> write_vtk(output_vtk, points, indices, lines, faces, scalars, scalar_names)
>>> # View:
>>> plot_surfaces(output_vtk)

but for some reason i am now getting a "Segmentation fault: 11" when i try to view real files, such as the following:

>>> import os
>>> from mindboggle.utils.io_vtk import read_vtk, write_vtk
>>> from mindboggle.utils.plots import plot_surfaces
>>> path = os.environ['MINDBOGGLE_DATA']
>>> input_vtk = os.path.join(path, 'arno', 'shapes', 'lh.pial.mean_curvature.vtk')
>>> faces, lines, indices, points, npoints, scalars, scalar_names, input_vtk = read_vtk(input_vtk)
>>> output_vtk = 'write_vtk.vtk'
>>> write_vtk(output_vtk, points, indices, lines, faces, scalars, scalar_names)
>>> # View:
>>> plot_surfaces(output_vtk)

Feature-based labeling

there are three ways i've been thinking about labeling brains with mindboggle:

  1. surface-based registration with freesurfer using the dkt40 or dkt100.
    this is already implemented in mindboggle's nipype. as satra pointed out, given the DKT-40 atlas inclusion in the new freesurfer, this doesn't add anything new -- we might as well inherit freesurfer's labels, and reduce the 31 cortical labels to 25 if we want to subscribe to the DKT25 protocol. mindboggle would then be used primarily for shape analysis, which is the primary contribution in the original r01 proposal.
  2. one of my original goals with mindboggle is feature-based labeling. i was hoping to identify fundi independent of labels based on position and shape, and then either drive atlas (volume/surface) registrations to label, or fill the cortex with labels between these delimiting landmarks. however, breaking up / combining fundi so that there is a one-to-one matching across brains turns out to be a harder problem than i thought it would be.
  3. we are currently developing a hybrid of #1 and #2. the fundi are currently segmented/identified/labeled based on initial automated labels (#1), and we would use label propagation to "fix" the label boundaries to the fundi (#2).

realign_label_boundary() is the method that realigns the label boundaries. it calls:

    self.initialize_seed_labels(init='label_boundary', 
       output_filename = label_boundary_filename)

which would simply be set to the initial automated labels or to the consensus set of labels, where there is extremely high probability that the label assignments are correct.

there is a lot of redundancy between rebound.py's functions and the rest of mindboggle's. it would be good to simply extract what is relevant from rebound.py, out of its current oop framework. the next method would be replaced by labels/labels.py's extract_borders() function, then separate same-label-pair segments:

    self.find_label_boundary_segments()

this is the heart of the label propagation:

    self.graph_based_learning(realign=True, max_iters=max_iters)
    self.assign_realigned_labels(filename = output_file_regions)

finally, extract_borders() would replace:

    self.find_label_boundary(realigned_labels=True,
        output_filename = output_file_boundaries)

404 Error when installing Mindboggle

I receive a 404 error when trying to install Mindboggle. The 404 error is associated with:

    install: Downloading: http://mindboggle.info/mindboggle_20140214.box
    An error occurred while downloading the remote file. The error
    message, if any, is reproduced below. Please fix this error and try
    again.

    The requested URL returned error: 404 Not Found

Eigensolver stability for Laplace-Beltrami measures

Arno Klein (5/5/2013):

i asked satra what he thought about smoothing the surface before measuring lb spectra, and he reminded us that we have to be careful because the spectra will be affected by changes in curvature, and that we should first check to see if our errors are arising from the eigensolver.

satra also provided the following links for a relevant section from scikit-learn:

https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/manifold/spectral_embedding.py#L231

and for surface smoothing: "should be very easy to code up to test and then cythonized if performance is required. this is used to inflate the surface, but inflation is a way of doing surface smoothing"
https://github.com/satra/SLT/blob/master/SurfTools/private/edgemotionMEX.c

forrest -- could you please confirm this as soon as possible? because if it turns out we do need to smooth these surfaces, this could cost us some time when we really need to wrap things up!
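For context, the scikit-learn code linked above stabilizes its eigensolver with a shift-invert strategy. A minimal sketch of the same idea on a toy graph Laplacian (this is an illustration, not mindboggle code; the cycle graph merely stands in for a surface mesh graph):

```python
import numpy as np
from scipy.sparse import csgraph, csr_matrix
from scipy.sparse.linalg import eigsh

# Adjacency matrix of a small cycle graph, standing in for a mesh graph.
n = 20
rows = np.arange(n)
cols = (rows + 1) % n
A = csr_matrix((np.ones(n), (rows, cols)), shape=(n, n))
A = A + A.T

L = csgraph.laplacian(A, normed=False).astype(float)

# Shift-invert near zero targets the smallest eigenvalues and is
# typically far more stable than asking eigsh for which='SM' directly.
# sigma is slightly negative so that L - sigma*I stays nonsingular
# (the Laplacian itself has a zero eigenvalue).
vals, vecs = eigsh(L, k=4, sigma=-0.01, which='LM')
vals = np.sort(vals)
```

For a connected graph the smallest eigenvalue should come out numerically zero; if it does not, the eigensolver (rather than the surface) is a likely suspect.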

OASIS-TRT-20 noncortical labels in different space

Email exchange between Arno Klein and Andrew Worth of Neuromorphometrics (7/8-10/2013):

Arno:
the subcortex-labeled oasis-trt-20 brains have much bigger dimensions than the original oasis brains (e.g., 256x256x287 vs. 256x256x160) and are oriented differently. can you recommend a way to put the labels in the original space of the oasis scans?

Andy:
We don't specifically pad. We do it the same way as in Cardviews, but I had to make the code more robust to handle additional cases where the original scan was acquired differently than what the CMA was used to. Here's what the code says we do (it's been a long while since I looked at this):

The following is based on three 3D landmarks chosen by the operator: Anterior Commissure (AC), Posterior Commissure (PC), and "Mid-Sagittal" Point (MS).

First, find oms, the closest point on the ac-pc line to the ms point.
Then, nvec is calculated as the cross product between vec (ms-oms) and pvec (ac-oms), i.e. "up" crossed with "front".

To re-orient the scan, we first rotate nvec about the Z axis to make it point towards the +X direction (the subject's left). This is the Z rotation. Then we rotate nvec about Y so it points exactly in the +X direction, to get the Y rotation. Then we rotate vec into the -Z direction, to get the X rotation.

Arno:
oh, no. they've been rotated as well as translated, so what i've been trying won't work. i guess then i will have to resort to registration.

Andy:
Yes, they are rotated, translated, AND interpolated at a different resolution. I don't like the way this messes with the original data, but it was an attempt to match what was already present from the CMA when I originally wrote NVM.

I think I figured out where the padding comes from. The old slice resolution, which is normally less than the in-plane resolution, is used to calculate the number of slices for the new scan. For the OASIS scans, columns become slices and this changes the resolution when it is re-sliced after the rotation. However, the OASIS scans are 1x1x1mm, so this makes it less of a problem. However however, in NVM, we always blow up the scan to be 512x512 in-plane (the better to see the anatomy, the CMA always said). So this does have the effect of changing the resolutions and adding the padding.

For the usual CMA scans, this was not really a problem because the scans were acquired coronally so the rotations and translations were small.
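The landmark construction Andy describes (before the rotations) can be sketched in a few lines of NumPy. The coordinates below are made-up illustrations, not real landmark positions:

```python
import numpy as np

# Hypothetical landmark coordinates (AC, PC, mid-sagittal point).
ac = np.array([0.0, 10.0, 0.0])
pc = np.array([0.0, -10.0, 0.0])
ms = np.array([0.0, 0.0, 40.0])

# oms: the closest point on the AC-PC line to the MS point.
d = pc - ac
t = np.dot(ms - ac, d) / np.dot(d, d)
oms = ac + t * d

# nvec is "up" crossed with "front": (ms - oms) x (ac - oms).
vec = ms - oms    # "up"
pvec = ac - oms   # "front"
nvec = np.cross(vec, pvec)
```

The three rotations (about Z, then Y, then X) then align nvec with +X and vec with -Z, per the description above.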

Missing sulci in the Mindboggle-101 brains

I used mindboggle to extract folds from the (manually labeled) Mindboggle-101 brains. A label boundary defined as a sulcus in the DKT labeling protocol must be in a mindboggle-extracted fold to be detected as a sulcus. When Elias Chaibub Neto analyzed shape measures in these brains, he found that all callosal sulci were missing from all hemispheres, and that the following four sulci were also missing:
- MMRR-21-9: left frontomarginal
- NKI-RS-22-2: left paracentral
- NKI-RS-22-14: left frontomarginal
- OASIS-TRT-20-5: right frontomarginal

Travel vs. geodesic depth

Here is a medial view of the left hemisphere of one subject with red indicating high travel depth (top) and high geodesic depth (bottom). They are very similar, but given the red band in the middle of the largest fold, I believe that travel depth is more suitable for extracting the deepest features such as fundi:

[Image: travel_vs_geodesic_depth_20130515]

Mean curvature values very different from FreeSurfer's

Mindboggle mean curvature values are all negative while FreeSurfer values are all positive, and with very different absolute values (e.g., subject OASIS-TRT-20-1). Travel depth values are above 1, but all geodesic depth values are between 0 and 1. I'll need help figuring these things out.

curvature() is run here:
https://github.com/binarybottle/mindboggle/blob/master/mindboggle/mindboggle#L776
which calls this:
https://github.com/binarybottle/mindboggle/blob/master/mindboggle/shapes/shape_tools.py#L111
which in turn calls CurvatureMain.

The curvature values are written to a table here:
https://github.com/binarybottle/mindboggle/blob/master/mindboggle/utils/io_table.py#L389

argument ordering of read_vtk and write_vtk differ

I would hope/expect that the order of values output by read_vtk allows them to be fed back to write_vtk. Instead,

faces, lines, indices, points, npoints, scalars, scalar_names, input_vtk = read_vtk(...)

def write_vtk(output_vtk, points, indices, lines, faces, scalars, scalar_names, scalar_type)

I would suggest changing one to match the other, so I can do:

vtk_data = read_vtk(filename)
# some manip of vtk_data
write_vtk(filename, *vtk_data[:-1])
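In the meantime, a small adapter can paper over the mismatch. This is a sketch assuming exactly the two signatures quoted above (it also drops npoints, which write_vtk does not take):

```python
def reorder_for_write(read_output):
    """Reorder the tuple returned by read_vtk() into write_vtk()'s
    argument order, dropping npoints and the trailing input_vtk path.

    A workaround sketch, assuming the signatures quoted above:
      read_vtk  -> faces, lines, indices, points, npoints,
                   scalars, scalar_names, input_vtk
      write_vtk(output_vtk, points, indices, lines, faces,
                scalars, scalar_names, scalar_type)
    """
    (faces, lines, indices, points,
     npoints, scalars, scalar_names, input_vtk) = read_output
    return points, indices, lines, faces, scalars, scalar_names
```

With this, `write_vtk(output_vtk, *reorder_for_write(read_vtk(input_vtk)), scalar_type)` would round-trip a file.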

Data and doc string Examples

Throughout the Mindboggle code base, I've included Examples in the documentation with lines like the following:

>>> path = os.environ['MINDBOGGLE_DATA']
>>> sulci_file = os.path.join(path, 'arno', 'features', 'sulci.vtk')

where MINDBOGGLE_DATA is an environment variable set according to the instructions in http://mindboggle.info/users/installation.html.

Is this reasonable, or is there a better way for users to try out functions?

I also created these examples with the goal of testing code within doc strings with sphinx, and of carefully unit testing everything, but haven't had time to do this.

Python 3.0?

Question: should we gear our first release for Python 3.x vs. 2.7? I believe all the dependencies are supported by Python 3, and Raja had trouble importing vtk with Python 2.6 but not with 2.7. I'm concerned about supporting two versions of Mindboggle, even with conversion utilities such as 2to3...

Curvature methods

Joachim Giard (3/4/2013):

CurvatureMain's default is -m 0.

The three methods take more or less the same time, but it would be a lot slower to consider a larger neighborhood for -m 0 (I can't say exactly how much slower).

- -m 0: I would recommend this if you have a low resolution or if you just want to localize local peaks.
- -m 1: not well tested, and the filtering is done using Euclidean distances, so it's good only for fast visualization.
- -m 2: not slower, but less exact; it does not correspond exactly to the definition of curvature, though it's a good approximation. Very large curvatures (negative or positive) are underestimated (saturation effect).

Zernike memory issue

Mindboggle is crashing on many subjects with a memory limit set to 2G:

Satrajit Ghosh (2/15/2015):
150215-15:17:45,874 workflow INFO:
Executing node Zernike_sulci.a1 in dir: /om/scratch/Tue/ps/MB_work/734db8e05f6be469df79c1419f253ad7/Mindboggle/Surface_feature_shapes/_hemi_rh/Zernike_sulci
Load "sulci" scalars from sulci.vtk
8329 vertices for label 1
Reduced 160076 to 15921 triangular faces
srun: Exceeded job memory limit

Regions used in mindboggle

In the FAQ (http://www.mindboggle.info/faq/labels.html), there is some rationale outlined for combining some of the regions that are delineated in the DKT atlas, e.g., combining the three inferior frontal regions into a single region. However, in the example mindboggle output, it appears that the three regions are not combined (e.g., thickinthehead_per_freesurfer_cortex_label.csv). I can see that it is currently an open question of which is better, i.e., 31 or 25 cortical regions (https://neurostars.org/p/2680/). However, that post does indicate that mindboggle currently outputs 25 regions.

Is this a discrepancy between the current output and the FAQ due to a change to using 31 regions instead? Perhaps a solution would be to default to one set of regions (31 or 25), but allow the user to request the other with a flag when running mindboggle. In either case, the FAQ should be updated to match the current output of mindboggle.

Part of my interest in using mindboggle was to see how cortical thickness estimates would look in the DKT-25 protocol, so I wanted to see an example output that included both sets (i.e., FreeSurfer would still use the 31 regions, while mindboggle would provide the 25 regions, calculated from the same participant).
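As an illustration of a user-side workaround, labels can be merged after the fact. The table below covers only the inferior frontal merge mentioned in the FAQ; a complete DKT25 mapping would include the protocol's other merges as well (label names follow FreeSurfer's conventions):

```python
# Hypothetical relabeling table: merge FreeSurfer's three inferior
# frontal parcels into one region, as the FAQ describes for DKT25.
# A complete mapping would include the protocol's other merges too.
MERGE_TO_DKT25 = {
    "parsopercularis": "inferiorfrontal",
    "parstriangularis": "inferiorfrontal",
    "parsorbitalis": "inferiorfrontal",
}

def to_dkt25(label):
    """Map a cortical label name to its merged DKT25 name (sketch)."""
    return MERGE_TO_DKT25.get(label, label)
```

Per-region measures like thickness would then be re-aggregated (e.g., surface-area-weighted means) over each merged region.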
