

CAST: multi-scale CNN for segmentation of hippocampal subfields

Instructions

1. Required libraries

2. Preprocessing

  • Affine transformation. All subjects used for training and segmentation are required to be roughly in the same space. If they are not, an affine transformation should be applied first. We use ANTS (https://github.com/ANTsX/ANTs) for this purpose; other software packages (e.g. FSL, SPM) can also be used.
ANTS 3 -m MI[…,…,1,32] -i 0

For the UMC dataset used in our paper, no affine transformation was needed, because all subjects were acquired with the same acquisition protocol and in roughly the same orientation.

  • Image cropping. To reduce the computational cost, small cropped images containing the hippocampus, rather than the entire image, are fed to the deep neural network for training and segmentation. The MATLAB script MNI_cropimage.m under the folder CAST/PreandPostProcess crops the images and can be adapted for other datasets.

3. Training and segmentation

Both the training and segmentation pipelines are carried out by running the command CastRun:

CastRun [options]

  • -path [pathdir] directory of the model configuration.

  • -model [modelconffile] path or filename of the configuration file for the deep neural network. The parameters used to set up the network architecture are included in this file.

  • -train [trainconffile] path or filename of the configuration file for the training pipeline. The parameters used to set up the training scheme are included in this file.

  • -test [testconffile] path or filename of the configuration file for the testing pipeline. The parameters used to set up the testing scheme are included in this file.

  • -load [trainedparametersfile] path or filename of the trained parameters.

  • -dev [cudaname] name of the CUDA device to use (e.g. cuda0).

    If [pathdir] is specified, then only the filenames should be given for the options [modelconffile], [trainconffile], [testconffile] and [trainedparametersfile]. [pathdir] can be an absolute or a relative directory. In this case, the model configuration file is looked up under [pathdir]/configFiles/deepMedic/model/[modelconffile], the training configuration file under [pathdir]/configFiles/deepMedic/train/[trainconffile], the testing configuration file under [pathdir]/configFiles/deepMedic/test/[testconffile], and the trained parameters under [pathdir]/output/saved_models/trainSessionDm/[trainedparametersfile].

    If [pathdir] is not specified, then the relative or absolute path of each of [modelconffile], [trainconffile], [testconffile] and [trainedparametersfile] should be given.
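    The lookup rules above can be sketched in shell, using mni_seg and the example filenames from this README as placeholders:

```shell
# Sketch of CastRun's file lookup when -path is given (layout as described
# in this README; the concrete names below are only placeholder examples).
PATHDIR=./mni_seg
MODELCONF=modelConfig.cfg
TRAINCONF=trainConfig.cfg
TESTCONF=testConfig.cfg

# Each filename is resolved relative to the fixed directory layout:
MODEL_FILE="$PATHDIR/configFiles/deepMedic/model/$MODELCONF"
TRAIN_FILE="$PATHDIR/configFiles/deepMedic/train/$TRAINCONF"
TEST_FILE="$PATHDIR/configFiles/deepMedic/test/$TESTCONF"

echo "$MODEL_FILE"
echo "$TRAIN_FILE"
echo "$TEST_FILE"
```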

    Starting from the shared example model, new models can be generated by following the script deepMedic_generateSetting_mni_5F.m under the folder CAST/PreandPostProcess.

    3.1. Training pipeline

    In the training pipeline, besides the [trainconffile] file, the paths to the input images and the manual segmentation maps should be specified. In the mni_seg model, there are two input imaging modalities for hippocampal subfield segmentation, so two files (trainChannels_t1w.cfg and trainChannels_t2w.cfg) specify the paths to these images. The paths of the manual segmentation maps are saved in trainGtLabels.cfg. Similar files are also required for validation. The names of these files are listed in trainConfig.cfg, so the algorithm can find the data to train the network. These files can easily be revised for another dataset.
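    In the DeepMedic-style configuration that CAST builds on, a channel file is simply a list of image paths, one training subject per line, in the same subject order across all channel and label files. A minimal sketch (the paths below are hypothetical placeholders, not files shipped with CAST):

```
# trainChannels_t1w.cfg — one cropped T1w image per training subject
../data/subj01/t1w_cropped.nii.gz
../data/subj02/t1w_cropped.nii.gz
../data/subj03/t1w_cropped.nii.gz
```

trainChannels_t2w.cfg and trainGtLabels.cfg follow the same one-path-per-line layout, so line N of every file refers to the same subject.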

    Train a new model by running

./CastRun -model ./mni_seg/configFiles/deepMedic/model/modelConfig.cfg \
          -train ./mni_seg/configFiles/deepMedic/train/trainConfig.cfg -dev cuda0

or

./CastRun -path ./mni_seg -model modelConfig.cfg \
          -train trainConfig.cfg -dev cuda0

cuda0 can be changed to the name of another available GPU on your workstation, such as cuda1, cuda2, etc. A GPU is required because of the intense computation in the training pipeline. The -load option can also be used in the training pipeline, which is useful if training was previously interrupted or you want to fine-tune the network.

3.2. Segmentation pipeline

Similar to the training pipeline, a text file listing the paths to the input images is required, and its filename is included in the testConfig.cfg file. The text file for the manual segmentations is not required. If it does exist, CAST outputs the probability map for each label and the hard segmentation map, and the Dice coefficient for each label is also calculated during segmentation. If it does not exist, the segmentation pipeline simply does not calculate the Dice coefficients.
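For reference, the Dice coefficient computed for each label l compares the predicted segmentation S_l with the manual segmentation M_l (standard definition, not specific to CAST):

```latex
\mathrm{Dice}(S_l, M_l) = \frac{2\,\lvert S_l \cap M_l \rvert}{\lvert S_l \rvert + \lvert M_l \rvert}
```

A value of 1 indicates perfect overlap with the manual segmentation, and 0 indicates no overlap.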

Segment new subject(s) by running

./CastRun -model ./mni_seg/configFiles/deepMedic/model/modelConfig.cfg \
          -test ./mni_seg/configFiles/deepMedic/test/testConfig.cfg -load [trainedparametersfile] -dev cuda0

or

./CastRun -path ./mni_seg -model modelConfig.cfg \
          -test testConfig.cfg -load [trainedparametersfile] -dev cuda0

A GPU (e.g. -dev cuda0) is not required for the segmentation pipeline. With the GPU disabled, CAST segments a new subject in the MNI dataset in about one minute; with the GPU enabled, CAST segments a new subject in about 10 seconds (on a Tesla K40c GPU card).

4. Postprocessing

If the segmentation map needs to be transformed back to the subject's native space (e.g. when an affine transformation was used as a preprocessing step), we suggest transforming the probabilistic map for each label and then, for each voxel, selecting the label with the highest value among the transformed probabilistic maps, instead of directly transforming the hard segmentation map. The script for this purpose is MNI_TransformProbMap.m under CAST/PreandPostProcess.
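Written out, this label fusion assigns to each native-space voxel v the label whose transformed probability map p_l(v) is largest:

```latex
\hat{\ell}(v) = \operatorname*{arg\,max}_{l}\; p_l(v)
```

Transforming probabilities and taking the argmax afterwards avoids the interpolation artifacts that arise when a discrete label map is resampled directly.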

5. Reference

Yang et al. (2020). CAST: A multi-scale convolutional neural network based automated hippocampal subfield segmentation toolbox. NeuroImage 218, 116947. https://doi.org/10.1016/j.neuroimage.2020.116947

Contributors

cclrcbh-bic
