Carotid Vessel Wall Segmentation Through Domain Aligner, Topological Learning, and Segment Anything Model for Sparse Annotation in MR Images
This is the official PyTorch implementation of the paper:
Carotid Vessel Wall Segmentation Through Domain Aligner, Topological Learning, and Segment Anything Model for Sparse Annotation in MR Images
X. Li, X. Ouyang, J. Zhang, Z. Ding, Y. Zhang, Z. Xue, F. Shi, and D. Shen
To install the required dependencies, run the following command in your terminal:

    pip install -r requirements.txt
- Download the COSMOS2022 dataset and extract the files. Since our CTA data is restricted, you will need to use your own external CTA dataset.
- Organize your data in the following structure:

      DATA_ROOT
      ├── COSMOS
      │   └── train_data
      │       ├── 3
      │       ├── 4
      │       └── ...
      └── CTA_dataset
          ├── image
          │   ├── **.nii.gz
          │   └── ...
          └── mask
              ├── **.nii.gz
              └── ...
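Given this layout, the CTA image/mask pairs can be gathered with a simple glob. The sketch below is an illustration only (the pairing-by-identical-filename convention and helper name are assumptions, not code from this repository):

```python
from pathlib import Path

def collect_cta_pairs(data_root):
    """Pair each CTA image with its mask by identical filename (assumed convention)."""
    image_dir = Path(data_root) / "CTA_dataset" / "image"
    mask_dir = Path(data_root) / "CTA_dataset" / "mask"
    pairs = []
    for image_path in sorted(image_dir.glob("*.nii.gz")):
        mask_path = mask_dir / image_path.name
        if mask_path.exists():  # skip images that have no matching mask
            pairs.append((image_path, mask_path))
    return pairs
```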
- Create an `invalid.txt` file within the COSMOS folder with the following content. These images are divided down the center into two halves, and the listed half of each image has no annotations:

      28_R_image.nii.gz
      29_R_image.nii.gz
      42_L_image.nii.gz
      43_R_image.nii.gz
      47_R_image.nii.gz
      52_R_image.nii.gz
      7_L_image.nii.gz
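A data loader can use `invalid.txt` to drop the unannotated half-sides before training. This is a hypothetical sketch of that filtering step (the repository's actual loading code may differ):

```python
from pathlib import Path

def filter_valid_images(cosmos_dir):
    """Return half-side images under cosmos_dir that are not listed in invalid.txt."""
    cosmos_dir = Path(cosmos_dir)
    invalid = set()
    invalid_file = cosmos_dir / "invalid.txt"
    if invalid_file.exists():
        invalid = {line.strip() for line in invalid_file.read_text().splitlines()
                   if line.strip()}
    images = sorted(cosmos_dir.rglob("*_image.nii.gz"))
    return [p for p in images if p.name not in invalid]
```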
- For external validation on CARE-II, organize the data similarly and copy the test data into the `train_data` folder for cross-validation. The `invalid.txt` content for external validation would be:

      0_P176_U_R_image.nii.gz
      0_P204_U_L_image.nii.gz
      0_P204_U_R_image.nii.gz
      0_P252_U_L_image.nii.gz
      0_P448_U_R_image.nii.gz
      0_P460_U_L_image.nii.gz
      0_P759_U_R_image.nii.gz
      0_P955_U_R_image.nii.gz
- Set the `DATA_ROOT` environment variable as follows:

      export DATA_ROOT=/data/
- Run `to_nii` to convert DICOM files to NIfTI format and, if needed for external validation, generate interpolated annotations.
- Follow the `sam_annotation` notebook to install SAM and generate SAM-interpolated masks. The files will be saved automatically under `$DATA_ROOT/COSMOS/preprocessed/mri_nii_raw` (or `$DATA_ROOT/careII/preprocessed/mri_nii_raw` for CARE-II). Use a tool such as ITK-SNAP to manually inspect the generated masks and remove clearly failed slices.
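The notebook's idea is to fill the slices that carry no manual annotation. Purely for intuition, and without using SAM at all, the stand-in below propagates each annotated slice to its nearest unannotated neighbours with plain NumPy (the function name and conventions are assumptions, not the notebook's code):

```python
import numpy as np

def propagate_sparse_labels(mask, annotated):
    """Fill unannotated slices (axis 0) by copying the nearest annotated slice.

    mask: (Z, H, W) integer array where only `annotated` indices hold real labels.
    annotated: indices of the manually labelled slices.
    """
    filled = mask.copy()
    annotated = np.asarray(annotated)
    for z in range(mask.shape[0]):
        if z in annotated:
            continue
        nearest = annotated[np.argmin(np.abs(annotated - z))]
        filled[z] = mask[nearest]
    return filled
```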
- Run `generate_folds` to generate the corresponding JSON files for each fold.
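`generate_folds` presumably writes one split file per fold. A minimal stdlib-only sketch of k-fold JSON generation is shown below; the file names and JSON schema here are assumptions, not the repository's actual format:

```python
import json
import random
from pathlib import Path

def generate_fold_jsons(case_ids, out_dir, n_folds=5, seed=42):
    """Write fold_{i}.json files, each holding a train/val split of case IDs."""
    case_ids = sorted(case_ids)
    random.Random(seed).shuffle(case_ids)  # deterministic shuffle for reproducibility
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(n_folds):
        val = case_ids[i::n_folds]                     # every n-th case to validation
        train = [c for c in case_ids if c not in val]  # the rest to training
        with open(out_dir / f"fold_{i}.json", "w") as f:
            json.dump({"train": train, "val": val}, f, indent=2)
```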
- Configure the training parameters for your training stage in the `configs` folder, then run `train.py` with your configuration file, like this:

      python train.py --config mtnet
- Execute the `eval_lumen.ipynb` notebook to evaluate lumen segmentation using MT-Net. Additionally, run `eval_wall.ipynb` to assess the vessel wall segmentation using our two-stage framework.
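Both notebooks ultimately report overlap between predicted and reference masks. A standard Dice coefficient in NumPy looks like this (a generic implementation, not the notebooks' exact code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice overlap between two binary masks of the same shape."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```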
This repository is built using MONAI and Lightning.
This project is released under the Apache license. Please see the LICENSE file for more information.
If our work is useful for your research, please consider citing:
    @ARTICLE{10589423,
      author={Li, Xibao and Ouyang, Xi and Zhang, Jiadong and Ding, Zhongxiang and Zhang, Yuyao and Xue, Zhong and Shi, Feng and Shen, Dinggang},
      journal={IEEE Transactions on Medical Imaging},
      title={Carotid Vessel Wall Segmentation Through Domain Aligner, Topological Learning, and Segment Anything Model for Sparse Annotation in MR Images},
      year={2024},
      volume={},
      number={},
      pages={1-1},
      keywords={Image segmentation;Annotations;Biomedical imaging;Accuracy;Lumen;Medical diagnostic imaging;Transfer learning;Transfer Learning;Image Segmentation;Domain Adaptation;Segment Anything Model},
      doi={10.1109/TMI.2024.3424884}
    }