This repository is an implementation of the paper *Learning Pseudo Text Prompt for Improved Vertebrae Segmentation*.
The detailed structure of the PseTNet model is shown in the following figure:
| Dataset | Download | Comment |
| --- | --- | --- |
| Spineweb-16 | Link | Train and val: random split; test: official split |
| Composite | Link | Random split |
Please organize the datasets in the following layout so the code can locate them:
```
├── datasets
    ├── Spineweb-16
    │   ├── image
    │   │   ├── train
    │   │   ├── val
    │   │   └── test
    │   └── mask
    │       ├── train
    │       ├── val
    │       └── test
    │
    └── Composite
        ├── image
        ......
```
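Before training, it can help to verify that the folders above are in place. The helper below is a minimal sketch (not part of the repository) that reports any missing directory under the expected `datasets/<name>/{image,mask}/{train,val,test}` layout; the dataset name defaults to `Spineweb-16`:

```python
from pathlib import Path

EXPECTED_SPLITS = ("train", "val", "test")

def check_dataset_layout(root: str, dataset: str = "Spineweb-16") -> list:
    """Return the list of missing directories for the layout
    <root>/<dataset>/{image,mask}/{train,val,test}.

    An empty list means the dataset is laid out as expected.
    This is a hypothetical helper for sanity-checking, not repo code.
    """
    missing = []
    base = Path(root) / dataset
    for kind in ("image", "mask"):
        for split in EXPECTED_SPLITS:
            d = base / kind / split
            if not d.is_dir():
                missing.append(str(d))
    return missing
```

For example, `check_dataset_layout("datasets")` returns `[]` when `Spineweb-16` is fully prepared, and otherwise lists each absent folder so you can create it.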
Please download the pre-trained weights of the CLIP model into the `/model/PseTNet/text_part/CLIP` folder.
Run the following command to start training:

```shell
python train.py
```
Dependencies:

- CUDA/cuDNN
- pytorch>=1.7.1
- torchvision>=0.8.2