
Retinal Vessel Segmentation using U-Net architecture. DRIVE and STARE datasets are used.

image-segmentation retinal-vessel-segmentation u-net data-augmentation


Retinal Vessel Segmentation

Segmentation of retinal vessels has been widely studied and remains an active research area. Many architectures have been tailored to this specific problem, and numerous existing deep learning architectures have been applied to the segmentation task. The scarcity of annotated data has pushed researchers to use a certain amount of data augmentation to avoid overfitting; however, the use of data augmentation in these studies is limited. If the augmentation strategy addresses the shortcomings of the input data, a successful segmentation model can be obtained. In our study, we investigate the performance gains that can be obtained through extensive data augmentation with the U-Net architecture for retinal vessel segmentation. We use the DRIVE and STARE datasets, which have become standard benchmarks in retinal vessel segmentation studies.

This repository includes the implementation of our paper. You can access the paper via the following link: https://arxiv.org/abs/2105.09365

Please note that the .py files need to be used to run our model. The notebooks are provided as additional sources.

Documentation

Path Structure

DRIVE

"LOG_PATH": "./data/logs", (contains log file as pickle)
"RESULT_PATH": "./data/test_results", (contains results as a folder named $save_name, final results are in here.
If you want to make a submission, use images in"/download" folder)
"MODEL_PATH": "./data/models", (contains model checkpoints)
"TRAIN_PATH": "./data/train",  (contains directories /images and /labels with related images)
"VAL_PATH": "./data/test", (contains directories /images and /labels with related images) 
"TEST_PATH": "./data/test", (contains directories /images and /labels with related images)
"TMP_TRAIN": "./data/tmp_train", (contains padded training images and labels, script creates all automatically)
"TMP_TEST": "./data/tmp_test", (contains padded test images and labels, script creates all automatically)
"TMP_VAL": "./data/tmp_val", (contains padded validation images and labels, script creates all automatically)
"TMP_RESULT": "./data/tmp_result", (contains raw predictions, script creates all automatically)

STARE

"TRAIN_PATH": "./data/train",  (contains directories /images and /labels with related images)
"VAL_PATH": "./data/test", (contains directories /images and /labels with related images) 
"TEST_PATH": "./data/test", (contains directories /images and /labels with related images)
"KFOLD_TEMP_TRAIN": "./kfold/temp_train", (contains padded training images and labels, script creates all automatically)
"KFOLD_TEMP_TEST": "./kfold/temp_test", (contains padded test images and labels, script creates all automatically)
"LOG_PATH_KFOLD": "./kfold/logs", (contains log file as pickle)
"CKPTS_PATH_KFOLD": "./kfold/checkpoints", (contains model checkpoints)
"RESULTS_PATH_KFOLD": "./kfold/results" (contains results as a folder named $save_name, final results are in here)

Training DRIVE

Images are preprocessed (padding, normalization, etc.) in the training function, so executing the training files is enough. If you want to train on the DRIVE (Digital Retinal Images for Vessel Extraction) dataset, follow these steps:

  • Since the DRIVE dataset provides training images with the ".tif" extension and labels with the ".gif" extension, you must convert them to ".png" files before giving them to the model.
  • The remaining image preprocessing is done in the training loop (binary masking, RGB to grayscale, etc.).
  • Your image dimensions must be multiples of 32 (our choice is 608x576, since DRIVE images have a resolution of 584x565). If your images have already been padded, pass --already_padded=True. A conversion and padding sketch follows the command below.
  • If you want to save a model at every epoch, pass --train_at_once=False; otherwise only the best model will be saved.
python3 train_drive.py --train_at_once True \
                       --save_name "experiment_1" \
                       --initial_model_path "/path/to/ckpts.hdf5" \
                       --model_name "vanilla" \
                       --epochs 15 \
                       --train_batch 3 \
                       --val_batch 3 \
                       --already_padded False
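
The repository's preprocess_for_DRIVE_dataset.ipynb shows the actual preprocessing; the snippet below is only a rough sketch of the conversion and padding step using Pillow (the paths and the helper name are illustrative, not the repository's code). Padding yourself is only needed if you pass --already_padded=True; otherwise the training script pads for you.

import math
import os
from PIL import Image

def convert_and_pad(src_path, dst_path):
    """Convert an image or label to PNG, zero-padded so both sides are multiples of 32.
    For DRIVE (584x565) this yields the 608x576 size mentioned above."""
    img = Image.open(src_path)
    if img.mode == "P":                      # DRIVE ".gif" labels are palette images
        img = img.convert("L")
    new_w = math.ceil(img.width / 32) * 32
    new_h = math.ceil(img.height / 32) * 32
    canvas = Image.new(img.mode, (new_w, new_h), 0)   # black canvas
    canvas.paste(img, ((new_w - img.width) // 2, (new_h - img.height) // 2))
    canvas.save(dst_path, format="PNG")

# Illustrative paths: convert the ".tif" training images; repeat for the ".gif" labels.
src_dir, dst_dir = "./DRIVE/training/images", "./data/train/images"
os.makedirs(dst_dir, exist_ok=True)
for fname in os.listdir(src_dir):
    if fname.endswith(".tif"):
        convert_and_pad(os.path.join(src_dir, fname),
                        os.path.join(dst_dir, os.path.splitext(fname)[0] + ".png"))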

Training STARE

If you want to train on the STARE (STructured Analysis of the Retina) dataset, follow these steps:

  • Since the STARE dataset provides both training images and labels with the ".ppm" extension, you must convert them to ".png" files before giving them to the model.
  • The STARE dataset does not provide test images, so we follow a k-fold procedure during training (a sketch of the split follows the command below).
  • The STARE scripts do not contain a padding script, so you must give padded images to training (examples are shown in preprocess_for_DRIVE_dataset.ipynb).
  • If you interrupt the k-fold training, you can resume from a specific fold with --start_fold.
  • If you specify a starting fold, pass --show_samples False.
python3 train_stare.py --initial_model_path "/path/to/ckpts.hdf5" \
                       --model_name "vanilla" \
                       --epochs 15 \
                       --train_batch 3 \
                       --val_batch 3 \
                       --n_fold 5 \
                       --start_fold 5 \
                       --show_samples False
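
Since STARE provides no test split, train_stare.py iterates over the folds itself; the snippet below is only a sketch of what a 5-fold split over the 20 STARE images looks like, using scikit-learn (the paths and variable names are illustrative, not the script's):

import glob
from sklearn.model_selection import KFold

# Illustrative: the preprocessed (padded, ".png") STARE images.
image_paths = sorted(glob.glob("./data/train/images/*.png"))

kfold = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kfold.split(image_paths), start=1):
    train_files = [image_paths[i] for i in train_idx]
    test_files = [image_paths[i] for i in test_idx]
    # Train a fresh model on train_files, evaluate on test_files, and save the
    # checkpoint for this fold (e.g. under ./kfold/checkpoints).
    print(f"fold {fold}: {len(train_files)} train / {len(test_files)} test images")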

Augmentations


We used the imgaug, CLoDSA, albumentations, and imagecorruptions libraries for extensive augmentation. Before using the augmentation scripts, run the setup_augmentation.sh file in "/augmentations". Since we used such a diverse set of augmentation techniques, we could not expose them as command-line arguments; the techniques are provided in methods.py, and you must edit augmentation_main.py before running it.
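
For a rough illustration of the kinds of transforms involved, a comparable pipeline could be assembled with albumentations as follows. The actual combinations and parameters used in the paper are defined in methods.py and augmentation_main.py; this sketch does not reproduce them exactly.

import numpy as np
import albumentations as A

# Placeholder data: replace with a fundus image (H x W x 3, uint8) and its binary vessel mask (H x W).
image = np.zeros((608, 576, 3), dtype=np.uint8)
mask = np.zeros((608, 576), dtype=np.uint8)

# Covers several of the augmentation families listed in the results tables below.
augment = A.Compose([
    A.Rotate(limit=30, p=0.5),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ElasticTransform(p=0.3),
    A.GridDistortion(p=0.3),
    A.OpticalDistortion(p=0.3),
    A.RandomGamma(p=0.3),
    A.GaussNoise(p=0.3),
    A.Blur(blur_limit=3, p=0.3),
    A.Equalize(p=0.3),
])

# Image and mask are transformed together, so the vessel labels stay aligned.
augmented = augment(image=image, mask=mask)
aug_image, aug_mask = augmented["image"], augmented["mask"]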

Results On DRIVE

| Augmentations | Accuracy | AUC | Mean Dice Coef | Challenge Ranking |
| --- | --- | --- | --- | --- |
| Rotation (30*k) and Flipping | 0.970 | 0.971 | 0.809 | 621 |
| + Zoom Out | 0.971 | 0.983 | 0.820 | 323 |
| + White Noise / Elastic Deformations / Shift | 0.970 | 0.985 | 0.822 | 257 |
| + Gamma Correction / Random Crop / Grid and Optical Distortion | 0.971 | 0.983 | 0.824 | 171 |
| + Blurring / Dropout / Eq. Histogram | 0.971 | 0.985 | 0.826 | 127 |

| Method | AUC | Accuracy |
| --- | --- | --- |
| UNet (2018*) | 0.9752 | 0.9555 |
| Residual UNet (2018) | 0.9779 | 0.9553 |
| IterNet (2019) | 0.9816 | 0.9571 |
| SUD-GAN (2020) | 0.9786 | 0.9560 |
| RV-GAN (2020) | 0.9887 | 0.9790 |
| Ours (2021) | 0.9848 | 0.9712 |

* UNet was proposed in 2015; its DRIVE results were reported in 2018.
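
For reference, the metrics above can be computed along these lines (a sketch using NumPy and scikit-learn, not the repository's evaluation code; ground_truth and probabilities stand for a binary vessel label and the model's sigmoid output):

import numpy as np
from sklearn.metrics import roc_auc_score

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2*|A intersect B| / (|A| + |B|) over binarized vessel masks."""
    y_true = y_true.astype(bool).ravel()
    y_pred = y_pred.astype(bool).ravel()
    intersection = np.logical_and(y_true, y_pred).sum()
    return (2.0 * intersection + eps) / (y_true.sum() + y_pred.sum() + eps)

# Placeholder arrays; replace with the label image and the model's prediction map.
ground_truth = np.random.randint(0, 2, size=(608, 576))
probabilities = np.random.rand(608, 576)

accuracy = np.mean(ground_truth.ravel() == (probabilities.ravel() > 0.5))
auc = roc_auc_score(ground_truth.ravel(), probabilities.ravel())
dice = dice_coefficient(ground_truth, probabilities > 0.5)
print(f"Accuracy={accuracy:.3f}  AUC={auc:.3f}  Dice={dice:.3f}")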

Authors

  • Enes Sadi Uysal
  • M. Şafak Bilici
  • Billur Selin Zaza
  • Mehmet Yiğit Özgenç
  • Onur Boyar



retinal-vessel-segmentation's Issues

Test only 27 images

@onurboyar

Thank you for your code, it has been very helpful to me.

I have my own private dataset, and my U-Net code runs correctly (train and test) on the full dataset. I want to test the model on only 27 images. When I execute the evaluate function, I get this error:

InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: slice index 1 of dimension 2 out of bounds.
[[{{node strided_slice_1}}]]
[[IteratorGetNext]]
[[IteratorGetNext/_4]]
(1) Invalid argument: slice index 1 of dimension 2 out of bounds.
[[{{node strided_slice_1}}]]
[[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_test_function_12429]

Function call stack:
test_function -> test_function

Can you please help me?
thank you

Pretrained Weights

Thanks for sharing your work.
Please share the pretrained weights for this work for further analysis; they would help me verify your results.
I would really appreciate an answer. Thank you very much.

I got an empty PNG result

Hi,

My results are empty PNGs, as shown below. I used the DRIVE dataset and converted all images to PNG before training.

(attached: empty result image)

How can I fix it?

Thank you
