License: MIT License

Python 100.00%

fewshot_gan-unet3d's Introduction

Few-shot 3D Multi-modal Medical Image Segmentation using Generative Adversarial Learning

This repository contains the TensorFlow implementation of the model we proposed in our paper of the same name: Few-shot 3D Multi-modal Medical Image Segmentation using Generative Adversarial Learning, submitted to Medical Image Analysis, October 2018.

Requirements

  • The code was written in Python (3.5.2) and TensorFlow (1.7.0)
  • Make sure to install all the libraries listed in requirement.txt (you can do so with the following command):
pip install -r requirement.txt

Dataset

The iSEG-2017 dataset was chosen to validate our proposed method. It contains 3D multi-modal brain MRI data of 10 labeled training subjects and 13 unlabeled testing subjects. We split the 10 labeled subjects into training, validation and testing images for both models (e.g. 2, 1 and 7 subjects, respectively). The remaining 13 unlabeled testing images are used only for training the GAN-based model.
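As an illustration, the labeled-subject split described above (e.g. 2 training, 1 validation, 7 testing subjects) can be sketched as follows. `split_subjects` and the subject IDs are hypothetical names for illustration, not part of this repository:

```python
import random

def split_subjects(subject_ids, n_train=2, n_val=1, seed=0):
    # Shuffle the labeled subject IDs reproducibly, then slice them
    # into training / validation / testing sets (e.g. 2 / 1 / 7).
    ids = list(subject_ids)
    random.Random(seed).shuffle(ids)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_subjects(range(1, 11))
print(len(train), len(val), len(test))  # prints: 2 1 7
```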

How to use the code?

  • Download the iSEG-2017 data and place it in the data folder. (Visit this link to download the data; you need to register for the challenge.)
  • To perform image-wise normalization and correction (run preprocess_static once more, changing the dataset argument from "labeled" to "unlabeled"):
$ cd preprocess
$ python
>>> from preprocess import preprocess_static
>>> preprocess_static("../data/iSEG", "../data/iSEG_preprocessed", dataset="labeled")
  • The preprocessed images will be stored in the iSEG_preprocessed folder.
  • If you fail to install ANTs N4BiasFieldCorrection, you can skip the above preprocessing step and continue working with the original dataset. (Just change the data_directory flag to the original data directory when running the models.)
  • You can run the standard 3D U-Net and our proposed models (both the feature matching GAN and the bad GAN) with this code and compare their performance.
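For reference, the image-wise normalization performed in preprocessing might look like the following minimal sketch, assuming a zero-mean, unit-variance scheme over nonzero (brain) voxels; the actual preprocess_static implementation may differ in detail:

```python
import numpy as np

def normalize_volume(vol):
    # Normalize one MRI volume to zero mean and unit variance,
    # computed only over nonzero (brain) voxels; background stays 0.
    mask = vol > 0
    out = vol.astype(np.float64)  # astype makes a copy
    out[mask] = (out[mask] - out[mask].mean()) / out[mask].std()
    return out
```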

How to run 3D U-Net?

$ cd ../unet3D
  • Configure the flags according to your experiment.
  • To run training
$ python main_unet.py --training
  • This will train your model and save the best checkpoint according to your validation performance.
  • You can also resume training from a saved checkpoint by setting the load_chkpt flag.
  • You can run testing to predict the segmented output, which will be saved in your result folder as ".nii.gz" files.
  • To run testing
$ python main_unet.py --testing
  • This version of the code only computes the Dice coefficient to evaluate testing performance. (Once the output segmented images are created, you can use them to compute any other evaluation metric.)
  • Note that the U-Net used here is modified to match the U-Net used in the proposed model. (To stabilise the GAN training.)
  • To use the original U-Net, replace network_dis with network (both networks are provided) in the build_model function of the U-Net class (in model_unet.py).
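Since testing reports only the Dice coefficient, here is a minimal sketch of that metric for one tissue label; `dice_coefficient` is a hypothetical helper for illustration, not the repository's own implementation:

```python
import numpy as np

def dice_coefficient(pred, target, label):
    # Dice = 2 * |P ∩ T| / (|P| + |T|) for one tissue label,
    # comparing predicted and ground-truth label volumes.
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # label absent in both volumes: perfect agreement
    return 2.0 * np.logical_and(p, t).sum() / denom
```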

How to run GAN based 3D U-Net?

$ cd ../proposed_model
  • Configure the flags according to your experiment.
  • To run training
$ python main.py --training
  • By default it trains the feature matching GAN based model. To train the bad GAN based model:
$ python main.py --training --badGAN
  • To run testing
$ python main.py --testing
  • Once you have run both models, you can compare their performance when 1 or 2 training images are available.
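The feature matching objective behind the default model can be sketched in a few lines: the generator is trained to match the batch-mean intermediate features that the discriminator extracts from real and generated volumes. `real_feats` and `fake_feats` here stand for those feature batches, and the exact distance used in the code may differ:

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    # L1 distance between the batch-mean intermediate discriminator
    # features of real (unlabeled) and generated volumes.
    return np.abs(real_feats.mean(axis=0) - fake_feats.mean(axis=0)).sum()
```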

Proposed model architecture

The following shows the architecture of the proposed model. (Read our paper for further details.)



Some results from our paper

  • Visual comparison of the segmentation by each model, for two test subjects of the iSEG-2017 dataset, when training with different numbers of labeled examples.

  • Segmentation of Subject 10 of the iSEG-2017 dataset predicted by different GAN-based models, when trained with 2 labeled images. The red box highlights a region in the ground truth where all these models give noticeable differences.


  • More such results can be found in the paper.

Contact

You can mail me at: [email protected]
If you use this code for your research, please consider citing the original paper:

Contributors

arnab39