
Table of contents

  1. Introduction
  2. Usage
  3. Results

Point-Unet: A Context-Aware Point-Based Neural Network for Volumetric Segmentation

This repository contains the implementation of the MICCAI 2021 paper Point-Unet: A Context-Aware Point-Based Neural Network for Volumetric Segmentation. Point-Unet is a point-based volumetric segmentation framework with three main modules: saliency attention, context-aware sampling, and point-based segmentation. The saliency attention module takes a volume as input and predicts an attentional probability map, which guides the context-aware sampling module to transform the volume into a point cloud. The point-based segmentation module then processes the point cloud and outputs per-point predictions, which are finally fused back into the volume to obtain the final segmentation result.

(Figure: overview of the Point-Unet architecture)
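
Below is a minimal, illustrative sketch of this pipeline in Python. The function names, the 80/20 foreground/background sampling split, and the single-channel 3D volume are assumptions made for clarity, not the repository's actual implementation.

import numpy as np

def point_unet_inference(volume, attention_net, point_seg_net, n_points, threshold):
    """Illustrative pipeline: attention map -> context-aware sampling -> point segmentation -> fusion."""
    # 1. Saliency attention: predict a per-voxel attentional probability map.
    attention_map = attention_net(volume)                      # same spatial shape as `volume`

    # 2. Context-aware sampling: sample densely where attention is high, sparsely elsewhere.
    fg = np.argwhere(attention_map >= threshold)
    bg = np.argwhere(attention_map < threshold)
    n_fg = min(len(fg), int(0.8 * n_points))                   # assumed 80/20 split
    coords = np.concatenate([
        fg[np.random.choice(len(fg), n_fg, replace=False)],
        bg[np.random.choice(len(bg), n_points - n_fg, replace=False)],
    ], axis=0)                                                 # (n_points, 3) voxel indices
    features = volume[tuple(coords.T)]                         # intensities at the sampled points

    # 3. Point-based segmentation on the sampled point cloud.
    point_labels = point_seg_net(coords, features)             # (n_points,) predicted classes

    # 4. Fuse the point predictions back into the volume grid.
    segmentation = np.zeros(volume.shape, dtype=np.int32)
    segmentation[tuple(coords.T)] = point_labels
    return segmentation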

Details of the Point-Unet model architecture and experimental results can be found in the following paper:

@inproceedings{ho2021point,
  title={Point-Unet: A Context-Aware Point-Based Neural Network for Volumetric Segmentation},
  author={Ho, Ngoc-Vuong and Nguyen, Tan and Diep, Gia-Han and Le, Ngan and Hua, Binh-Son},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={644--655},
  year={2021},
  organization={Springer}
}

Please CITE our paper when Point-Unet is used to help produce published results or is incorporated into other software.

Usage

Installation

The code is based on TensorFlow. It has been tested with Python 3.6.9 and CUDA 9.0 on Ubuntu 18.04 (NVIDIA V100).

Install the required Python packages:

$ conda env create -f environment.yml
$ sh PointSegment/compile_op.sh
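
After installation, a quick sanity check that TensorFlow can see the GPU (this small script is not part of the repository; it assumes the TensorFlow 1.x environment created above):

# check_env.py -- hypothetical helper
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
# In TensorFlow 1.x this returns True only if a CUDA-enabled GPU is usable.
print("GPU available:", tf.test.is_gpu_available())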

Data preparation

Download the BraTS18, BraTS19, BraTS20, and Pancreas datasets from their official websites and organize them in the structure below (volumes in .nii.gz format):

dataset/BraTS2018/
  training/
  validation/

dataset/BraTS2019/
  training/
  validation/

dataset/BraTS2020/
  training/
  validation/

dataset/Pancreas/
  ct/
  seg/
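
A small, hypothetical helper to verify this layout before training (file naming inside each split follows the official dataset releases):

# check_dataset.py -- illustrative only, not part of the repository
import os

EXPECTED = {
    "dataset/BraTS2018": ["training", "validation"],
    "dataset/BraTS2019": ["training", "validation"],
    "dataset/BraTS2020": ["training", "validation"],
    "dataset/Pancreas":  ["ct", "seg"],
}

for root, subdirs in EXPECTED.items():
    for sub in subdirs:
        path = os.path.join(root, sub)
        n = len(os.listdir(path)) if os.path.isdir(path) else 0
        print(f"{path}: {'OK' if n else 'MISSING or EMPTY'} ({n} entries)")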

Training and Evaluation

Available command-line arguments for the scripts below:

optional arguments:
  --logdir LOGDIR             path to the log directory
  --gpu                       GPU ID
  --fold                      fold ID
  --n_epoch                   number of training epochs
  --n_point                   number of points sampled per volume
  --threshold                 threshold used to sample points (e.g., 0.9)
  --mode                      process mode [train, test]
  --inPros_path               path to the attention maps
  --outBinary_path            path to the output binary segmentation maps
  --data_PC_path              path to the point cloud data
  --data_3D_path              path to the 3D volume data
  --checkpoint_path           path to the model checkpoint
  --results_path              path to the point cloud segmentation results
  --outSegment_path           path to the final segmentation results on 3D volumes
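
For reference, the options above map onto a standard argparse setup roughly like the sketch below; the repository's own scripts may define them with different defaults or types.

# Illustrative argument parsing mirroring the options listed above.
import argparse

parser = argparse.ArgumentParser(description="Point-Unet options (sketch)")
parser.add_argument("--logdir", type=str, help="path to the log directory")
parser.add_argument("--gpu", type=int, default=0, help="GPU ID")
parser.add_argument("--fold", type=int, help="fold ID")
parser.add_argument("--n_epoch", type=int, default=100, help="number of training epochs")
parser.add_argument("--n_point", type=int, help="number of points sampled per volume")
parser.add_argument("--threshold", type=float, default=0.9, help="threshold used to sample points")
parser.add_argument("--mode", choices=["train", "test"], help="process mode")
parser.add_argument("--inPros_path", type=str, help="path to the attention maps")
parser.add_argument("--outBinary_path", type=str, help="path to the output binary segmentation maps")
parser.add_argument("--data_PC_path", type=str, help="path to the point cloud data")
parser.add_argument("--data_3D_path", type=str, help="path to the 3D volume data")
parser.add_argument("--checkpoint_path", type=str, help="path to the model checkpoint")
parser.add_argument("--results_path", type=str, help="path to the point cloud segmentation results")
parser.add_argument("--outSegment_path", type=str, help="path to the final 3D segmentation results")
args = parser.parse_args()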

Setup on Pancreas

  • Saliency Attention Map

    Train the saliency attention model:

    $ python3 SaliencyAttention/train.py \
              --logdir SaliencyAttention/model_logs/Pancreas/ \
              --gpu 0

    Predict attention maps:

    $ python3 SaliencyAttention/train.py \
              --checkpoint_path SaliencyAttention/model_logs/Pancreas/model-50000 \
              --gpu 0 \
              --predict
  • Context-Aware Sampling

    Generate Binary Map (see the thresholding sketch after this list):

    $ python3 utils/genBinaryMap.py \
              --inPros_path dataset/Pancreas/attention_maps/ \
              --outBinary_path dataset/Pancreas/binary_maps/ \
              --threshold 0.9

    Generate Point Cloud data (outputs are saved in *.ply, *.pkl, and *.npy formats):

    $ python3 PointSegment/utils/dataPreparePancreas.py \
              --n_point 180000 \
              --data_3D_path dataset/Pancreas/ \
              --outPC_path dataset/Pancreas/PC_data
  • PointSegment

    Train the model:

    $ python3 PointSegment/runPancreas.py \
              --gpu 0 \
              --mode train \
              --fold 3 \
              --n_epoch 100 \
              --data_PC_path dataset/Pancreas/PC_data \
              --logdir PointSegment/model_logs/Pancreas/fold3

    Evaluate the model:

    $ python3 PointSegment/runPancreas.py \
              --gpu 0 \
              --mode test \
              --fold 3 \
              --data_PC_path dataset/Pancreas/PC_data \
              --data_3D_path dataset/Pancreas/ct \
              --checkpoint_path PointSegment/model_logs/Pancreas/fold3/snap-497 \
              --results_path dataset/Pancreas/Results

    Generate Segmentation Results (outputs are saved in *.nii.gz format):

    $ python3 utils/genSegmentationPancreas.py \
              --inPros_path dataset/Pancreas/predict_npy/ \
              --outSegment_path dataset/Pancreas/predict_nii
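
The binary-map step referenced above simply thresholds the predicted attention maps. A minimal sketch with nibabel and NumPy is shown below; the file naming and I/O details are assumptions, not the repository's exact logic.

# Illustrative version of the thresholding behind utils/genBinaryMap.py.
import os
import numpy as np
import nibabel as nib

def generate_binary_maps(inPros_path, outBinary_path, threshold=0.9):
    os.makedirs(outBinary_path, exist_ok=True)
    for name in sorted(os.listdir(inPros_path)):
        if not name.endswith(".nii.gz"):
            continue
        img = nib.load(os.path.join(inPros_path, name))
        prob = img.get_fdata()                        # attentional probability map in [0, 1]
        binary = (prob >= threshold).astype(np.uint8)
        nib.save(nib.Nifti1Image(binary, img.affine), os.path.join(outBinary_path, name))

generate_binary_maps("dataset/Pancreas/attention_maps/",
                     "dataset/Pancreas/binary_maps/",
                     threshold=0.9)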

Setup on BraTS

  • Saliency Attention Map

    Train the saliency attention model:

    $ python3 SaliencyAttention/train.py \
              --logdir SaliencyAttention/model_logs/BraTS/ \
              --gpu 0

    Predict Attention maps:

    $ python3 SaliencyAttention/train.py \
              --checkpoint_path SaliencyAttention/model_logs/BraTS/model-50000 \
              --gpu 0 \
              --predict
  • Context-Aware Sampling

    Generate Binary Map:

    $ python3 utils/genBinaryMap.py \
              --inPros_path dataset/BraTS2020/attention_maps/  \
              --outBinary_path dataset/BraTS2020/binary_maps/ \
              --threshold 0.9
    

    Generate Point Cloud data (outputs are saved in *.ply, *.pkl, and *.npy formats):

    $ python3 PointSegment/utils/dataPrepareBraTS.py \
              --n_point 365000 \
              --data_3D_path dataset/BraTS20/ \
              --outPC_path dataset/BraTS20/PC_data
  • PointSegment

    Train the model:

    $ python3 PointSegment/runBraTS.py \
              --gpu 0 \
              --mode train \
              --n_epoch 100 \
              --logdir PointSegment/model_logs/BraTS20 \
              --data_PC_path dataset/BraTS2020

    Evaluate the model:

    $ python3 PointSegment/runBraTS.py \
              --gpu 0 \
              --mode test \
              --data_PC_path dataset/BraTS2020 \
              --checkpoint_path PointSegment/model_logs/BraTS20/snapshots/snap-8261 \
              --results_path dataset/BraTS2020/predict_npy

    Generate Segmentation Results (outputs are saved in *.nii.gz format; see the fusion sketch after this list):

    $ python3 utils/genSegmentationBraTS.py \
              --inPros_path dataset/BraTS2020/predict_npy/ \
              --outSegment_path dataset/BraTS2020/predict_nii
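
The final step fuses the per-point predictions back onto the voxel grid before saving them as NIfTI. A hedged sketch of that fusion is shown below; it assumes each case stores its sampled voxel coordinates alongside the predicted labels, and the file names are purely illustrative.

# Illustrative fusion of point-cloud predictions back into a 3D volume.
import numpy as np
import nibabel as nib

def fuse_points_to_volume(coords, labels, volume_shape, affine, out_path):
    """coords: (N, 3) integer voxel indices; labels: (N,) predicted class per point."""
    segmentation = np.zeros(volume_shape, dtype=np.uint8)
    segmentation[coords[:, 0], coords[:, 1], coords[:, 2]] = labels
    nib.save(nib.Nifti1Image(segmentation, affine), out_path)

# Hypothetical usage with a reference image providing the target shape and affine:
ref = nib.load("dataset/BraTS2020/validation/case_0001/flair.nii.gz")      # hypothetical path
coords = np.load("dataset/BraTS2020/predict_npy/case_0001_coords.npy")     # hypothetical file
labels = np.load("dataset/BraTS2020/predict_npy/case_0001_labels.npy")     # hypothetical file
fuse_points_to_volume(coords, labels, ref.shape, ref.affine,
                      "dataset/BraTS2020/predict_nii/case_0001.nii.gz")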

Results

  • Offline validation set

    Dataset   Dice ET   Dice WT   Dice TC   Average Dice   Average HD95
    BraTS18   80.76     90.55     87.09     86.13          6.01
    BraTS19   85.56     91.18     90.10     88.98          4.92
    BraTS20   76.43     89.67     82.97     83.02          8.26
  • Online validation set

    Dataset   Dice ET   Dice WT   Dice TC   Average Dice   Average HD95
    BraTS18   80.97     90.50     84.11     85.19          6.30
    BraTS19   79.01     87.63     79.70     82.11          10.39
    BraTS20   78.98     89.71     82.75     83.81          11.73
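
For reference, the Dice score reported above measures the overlap between predicted and ground-truth masks for each tumor region; a minimal NumPy sketch is given below. HD95 (the 95th-percentile Hausdorff distance) is typically computed with a dedicated library such as MedPy.

# Dice score for a single binary region (e.g., ET, WT, or TC).
import numpy as np

def dice_score(pred, gt, eps=1e-7):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)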

Acknowledgement

This repository borrows a lot of code from 3DUnet-Tensorflow-Brats18 and RandLA-Net.
