
Home Page: http://www.holgerroth.com

License: Other

Languages: Jupyter Notebook 56.32%, C++ 33.67%, Python 5.24%, Cuda 2.49%, CMake 1.16%, MATLAB 0.35%, Shell 0.29%, Makefile 0.27%, CSS 0.10%, HTML 0.08%, Dockerfile 0.03%

Topics: segmentation, deep-learning, fully-convolutional-networks, computed-tomography, semantic-segmentation, multi-organ-segmentation, computer-aided-detection, convolutional-neural-networks

3dunet_abdomen_cascade's Introduction

3Dunet_abdomen_cascade

This repository provides the code and model files for multi-organ segmentation in abdominal CT using cascaded 3D U-Net models. The models are described in:

"Hierarchical 3D fully convolutional networks for multi-organ segmentation" Holger R. Roth, Hirohisa Oda, Yuichiro Hayashi, Masahiro Oda, Natsuki Shimizu, Michitaka Fujiwara, Kazunari Misawa, Kensaku Mori https://arxiv.org/abs/1704.06382

This work is based on the open-source implementation of 3D U-Net: https://lmb.informatik.uni-freiburg.de/resources/opensource/unet.en.html We thank the authors for providing their implementation.

Olaf Ronneberger, Philipp Fischer & Thomas Brox. U-Net: Convolutional Networks for Biomedical Image Segmentation. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9351, 234-241, 2015.

Özgün Çiçek, Ahmed Abdulkadir, S. Lienkamp, Thomas Brox & Olaf Ronneberger. 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. Medical Image Computing and Computer-Assisted Intervention (MICCAI), Springer, LNCS, Vol. 9901, 424-432, Oct 2016.

3D U-Net is based on Caffe. To compile it, follow the Caffe installation instructions: http://caffe.berkeleyvision.org/installation.html#prequequisites

To run the segmentation algorithm on a new case, use:

    python run_full_cascade_deploy.py

Note: please update the paths in run_full_cascade_deploy.py first.

You might have to add a -2000 offset to win_min/max1/2 in deploy_cascade.py if your images are in Hounsfield units.
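As a sketch of what that offset does (the names `apply_window`, `win_min`, and `win_max` here are illustrative stand-ins, not the exact variables in deploy_cascade.py):

```python
import numpy as np

def apply_window(volume, win_min, win_max, offset=0):
    """Clip a CT volume to an intensity window, optionally shifting the
    window bounds by a fixed offset (e.g. -2000 when the volume is in
    Hounsfield units but the model expects offset intensities)."""
    return np.clip(volume, win_min + offset, win_max + offset)
```

Check the actual win_min/max1/2 values in deploy_cascade.py before applying any offset; the -2000 shift only makes sense if your intensities differ from the training data by that amount.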

For training, please follow the 3D U-Net instructions. prepare_data.py can be useful for converting NIfTI images and label images to h5 containers which can be read by Caffe.
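A minimal sketch of such a conversion, assuming HDF5 datasets named "data" and "label" in N x C x D x H x W layout (the actual dataset names and axis order must match the HDF5 data layer in your .prototxt; prepare_data.py is the authoritative version):

```python
import h5py
import numpy as np

def volume_to_h5(image, label, out_path):
    """Write an image volume and its label volume to one HDF5 container.

    `image` and `label` are 3-D arrays (e.g. obtained from a NIfTI file
    via nibabel.load(path).get_fdata()). Caffe's HDF5 layer expects 5-D
    blobs, so a batch axis and a channel axis are prepended here.
    """
    image = np.asarray(image, dtype=np.float32)[np.newaxis, np.newaxis]
    label = np.asarray(label, dtype=np.uint8)[np.newaxis, np.newaxis]
    with h5py.File(out_path, "w") as f:
        f.create_dataset("data", data=image, compression="gzip")
        f.create_dataset("label", data=label, compression="gzip")
```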

Reference

Roth, Holger R., Hirohisa Oda, Xiangrong Zhou, Natsuki Shimizu, Ying Yang, Yuichiro Hayashi, Masahiro Oda, Michitaka Fujiwara, Kazunari Misawa, and Kensaku Mori. "An application of cascaded 3D fully convolutional networks for medical image segmentation." Computerized Medical Imaging and Graphics 66 (2018): 90-99. https://arxiv.org/pdf/1803.05431.pdf

Visceral model

We also provide a model fine-tuned from the abdominal model on the VISCERAL data set [1]. All related code and models are provided in the "VISCERAL" subfolder, which also contains *.sh scripts for fine-tuning the different stages of the cascade; train.sh trains the model from scratch. The data list files in models/3dUnet_Visceral_with_BN.prototxt need to be updated accordingly. For more details, please refer to VISCERAL/JAMIT2017_rothhr_manuscript.pdf.

Please contact Holger Roth ([email protected]) for any questions.

[1] Jimenez-del-Toro, O., Müller, H., Krenn, M., Gruenberg, K., Taha, A. A., Winterstein, M., Kontokotsios, G., et al. (2016). Cloud-based evaluation of anatomical structure segmentation and landmark detection algorithms: VISCERAL anatomy benchmarks. IEEE Transactions on Medical Imaging, 35(11), 2459-2475. (http://www.visceral.eu/benchmarks/anatomy3-open/)


3dunet_abdomen_cascade's Issues

Regarding how to train the model

I'm still trying to understand how the data preparation works. Do you run prepare_data.py twice, once for stage 1 and a second time for stage 2? In the code, does MASK refer to stage 1's candidate region?
Thanks!

How to merge the segmentation?

In your code for the VISCERAL dataset, you use a merged segmentation label file. However, I only have separate segmentation files for the different organs. What should I do to merge the segmentation files so that your code works?
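Absent the repository's own tooling, per-organ binary masks on the same voxel grid can be merged along these lines (a sketch, not the repository's script; the label ids are assumptions, and later organs overwrite earlier ones where masks overlap):

```python
import numpy as np

def merge_masks(masks_with_labels):
    """masks_with_labels: list of (binary 3-D array, integer label id).
    Returns one multi-label volume; background voxels stay 0."""
    merged = np.zeros(masks_with_labels[0][0].shape, dtype=np.uint8)
    for mask, label in masks_with_labels:
        merged[np.asarray(mask) > 0] = label
    return merged
```

The label ids must of course match the class order the network was trained with.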

blob size exceeds INT_MAX: have you had this error?

Hi, I tried to run the 3D U-Net code with your pretrained model and am asking about the error below:

I0625 14:51:07.611187 11147 net.cpp:400] conv_d0b-c -> d0c
F0625 14:51:07.612490 11147 blob.cpp:33] Check failed: shape[i] <= 0x7fffffff / count_ (385 vs. 89) blob size exceeds INT_MAX
*** Check failure stack trace: ***
    @     0x7f483a8105cd  google::LogMessage::Fail()
    @     0x7f483a812433  google::LogMessage::SendToLog()
    @     0x7f483a81015b  google::LogMessage::Flush()
    @     0x7f483a812e1e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f483aed0e60  caffe::Blob<>::Reshape()
    @     0x7f483afbe205  caffe::BaseConvolutionLayer<>::Reshape()
    @     0x7f483aebe48f  caffe::Net<>::Init()
    @     0x7f483aebfd11  caffe::Net<>::Net()
    @     0x7f483ae2014a  caffe::Solver<>::InitTrainNet()
    @     0x7f483ae214b7  caffe::Solver<>::Init()
    @     0x7f483ae2185a  caffe::Solver<>::Solver()
    @     0x7f483ae35823  caffe::Creator_SGDSolver<>()
    @           0x40a6d8  train()
    @           0x4075a8  main
    @     0x7f48392c4830  __libc_start_main
    @           0x407d19  _start
    @              (nil)  (unknown)
Aborted (core dumped)

Have you faced this error? I tried to apply your pre-trained model to my data; if you have seen it, could you please guide me? I have installed the 3D U-Net patch on Caffe and then tried, unsuccessfully, to install opencl-caffe. Thanks
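For context on why this check fires (a sketch of the arithmetic, not repository code): Caffe stores a blob's element count in a signed 32-bit integer, so the product of all blob dimensions must stay below 2**31 - 1. With a full-size CT volume, an early multi-channel feature map can exceed that limit; cropping or downsampling the input is the usual workaround.

```python
INT_MAX = 2**31 - 1  # Caffe's blob element-count limit (signed 32-bit)

def blob_elements(shape):
    """Total element count of a blob with the given dimensions."""
    n = 1
    for dim in shape:
        n *= dim
    return n

# e.g. a hypothetical 64-channel feature map over a full 512^3 volume
# overflows the limit, while the same map over a 132^3 crop does not:
print(blob_elements((1, 64, 512, 512, 512)) > INT_MAX)  # True
print(blob_elements((1, 64, 132, 132, 132)) > INT_MAX)  # False
```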

CUDA 10 Support

Is it possible to build the 3D U-Net Caffe package with CUDA 10.0 and cuDNN 7.4.2?

about softmax_loss layer

hi~
It is much appreciated that you released this code; it is very helpful for my research. However, I have a question about the softmax_loss layer: in some situations the batch data contains just C classes, where C < K and K is the total number of classes in the network, and it logs the error "sum of pixel-wise loss weights is zero".
Has this ever happened to you? Does it affect the results?

thank you very much in advance!

overlapping tiles

Hi,
I'm wondering whether overlapping tiles are used in the code. From the results, I believe the tiling is non-overlapping; is there a flag to control overlapping versus non-overlapping tiles?
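For reference, overlap in tiled inference comes down to the stride between tile start offsets: a stride equal to the tile size gives non-overlapping tiles, a smaller stride gives overlap. A 1-D sketch of the offset computation (illustrative only, not the flag asked about here):

```python
def tile_starts(length, tile, stride):
    """Start offsets covering [0, length) with tiles of size `tile`.
    stride == tile -> non-overlapping; stride < tile -> overlapping.
    The last tile is clamped so the axis is fully covered."""
    if tile >= length:
        return [0]
    starts = list(range(0, length - tile + 1, stride))
    if starts[-1] + tile < length:
        starts.append(length - tile)
    return starts
```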

Caffe prototxt

Hi, can you provide the 3D U-Net Caffe prototxt? I have been trying hard to convert the TensorFlow architecture to a Caffe prototxt. Thanks!

Understanding the framework

Hello,
How many patches did you create per image, and for how many epochs did you train?

How did you maintain the high-resolution and upsampled-prediction concatenation? Did you reconstruct the low-resolution segmentation of the entire image volume (subject), upsample it to the original size, and then crop patches from the high-resolution image and the corresponding predicted segmentation (with the voxel as center)?
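The low-to-high-resolution hand-off the question describes can be sketched with nearest-neighbour upsampling (integer factors only here; the repository's actual resampling may differ):

```python
import numpy as np

def upsample_labels(labels, factor):
    """Nearest-neighbour upsampling of an integer label map by an
    integer factor along every axis (repeats each voxel's label)."""
    labels = np.asarray(labels)
    for axis in range(labels.ndim):
        labels = np.repeat(labels, factor, axis=axis)
    return labels
```

Patches for the second stage would then be cropped from the full-resolution image and this upsampled map at the same coordinates.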

How to use Pancreas-CT data?

Thanks for your wonderful work. How can I use [Pancreas-CT](https://wiki.cancerimagingarchive.net/display/Public/Pancreas-CT) as an example?

patch training

hi~
I wonder whether this supports patch-based training rather than training on the whole image?

About syncedmem.cpp:56, check failed: error == cudaSuccess (2 vs. 0) out of memory

Hi Dr.Roth,

I am Peter from Vanderbilt University; thank you for the great work. I want to run this code directly on my clinical datasets, because it is interesting and useful for my multi-organ and liver segmentation pipeline. My clinical datasets are around 512 x 512 x 100-150 voxels, which according to the paper should run easily on an Nvidia 2080 Ti GPU. However, the error above appears when the first-stage model generates its prediction. Could this be related to the Caffe build I compiled? I have followed the steps in the readme for implementing 3D U-Net in Caffe.

Thanks a lot!

Best,
Peter
