yulequan / heartseg


code of DenseVoxNet and 3D-DSN

Home Page: http://appsrv.cse.cuhk.edu.hk/~lqyu/DenseVoxNet/index.html

Shell 1.88% MATLAB 97.52% Objective-C 0.60%
3d-cnn deep-learning segmentaion volumetric

heartseg's People

Contributors: yulequan


heartseg's Issues

About test prototxt and choosing the loss_weight

Thanks for sharing your great work!!

  • Fig. 2 shows the learning curves of the 3D DSN, where the validation loss is larger than the training loss. Do you use loss_weight (for the auxiliary classifiers) in the validation .prototxt? To my knowledge, loss_weight is only used in the training .prototxt.
  • One more thing: how do you choose the loss_weight value for each auxiliary classifier? In your work "Volumetric-ConvNet-for-prostate-segmentation" the loss_weight is smaller than 1, but here it is larger than 1.
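As background for the loss_weight question: with deep supervision, the solver simply minimizes the weighted sum of the main loss and the auxiliary losses, so loss_weight controls how strongly each side branch steers training. A minimal sketch (the weights and loss values below are illustrative, not the paper's):

```python
def deep_supervision_loss(main_loss, aux_losses, loss_weights):
    """Total training loss with deeply supervised auxiliary classifiers.

    In Caffe prototxt terms, each auxiliary SoftmaxWithLoss layer carries
    a loss_weight, and the solver minimizes the weighted sum.
    """
    return main_loss + sum(w * l for w, l in zip(loss_weights, aux_losses))

# Example: main loss plus two auxiliary branches with weights below 1.0
total = deep_supervision_loss(0.9, [1.2, 1.5], [0.5, 0.25])
```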

Padding the Segmentation

Hello Lequan,

I am curious about a padding operation here:

new_seg = padarray(new_seg,pading_size,100);

If my understanding is correct, you pad the ROI when it is smaller than the crop size. Why pad the segmentation ground truth with 100? Should it be padded with 0s, the background label, or is 100 a flag marking padded voxels?
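For context, a hedged NumPy analogue of the padarray call above, showing how padding with a distinctive value such as 100 keeps padded voxels separable from the real background label 0 (the reading of 100 as a padding flag is an assumption, not confirmed by the repo):

```python
import numpy as np

# Toy stand-in for the MATLAB line
#   new_seg = padarray(new_seg, pading_size, 100);
seg = np.zeros((3, 3, 3), dtype=np.uint8)      # toy ground-truth block
pad = ((2, 2), (2, 2), (2, 2))                 # pad 2 voxels per side
padded = np.pad(seg, pad, mode="constant", constant_values=100)

# The flag value makes padded voxels easy to find and mask out later:
pad_mask = padded == 100
```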

Thank you~ :)

Best,
Zhuotun

Error when loading h5 data

Hi, I'm trying to run your code and followed your instructions to generate the .h5 files, but errors came up when I tried to train the model.

I have no idea what caused this problem.
(By the way, I changed the train.list file so that it only contains one file; the error also appeared when there were eight files.)
Also, I checked the .h5 file and found that the size of 'data' and 'label' was 141*127*205*1*1. I wonder whether this size is correct.
Thank you very much if you can tell me what went wrong!
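If it helps, here is a hedged h5py sketch of writing a volume with leading singleton axes (N x C x D x H x W), the blob layout ND Caffe layers conventionally expect, rather than trailing ones (141*127*205*1*1). The axis order is an assumption; MATLAB's column-major HDF5 writing can reverse dimension order relative to Python:

```python
import h5py
import numpy as np

# A 141 x 127 x 205 volume and label map (random data for illustration).
vol = np.random.rand(141, 127, 205).astype(np.float32)
lbl = np.zeros((141, 127, 205), dtype=np.uint8)

# Prepend singleton batch and channel axes: (1, 1, D, H, W).
with h5py.File("train_sample.h5", "w") as f:   # filename is illustrative
    f.create_dataset("data", data=vol[np.newaxis, np.newaxis, ...])
    f.create_dataset("label", data=lbl[np.newaxis, np.newaxis, ...])

with h5py.File("train_sample.h5", "r") as f:
    shape = f["data"].shape                    # expect (1, 1, 141, 127, 205)
```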

prepare_h5_data() function not working as it should

Hi
Thanks for sharing your nice work. I tried to train your model on our tissue-cleared dataset, but unfortunately the prepare_h5_data() function is not creating 64x64x64 patches. I debugged the code and realised that it is creating ROIs and patches the same size as the given image.
Could you please explain where the problem is?
I debugged it on the two files "training_axial_crop_pat0.nii.gz" and "training_axial_crop_pat0-label.nii.gz", of size 141x127x207, and the generated patches are also that size. Shouldn't the patches be 64x64x64?
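For reference, a minimal sketch of the patch extraction one would expect from prepare_h5_data(): random 64x64x64 crops from the (ROI-cropped) volume. The function name random_patch and the sampling strategy are illustrative, not the repo's exact code:

```python
import numpy as np

def random_patch(volume, label, patch_size=64, rng=None):
    """Crop one random patch_size^3 patch from aligned volume and label."""
    rng = rng or np.random.default_rng(0)
    d, h, w = volume.shape
    z = rng.integers(0, d - patch_size + 1)
    y = rng.integers(0, h - patch_size + 1)
    x = rng.integers(0, w - patch_size + 1)
    sl = (slice(z, z + patch_size),
          slice(y, y + patch_size),
          slice(x, x + patch_size))
    return volume[sl], label[sl]

# Toy volume with the dimensions quoted in the issue.
vol = np.zeros((141, 127, 207), dtype=np.float32)
lbl = np.zeros((141, 127, 207), dtype=np.uint8)
patch, patch_lbl = random_patch(vol, lbl)
```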
regards

Loss Function

Hello,
I am confused about why you chose SoftmaxWithLoss as the loss function. Is it related to the training labels?
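For context, SoftmaxWithLoss computes a per-voxel softmax over the class axis followed by the negative log-likelihood of the integer class label, which is the natural fit when labels are mutually exclusive class indices (background / myocardium / blood pool). A hedged NumPy sketch of the same computation:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """logits: (C, N) class scores; labels: (N,) integer class ids."""
    z = logits - logits.max(axis=0, keepdims=True)       # stabilize
    log_probs = z - np.log(np.exp(z).sum(axis=0, keepdims=True))
    return -log_probs[labels, np.arange(labels.size)].mean()

# 3 classes, 2 voxels; each voxel's true class scores highest.
logits = np.array([[2.0, 0.0], [0.0, 2.0], [0.0, 0.0]])
labels = np.array([0, 1])
loss = softmax_cross_entropy(logits, labels)
```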

Elaborate Details of the DenseVoxNet

Hello Lequan,

Nice work and thank you for sharing! :) Basically, I would like to reproduce the results reported in your DenseVoxNet paper.

I was able to run some preliminary experiments with your proposed DenseVoxNet and get testing results from the online challenge server. I adopted all the default parameters of your published code. The training DICE is really good, while the testing results seem a little weird: from the test server I got a mean DICE of 72.8 (myocardium) and 91.7 (blood pool) on the testing split. The myocardium result is ~10% below the reported 82.1, so I definitely missed something. I double-checked your paper and code and list all the settings here, extracted directly from your code and left unchanged. Please point out if I made a major difference or mistake.

  1. Data pre-processing + augmentation
    patchSize: [64, 64, 64]
    zero mean + unit variance + no resampling (use_isotropic = 0)
    create_roi: aug_r = 20
    rotation: 90, 180, 270
    flip: only at the axial plane

  2. Training settings
    train_densenet.prototxt as you published on github
    poly learning strategy with base_lr of 0.05, and 0.9 power
    weight decay of 0.0005 and 0.9 momentum
    max_iteration: 15000

  3. Testing settings
    deploy.prototxt as you published on github
    parameters are set as default as in your test_model.m
    overlap: 4, use_isotropic 0, tr_mean = 0
    RemoveMinorCC: 0.2
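For reference, the poly learning-rate policy in item 2 decays the rate as lr = base_lr * (1 - iter / max_iter)^power. A small sketch with the settings quoted above (base_lr 0.05, power 0.9, max_iter 15000):

```python
def poly_lr(base_lr, iteration, max_iter, power):
    """Caffe's 'poly' lr_policy: smooth decay from base_lr to 0."""
    return base_lr * (1.0 - iteration / max_iter) ** power

lr_start = poly_lr(0.05, 0, 15000, 0.9)      # base_lr at iteration 0
lr_mid = poly_lr(0.05, 7500, 15000, 0.9)     # roughly halved at midpoint
lr_end = poly_lr(0.05, 15000, 15000, 0.9)    # 0.0 at the final iteration
```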

Apart from these settings, I have two more puzzles.
First, I noticed one thing in your paper: "we implement a kind of long skip connection to connect the transition layer to the output layer with 2x2x2 Deconv layer". Is this long skip connection reflected in your prototxt? I haven't found a Deconv layer with kernel size 2x2x2; maybe I missed that part.

Second, your DenseVoxNet is reported to have 1.8M parameters, while the 3D-Caffe build (https://github.com/yulequan/3D-Caffe/tree/3D-Caffe) saved a ~18M caffemodel on my desktop (Titan X, CUDA 7.5 and cuDNN 5.1). So I am wondering whether there is another major difference between my experiments and yours that I am not aware of.

Thank you so so so much! :) I really appreciate it!

Best,
Zhuotun

Check failed: axis_index < num_axes() (4 vs. 4) axis 4 out of range for 4-D Blob with shape

Hi, Thank you for sharing the code.
I have compiled 3D-Caffe with pycaffe, and I created the .h5 files in Python. At first, since I saw that the input shape for 3D-Caffe should be in the order NxCxDxHxW, I got:

I0721 02:11:07.162717  6153 hdf5_data_layer.cpp:117] Number of HDF5 files: 90
I0721 02:11:07.163275  6153 hdf5.cpp:32] Datatype class: H5T_FLOAT
I0721 02:11:07.326478  6153 hdf5.cpp:35] Datatype class: H5T_INTEGER
I0721 02:11:07.592453  6153 net.cpp:150] Setting up loaddata
I0721 02:11:07.592517  6153 net.cpp:157] Top shape: 1 1 104 256 256 (6815744)
I0721 02:11:07.592532  6153 net.cpp:157] Top shape: 1 1 104 256 256 (6815744)
I0721 02:11:07.592540  6153 net.cpp:165] Memory required for data: 54525952
I0721 02:11:07.592558  6153 layer_factory.hpp:77] Creating layer conv_d0a-b
I0721 02:11:07.592628  6153 net.cpp:106] Creating Layer conv_d0a-b
I0721 02:11:07.592651  6153 net.cpp:454] conv_d0a-b <- data
I0721 02:11:07.592684  6153 net.cpp:411] conv_d0a-b -> d0b
F0721 02:11:07.593014  6153 blob.hpp:140] Check failed: num_axes() <= 4 (5 vs. 4) Cannot use legacy accessors on Blobs with > 4 axes.
*** Check failure stack trace: ***
    @     0x7f3376c885cd  google::LogMessage::Fail()
    @     0x7f3376c8a433  google::LogMessage::SendToLog()
    @     0x7f3376c8815b  google::LogMessage::Flush()
    @     0x7f3376c8ae1e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f33772170d5  caffe::Blob<>::LegacyShape()
    @     0x7f337721d6a3  caffe::MSRAFiller<>::Fill()
    @     0x7f33772a275e  caffe::BaseConvolutionLayer<>::LayerSetUp()
    @     0x7f3377370a45  caffe::Net<>::Init()
    @     0x7f3377372301  caffe::Net<>::Net()
    @     0x7f337734c7ea  caffe::Solver<>::InitTrainNet()
    @     0x7f337734cd57  caffe::Solver<>::Init()
    @     0x7f337734d0ea  caffe::Solver<>::Solver()
    @     0x7f3377205c93  caffe::Creator_SGDSolver<>()
    @           0x40c48f  train()
    @           0x409214  main
    @     0x7f33756da830  __libc_start_main
    @           0x4099b9  _start
    @              (nil)  (unknown)

Then I changed the code to produce only CxHxWxD (note the second line should reshape lbl, not img):

        img = img.reshape(1, img.shape[0], img.shape[1], img.shape[2])
        lbl = lbl.reshape(1, lbl.shape[0], lbl.shape[1], lbl.shape[2])

The shape of the h5 files, for example for a volume of grayscale images, is (1, 256, 256, 104) for both data and label, i.e. CxHxWxD (channel, height, width, depth). When I run it in 3D-Caffe, it shows this error, which is related to the Concat layer:

0721 04:24:50.512578  8927 net.cpp:157] Top shape: 1 512 60 22 (675840)
I0721 04:24:50.512579  8927 net.cpp:165] Memory required for data: 168148992
I0721 04:24:50.512581  8927 layer_factory.hpp:77] Creating layer concat_d2c_u2a-b
I0721 04:24:50.512584  8927 net.cpp:106] Creating Layer concat_d2c_u2a-b
I0721 04:24:50.512588  8927 net.cpp:454] concat_d2c_u2a-b <- scaleu2a
I0721 04:24:50.512589  8927 net.cpp:454] concat_d2c_u2a-b <- scaled2c_relu_d2c_0_split_1
I0721 04:24:50.512593  8927 net.cpp:411] concat_d2c_u2a-b -> u2b
F0721 04:24:50.512604  8927 concat_layer.cpp:42] Check failed: top_shape[j] == bottom[i]->shape(j) (60 vs. 64) All inputs must have the same shape, except at concat_axis.
*** Check failure stack trace: ***
    @     0x7ff92fdc75cd  google::LogMessage::Fail()
    @     0x7ff92fdc9433  google::LogMessage::SendToLog()
    @     0x7ff92fdc715b  google::LogMessage::Flush()
    @     0x7ff92fdc9e1e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7ff9303b6a71  caffe::ConcatLayer<>::Reshape()
    @     0x7ff9304afa54  caffe::Net<>::Init()
    @     0x7ff9304b1301  caffe::Net<>::Net()
    @     0x7ff93048b7ea  caffe::Solver<>::InitTrainNet()
    @     0x7ff93048bd57  caffe::Solver<>::Init()
    @     0x7ff93048c0ea  caffe::Solver<>::Solver()
    @     0x7ff930344c93  caffe::Creator_SGDSolver<>()
    @           0x40c48f  train()
    @           0x409214  main
    @     0x7ff92e819830  __libc_start_main
    @           0x4099b9  _start
    @              (nil)  (unknown)
Aborted (core dumped)
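For what it's worth, ND convolution in 3D-Caffe conventionally takes 5-D blobs laid out N x C x D x H x W, so a grayscale volume would get two leading singleton axes rather than a 4-D CxHxWxD shape. A hedged NumPy sketch (the axis order is an assumption, not confirmed by the repo):

```python
import numpy as np

# Toy stand-ins for one patient's image and label volumes (D x H x W).
img = np.zeros((104, 256, 256), dtype=np.float32)
lbl = np.zeros((104, 256, 256), dtype=np.uint8)

# Prepend batch and channel axes: N=1, C=1, D=104, H=256, W=256.
img5d = img.reshape(1, 1, *img.shape)
lbl5d = lbl.reshape(1, 1, *lbl.shape)
```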

But the problem is that the number of slices differs from patient to patient. Then, based on what I saw in HeartSeg, I added
transform_param { crop_size_l: 64 crop_size_h: 64 crop_size_w: 64 } to the HDF5Data layer, but I am still getting an error:

I0721 04:29:23.488734 8998 hdf5_data_layer.cpp:117] Number of HDF5 files: 90
I0721 04:29:23.489279 8998 hdf5.cpp:32] Datatype class: H5T_FLOAT
I0721 04:29:23.506300 8998 hdf5.cpp:35] Datatype class: H5T_INTEGER
F0721 04:29:23.520934 8998 blob.hpp:122] Check failed: axis_index < num_axes() (4 vs. 4) axis 4 out of range for 4-D Blob with shape 1 256 256 104 (6815744)
*** Check failure stack trace: ***
@ 0x7fb7ed6565cd google::LogMessage::Fail()
@ 0x7fb7ed658433 google::LogMessage::SendToLog()
@ 0x7fb7ed65615b google::LogMessage::Flush()
@ 0x7fb7ed658e1e google::LogMessageFatal::~LogMessageFatal()
@ 0x7fb7edbea0d8 caffe::Blob<>::CanonicalAxisIndex()
@ 0x7fb7edcd0cf9 caffe::HDF5DataLayer<>::HDF5DataTransform()
@ 0x7fb7edccec49 caffe::HDF5DataLayer<>::LoadHDF5FileData()
@ 0x7fb7edccda81 caffe::HDF5DataLayer<>::LayerSetUp()
@ 0x7fb7edd3ea45 caffe::Net<>::Init()
@ 0x7fb7edd40301 caffe::Net<>::Net()
@ 0x7fb7edd1a7ea caffe::Solver<>::InitTrainNet()
@ 0x7fb7edd1ad57 caffe::Solver<>::Init()
@ 0x7fb7edd1b0ea caffe::Solver<>::Solver()
@ 0x7fb7edbd3c93 caffe::Creator_SGDSolver<>()
@ 0x40c48f train()
@ 0x409214 main
@ 0x7fb7ec0a8830 __libc_start_main
@ 0x4099b9 _start
@ (nil) (unknown)
Aborted (core dumped)


How can I solve this issue, and what is causing it?
Should I change the order of the axes?

Generate the Prediction Label in the Right Format for the Challenge Server

Hi Lequan,

Thank you for your nice work! Can you share the code to generate the prediction label in the right format (maybe nii?) that can be directly evaluated by the challenge server?

I evaluated the DICE on the training set by comparing the predicted labels against the given labels in ".mat" format, which seemed reasonable. Then I tried to generate the NII format from ".mat" using the make_nii function in the NIfTI toolbox and uploaded it to the challenge testing server, but it always outputs -inf.

I would really appreciate it!

Best,
Zhuotun

Error when running start_train.sh

When I run start_train.sh, I get this error:

hdf5_data_layer.cpp:76] Check failed: !this->layer_param_.has_transform_param() HDF5Data does not transform data.
