yipenghu / label-reg


(This repo is no longer up-to-date. Any updates will be at https://github.com/DeepRegNet/DeepReg/) A demo of the refactored label-driven registration code, based on "Weakly-supervised convolutional neural networks for multimodal image registration".

License: Apache License 2.0

Python 100.00%
convolutional-neural-networks deep-learning image-guided-interventions medical-image-registration tensorflow weakly-supervised-learning

label-reg's Issues

multi-channel input bug

  1. The provided inference function fails to correctly warp multi-channel inputs (as required for label images). We have written a bug fix for this, using PyTorch's grid_sample function, and would be happy to provide it to the interested community.
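The essence of such a fix is to resample every channel with the same dense displacement field, so that one-hot label channels stay consistent with one another. Below is a NumPy sketch using nearest-neighbour sampling for brevity (a real implementation would interpolate trilinearly, e.g. via PyTorch's grid_sample; all names here are illustrative, not from the repository):

```python
import numpy as np

def warp_multichannel(vol, ddf):
    """Warp every channel of vol with the same dense displacement field.

    vol: [D, H, W, C] volume, channels last (e.g. one-hot labels)
    ddf: [D, H, W, 3] displacements in voxel units
    """
    D, H, W, _ = vol.shape
    # identity grid of voxel coordinates, shape [D, H, W, 3]
    grid = np.stack(np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                                indexing="ij"), axis=-1)
    # sample locations = identity + displacement; round and clip to bounds
    src = np.rint(grid + ddf).astype(int)
    for axis, size in enumerate((D, H, W)):
        src[..., axis] = np.clip(src[..., axis], 0, size - 1)
    # the SAME indices are applied to all channels at once
    return vol[src[..., 0], src[..., 1], src[..., 2], :]

vol = np.arange(4 * 4 * 4 * 2, dtype=float).reshape(4, 4, 4, 2)
same = warp_multichannel(vol, np.zeros((4, 4, 4, 3)))
print(np.allclose(same, vol))  # True: zero displacement is the identity
```

Because the gather indices are shared across the channel axis, a warped one-hot label volume keeps exactly one active channel per voxel (up to interpolation).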

Gradients of local displacement energy

Dear @YipengHu,

Thank you for sharing this code with us!
I'm trying to understand your local_displacement_energy function. In your paper you refer to (Rueckert et al. 1999), where the bending energy is clearly defined. Implementations of the L1 and L2 gradient norms are also pretty straightforward. But I still cannot get the intuition behind your gradient_dx/y/z functions.

It's clear to me why gradient_dx/y/z return a volume whose shape is reduced by 2 along the x, y, z axes, e.g. an input of shape [batch, 90, 90, 90, 3] results in shape [batch, 88, 88, 88, 3] for dx, dy, dz. If we want to preserve the original shape, we should append zeros accordingly, as stated in the tf.image.image_gradients documentation:

Both output tensors have the same shape as the input: [batch_size, h, w, d]. The gradient values are organized so that [I(x+1, y) - I(x, y)] is in location (x, y). That means that dy will always have zeros in the last row, and dx will always have zeros in the last column.

  • Could you please elaborate on why you prefer to lose some information, instead of appending zeros to preserve the shape?
  • Why do we need to divide the gradients by two?

I’m eager to receive your response!
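For what it's worth, both observations are consistent with gradient_dx/y/z computing central differences, (v[i+1] - v[i-1]) / 2: the stencil spans two voxels (hence the division by two) and is undefined at the two boundary voxels of each axis (hence the shape shrinking by 2, rather than the zero-padding tf.image.image_gradients applies to its forward differences). A 1-D NumPy sketch of this assumption:

```python
import numpy as np

def central_diff(v):
    # central difference: (v[i+1] - v[i-1]) / 2
    # defined only at interior points, so the output is shorter by 2
    return (v[2:] - v[:-2]) / 2.0

v = np.array([0., 1., 4., 9., 16., 25.])  # v[i] = i**2
g = central_diff(v)
# the exact derivative of i**2 at interior points i = 1..4 is 2*i,
# and the central difference reproduces it exactly for a quadratic
print(g)  # [2. 4. 6. 8.]
```

The /2 is therefore the step size of the two-voxel stencil, not an arbitrary scale factor, and dropping the boundary is the price of a second-order-accurate derivative.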

Loss isn't convergent

I used the example data to train the network and found that the loss does not converge. Why is this happening?

Details of the running environment

It's a really great job!
I am trying to run my own labelled 3D image data on an Nvidia GTX 1080 GPU, and I found the 3D CNN to be really time- and memory-consuming. Could you please tell me the size of the 3D images that you fed into the neural network?
Thanks

Support for multimodal input image volumes

This adaptation handles the case where multiple already co-registered moving (and/or fixed) images are available for predicting the DDFs. The label similarity should therefore not be affected.
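Under this adaptation, the co-registered modalities would simply be stacked along the channel axis of the network input, so a single DDF is predicted from the joint information. A minimal sketch with hypothetical arrays:

```python
import numpy as np

# two already co-registered moving images (e.g. T1 and T2 MR) on one grid
t1 = np.random.rand(32, 32, 32)
t2 = np.random.rand(32, 32, 32)

# stack along a trailing channel axis: the network sees both modalities,
# while the label similarity is still computed on label volumes only,
# so it is unaffected by the extra image channels
moving = np.stack([t1, t2], axis=-1)
print(moving.shape)  # (32, 32, 32, 2)
```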

training for larger deformations on public datasets

First of all, thanks a lot for making your research code publicly available. The installation is flawless and running the inference on the provided examples works fine. However, training the models for new datasets has so far yielded disappointing results. A number of PhD students in my group have e.g. attempted to run the code for whole heart MR to CT registration using the public MMWHS dataset without achieving any meaningful improvement over an initial affine alignment.

  1. Would it be possible to check the training code for bugs, or is it the case that larger deformations cannot be sufficiently estimated using LabelReg? We have spent a lot of time trying to resolve these issues with different preprocessing and settings, without success, and feel it is important to get feedback from your side on how best to train on new data.
  2. In addition, there is an issue with the 'composite' option (both global and local transforms), which throws the following error:
Traceback (most recent call last):
  File "label-reg/training.py", line 35, in <module>
    image_fixed=input_fixed_image)
  File "~/label-reg/labelreg/networks.py", line 13, in build_network
    return CompositeNet(**kwargs)
  File "~/label-reg/labelreg/networks.py", line 89, in __init__
    image_moving=global_net.warp_image(),
TypeError: warp_image() missing 1 required positional argument: 'input_' 
  3. The provided inference function fails to correctly warp multi-channel inputs (as required for label images). We have written a bug fix for this, using PyTorch's grid_sample function, and would be happy to provide it to the interested community.
    Thanks a lot for your time.

image size

I used my own image data (the moving and fixed images are all 117x100x255) to train the network, and encountered the following error:

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[1,117,100,255,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: local_up_0/additive_upsampling/resize_volume/transpose_1 = Transpose[T=DT_FLOAT, Tperm=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](local_up_0/additive_upsampling/resize_volume/Reshape_3, local_up_0/additive_upsampling/resize_volume/transpose/perm)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

This problem persists even after I changed minibatch_size to 1. Do I have to change the size of the images?
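For scale: the single activation tensor named in the OOM message is already sizeable on its own, and a multi-level network holds many such feature maps plus their gradients, so reducing minibatch_size alone may not be enough; cropping or resampling the volumes is usually necessary. A quick back-of-the-envelope estimate:

```python
# size of the one activation tensor named in the OOM message:
# shape [1, 117, 100, 255, 64], dtype float32 (4 bytes per element)
n_elements = 1 * 117 * 100 * 255 * 64
size_gb = n_elements * 4 / 1024 ** 3
print(round(size_gb, 2))  # ≈ 0.71 GB for a single feature map
```

Multiply that by the dozens of feature maps a 3D U-Net-style network keeps alive during backpropagation and an 8 GB card overflows even at batch size 1.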

dataset?

I am new to image registration. Could you release all of the datasets used in your paper? Thank you.

The loss didn't decrease

I tried to use the framework to register fundus images. Because there is very little elastic deformation, I only use the GlobalNet rather than the composite net, with vessels as labels. However, the loss did not decrease from the very beginning.

Input bug in composite option

2. In addition, there is an issue with the 'composite' option (both global and local transforms), which throws the following error:

Traceback (most recent call last):
  File "label-reg/training.py", line 35, in <module>
    image_fixed=input_fixed_image)
  File "~/label-reg/labelreg/networks.py", line 13, in build_network
    return CompositeNet(**kwargs)
  File "~/label-reg/labelreg/networks.py", line 89, in __init__
    image_moving=global_net.warp_image(),
TypeError: warp_image() missing 1 required positional argument: 'input_' 
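Judging from the traceback alone, CompositeNet calls global_net.warp_image() without an argument, while the method signature requires a positional input_ (presumably the moving image to warp). A minimal reproduction of the failure mode, with hypothetical names standing in for the repository's classes:

```python
class GlobalNet:
    def warp_image(self, input_):
        # stand-in for the real method, which warps input_ with the
        # network's predicted global transformation
        return "warped " + input_

net = GlobalNet()
try:
    net.warp_image()  # mirrors the failing call in networks.py line 89
except TypeError as err:
    print(err)        # ...missing 1 required positional argument: 'input_'

print(net.warp_image("moving image"))  # passing the image explicitly works
```

The likely fix, then, is to pass the moving image explicitly at the call site, e.g. global_net.warp_image(image_moving), though the actual argument name in the repository should be checked.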

Accept anisotropic voxel dimension images

Currently, the code assumes isotropic image inputs. This is mainly because the displacement regulariser (e.g. the bending energy) and the image reading/writing functions also assume isotropic data. This can be adapted to more general cases.

Dice loss problems

When I ran the program with the downloaded example data, I found there was a problem with label_indices = 1 and label_indices = 2: the Dice scores were almost zero in these cases. What is the reason for this? Thanks!
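For anyone debugging this: the Dice score drops to zero whenever the warped and reference label masks do not overlap at all, which is common for small structures before registration has converged. A minimal sketch of the (binary, hard) Dice computation; the variable names are illustrative, not from the repository:

```python
import numpy as np

def dice(a, b, eps=1e-7):
    # Dice on binary masks: 2|A ∩ B| / (|A| + |B|), eps avoids 0/0
    inter = np.sum(a * b)
    return 2.0 * inter / (np.sum(a) + np.sum(b) + eps)

a = np.zeros((8, 8, 8)); a[:2] = 1   # a small structure
b = np.zeros((8, 8, 8)); b[6:] = 1   # prediction misses it entirely
print(dice(a, b))  # 0.0 — no overlap at all
```

So a near-zero Dice at indices 1 and 2 may simply mean those label pairs start with no overlap; it is worth visualising them to confirm they are present and correctly indexed.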

the batch_norm layer problem

return tf.nn.relu(tf.contrib.layers.batch_norm(tf.nn.conv3d_transpose(input_, w, shape_out, strides, "SAME")))

Hi Hu,
I noticed that the is_training parameter of the batch_norm layers (tf.contrib.layers.batch_norm) in your project is not set to False during inference.
But according to the TensorFlow documentation, this parameter should be set to False at inference time.
https://www.tensorflow.org/api_docs/python/tf/contrib/layers/batch_norm
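To illustrate why this matters: with is_training=True at inference, the layer normalises each batch with its own statistics instead of the moving averages accumulated during training, which is especially harmful for small inference batches. A toy NumPy sketch of the two modes (BatchNorm1D is a hypothetical stand-in, not the library implementation):

```python
import numpy as np

class BatchNorm1D:
    """Toy batch norm: the two modes behind an is_training flag."""
    def __init__(self, momentum=0.9, eps=1e-5):
        self.mu, self.var = 0.0, 1.0          # moving statistics
        self.momentum, self.eps = momentum, eps

    def __call__(self, x, is_training):
        if is_training:
            mu, var = x.mean(), x.var()       # this batch's statistics
            self.mu = self.momentum * self.mu + (1 - self.momentum) * mu
            self.var = self.momentum * self.var + (1 - self.momentum) * var
        else:
            mu, var = self.mu, self.var       # frozen moving statistics
        return (x - mu) / np.sqrt(var + self.eps)

bn = BatchNorm1D()
rng = np.random.default_rng(0)
for _ in range(200):                          # "training" on N(5, 2) data
    bn(rng.normal(5.0, 2.0, 256), is_training=True)

# at inference, is_training=False uses the accumulated moving statistics,
# so a value near the training mean is normalised to roughly zero
print(abs(bn(np.array([5.0]), is_training=False)[0]) < 0.5)  # True
```

With is_training left True, a batch of one would be normalised by its own (degenerate) statistics, producing outputs unrelated to the training distribution.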

Global/Affine registration dramatic overfitting

Dear Yipeng,

in my project, I'm trying to follow your approach of stacking the global and local model parts sequentially. Currently, I'm facing a problem with the global registration, whereas the local registration is fully functional. When learning the affine transformation, my model either overfits dramatically or cannot learn anything.
In principle, the model looks like this: conv(4) > conv(8) > conv(16) > conv(32) > dense(12) > affine_ddf, with a stride of 2. I want to start simple and develop the architecture further from there. I believe this configuration tends to overfit because of the dense layer with too many parameters. Another option would be to replace the dense layer with global pooling to reduce the number of parameters at this point. But in that case, the model learns nothing.
Overfitted case (with many parameters in the dense layer):
[Screenshot_1]
Model isn't learning (i.e. with global pooling):
[Screenshot_2]
I tried various combinations, such as making the model deeper or wider (adding more conv filters), and I always get stuck in one of these two results, without any hint of how to improve the model.

Could you please share your experience training the affine transformation? Have you tried training it separately? If it works together with the local model, shouldn't it also work separately? Am I missing something?

I hope I stated my problem clearly. I'd appreciate all the help I can get.
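For a concrete sense of the parameter imbalance described above, here is a back-of-the-envelope count under assumed shapes (a 64³ input reduced by four stride-2 convolutions to a 4³ grid with 32 channels, feeding 12 affine parameters; all numbers are hypothetical):

```python
# hypothetical shapes: 64^3 input, four stride-2 convs -> 4^3 spatial
# grid with 32 channels, before a 12-parameter affine head
spatial, channels, affine_params = 4 ** 3, 32, 12

# flatten -> dense(12): every spatial position gets its own weights
dense_params = spatial * channels * affine_params + affine_params
# global average pool -> dense(12): spatial positions share weights
pooled_params = channels * affine_params + affine_params

print(dense_params)   # 24588 weights in the dense head
print(pooled_params)  # 396 after global average pooling
```

The ~60x gap explains the dichotomy the poster observes: the dense head has enough capacity to memorise the training set, while global pooling discards the spatial layout that an affine prediction arguably needs.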

training for larger deformations on 2D Mul-Models datasets

Thanks for your open-source code! I'm a first-year graduate student from China.
I'd like to try this approach on 2D multi-modal datasets (image types .tif and .jpg), and my labels are .mhd/.raw (NDims=3). Could you give me some advice?
