
medsynthesisv1's People

Contributors

ginobilinie


medsynthesisv1's Issues

Error when importing gauss

An error pops up in the utils file at the line "import gauss".
I tried "pip install gauss" but that did not solve the error.
Any other suggestions?

Default normalization method for train and test on 2D image

Hi @ginobilinie,
Thank you so much for your great work.
I am working on a medical image translation project and I am a newbie on this topic. Your code helps me a lot.
While reading your code, I got stuck on the normalization method for the data. I saw that for 2D data you use 6 as the default method for training and testing, and this method uses the max and min values of a single image to normalize that image. But in the case where we only have the MRI image, this method does not seem applicable. So which normalization method should we use for the training and testing phases?
Could you please clarify this for me?
Thank you so much for your time.
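For context, a per-image min-max normalization like the one described above can be sketched as follows (a generic illustration; the actual "method 6" in this repository may differ in details):

    import numpy as np

    def minmax_normalize(img, eps=1e-8):
        # Scale a single image to [0, 1] using its own min and max.
        # Generic sketch of per-image min-max normalization; not necessarily
        # identical to 'method 6' in this repository.
        img = img.astype(np.float32)
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo + eps)

    # Example: normalize one 2D MRI slice with a hypothetical intensity range
    slice_2d = np.random.rand(256, 256) * 4000.0
    normalized = minmax_normalize(slice_2d)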

Why loss is rising

Hello,
I am very interested in your work. This is a case of training with my own data: the loss keeps increasing. Have you ever run into this situation? Is there something wrong with my modifications?


time now is: Mon Dec 23 16:31:41 2019
average running loss for generator between iter [201, 300] is: 559.80874
lossG_G is 810.52466 respectively.
loss for GDL loss is 1224.602905
loss_real is -273623008.0 loss_fake is -1381496520704.0
loss for discriminator is 1381222842368.000000 D cost is -624456368128.000000
lossG_D for generator is -68901937152.000000
cost time for iter [201, 300] is 1535.73


trainMRCT_snorm_64_0711reg.h5
y shape (2112, 64, 64, 1)


time now is: Mon Dec 23 16:57:57 2019
average running loss for generator between iter [301, 400] is: 1166.54661
lossG_G is 1582.49902 respectively.
loss for GDL loss is 6225.074219
loss_real is -347660448.0 loss_fake is -4028874358784.0
loss for discriminator is 4028526755840.000000 D cost is -1904541564928.000000
lossG_D for generator is -188310470656.000000
cost time for iter [301, 400] is 1575.69

Some problems about the data loader

Hi, I'm trying to use this network for 3T-to-7T. But I found a function in "extract23DPatchMultiModalImg.py" called extractPatch4OneSubject(matLPET, matCT, matSPET, maskimg, fileID, dSeg, step, rate). I wonder what I need to pass for matLPET, matCT, matSPET, and maskimg in my case. Thank you!

Why are you using conv2d in the Discriminator?

This code has several bugs, e.g., the Discriminator uses conv2d, and UNetUpBlock.center_crop misses the last dimension of the tensor. I hope these will be fixed in a future version.
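For illustration, a center crop that covers all three spatial dimensions of a 5D (N, C, D, H, W) tensor could look like the sketch below; this is not the repository's UNetUpBlock.center_crop, only one possible way to handle the missing dimension:

    import torch

    def center_crop_3d(layer, target_size):
        # Center-crop the spatial dims (D, H, W) of a 5D tensor (N, C, D, H, W).
        # Illustrative sketch only; the repository's layer layout may differ.
        _, _, d, h, w = layer.size()
        td, th, tw = target_size
        d1, h1, w1 = (d - td) // 2, (h - th) // 2, (w - tw) // 2
        return layer[:, :, d1:d1 + td, h1:h1 + th, w1:w1 + tw]

    # Example: crop (1, 8, 32, 64, 64) down to (1, 8, 24, 48, 48)
    x = torch.randn(1, 8, 32, 64, 64)
    cropped = center_crop_3d(x, (24, 48, 48))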

TypeError: 'float' object cannot be interpreted as an integer

Thank you very much for the great repo! I am facing a problem when running the extract23DPatch4MultiModalImg.py file:

Traceback (most recent call last):
File "", line 95, in extractPatch4OneSubject
matFAOut=np.zeros([row+2*marginD[0],col+2*marginD[1],leng+2*marginD[2]],dtype=np.float16)
TypeError: 'float' object cannot be interpreted as an integer

Is there any way I can fix this problem? Thank you!
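In case it helps: this error usually means the shape arithmetic produces floats (e.g. marginD computed with a division), and np.zeros requires integer sizes. Casting to int is one possible workaround (a sketch with hypothetical values standing in for the real ones):

    import numpy as np

    # Hypothetical values standing in for the real ones in extractPatch4OneSubject
    row, col, leng = 172, 220, 156
    marginD = [32 / 2, 32 / 2, 2.5]  # float margins, e.g. produced by a division

    # Cast every dimension to int so np.zeros accepts the shape
    shape = [int(row + 2 * marginD[0]),
             int(col + 2 * marginD[1]),
             int(leng + 2 * marginD[2])]
    matFAOut = np.zeros(shape, dtype=np.float16)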

MRI datasets synthetic CT datasets

Thank you very much for your excellent work. I am very interested in it. There are now some labeled MRI datasets. Can you help me synthesize the corresponding labeled CT datasets? Looking forward to your reply!

When running your code, I ran into an error

When running your code 'extract23DPatch4SingleModalImg.py', I got the error
'OSError: Unable to create file (unable to open file: name = './trainMRCT_snorm_64_IXI-T1\IXI002-Guys-0828-T1.nii.gz.h5', errno = 2, error message= 'No such file or directory', flags = 13, o_flags = 302)'.
Is it because I am using the wrong type of file, 'IXI-T1\IXI002-Guys-0828-T1.nii.gz', or is it something else?
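One common cause of errno 2 when h5py creates a file is that the output directory does not exist, or the path mixes '/' and '\' separators on Windows. Creating the folder first and joining paths with os.path.join may help; a sketch with placeholder names (the real prefix/output layout in the script may differ):

    import os
    import h5py

    # Placeholder output location; adapt to your own prefix and file name
    out_dir = './trainMRCT_snorm_64_IXI-T1'
    os.makedirs(out_dir, exist_ok=True)                                 # ensure the folder exists
    out_path = os.path.join(out_dir, 'IXI002-Guys-0828-T1.nii.gz.h5')   # portable separator

    with h5py.File(out_path, 'w') as f:
        f.create_dataset('dummy', data=[0])  # placeholder write to verify the path works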

A problem when testing one subject

Hello, what is the difference between 'sub1_sourceCT.nii.gz', 'sub1_extraCT.nii.gz', and 'sub1_targetCT.nii.gz' in runCTRecon3d.py when testing one subject?

Where can I get the target CT data for MR->CT?

Hi, I tried to use the code for MR-to-CT synthesis. I downloaded the IXI dataset you mentioned, but it seems there are only MR images, no target CT images. So how should I use this dataset for MR->CT synthesis?

[Question] How many images can we synthesize, relative to the number of training images?

Hi @ginobilinie ,
Thank you for this repository! I am experimenting with medical image synthesis and I have a general question on synthesizing images using GANs. I would appreciate any inputs on it.

I am assuming there should be some limit on the number of different images we can generate using a trained generator. Is that number a ratio of the number of training images? For example, if we train the GAN with 1000 MRIs, can we synthesize 2000 (twice as many) new MRIs?

Is there a procedure (or a hack) to estimate the number of images that can be synthesized?

Thanks in advance!

Data preprocessing

Dear Dr. Nie,
Thank you for your great work.
I'm working on medical image synthesis too, but I am new to it.
I don't know how to preprocess the CT values. Should I normalize them to 0-255 or just keep the original CT values?
Looking forward to your reply.

lucia
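For what it's worth, a common CT preprocessing choice is to clip Hounsfield units to a fixed window and rescale, rather than mapping raw values straight to 0-255. A generic sketch (window and target range are illustrative, not this repository's settings):

    import numpy as np

    def preprocess_ct(hu_volume, hu_min=-1000.0, hu_max=2000.0):
        # Clip CT values to a fixed Hounsfield window and rescale to [-1, 1].
        # Window and target range are illustrative, not this repo's settings.
        vol = np.clip(hu_volume.astype(np.float32), hu_min, hu_max)
        vol = (vol - hu_min) / (hu_max - hu_min)   # -> [0, 1]
        return vol * 2.0 - 1.0                     # -> [-1, 1]

    # Example with a random stand-in volume
    ct = np.random.uniform(-1024, 3000, size=(64, 64, 64))
    ct_norm = preprocess_ct(ct)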

Using different modality Data set

Hello,
First of all, thank you for your work.
I am also interested in doing medical image synthesis, but for PET and CT instead of MRI.
Can you tell me what possible issues I should keep in mind if I apply your code to my dataset?

The inputKey and outputKey parameters

Hi, I want to know the meaning of the inputKey and outputKey parameters. The files in path_patients are all h5 files, but I can't find the objects '3T' or '7T'.
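If it helps, you can list the dataset names actually stored in each .h5 file and pass those names as inputKey/outputKey. A quick check (the file name below is a placeholder):

    import h5py

    # Placeholder file name; point this at one of the .h5 files in path_patients
    with h5py.File('train_patches_0.h5', 'r') as f:
        print(list(f.keys()))            # the stored keys may not be '3T'/'7T'
        for k in f.keys():
            print(k, f[k].shape, f[k].dtype)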

Some questions about running runCTRecon.py

Hi. I am running the code: python runCTRecon.py. Some questions confuse me.
For example, "prefixModelName": does it mean the path where the trained model is stored?
And what is "prefixPredictedFN"? Can I change it?
What is "test_input_file_name"? What kind of image does it store?

I also know that "path_test indicates the path for .nii.gz files", but how much data should path_test contain? The whole dataset, or only the training set + validation set?

Implementation of ACM

Hello,
I went through both papers you have listed in the README, as well as the code in both TensorFlow and PyTorch. For auto-context you need multiple GAN models during training, but I haven't seen such a training loop in your code. If one has to write custom code for ACM and uses this as a reference, can you suggest how to write the training loop for ACM? If possible, could you upload your code? I want to try to replicate your experiment on my RIRE dataset.
Thank you,
Best Regards.
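Not the authors' code, but as I understand auto-context, each stage's generator takes the previous stage's prediction as an extra input channel and the stages are trained one after another. A minimal sketch of such a loop with toy models (PyTorch, purely illustrative; the real generators, losses, and data loading are far more involved):

    import torch
    import torch.nn as nn

    def make_generator(in_ch):
        # Toy stand-in for the real 2D/3D generator
        return nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 1, 3, padding=1))

    num_stages = 3
    mr = torch.randn(4, 1, 64, 64)      # toy MR batch
    ct = torch.randn(4, 1, 64, 64)      # toy target CT batch
    prev_pred = torch.zeros_like(ct)    # stage 0 context: empty prediction
    stages = []

    for s in range(num_stages):
        gen = make_generator(in_ch=2)   # MR + previous prediction as context channel
        opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
        for step in range(100):         # train this stage (toy loop)
            opt.zero_grad()
            pred = gen(torch.cat([mr, prev_pred], dim=1))
            loss = nn.functional.l1_loss(pred, ct)  # + adversarial/GDL terms in the real model
            loss.backward()
            opt.step()
        stages.append(gen)
        with torch.no_grad():           # frozen stage provides context for the next one
            prev_pred = gen(torch.cat([mr, prev_pred], dim=1))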

lambda parameter settings

lambda_AD, lambda_gdl, lossBase, lambda_D_WGAN_GP
How should these parameters be set? Are they meant to be chosen so that the loss terms are on the same order of magnitude?
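My own reading (not verified against this code) is that these are weights that combine the individual loss terms, so they are typically tuned to bring the terms onto comparable scales. Schematically, with hypothetical weights and dummy values on the order of the magnitudes seen in the logs above:

    import torch

    # Hypothetical weights; the exact combination in this repository may differ.
    lossBase, lambda_gdl, lambda_AD, lambda_D_WGAN_GP = 1.0, 0.05, 0.05, 10.0

    lossG_G  = torch.tensor(810.5)      # voxel-wise generator loss (e.g. L1/L2)
    loss_gdl = torch.tensor(1224.6)     # gradient difference loss
    lossG_D  = torch.tensor(-6.9e10)    # adversarial term for the generator
    loss_real, loss_fake = torch.tensor(-2.7e8), torch.tensor(-1.4e12)
    gradient_penalty = torch.tensor(3.0)

    lossG = lossBase * lossG_G + lambda_gdl * loss_gdl + lambda_AD * lossG_D
    lossD = loss_fake - loss_real + lambda_D_WGAN_GP * gradient_penalty
    print(lossG.item(), lossD.item())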

Generator_3D_patches

Hi, I want to know where the function Generator_3D_patches used in runCTRecon3d.py is defined, because I have some issues when using my own datasets. I want to know the meaning of the inputKey and outputKey parameters in Generator_3D_patches so I can replace them with my own parameters. Thanks.

240*240*155->240*240*155

Hi! If it is 240*240*155 (input) -> 240*240*155 (output), which parts of the program should I modify?

RuntimeError Output size is too small

Hi,
Thanks a lot for sharing your work. I am trying to run the code but I keep getting errors. Some libraries are missing, e.g. gauss, and some of the code seems out of date. I have already tried many things but nothing works. Have you tested the exact code recently?
It would be nice if you provided a working demo, e.g. one that runs with any free dataset or even random numbers.

Updating how to run the pytorch code instruction

Hi again,

Some things are confusing:

  1. Single vs. multi-modality: assuming the input is MRI and the output is CBCT, why is this single-modality?

  2. Batch generation: what do you mean by "Put all these h5 files into two folders (training, validation), and remember the path to these h5 files"? Assuming an input of 3 pairs of [mri, cbct] images, e.g.

            [ [mri1.nii cbct1.nii], [mri2.nii cbct2.nii], [mri3.nii cbct3.nii] ]

     extract23DPatch4SingleModalImg generates six .h5 files, i.e. 2 for each image, e.g.

            img1.h5, img1r.h5, img2.h5, img2r.h5, img3.h5, and img3r.h5

     img1.h5 contains batches of [mri1.nii cbct1.nii], and it seems the other file img1r.h5 contains the same data. My questions:

     • What is the difference between the two files img.h5 and imgr.h5? Why are their sizes different?
     • Shall I put 3 files in the training folder and the other 3 (ending with "r") in the validation folder, or copy all 6 files into two different folders, e.g. training and validation?
     • Why do the h5 files, e.g. img1.h5 and img2.h5, have different sizes even though all input images are the same size?

Suggestion: since one usually works with either 2D or 3D, it would be nice to separate the code into two different folders/repositories, one for 2D and another for 3D.

run “extract23Dpatches4SingleImg.py”

Hello, I encountered some difficulties:
File "F:/PycharmProjects/untitled1/extract23DPatch4SingleModalImg.py", line 183, in extractPatch4OneSubject
trainFA[cubicCnt, 0, :, :, :] = volFA
ValueError: could not broadcast input array from shape (5,64,41) into shape (5,64,64)
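The shape mismatch suggests the last patch along one axis runs past the volume border (41 < 64). Zero-padding the volume to a multiple of the patch size before extraction is one possible workaround (a generic sketch, not the script's own fix):

    import numpy as np

    def pad_to_multiple(vol, patch_size=(5, 64, 64)):
        # Zero-pad a 3D volume so each dimension is a multiple of the patch size.
        # Generic workaround sketch; extractPatch4OneSubject may expect a
        # different handling of border patches.
        pads = [(0, (-s) % p) for s, p in zip(vol.shape, patch_size)]
        return np.pad(vol, pads, mode='constant')

    vol = np.random.rand(5, 64, 41).astype(np.float32)   # shape from the error above
    padded = pad_to_multiple(vol)                         # -> (5, 64, 64)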

Patch Extraction Doubt

Hello,
Are the extracted patches 2D patches or 3D patches? Also, since I am working on PET-to-CT conversion, I will be using the multi-modality patch extraction, right?

3D MR to CT synthesis

Hello,
I am using paired MR and CT data for MR-to-CT synthesis. My data size is 172x220x156 in .nii format. Since I will be using a 3D network, I have to create patches.
Could you please let me know which code I should use to create 3D patches from the .nii file format instead of HDF5?
Shouldn't we also generate the patches in pairs, i.e., the input and target patches at the same location in the same subject?
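Not the authors' pipeline, but a minimal sketch of cutting paired 3D patches directly from two co-registered .nii volumes at identical locations could look like this (nibabel assumed; file names are placeholders):

    import numpy as np
    import nibabel as nib

    def paired_patches(mr_path, ct_path, patch=(64, 64, 64), step=(32, 32, 32)):
        # Yield (MR patch, CT patch) pairs cut at identical locations.
        # Minimal sketch, not the repository's extraction code; assumes the MR
        # and CT volumes are already co-registered and have the same shape.
        mr = nib.load(mr_path).get_fdata().astype(np.float32)
        ct = nib.load(ct_path).get_fdata().astype(np.float32)
        assert mr.shape == ct.shape
        for x in range(0, mr.shape[0] - patch[0] + 1, step[0]):
            for y in range(0, mr.shape[1] - patch[1] + 1, step[1]):
                for z in range(0, mr.shape[2] - patch[2] + 1, step[2]):
                    yield (mr[x:x+patch[0], y:y+patch[1], z:z+patch[2]],
                           ct[x:x+patch[0], y:y+patch[1], z:z+patch[2]])

    # Placeholder file names
    pairs = list(paired_patches('sub1_mr.nii.gz', 'sub1_ct.nii.gz'))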

MRI data

Hello, can you tell me where I can get the MRI data?
