ginobilinie / medsynthesisv1
This is a copy of a package for medical image synthesis with LRes-ResUnet and a GAN (WGAN-GP) in the PyTorch framework.
License: MIT License
An error pops up in the utils file on the line "import gauss".
I tried "pip install gauss", but that did not solve the error.
Any other suggestions?
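One likely cause: `gauss` is not a package on PyPI, so `pip install gauss` cannot resolve it. A common workaround (my assumption, not the author's confirmed fix) is to replace the Gaussian smoothing it presumably provided with `scipy.ndimage.gaussian_filter`:

```python
# Hypothetical stand-in for the missing `gauss` module:
# smooth a volume with scipy's Gaussian filter instead.
import numpy as np
from scipy.ndimage import gaussian_filter

def gauss_smooth(volume, sigma=1.0):
    """Apply an isotropic Gaussian blur to a numpy array."""
    return gaussian_filter(volume.astype(np.float32), sigma=sigma)

vol = np.random.rand(8, 8, 8).astype(np.float32)
smoothed = gauss_smooth(vol, sigma=1.0)
print(smoothed.shape)  # (8, 8, 8)
```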
maxPercentPET, minPercentPET = np.percentile(mrnp, [99.5, 0])
Why the values 99.5 and 0?
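For context, `np.percentile(mrnp, [99.5, 0])` returns the 99.5th percentile and the minimum; clipping at the 99.5th percentile is a common way to suppress a few extremely bright voxels before min-max scaling. A minimal sketch of that idea (my own illustration, not the repo's exact code):

```python
import numpy as np

def percentile_normalize(img, hi=99.5, lo=0):
    """Clip intensities above the `hi` percentile, then scale to [0, 1]."""
    vmax, vmin = np.percentile(img, [hi, lo])
    clipped = np.clip(img, vmin, vmax)
    return (clipped - vmin) / (vmax - vmin + 1e-8)

img = np.random.rand(32, 32) * 1000.0
norm = percentile_normalize(img)
print(norm.min(), norm.max())  # approximately 0.0 and 1.0
```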
Hi @ginobilinie ,
Thank you so much for your great work.
I am working on a medical image translation project and I am a newbie on this topic. Your code helps me a lot.
While reading your code, I got stuck on the normalization method for the data. I saw that for 2D data you use method 6 as the default for training and testing, and this method uses the max and min values of a single image to normalize it. But in the case where we only have an MRI image, this method seems not to apply. So which normalization method should we use for the training and testing phases?
Could you please clarify this for me?
Thank you so much for your time.
Hello,
I am very interested in your work. This is a case of training with my own data. The loss keeps increasing. Have you ever been in this situation? Is there something wrong with my modification?
time now is: Mon Dec 23 16:31:41 2019
average running loss for generator between iter [201, 300] is: 559.80874
lossG_G is 810.52466 respectively.
loss for GDL loss is 1224.602905
loss_real is -273623008.0 loss_fake is -1381496520704.0
loss for discriminator is 1381222842368.000000 D cost is -624456368128.000000
lossG_D for generator is -68901937152.000000
cost time for iter [201, 300] is 1535.73
trainMRCT_snorm_64_0711reg.h5
y shape (2112, 64, 64, 1)
time now is: Mon Dec 23 16:57:57 2019
average running loss for generator between iter [301, 400] is: 1166.54661
lossG_G is 1582.49902 respectively.
loss for GDL loss is 6225.074219
loss_real is -347660448.0 loss_fake is -4028874358784.0
loss for discriminator is 4028526755840.000000 D cost is -1904541564928.000000
lossG_D for generator is -188310470656.000000
cost time for iter [301, 400] is 1575.69
Hi, I am trying to use your network for MR-to-CT synthesis. My original MR data are all in the same modality, so should I run "extract23Dpatches4SingleImg.py" to create patches? Or should I run "extract23Dpatches4MultiImg.py" and set matLHPET = np.zeros()?
Hi, I'm trying to use this network for 3T-to-7T. But I find there is a function in "extract23DPatchMultiModalImg.py" called extractPatch4OneSubject(matLPET, matCT, matSPET, maskimg, fileID, dSeg, step, rate). I wonder, in my case, what I need for matLPET, matCT, matSPET, and maskimg. Thank you!
This code is full of bugs, e.g., the Discriminator uses conv2d, and UNetUpBlock.center_crop misses the last dimension of the tensor. I hope they will be fixed in a future version.
Thank you very much for the great repo! I am facing a problem when running the extract23DPatch4MultiModalImg.py file:
Traceback (most recent call last):
File "", line 95, in extractPatch4OneSubject
matFAOut = np.zeros([row+2*marginD[0], col+2*marginD[1], leng+2*marginD[2]], dtype=np.float16)
TypeError: 'float' object cannot be interpreted as an integer
Is there any way I can fix this problem? Thank you!
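This `TypeError` usually means the entries of `marginD` are floats (e.g. produced by a `/` division, which always yields floats in Python 3). Casting each dimension to `int` is a plausible fix; a sketch with made-up values standing in for the real variables:

```python
import numpy as np

row, col, leng = 100, 100, 60
marginD = [2.0, 2.0, 1.0]  # floats, e.g. the result of a `/` division in Python 3

# np.zeros needs integer dimensions; cast each one explicitly.
matFAOut = np.zeros(
    [int(row + 2 * marginD[0]),
     int(col + 2 * marginD[1]),
     int(leng + 2 * marginD[2])],
    dtype=np.float16)
print(matFAOut.shape)  # (104, 104, 62)
```

Using `//` (integer division) when `marginD` is first computed would avoid the cast altogether.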
Thank you very much for your excellent work; I am very interested in it. I now have some labeled MRI datasets. Can you help me synthesize the corresponding labeled CT datasets? I look forward to your reply!
When running your code 'extract23DPatch4SingleModalImg.py', I got an error:
'OSError: Unable to create file (unable to open file: name = './trainMRCT_snorm_64_IXI-T1\IXI002-Guys-0828-T1.nii.gz.h5', errno = 2, error message = 'No such file or directory', flags = 13, o_flags = 302)'.
Is it because I am using the wrong type of file ('IXI-T1\IXI002-Guys-0828-T1.nii.gz'), or something else?
Hello, what is the difference between 'sub1_sourceCT.nii.gz', 'sub1_extraCT.nii.gz', and 'sub1_targetCT.nii.gz' in runCTRecon3d.py when testing one subject?
Hi, I tried to use the code for MR-to-CT synthesis. I downloaded the IXI dataset you mentioned, but it seems it contains only MR images, with no target CT images. So how should I use this dataset for MR->CT synthesis?
Hello,
I want to ask how the 3T and 7T images were registered.
Hi @ginobilinie ,
Thank you for this repository! I am experimenting with medical image synthesis and I have a general question on synthesizing images using GANs. I would appreciate any inputs on it.
I am assuming there should be some limit on the number of different images we can generate with a trained generator. Is that number proportional to the number of training images? For example, if we train the GAN with 1000 MRIs, can we synthesize 2000 (twice as many) new MRIs?
Is there a procedure (or a hack) to estimate the number of images that can be synthesized?
Thanks in advance!
Dear Dr. Nie
Thank you for your great work.
I'm working on medical image synthesis too, but I am new to it.
I don't know how to preprocess the CT values. Should I normalize them to 0-255 or just leave the original CT values?
Look forward to your reply.
lucia
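For reference, one common CT preprocessing approach (an illustration of general practice, not this repo's confirmed settings) is to clip the Hounsfield-unit values to a fixed window and rescale, rather than normalizing per image to 0-255:

```python
import numpy as np

def normalize_ct(hu, lo=-1000.0, hi=1000.0):
    """Clip CT values in HU to [lo, hi] and scale to [0, 1].

    The window bounds are illustrative defaults, not the repo's settings.
    """
    clipped = np.clip(hu.astype(np.float32), lo, hi)
    return (clipped - lo) / (hi - lo)

ct = np.array([-2000.0, -1000.0, 0.0, 1000.0, 3000.0])
print(normalize_ct(ct).tolist())  # [0.0, 0.0, 0.5, 1.0, 1.0]
```

A fixed window keeps the HU scale consistent across subjects, which per-image min-max normalization would not.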
Hello,
First of all, thank you for your work.
I am also interested in medical image synthesis, but for PET and CT instead of MRI.
Can you tell me what possible issues I should keep in mind if I apply your code to my dataset?
Hi, I want to know the meaning of the inputKey and outputKey parameters. The files in path_patients are all .h5 files, and I can't find the object '3T' or '7T'.
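The keys presumably name the HDF5 datasets inside each .h5 patch file, so they must match whatever names your own files use. A sketch of how to check (the file name and the dataset names `dataMR`/`dataCT` below are hypothetical stand-ins):

```python
import h5py
import numpy as np

# Write a tiny example file, standing in for one of the patch .h5 files.
with h5py.File("example.h5", "w") as f:
    f.create_dataset("dataMR", data=np.zeros((4, 64, 64), dtype=np.float32))
    f.create_dataset("dataCT", data=np.zeros((4, 64, 64), dtype=np.float32))

# List the dataset names: these are the strings to pass as inputKey/outputKey.
with h5py.File("example.h5", "r") as f:
    keys = sorted(f.keys())
print(keys)  # ['dataCT', 'dataMR']
```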
Hi. I am running the code `python runCTRecon.py`, and some things confuse me.
For example, "prefixModelName": does it mean the path to store the trained model?
And what is "prefixPredictedFN"? Can I change it?
What is "test_input_file_name"? Which kind of image does it store?
I know that "path_test indicates the path for .nii.gz files", but how many datasets does path_test store? The whole dataset, or only the training set + validation set?
Hello,
I went through both papers you listed in the readme as well as the code in both TensorFlow and PyTorch. For Auto-Context you need multiple GAN models during training, but I haven't seen such a training loop in your code. If one had to write custom code for ACM using this as a reference, can you suggest how to write the training loop for ACM? If possible, could you upload your code? I want to try to replicate your experiment on my RIRE datasets.
Thank you
Best Regards.
lambda_AD, lambda_gdl, lossBase, lambda_D_WGAN_GP:
How should these parameters be set? Is the idea that these weights keep the loss terms in the same order of magnitude?
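As an illustration of the premise (my own sketch; the parameter names mirror the question, the values and combination are assumptions, not the repo's exact formula): the generator objective is typically a weighted sum of a voxel-wise term, a gradient-difference term, and an adversarial term, and the lambdas rescale each term toward a comparable magnitude.

```python
def generator_loss(l_voxel, l_gdl, l_adv,
                   lossBase=1.0, lambda_gdl=1.0, lambda_AD=0.05):
    """Weighted sum of generator loss terms (illustrative weights)."""
    return lossBase * l_voxel + lambda_gdl * l_gdl + lambda_AD * l_adv

# A large adversarial term is tamed by a small lambda_AD.
total = generator_loss(800.0, 1200.0, -1.0e6)
print(total)  # -48000.0
```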
Hi, I want to know where the function Generator_3D_patches in runCTRecon3d.py is defined, because I have some issues when using my own datasets. I want to know the meaning of the inputKey and outputKey parameters in Generator_3D_patches so I can replace them with my own parameters, thanks.
Hi! If it is 240x240x155 (input) -> 240x240x155 (output), which parts of the program should I modify?
Hi,
Thanks a lot for sharing your work. I am trying to run the code but I keep getting errors. Some libraries are missing, e.g. gauss, and some things seem to be out of date. I have already tried many fixes but nothing works. Have you tested the exact code recently?
It would be nice if you provided a working demo, e.g. one that runs with any free dataset or even some random numbers.
Hello,
Can you tell me which lines of the code reconstruct the output patches back into the entire image?
How have you implemented this?
Thank you
Best Regards
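For reference, the usual technique for this (a generic sketch, not the repo's confirmed implementation) is to paste each predicted patch back at its source coordinates, accumulate a count of how many patches cover each voxel, and average the overlaps:

```python
import numpy as np

def reconstruct(patches, coords, vol_shape, patch_shape):
    """Reassemble overlapping patches into a volume by averaging overlaps."""
    vol = np.zeros(vol_shape, dtype=np.float32)
    count = np.zeros(vol_shape, dtype=np.float32)
    for p, (x, y, z) in zip(patches, coords):
        sl = (slice(x, x + patch_shape[0]),
              slice(y, y + patch_shape[1]),
              slice(z, z + patch_shape[2]))
        vol[sl] += p
        count[sl] += 1.0
    return vol / np.maximum(count, 1.0)  # avoid division by zero where uncovered

# Two overlapping constant patches average back to a constant volume.
patches = [np.ones((4, 4, 4), np.float32), np.ones((4, 4, 4), np.float32)]
out = reconstruct(patches, [(0, 0, 0), (2, 0, 0)], (6, 4, 4), (4, 4, 4))
print(out.min(), out.max())  # 1.0 1.0
```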
Hi again,
Some things are confusing:
Single vs. multi-modality: assuming the input is MRI and the output is CBCT, why is this single modality?
Batch generation: what do you mean by "Put all these h5 files into two folders (training, validation), and remember the path to these h5 files"? Assuming three input pairs of [mri, cbct] images, e.g.
[ [mri1.nii cbct1.nii], [mri2.nii cbct2.nii], [mri3.nii cbct3.nii] ]
extract23DPatch4SingleModalImg generates six .h5 files, i.e. two for each image:
img1.h5, img1r.h5, img2.h5, img2r.h5, img3.h5, and img3r.h5.
img1.h5 contains batches of [mri1.nii cbct1.nii], and it seems the other file, img1r.h5, contains the exact same data. My questions:
Suggestion: since one usually works with either 2D or 3D, it would be nice to separate the code into two different folders/repositories, one for 2D and another for 3D.
Hello, can you tell me how I can run this code on a CPU?
Hello, I encountered some difficulties:
File "F:/PycharmProjects/untitled1/extract23DPatch4SingleModalImg.py", line 183, in extractPatch4OneSubject
trainFA[cubicCnt, 0, :, :, :] = volFA
ValueError: could not broadcast input array from shape (5,64,41) into shape (5,64,64)
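This broadcast error typically happens when a patch window overruns the volume boundary (here the last axis only has 41 voxels left, but the slot expects 64). One plausible fix (a sketch, not the author's code) is to zero-pad any patch that runs past the edge:

```python
import numpy as np

def safe_patch(vol, start, patch_shape):
    """Extract a patch, zero-padding if it overruns the volume boundary."""
    patch = np.zeros(patch_shape, dtype=vol.dtype)
    sl = tuple(slice(s, min(s + p, dim))
               for s, p, dim in zip(start, patch_shape, vol.shape))
    cropped = vol[sl]
    patch[tuple(slice(0, c) for c in cropped.shape)] = cropped
    return patch

vol = np.random.rand(10, 64, 41).astype(np.float32)
p = safe_patch(vol, (5, 0, 0), (5, 64, 64))  # would overrun the last axis
print(p.shape)  # (5, 64, 64)
```

Alternatively, the extraction loop can simply skip start positions whose window would not fit.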
Hello,
Are the extracted patches 2D or 3D? Also, since I am working on PET-to-CT conversion, I should be using the multi-modality patch extraction, right?
Hi @ginobilinie, could you please upload the PyTorch version if it is still available?
Thanks in advance!
I tried to apply for the data on ADNI, but they haven't responded to me yet. I am anxious to start the related research. Could you send the data to [email protected]? Many thanks.
Hello,
I am using paired MR and CT data for MR-to-CT synthesis. My data size is 172x220x156 in .nii format. Since I will be using the 3D network, I have to create patches.
Could you please let me know which code I should use to create 3D patches directly from the .nii file format instead of HDF5?
Shouldn't we also generate the patches in pairs, i.e. the input and target patch at the same location in the same subject?
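Yes, paired extraction at identical coordinates is the standard approach. A minimal sketch (my own illustration; in practice the arrays would come from `nibabel`, e.g. `nib.load("sub1_mr.nii").get_fdata()`, with random arrays standing in here):

```python
import numpy as np

def paired_patches(mr, ct, patch=(32, 32, 32), step=16):
    """Yield MR/CT patch pairs cut at identical coordinates."""
    assert mr.shape == ct.shape
    for x in range(0, mr.shape[0] - patch[0] + 1, step):
        for y in range(0, mr.shape[1] - patch[1] + 1, step):
            for z in range(0, mr.shape[2] - patch[2] + 1, step):
                sl = (slice(x, x + patch[0]),
                      slice(y, y + patch[1]),
                      slice(z, z + patch[2]))
                yield mr[sl], ct[sl]

mr = np.random.rand(64, 64, 64)
ct = np.random.rand(64, 64, 64)
pairs = list(paired_patches(mr, ct))
print(len(pairs))  # 27 with these sizes (3 start positions per axis)
```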
Hello, can you tell me where I can get these MRI data?