yuanjunchai / IKC
Implementation of 'Blind Super-Resolution With Iterative Kernel Correction' (CVPR 2019)
License: Apache License 2.0
In Table 1 you presented an ablation study. Could you share sample code for the direct-concatenation variant? I am not sure how to concatenate the kernel_code into the intermediate layers.
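For reference, here is a minimal sketch (not the authors' code) of what such a direct-concatenation baseline could look like: the PCA-reduced kernel code is stretched spatially and concatenated to an intermediate feature map along the channel axis, then fused with a 1x1 convolution. The names `ConcatBlock`, `n_feats`, and `code_len` are my own assumptions.

```python
# Hedged sketch of channel-wise concatenation of a kernel code into an
# intermediate layer; not the released implementation.
import torch
import torch.nn as nn

class ConcatBlock(nn.Module):
    """Concatenate a per-image kernel code to a feature map, then fuse."""
    def __init__(self, n_feats: int, code_len: int):
        super().__init__()
        # 1x1 conv maps (features + stretched code) back to n_feats channels
        self.fuse = nn.Conv2d(n_feats + code_len, n_feats, kernel_size=1)

    def forward(self, feat: torch.Tensor, code: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); code: (B, t)
        b, _, h, w = feat.shape
        code_map = code.view(b, -1, 1, 1).expand(b, code.shape[1], h, w)
        return self.fuse(torch.cat([feat, code_map], dim=1))

block = ConcatBlock(n_feats=64, code_len=10)
out = block(torch.randn(2, 64, 32, 32), torch.randn(2, 10))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The same block can be dropped between any two residual blocks, in contrast to the SFT layers used in the full model.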
Hello, where does pca_matrix.pth come from? Which paper and which author? I want to generate one myself, thanks!
Hi, I appreciate your work, but I am confused about why you use the GT during parameter optimization at test time. Am I missing something?
Hi, thanks for your code! I am wondering how the PCA matrix was trained. Is it possible to include the code of this part as well? Thank you! @yuanjunchai
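While waiting for the official code, here is a hedged sketch of how such a PCA matrix could be computed: sample many Gaussian blur kernels, flatten them, and keep the top principal components via SVD. The kernel size (21), sigma range, and sample count are assumptions, not values from the repo.

```python
# Hedged sketch of building a PCA projection matrix for blur kernels;
# not the released code, and the sampling ranges are guesses.
import numpy as np

def gaussian_kernel(sigma: float, size: int = 21) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def pca_matrix(n_samples: int = 3000, size: int = 21, dim: int = 10) -> np.ndarray:
    rng = np.random.default_rng(0)
    sigmas = rng.uniform(0.2, 4.0, n_samples)  # assumed sigma range
    kernels = np.stack([gaussian_kernel(s, size).ravel() for s in sigmas])
    # Rows of vt are orthonormal principal directions of the kernel matrix
    _, _, vt = np.linalg.svd(kernels, full_matrices=False)
    return vt[:dim]  # shape (dim, size * size)

P = pca_matrix()
print(P.shape)  # (10, 441)
```

Projecting a flattened kernel with `P @ kernel.ravel()` then yields the low-dimensional kernel code used as network input.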
Thanks for your nice work!
I don't understand why you chose the non-blind method CARN for comparison.
Why not also compare with more classic methods such as EDSR or RCAN?
Have you ever done this?
Looking forward to your reply~
Hello, I'm a beginner, and my coding background is weak, so please forgive a few basic questions. What are the training targets of the three networks, and how is the loss computed for each? Also, what do the SR factors 2, 3, and 4 and the kernel width mentioned in the paper refer to? Thank you again.
Sir,
Have you committed the random_batch_noise function? Thank you!
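Since the function is absent from the repo, here is only a guess at its intent based on its name and typical usage in degradation pipelines: sample a random noise level per image in a batch, keeping some images clean. The signature and the `rate_cln` parameter are assumptions, not the authors' code.

```python
# Hypothetical stand-in for the missing random_batch_noise function;
# a sketch of the assumed behavior, not the real implementation.
import numpy as np

def random_batch_noise(batch: int, high: float, rate_cln: float = 0.2) -> np.ndarray:
    """Return a (batch, 1) array of noise sigmas in [0, high];
    roughly a fraction `rate_cln` of the images stays clean (sigma = 0)."""
    noise_level = np.random.uniform(0.0, high, size=(batch, 1))
    clean_mask = np.random.uniform(size=(batch, 1)) < rate_cln
    return noise_level * (~clean_mask)

levels = random_batch_noise(8, high=25.0)
```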
Hi, thanks for your great work! But there is no reference in the paper to the Historic dataset. Could you please provide the image '1967 Vietnam war protest'?
Thanks a lot for your excellent work.
While using the SFTMD pre-trained model provided in this project to jointly train the Predictor and Corrector on the Flickr2K dataset, my model performed badly.
Did I make a mistake in the configuration, or miss any training tricks? 🎃🤕
Hi, when I train IKC with a pretrained SFTMD, the loss on the kernel maps keeps getting smaller, which indicates that the kernel estimation is becoming more accurate, but the validation result on Set5 first gets better and then gets worse. It seems that a roughly accurate kernel estimate leads to a better SR result than a more accurate one does.
I wonder if you come up with the same problem during training.
Thanks
Is this trained in a supervised or an unsupervised way?
Hi.
How would I try IKC to apply Super Resolution to an image of mine -- for example, to scale it up 4 times?
I have installed the repo and am getting the datasets. And I see the "Train" and "Test" instructions. But if my image is, for example, "C:\Users\Scott\Desktop\my_image.png", where do I plug that in to upscale it 4x, and then where would I find the output?
I hope this is clear. I appreciate your time.
Hi, thanks for your work! I want to test the model on my own dataset; what should I do?
I tried to change the path in
IKC-master/codes/options/test/test_SFTMD.yml,
IKC-master/codes/options/test/test_Predictor.yml,
and IKC-master/codes/options/test/test_Corrector.yml
19-11-15 17:31:03.120 - INFO: ----Average PSNR/SSIM results for div2k100----
PSNR: 23.598937 dB; SSIM: 0.645178
19-11-15 17:31:03.120 - INFO: ----Y channel, average PSNR/SSIM----
PSNR_Y: 25.071245 dB; SSIM_Y: 0.682182
My input images were the DIV2K validation set (801-900), all blurred with an isotropic Gaussian kernel of width 2.0 and then downsampled with bicubic interpolation. All images were processed with OpenCV 4.1.0. Samples of the input and the SR output are shown below:
input image
output image
The log information:
19-11-15 16:08:18.763 - INFO: step: 1, img:0802 - PSNR: 27.946776 dB; SSIM: 0.775806; PSNR_Y: 29.448376 dB; SSIM_Y: 0.818709.
19-11-15 16:08:25.778 - INFO: step: 2, img:0802 - PSNR: 27.992938 dB; SSIM: 0.776571; PSNR_Y: 29.497938 dB; SSIM_Y: 0.819453.
19-11-15 16:08:32.779 - INFO: step: 3, img:0802 - PSNR: 28.035925 dB; SSIM: 0.777074; PSNR_Y: 29.545070 dB; SSIM_Y: 0.820215.
19-11-15 16:08:39.848 - INFO: step: 4, img:0802 - PSNR: 28.079307 dB; SSIM: 0.777602; PSNR_Y: 29.592643 dB; SSIM_Y: 0.820966.
19-11-15 16:08:46.881 - INFO: step: 5, img:0802 - PSNR: 28.124418 dB; SSIM: 0.778161; PSNR_Y: 29.642248 dB; SSIM_Y: 0.821761.
19-11-15 16:08:53.952 - INFO: step: 6, img:0802 - PSNR: 28.174223 dB; SSIM: 0.778820; PSNR_Y: 29.696932 dB; SSIM_Y: 0.822650.
19-11-15 16:09:00.975 - INFO: step: 7, img:0802 - PSNR: 28.228933 dB; SSIM: 0.779559; PSNR_Y: 29.757185 dB; SSIM_Y: 0.823617.
19-11-15 16:09:00.976 - INFO: step: 7, img:0802 - average PSNR: 28.083217 dB; SSIM: 0.777656; PSNR_Y: 29.597199 dB; SSIM_Y: 0.821053.
19-11-15 16:09:00.976 - INFO: step: 7, img:0802 - max PSNR: 28.228933 dB; SSIM: 0.779559; PSNR_Y: 29.757185 dB; SSIM_Y: 0.823617.
what did I miss? Thanks in advance!
In the generation of anisotropic gaussian kernel, see
Line 244 in b1103bd
why is y set as y = np.clip(np.random.random() * scaling * x, sig_min, sig_max)?
Why not draw it the same way as x = np.random.random() * (sig_max - sig_min) + sig_min?
Where are these two values, sigma_LR and sigma_SR, set? I can only find a single value, sig = 2.6.
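The two sampling schemes in the question above can be compared with a small illustrative sketch: drawing y relative to x and clipping couples the two axes of the anisotropic kernel, whereas drawing y uniformly (like x) would make them independent. The range values here are placeholders, not the repo's settings.

```python
# Illustrative comparison of the two sigma-sampling schemes discussed above.
import numpy as np

sig_min, sig_max, scaling = 0.2, 4.0, 3.0
rng = np.random.default_rng(0)

x = rng.random(10000) * (sig_max - sig_min) + sig_min          # uniform in range
y = np.clip(rng.random(10000) * scaling * x, sig_min, sig_max)  # relative to x

# Because y is drawn as a multiple of x, the kernel's two axis widths are
# positively correlated, biasing sampling toward moderate aspect ratios.
print(np.corrcoef(x, y)[0, 1] > 0)
```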
Hi,
Thanks for providing the code!
I tried testing for scale 2 by modifying the scale parameter in test_Predictor.yml and test_Corrector.yml,
but the network keeps on upscaling the image by scale 4.
Could you provide some more details for upscaling with scale factor 2?
Thanks!
Hi, I want to train on my own data, but the LR images in my dataset are not pixel-aligned with the HR images. When training with this network, do I need to run generate_mod_LR_bic.py? Thanks.
19-10-08 11:06:23.167 - INFO: Start training from epoch: 0, iter: 0
C:\Users\mayn\Anaconda3\envs\pytorch\lib\site-packages\torch\optim\lr_scheduler.py:82: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
19-10-08 11:12:06.405 - INFO: <epoch: 0, iter: 100, lr:1.000e-04> l_pix: 1.0065e-01
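The warning above is benign but easy to fix: since PyTorch 1.1, `optimizer.step()` must be called before `lr_scheduler.step()` in each iteration. A minimal sketch of the correct ordering (the model and scheduler settings here are placeholders, not the repo's training config):

```python
# Correct optimizer/scheduler ordering for PyTorch >= 1.1;
# placeholder model and hyperparameters for illustration only.
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)

for _ in range(3):
    opt.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()
    loss.backward()
    opt.step()    # update the weights first...
    sched.step()  # ...then advance the LR schedule
```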
As the title says, I cannot find the link to the pre-trained models.
I generated the test dataset according to your code and tested the provided pretrained model on it. The PSNR results are roughly the same as the reported ones, but the SSIM results are much lower (0.8808 vs. your reported 0.9278 on Set5).
When comparing with other methods such as CARN, did you retrain those networks on your own degraded dataset?
Thank you for sharing your great work. I ran some tests on my synthetic image data (remote sensing images) and it worked well; I obtained 2x and 4x models for the F, P, and C networks.
I now want to apply super-resolution to real data. I tried to modify the test_IKC code, but the 'LQ', 'LQGTker', and 'SRker' modes all need a kernel map or an HR image in addition to the LR image.
Could you give me some suggestions for testing IKC when I only have the LR image?
I noticed that niter in the SFTMD yml file is 500000 and batch_size is 32, so training takes about 10 days. Is it normal for SFTMD alone to train for 10 days?
Also, the data source dir is '/mnt/yjchai/SR_data/Set5'. Do I need to create the SR_data dir myself?
I am new to blind super-resolution and confused about the data generation.
May I ask whether a blurred image patch is generated by first extracting the patch and then blurring it?
If I instead blur the whole image and then extract patches, is that also suitable?
Thank you!
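The difference between the two orders in the question above can be checked with a quick sketch: blur-then-crop and crop-then-blur agree in the patch interior but differ near the patch border, because the blur there needs pixels outside the crop. This uses `scipy.ndimage.gaussian_filter` for illustration; the repo may blur differently.

```python
# Illustrative check: blur-then-crop vs crop-then-blur differ only near
# the patch border (where the filter sees padded instead of real pixels).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = rng.random((64, 64))
sigma = 2.0  # filter radius is int(4 * sigma + 0.5) = 8 pixels

blur_then_crop = gaussian_filter(img, sigma)[16:48, 16:48]
crop_then_blur = gaussian_filter(img[16:48, 16:48], sigma)

# Pixels at least one filter radius from the crop edge agree exactly.
interior_diff = np.abs(blur_then_crop[8:-8, 8:-8] - crop_then_blur[8:-8, 8:-8]).max()
border_diff = np.abs(blur_then_crop - crop_then_blur).max()
print(interior_diff < 1e-6, border_diff > interior_diff)
```

So blurring the whole image and then cropping is generally the safer order, since every patch pixel is blurred with its true neighbors.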
where is the code?