
ikc's People

Contributors

yuanjunchai

ikc's Issues

Confused with dataroot_GT

Hi, I appreciate your work, but I am confused: why do you use the GT in your parameter optimization during testing? Am I missing something?

PCA Matrix Generation

Hi, thanks for your code! I am wondering how the PCA matrix was trained. Is it possible to include the code of this part as well? Thank you! @yuanjunchai
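While waiting for an official answer, the PCA projection used to encode blur kernels can be sketched roughly as follows. The kernel size, sigma range, code dimension, and the choice to mean-center before PCA are all illustrative assumptions here, not the repo's actual settings:

```python
import numpy as np

def isotropic_gaussian_kernel(size, sigma):
    """Build a normalized 2-D isotropic Gaussian kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

# Sample a bank of kernels covering an assumed sigma range.
size, h = 21, 10
sigmas = np.linspace(0.2, 4.0, 1000)
bank = np.stack([isotropic_gaussian_kernel(size, s).ravel() for s in sigmas])

# PCA via SVD: the top-h right singular vectors form the projection matrix
# mapping a flattened kernel (size*size,) to a low-dimensional code (h,).
mean = bank.mean(axis=0)
_, _, vt = np.linalg.svd(bank - mean, full_matrices=False)
pca_matrix = vt[:h]  # shape (h, size*size)

code = pca_matrix @ (isotropic_gaussian_kernel(size, 2.0).ravel() - mean)
print(code.shape)  # (10,)
```

The resulting `pca_matrix` would be saved once and reused to embed every training kernel.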

About methods comparison

Thanks for your nice work!
I don't understand why you chose the non-blind method CARN for comparison.
Also, why not compare with more classic methods such as EDSR or RCAN?
Have you ever done this?
Looking forward to your reply!

How to understand the code

Hello, I'm a beginner with a weak coding background, and it is easier for me to learn the code by asking a few small questions, so please bear with me. What are the labels for the three networks, and how is the loss computed for each? Also, what do the SR factors 2, 3, and 4 and the kernel width mentioned in the paper mean? Thank you again.

More accurate kernel estimation, but worse SR result

Hi, when I train IKC with a pretrained SFTMD, the loss on the kernel maps gets smaller and smaller, which indicates that the kernel estimation is becoming more accurate, but the validation result on Set5 first improves and then gets worse. It seems that a roughly accurate kernel estimate leads to a better SR result than a more accurate one does.

I wonder if you came across the same problem during training.

Thanks

How to test on my own image?

Hi.

How would I try IKC to apply Super Resolution to an image of mine -- for example, to scale it up 4 times?

I have installed the repo and am getting the datasets. And I see the "Train" and "Test" instructions. But if my image is, for example, "C:\Users\Scott\Desktop\my_image.png", where do I plug that in to upscale it 4x, and then where would I find the output?

I hope this is clear. I appreciate your time.

test on my own dataset

Hi, thanks for your work! I want to test the model on my own dataset; what should I do?
I tried to change the paths in

IKC-master/codes/options/test/test_SFTMD.yml,
IKC-master/codes/options/test/test_Predictor.yml,
and IKC-master/codes/options/test/test_Corrector.yml

and got a pretty low average PSNR (23.598937) and SSIM (0.645178):

19-11-15 17:31:03.120 - INFO: ----Average PSNR/SSIM results for div2k100----
PSNR: 23.598937 dB; SSIM: 0.645178
19-11-15 17:31:03.120 - INFO: ----Y channel, average PSNR/SSIM----
PSNR_Y: 25.071245 dB; SSIM_Y: 0.682182

My input images were the DIV2K validation set (801-900), all blurred with an isotropic Gaussian kernel of width 2.0 and then downsampled with bicubic interpolation. All images were processed with OpenCV 4.1.0. Samples of the input and the SR output are shown below.
[Input sample 0802x4 and SR output 0802_7 are omitted here.]

The log information:

19-11-15 16:08:18.763 - INFO: step: 1, img:0802 - PSNR: 27.946776 dB; SSIM: 0.775806; PSNR_Y: 29.448376 dB; SSIM_Y: 0.818709.
19-11-15 16:08:25.778 - INFO: step: 2, img:0802 - PSNR: 27.992938 dB; SSIM: 0.776571; PSNR_Y: 29.497938 dB; SSIM_Y: 0.819453.
19-11-15 16:08:32.779 - INFO: step: 3, img:0802 - PSNR: 28.035925 dB; SSIM: 0.777074; PSNR_Y: 29.545070 dB; SSIM_Y: 0.820215.
19-11-15 16:08:39.848 - INFO: step: 4, img:0802 - PSNR: 28.079307 dB; SSIM: 0.777602; PSNR_Y: 29.592643 dB; SSIM_Y: 0.820966.
19-11-15 16:08:46.881 - INFO: step: 5, img:0802 - PSNR: 28.124418 dB; SSIM: 0.778161; PSNR_Y: 29.642248 dB; SSIM_Y: 0.821761.
19-11-15 16:08:53.952 - INFO: step: 6, img:0802 - PSNR: 28.174223 dB; SSIM: 0.778820; PSNR_Y: 29.696932 dB; SSIM_Y: 0.822650.
19-11-15 16:09:00.975 - INFO: step: 7, img:0802 - PSNR: 28.228933 dB; SSIM: 0.779559; PSNR_Y: 29.757185 dB; SSIM_Y: 0.823617.
19-11-15 16:09:00.976 - INFO: step: 7, img:0802 - average PSNR: 28.083217 dB; SSIM: 0.777656; PSNR_Y: 29.597199 dB; SSIM_Y: 0.821053.
19-11-15 16:09:00.976 - INFO: step: 7, img:0802 - max PSNR: 28.228933 dB; SSIM: 0.779559; PSNR_Y: 29.757185 dB; SSIM_Y: 0.823617.

What did I miss? Thanks in advance!
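For reference, the degradation pipeline described above (isotropic Gaussian blur of width 2.0, then x4 downsampling) can be sketched in plain numpy. Strided subsampling stands in here for OpenCV's bicubic resize, so this is only an approximation of the actual preprocessing:

```python
import numpy as np

def isotropic_gaussian_kernel(size, sigma):
    """Normalized 2-D isotropic Gaussian kernel."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def degrade(hr, sigma=2.0, scale=4, ksize=21):
    """Blur a grayscale HR image, then subsample by `scale`.

    Strided subsampling replaces bicubic interpolation to keep the
    sketch dependency-free; the test above used cv2 with INTER_CUBIC.
    """
    k = isotropic_gaussian_kernel(ksize, sigma)
    pad = ksize // 2
    padded = np.pad(hr, pad, mode="reflect")
    blurred = np.zeros_like(hr, dtype=float)
    # Direct 2-D convolution as a weighted sum of shifted copies
    # (the Gaussian is symmetric, so correlation equals convolution).
    for i in range(ksize):
        for j in range(ksize):
            blurred += k[i, j] * padded[i:i + hr.shape[0], j:j + hr.shape[1]]
    return blurred[::scale, ::scale]

hr = np.random.rand(64, 64)
lr = degrade(hr)
print(lr.shape)  # (16, 16)
```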

Detail of anisotropic gaussian kernel

In the generation of anisotropic gaussian kernel, see

def anisotropic_gaussian_kernel(l, sigma_matrix, tensor=False):

why is y set as y = np.clip(np.random.random() * scaling * x, sig_min, sig_max)? Why not choose the same scheme as for x, i.e. x = np.random.random() * (sig_max - sig_min) + sig_min?
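For comparison, the two sampling schemes discussed above can be written side by side. The sig_min, sig_max, and scaling values below are illustrative assumptions, not the repo's actual constants:

```python
import numpy as np

rng = np.random.default_rng(0)
sig_min, sig_max, scaling = 0.2, 4.0, 3.0  # illustrative values

# Scheme used in the repo: the second axis is tied to the first, so the
# aspect ratio y/x is bounded by `scaling` before clipping into range,
# which limits how elongated the sampled kernels can be.
x = rng.random() * (sig_max - sig_min) + sig_min
y_repo = np.clip(rng.random() * scaling * x, sig_min, sig_max)

# The alternative the issue asks about: sample both axes independently
# and uniformly, which allows arbitrarily extreme aspect ratios.
y_uniform = rng.random() * (sig_max - sig_min) + sig_min

print(x, y_repo, y_uniform)
```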

Testing for scale 2

Hi,

Thanks for providing the code!

I tried testing for scale 2 by modifying the scale parameter in test_Predictor.yml and test_Corrector.yml,
but the network keeps on upscaling the image by scale 4.
Could you provide some more details for upscaling with scale factor 2?

Thanks!

How to train my own data

Hi. I want to train on my own data, but the LR images in my dataset are not pixel-aligned with the HR images. When training with this network, do I need to run generate_mod_LR_bic.py? Thanks.

UserWarning:Detected call of `lr_scheduler.step()` before `optimizer.step()`.

19-10-08 11:06:23.167 - INFO: Start training from epoch: 0, iter: 0
C:\Users\mayn\Anaconda3\envs\pytorch\lib\site-packages\torch\optim\lr_scheduler.py:82: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule.See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
19-10-08 11:12:06.405 - INFO: <epoch: 0, iter: 100, lr:1.000e-04> l_pix: 1.0065e-01
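On PyTorch 1.1.0 and later, this warning goes away when the two calls are issued in the order the message suggests. A minimal sketch, where the model, optimizer, and milestone schedule are placeholders rather than the repo's actual training setup:

```python
import torch

model = torch.nn.Linear(4, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[2], gamma=0.5)

for step in range(4):
    loss = model(torch.randn(8, 4)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()    # PyTorch >= 1.1.0: optimizer step first ...
    sched.step()  # ... then scheduler step, which silences the warning
```

The warning is harmless apart from the schedule being shifted by one step, but reordering the calls as above is the clean fix.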

SSIM is much lower than reported in your paper

I generated the test dataset according to your code and tested the provided pretrained model on it. The PSNR results are roughly the same as the reported ones. However, the SSIM results are much lower than the reported values (0.8808 vs. 0.9278 on Set5).
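One possible cause, offered as an assumption rather than a confirmed explanation: many SR papers report SSIM on the Y channel of YCbCr rather than on RGB, and mixing the two conventions typically shifts SSIM far more than PSNR. The standard BT.601 luma conversion looks like:

```python
import numpy as np

def rgb_to_y(img):
    """ITU-R BT.601 luma commonly used by SR papers for Y-channel
    PSNR/SSIM (input in [0, 255], channel-last RGB)."""
    return (16.0
            + 65.481 / 255.0 * img[..., 0]
            + 128.553 / 255.0 * img[..., 1]
            + 24.966 / 255.0 * img[..., 2])

img = np.full((4, 4, 3), 255.0)  # pure white
y = rgb_to_y(img)
print(y[0, 0])  # 235.0 (white maps to Y = 235 in the video range)
```

Checking which channel the paper's numbers use before comparing would rule this out.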

Test for real image

Thank you for sharing your great work. I ran some tests on my synthetic image data (remote sensing images) and it worked well; I have trained 2x and 4x versions of the F, P, and C models.
Now I want to apply the method to real data. I tried to modify the test_IKC code, but the 'LQ', 'LQGTker', and 'SRker' modes all require a kernel map or an HR image in addition to the LR image.
Could you give me some suggestions for testing IKC when I only have LR images?

About blurred patch generation

I am new to blind super-resolution and confused about the data generation. Are the blurred image patches generated by first extracting a patch and then blurring it? Would blurring the whole image first and then extracting patches also be suitable?
Thank you!
