rnan's People

Contributors

yulunzhang

rnan's Issues

Training Datasets

Thank you for the public implementation. Could you provide links to the training data used for the demosaicing, denoising, and compression artifact removal tasks?

The meaning of `dir_demo` and `dir_test` in option.py

Hi, excuse me. I have some questions that may need your help.

  1. I know `dir_data` is the path to the DIV2K 800 training images. Is `dir_demo` the path to the Set5/Set14 test sets?
    Could you give an explanation of `dir_demo`?

  2. `data_test` defaults to 'DIV2K'. How many validation images are used for testing? Is it the same as EDSR, i.e., 0801.png to 0810.png?
    In other words, is option.py missing a `data_range` parameter?

Best regards
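If option.py really does lack such a flag, here is a hypothetical sketch of what a `data_range` option could look like, modeled on EDSR-PyTorch's convention of encoding the train/test split as `'1-800/801-810'` (the parser flag and helper below are illustrative, not taken from this repo):

```python
import argparse

# Hypothetical sketch of the missing flag, modeled on EDSR-PyTorch's option.py,
# where --data_range encodes train/test image indices as '1-800/801-810'.
parser = argparse.ArgumentParser()
parser.add_argument('--data_range', type=str, default='1-800/801-810',
                    help='train/test data range')

def parse_range(spec):
    """Split '1-800/801-810' into ([1..800], [801..810])."""
    train_spec, test_spec = spec.split('/')
    def to_list(part):
        lo, hi = map(int, part.split('-'))
        return list(range(lo, hi + 1))
    return to_list(train_spec), to_list(test_spec)

args = parser.parse_args([])  # use the defaults
train_ids, test_ids = parse_range(args.data_range)
print(len(train_ids), test_ids[0], test_ids[-1])  # 800 801 810
```

With this convention, validation on 0801.png to 0810.png would correspond to the `801-810` half of the default string.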

Do we need `MSDataLoader`?

Hola friend, great code! Is there a reason we need MSDataLoader instead of torch.utils.data.DataLoader?
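As far as I can tell, MSDataLoader mainly exists to handle multi-scale batching on older PyTorch versions; with a single fixed scale per batch, the stock loader is usually enough. A minimal sketch of the stock loader in that role (the tensors and shapes below are made up for illustration, this is not the repo's dataset class):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Fake LR/HR patch pairs standing in for a real SR dataset (x2 scale).
xs = torch.randn(16, 3, 48, 48)   # "low-resolution" patches
ys = torch.randn(16, 3, 96, 96)   # "high-resolution" patches

# A plain DataLoader replacing MSDataLoader for single-scale training.
loader = DataLoader(TensorDataset(xs, ys), batch_size=4, shuffle=True,
                    num_workers=0, pin_memory=False)

for lr, hr in loader:
    assert lr.shape == (4, 3, 48, 48) and hr.shape == (4, 3, 96, 96)
```

If you train on several scales in one run, you would still need some mechanism (a custom sampler or per-scale loaders) to keep each batch at one scale.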

For PyTorch >= 1.0

Hi, thanks for your great work.
If possible, could you release a version of the source code for PyTorch >= 1.0.0, which is more commonly used?

Best regards.

Hello, one small question about the code

Thanks for the good work.
[image]

In this code, why does `modules_body.append(conv(n_feats, n_feats, kernel_size))` need to be added? If we remove it, will the performance decrease?
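For what it's worth, here is a toy sketch of the usual reason such a trailing conv appears in RCAN/RNAN-style residual groups (the class and sizes below are illustrative, not the repo's code): the group computes `x + conv(blocks(x))`, so the final conv fuses the stacked blocks' features before the long skip addition. Whether removing it actually hurts PSNR can only be settled by an ablation.

```python
import torch
import torch.nn as nn

def conv(in_ch, out_ch, k):
    return nn.Conv2d(in_ch, out_ch, k, padding=k // 2)

class ResidualGroup(nn.Module):
    """Toy residual group: x + body(x), where body ends with one extra conv."""
    def __init__(self, n_feats=8, kernel_size=3, n_blocks=2):
        super().__init__()
        body = [nn.Sequential(conv(n_feats, n_feats, kernel_size),
                              nn.ReLU(True),
                              conv(n_feats, n_feats, kernel_size))
                for _ in range(n_blocks)]
        # The conv in question: fuses block features before the long skip.
        body.append(conv(n_feats, n_feats, kernel_size))
        self.body = nn.Sequential(*body)

    def forward(self, x):
        return x + self.body(x)

x = torch.randn(1, 8, 16, 16)
assert ResidualGroup()(x).shape == x.shape  # shape-preserving, as in the repo
```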

some questions about parameters for training

Hi! Thanks for your wonderful work and open sources.
I'd like to reproduce the denoising results. Here are some questions:

  1. What dataset is used for validation, and how many images are used? (The readme mentions that the validation set is div2k_val100, but in the released options.py the defaults use only the first 5 images of div2k_val100, 0801 ~ 0805, as the validation set.)

  2. How many training epochs are used for the grayscale denoising task, and how many days does training take? (options.py defaults to 1300 epochs; since the training set is repeated 20 times per epoch, that is effectively 26,000 epochs. We found that one of the 1300 epochs takes about 2 hours.)

  3. How was the model matching the performance in the paper obtained? Do the test results in the paper use the best model on validation, or the model from the last epoch?

  4. Are the other experimental parameters needed to reproduce the paper's results the same as the defaults in options.py?

Looking forward to your reply. Thanks a lot!

Regards

training time and number of GPUs

Hi, thanks for your wonderful work and for open-sourcing it.
Could you please tell me how long you trained the model, and which kind of GPU and how many GPUs you used?

Best regards

How much memory is required?

Thank you for your work. When I try to run this code, I get an "out of memory" error. I would like to know how much memory is required to run it.

Thank you!

Can anyone reproduce the reported result?

Hi @yulunzhang ,

Thanks for your amazing work.
Could you please provide the community with more detail on how to reproduce your results?

We attempted to reproduce the grayscale image denoising experiment with noise level 50.
We used the same settings as mentioned in the readme file and trained the model three times.
However, the best result we obtained is 26.35, instead of the reported 26.48.

Another note: training takes around 8 days on one 1080Ti.

Problems with Kodak24

Thanks for your impressive work! I want to know how you obtained the grayscale images of Kodak24. I tried OpenCV and MATLAB but cannot reproduce the same test results on Kodak24. Looking forward to your reply.
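In case it helps, the two pitfalls I know of when matching MATLAB-style grayscale conversion are channel order (OpenCV's `imread` returns BGR, so `cv2.cvtColor` needs `COLOR_BGR2GRAY`, not `COLOR_RGB2GRAY`) and rounding (OpenCV uses fixed-point arithmetic, which can differ from MATLAB by a level or two per pixel and shift PSNR slightly). A plain-numpy sketch of MATLAB's `rgb2gray`, which uses BT.601 luma coefficients:

```python
import numpy as np

def rgb2gray_matlab(rgb):
    """rgb: uint8 HxWx3 in R,G,B order -> uint8 HxW grayscale.

    Uses MATLAB rgb2gray's BT.601 coefficients. Note np.round rounds
    halves to even while MATLAB rounds halves away from zero; exact
    ties are rare, but this can still flip the occasional pixel by 1.
    """
    coeffs = np.array([0.2989, 0.5870, 0.1140])
    gray = rgb.astype(np.float64) @ coeffs
    return np.round(gray).astype(np.uint8)

# Pure red, green, and blue pixels:
img = np.array([[[255, 0, 0], [0, 255, 0], [0, 0, 255]]], dtype=np.uint8)
print(rgb2gray_matlab(img).tolist())  # [[76, 150, 29]]
```

If the paper's numbers were produced with MATLAB's conversion, reproducing them exactly usually means either converting in MATLAB or replicating its rounding as above.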

RuntimeError: CUDA out of memory.

I used a 1080Ti with 11 GB of memory,
but when I run t.test(), at `sr = self.model(lr, idx_scale)`
I get `RuntimeError: CUDA out of memory`.
How can I fix this? Thanks.
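The usual workaround for out-of-memory errors at test time in EDSR-style codebases is tiled ("chopped") inference: run the model on patches and stitch the results, so peak memory scales with the tile size rather than the full image (check whether this repo inherits EDSR's `--chop` option before rolling your own). A plain-numpy sketch of the idea, with a stand-in function instead of a real network (`fake_model`, `tile`, and `overlap` are illustrative names, not from this repo):

```python
import numpy as np

def fake_model(x, scale=2):
    """Placeholder for self.model: nearest-neighbor x2 upsampling."""
    return x.repeat(scale, axis=0).repeat(scale, axis=1)

def tiled_forward(img, model, scale=2, tile=32, overlap=8):
    """Run `model` on overlapping tiles of `img` and stitch the outputs.

    Peak memory is bounded by the tile size instead of the image size.
    Real networks need the overlap so border pixels see enough context;
    this toy model is purely local, so any overlap gives identical output.
    """
    h, w = img.shape
    out = np.zeros((h * scale, w * scale), dtype=img.dtype)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            y1, x1 = min(y + tile, h), min(x + tile, w)
            patch_sr = model(img[y:y1, x:x1], scale)
            out[y * scale:y1 * scale, x * scale:x1 * scale] = patch_sr
    return out

img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
assert np.array_equal(tiled_forward(img, fake_model), fake_model(img))
```

Other common mitigations: wrap the test loop in `torch.no_grad()` so activations are freed, and test on smaller images first to find the memory ceiling.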
