alterzero / dbpn-pytorch

This project is the official implementation of our CVPR 2018 paper "Deep Back-Projection Networks for Super-Resolution" (winner of NTIRE2018 and PIRM2018).

Home Page: https://alterzero.github.io/projects/DBPN.html

License: MIT License

Topics: super-resolution, back-projection

Introduction

NEWS

  • Apr 1, 2020 -> New paper on space-time super-resolution: STARnet (to appear in CVPR 2020)
  • Apr 12, 2019 -> Added extension of the DBPN paper and model.
  • Mar 25, 2019 -> Paper on video super-resolution: RBPN (CVPR 2019)
  • Jan 10, 2019 -> Added the model used for PIRM2018; now supports PyTorch >= 1.0.0

Deep Back-Projection Networks for Super-Resolution (CVPR2018)

Winner (1st) of NTIRE2018 Competition (Track: x8 Bicubic Downsampling)

Winner of PIRM2018 (1st on Region 2, 3rd on Region 1, and 5th on Region 3)

Project page: https://alterzero.github.io/projects/DBPN.html

We also provide the original Caffe implementation.

Pretrained models and Results

Pretrained models (DBPNLL) and results can be downloaded from this link: https://drive.google.com/drive/folders/1ahbeoEHkjxoo4NV1wReOmpoRWbl448z-?usp=sharing

Dependencies

  • Python 3.5
  • PyTorch >= 1.0.0

Model types

  1. "DBPN" -> use T = 7
  2. "DBPNLL" -> use T = 10
  3. PIRM Model -> "DBPNLL" with adversarial loss
  4. "DBPN-RES-MR64-3" -> improvement of DBPN with recurrent process + residual learning

How to

Training

    python3 main.py

Testing

    python3 eval.py

Training GAN for PIRM2018

    python3 main_gan.py

Testing GAN for PIRM2018

    python3 eval_gan.py
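
The scripts accept command-line flags; the Namespace dumps in the issues below show names such as --upscale_factor, --model_type, --model, and --test_dataset. A typical evaluation call (the values here are illustrative, not documented defaults) would be:

    python3 eval.py --upscale_factor 4 --model_type DBPN --model models/DBPN_x4.pth --test_dataset Set5_LR_x4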

[figure: DBPN architecture]

Citations

If you find this work useful, please consider citing it.

@inproceedings{DBPN2018,
  title={Deep Back-Projection Networks for Super-Resolution},
  author={Haris, Muhammad and Shakhnarovich, Greg and Ukita, Norimichi},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}

@article{DBPN2019,
  title={Deep Back-Projection Networks for Single Image Super-Resolution},
  author={Haris, Muhammad and Shakhnarovich, Greg and Ukita, Norimichi},
  journal={arXiv preprint arXiv:1904.05677},
  year={2019}
}

Issues

GAN Training

Hello,

Are there any published details on the GAN setting of the model, such as the specific hyperparameters and a comparison with the non-GAN setting?

Thanks,
Ankit

A question about training

Hello!
I recently read the DBPN paper, and I think it is excellent work. I then tried to run your code to reproduce the results in the paper, but there is still a gap of approximately 1.5 dB.
So I have a question that I hope you can answer: which training dataset did you use for this work: DIV2K, Flickr, ImageNet, or all of them?
Best wishes to you!
Thank you!

RuntimeError when running eval.py

    RuntimeError: 
    An attempt has been made to start a new process before the
    current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.

I got this error when running python eval.py. I tried both torch 1.0.0 and torch 0.3.1. The issue is in

    for batch in testing_data_loader:

which calls

    testing_data_loader = DataLoader(dataset=test_set, num_workers=opt.threads, batch_size=opt.testBatchSize, shuffle=False)

again and again...
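
As the traceback itself suggests, this happens when DataLoader workers are started with spawn rather than fork (e.g. on Windows), so the module's top-level code is re-executed in every worker. The standard fix is to guard the script's entry point; a self-contained sketch with a stand-in dataset:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    def main():
        # Stand-in for the repo's test_set; the essential part is the guard below.
        test_set = TensorDataset(torch.randn(8, 3, 32, 32))
        loader = DataLoader(test_set, num_workers=2, batch_size=1, shuffle=False)
        for batch in loader:
            pass  # evaluation would go here

    if __name__ == '__main__':  # prevents workers from re-running the script body
        main()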

JPEG problem

Hi, have you tried training with JPEG files? I just tried it and found that the result is very bad!

Evaluating test data

Thanks for your source code. When I run the eval code with 2x, 4x, and 8x scale factors, the 2x and 4x results have some color problems. I have attached the SR results. I would appreciate a reply.

For 2x: [image: woman_x2]

For 4x: [image: woman_x4]

For 8x: [image: woman_x8]

question about bicubic

Hello, thanks for your excellent work!

I am a rookie, and I have a very simple question about training the network. In your training code, the bicubic upsample is added to the prediction before the loss is calculated; I took this to mean that the network learns a residual. But in your test code (when calculating PSNR), you use the prediction without adding the bicubic to compute the MSE. This confuses me: if the prediction is a residual, why isn't the bicubic added to it during testing?

Thanks for your reply!
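
For reference, the residual formulation the question describes looks like the sketch below (the names are illustrative, not the repo's API; note the eval logs elsewhere in these issues also show a --residual flag, which presumably toggles this behavior):

    import torch.nn.functional as F

    def residual_sr(model, lr, scale):
        # If training used loss = criterion(model(lr) + bicubic, hr), the same
        # bicubic term must be added at test time for a consistent prediction.
        bicubic = F.interpolate(lr, scale_factor=scale, mode='bicubic', align_corners=False)
        return model(lr) + bicubic

    # usage sketch: sr = residual_sr(net, lr_tensor, scale=4)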

About the subtrahend and the minuend

Hello, I am a new learner in super-resolution. I read "Deep Back-Projection Networks for Super-Resolution" carefully, and I want to know: why is the original feature map subtracted from the changed feature map in the residual process? Thanks!

there is an error...

===> Loading datasets
===> Building model
/home/amax/hao/DBPN-Pytorch-master/dbpn_v1.py:53: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
torch.nn.init.kaiming_normal(m.weight)
/home/amax/hao/DBPN-Pytorch-master/dbpn_v1.py:57: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
torch.nn.init.kaiming_normal(m.weight)
Pre-trained SR model is loaded.
Do you know this error? It appears when I run eval.py.

Question in main.py

    def train(epoch):
        epoch_loss = 0
        model.train()
        for iteration, batch in enumerate(training_data_loader, 1):
            input, target, bicubic = Variable(batch[0]), Variable(batch[1]), Variable(batch[2])

What does 'input, target, bicubic = Variable(batch[0]), Variable(batch[1]), Variable(batch[2])' mean?
I tried to add some layers to your model, but it shows 'RuntimeError: The size of tensor a (160) must match the size of tensor b (80) at non-singleton dimension 3'. I am confused. I would be thankful if you could answer me. Thanks.

Fast DBPN

Excuse me, can you share the model data for the fast DBPN (file dbpns.py)?

Why can't I reproduce the results with the provided trained model?

I tried to test the model on Set5 x4 with the provided trained models. When I calculate the PSNR on the RGB channels I get PSNR = 30.xxx, which does not seem right. When I calculate the PSNR on the Y channel, I get PSNR = 31.xxx. Both seem lower than the PSNR reported in the paper (32.47). Has anyone obtained the results presented in the paper?
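
For comparison, published SR numbers are usually computed on the luma (Y) channel of YCbCr after cropping a scale-sized border; whether that matches the paper's exact protocol is an assumption. A minimal sketch using the common BT.601 luma coefficients, for uint8 RGB arrays:

    import numpy as np

    def psnr_y(sr, hr, scale):
        """PSNR on the BT.601 luma channel, cropping a `scale`-pixel border."""
        sr, hr = sr.astype(np.float64), hr.astype(np.float64)
        def luma(img):
            return 16 + (65.738 * img[..., 0] + 129.057 * img[..., 1] + 25.064 * img[..., 2]) / 256
        y_sr = luma(sr)[scale:-scale, scale:-scale]
        y_hr = luma(hr)[scale:-scale, scale:-scale]
        mse = np.mean((y_sr - y_hr) ** 2)
        return 10 * np.log10(255.0 ** 2 / mse)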

ImportError: cannot import name 'Resize'

Thank you for such excellent work, but I got stuck on the following:

    File "../DBPN-Pytorch-master/data.py", line 5, in <module>
      from torchvision.transforms import Compose, CenterCrop, ToTensor, Resize
    ImportError: cannot import name 'Resize'

My torch version is 0.4.0, and I found that 'torchvision.transforms' includes the 'Resize' transform according to https://pytorch.org/docs/stable/torchvision/transforms.html.
So I want to know why I can't import 'Resize'. Hoping for a reply, thanks.
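
'Resize' was added in newer torchvision releases as the replacement for the older 'Scale' transform, so this error usually means the installed torchvision predates it (independently of the torch version). A hedged compatibility shim:

    try:
        from torchvision.transforms import Resize
    except ImportError:
        # Older torchvision only ships Scale, which Resize later replaced.
        from torchvision.transforms import Scale as Resize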

What does class D_DownBlock(torch.nn.Module) mean? What does num_stages mean?

    class D_DownBlock(torch.nn.Module):
        def __init__(self, num_filter, kernel_size=8, stride=4, padding=2, num_stages=1, bias=True, activation='prelu', norm=None):
            super(D_DownBlock, self).__init__()
            self.conv = ConvBlock(num_filter*num_stages, num_filter, 1, 1, 0, activation, norm=None)
            self.down_conv1 = ConvBlock(num_filter, num_filter, kernel_size, stride, padding, activation, norm=None)
            self.down_conv2 = DeconvBlock(num_filter, num_filter, kernel_size, stride, padding, activation, norm=None)
            self.down_conv3 = ConvBlock(num_filter, num_filter, kernel_size, stride, padding, activation, norm=None)

Why does it start with conv, then down_conv1, down_conv2, down_conv3? What is the difference from DownBlock? What does num_stages mean? Thank you.

Pretrained model does not work

python eval.py --upscale_factor 2
Namespace(chop_forward=True, gpu_mode=True, gpus=1, input_dir='Input', model='models/NTIRE2018_x8.pth', model_type='DBPNLL', output='Results/', seed=123, self_ensemble=True, testBatchSize=1, test_dataset='DIV2K_valid_LR_x8', threads=1, upscale_factor=2)
===> Loading datasets
===> Building model
/home/kenwood/DBPN-Pytorch/dbpn_v1.py:53: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
  torch.nn.init.kaiming_normal(m.weight)
/home/kenwood/DBPN-Pytorch/dbpn_v1.py:57: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
  torch.nn.init.kaiming_normal(m.weight)
Traceback (most recent call last):
  File "eval.py", line 63, in <module>
    model.load_state_dict(torch.load(opt.model, map_location=lambda storage, loc: storage))
  File "/root/.pyenv/versions/3.5.3/lib/python3.5/site-packages/torch/nn/modules/module.py", line 721, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DataParallel:
	While copying the parameter named "module.up1.up_conv1.deconv.weight", whose dimensions in the model are torch.Size([64, 64, 6, 6]) and whose dimensions in the checkpoint are torch.Size([64, 64, 12, 12]).

Training

Hello,

Thanks for your impressive work here.
I am interested in training a DBPN model with my own data, so could you please share a few details on the setup, such as where to put the training data (LR and HR) and the required data format?

Thank you
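
The Namespace dump in a later issue (the out-of-memory report below) suggests, though the README does not document it, that data lives under --data_dir in paired HR/LR folders named by the dataset flags. An invocation consistent with that dump, reusing its folder names:

    python3 main.py --data_dir ./Input --hr_train_dataset VOC2012_train_HR --train_dataset VOC2012_train_LR_x2 --upscale_factor 2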

KeyError: 'unexpected key "module.feat0.conv.weight" in state_dict'

Using pytorch 0.3.1 and torchvision 0.2.0, I am getting the following error when running eval.py:

Namespace(chop_forward=True, gpu_mode=False, gpus=1, input_dir='Input', model='models/NTIRE2018_x8.pth', model_type='DBPNLL', output='Results/', seed=123, self_ensemble=True, testBatchSize=1, test_dataset='DIV2K_valid_LR_x8', threads=1, upscale_factor=8)
===> Loading datasets
===> Building model
Traceback (most recent call last):
  File "eval.py", line 64, in <module>
    model.load_state_dict(torch.load(opt.model, map_location=lambda storage, loc: storage))
  File "/usr/lib/python3.6/site-packages/torch/nn/modules/module.py", line 522, in load_state_dict
    .format(name))
KeyError: 'unexpected key "module.feat0.conv.weight" in state_dict'

I am using the default parameters on the command line:

python eval.py

Any ideas?

Running bug: 'from: too many arguments'

Hi,

Thanks for your fantastic work!

When I run the code, it keeps printing 'from: too many arguments'. I have googled this but unfortunately did not find a useful solution to the problem.

Has anyone else come across this problem? Does anyone have ideas on how to deal with it?

Many thanks

Alex

One argument seems a little misleading in main.py

    parser.add_argument('--patch_size', type=int, default=40, help='Size of cropped HR image')

I read the code of the functions related to this argument. Is 'patch_size' actually the size of the net input? For example, for a DBPN x4 model, if 'patch_size' is the input size 40x40, the hr_train_dataset should be a set of images sized 160x160. Is that right?

Cannot download the zip file in any way

I can't download the zip file; I have tried different browsers and different networks. So I am wondering whether the zip itself is broken or something has gone wrong. Can anybody help me?

Set14 test

There is a picture (bridge) in Set14 that is single-channel, but the trained model requires the input picture to be three-channel. How did you solve this problem in the experiment?
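
A common workaround (a sketch, assuming images are loaded with PIL, as torchvision pipelines typically do) is to force three channels at load time:

    from PIL import Image

    def load_rgb(path):
        # Single-channel images (e.g. Set14's 'bridge') are expanded to three
        # identical channels so they match the model's expected input.
        return Image.open(path).convert('RGB')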

Error when using eval.py on pretrained models: size mismatch for module.output_conv.conv.weight: copying a param with shape torch.Size([3, 448, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 640, 3, 3]).

Hi,

I am just trying your pre-trained models.
This commands works fine for me:
python eval.py --model './DBPNLL_x8.pth'

But using any other model, with or without changing the parameter "upscale_factor" leads to an error:
python eval.py --upscale_factor 4 --model './DBPN_x4.pth'
or
python eval.py --model './DBPN_x4.pth'
or
python eval.py --model './DBPN_x8.pth'

==>
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.down7.conv.conv.weight", "module.down7.conv.conv.bias", "module.down7.conv.act.weight", "module.down7.down_conv1.conv.weight", "module.down7.down_conv1.conv.bias", "module.down7.down_conv1.act.weight", "module.down7.down_conv2.deconv.weight", "module.down7.down_conv2.deconv.bias", "module.down7.down_conv2.act.weight", "module.down7.down_conv3.conv.weight", "module.down7.down_conv3.conv.bias", "module.down7.down_conv3.act.weight", "module.up8.conv.conv.weight", "module.up8.conv.conv.bias", "module.up8.conv.act.weight", "module.up8.up_conv1.deconv.weight", "module.up8.up_conv1.deconv.bias", "module.up8.up_conv1.act.weight", "module.up8.up_conv2.conv.weight", "module.up8.up_conv2.conv.bias", "module.up8.up_conv2.act.weight", "module.up8.up_conv3.deconv.weight", "module.up8.up_conv3.deconv.bias", "module.up8.up_conv3.act.weight", "module.down8.conv.conv.weight", "module.down8.conv.conv.bias", "module.down8.conv.act.weight", "module.down8.down_conv1.conv.weight", "module.down8.down_conv1.conv.bias", "module.down8.down_conv1.act.weight", "module.down8.down_conv2.deconv.weight", "module.down8.down_conv2.deconv.bias", "module.down8.down_conv2.act.weight", "module.down8.down_conv3.conv.weight", "module.down8.down_conv3.conv.bias", "module.down8.down_conv3.act.weight", "module.up9.conv.conv.weight", "module.up9.conv.conv.bias", "module.up9.conv.act.weight", "module.up9.up_conv1.deconv.weight", "module.up9.up_conv1.deconv.bias", "module.up9.up_conv1.act.weight", "module.up9.up_conv2.conv.weight", "module.up9.up_conv2.conv.bias", "module.up9.up_conv2.act.weight", "module.up9.up_conv3.deconv.weight", "module.up9.up_conv3.deconv.bias", "module.up9.up_conv3.act.weight", "module.down9.conv.conv.weight", "module.down9.conv.conv.bias", "module.down9.conv.act.weight", "module.down9.down_conv1.conv.weight", "module.down9.down_conv1.conv.bias", "module.down9.down_conv1.act.weight", "module.down9.down_conv2.deconv.weight", "module.down9.down_conv2.deconv.bias", "module.down9.down_conv2.act.weight", "module.down9.down_conv3.conv.weight", "module.down9.down_conv3.conv.bias", "module.down9.down_conv3.act.weight", "module.up10.conv.conv.weight", "module.up10.conv.conv.bias", "module.up10.conv.act.weight", "module.up10.up_conv1.deconv.weight", "module.up10.up_conv1.deconv.bias", "module.up10.up_conv1.act.weight", "module.up10.up_conv2.conv.weight", "module.up10.up_conv2.conv.bias", "module.up10.up_conv2.act.weight", "module.up10.up_conv3.deconv.weight", "module.up10.up_conv3.deconv.bias", "module.up10.up_conv3.act.weight".
size mismatch for module.output_conv.conv.weight: copying a param with shape torch.Size([3, 448, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 640, 3, 3]).

Thank you in advance for your help.

RuntimeError: cuda runtime error (2) : out of memory

The environment is PyTorch 0.3.1, torchvision 0.2.0, CUDA 9.0, cuDNN 7, on a Tesla K80. I think there is no problem with the image size and batch size being handled, but I still get a memory error. How can I solve it?

(miniconda3-latest) kuroyanagi@p-b-s-r001:~/DBPN-Pytorch$ python main.py
Namespace(batchSize=1, data_augmentation=True, data_dir='./Input', gpu_mode=True, gpus=1, hr_test_dataset='VOC2012_valid_HR', hr_train_dataset='VOC2012_train_HR', lr=0.0001, model_type='DBPN', nEpochs=2000, patch_size=32, prefix='dbpn', pretrained=False, pretrained_sr=None, save_folder='weights/', seed=123, snapshots=100, testBatchSize=5, test_dataset='VOC2012_valid_LR_x2', threads=1, train_dataset='VOC2012_train_LR_x2', upscale_factor=2)
===> Loading datasets
===> Building model  DBPN
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/THCStorage.cu line=58 error=2 : out of memory
Traceback (most recent call last):
  File "main.py", line 117, in <module>
    model = torch.nn.DataParallel(model, device_ids=gpus_list)
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 64, in __init__
    self.module.cuda(device_ids[0])
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 216, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 146, in _apply
    module._apply(fn)
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 146, in _apply
    module._apply(fn)
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 146, in _apply
    module._apply(fn)
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 152, in _apply
    param.data = fn(param.data)
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/nn/modules/module.py", line 216, in <lambda>
    return self._apply(lambda t: t.cuda(device))
  File "/home/[email protected]/.pyenv/versions/miniconda3-latest/lib/python3.6/site-packages/torch/_utils.py", line 69, in _cuda
    return new_type(self.size()).copy_(self, async)
RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/THC/generic/THCStorage.cu:58
(miniconda3-latest) kuroyanagi@p-b-s-r001:~/DBPN-Pytorch$

How to use correctly

I am using my own input data with your predefined weights, and it does not work in my context. I am trying to read text from images, for example a pixelated number plate. I think it is because your models are trained on a dataset that is not text-like, and that is why I get a different result. Does the input data need to have the same resolution and size as the data used for your training?

Are there any conditions that need to be fulfilled or considered for the model to work well? Thanks

DBPN_x8.pth load error

Hello, when I try to use the DBPN_x8 model, the error below occurs:

    RuntimeError: invalid argument 2: sizes do not match at /pytorch/torch/lib/THC/generic/THCTensorCopy.c:48

Is DBPN_x8.pth not for x8? Is there any difference from NTIRE2018_x8.pth?

normalization for DBPNITER

Why do you normalize using the VGG mean/std only for DBPNLL and DBPN but not for DBPNITER? I can't find anything in the paper about that.

Which one is better method for upblock/downblock?

Hello,
Thank you for your work!
I am wondering which one is better for the up/down blocks:

  • UpBlock vs UpBlockPix?
  • DownBlock vs DownBlockPix?

UpBlock and DownBlock are used in your code (base_networks.py).
Were they chosen because they are better than the alternatives?

Error with pretrained model

Running the code without making any change gives the following error:

RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: (the same "module.down7..." through "module.up10..." list as in the issue above)
size mismatch for module.output_conv.conv.weight: copying a param with shape torch.Size([3, 448, 3, 3]) from checkpoint, the shape in current model is torch.Size([3, 640, 3, 3]).

I had to change model_type to DBPN on line number 34 and line number 36.
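
The same fix can be applied from the command line instead of editing the script. Based on the flags visible in the Namespace dumps above, matching the checkpoint to its architecture would look like:

    python eval.py --model_type DBPN --upscale_factor 4 --model './DBPN_x4.pth'

This is consistent with the error itself: the default DBPNLL builds stages up through up10 (T = 10), which the T = 7 DBPN checkpoints do not contain, hence the missing down7...up10 keys.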

training dataset

Hello, could you share the x8 training dataset? The default DIV2K has no x8 images.

PIRM2018 Models

Thank you for your impressive work.
Could I ask for the PIRM2018 models?
Do you use adversarial training?
Thank you

Wrong output from pretrained model 'DBPN-RES-MR64-3'

Hi! I tested my trained model (x4) and your pretrained models (x2, x4, x8) of type 'DBPN-RES-MR64-3' on Set5, but I got results like this:

[image]

Here are my test settings:

    parser = argparse.ArgumentParser(description='PyTorch Super Res Example')
    parser.add_argument('--upscale_factor', type=int, default=2, help="super resolution upscale factor")
    parser.add_argument('--testBatchSize', type=int, default=1, help='testing batch size')
    parser.add_argument('--gpu_mode', type=bool, default=True)
    parser.add_argument('--self_ensemble', type=bool, default=False)
    parser.add_argument('--chop_forward', type=bool, default=False)
    parser.add_argument('--threads', type=int, default=1, help='number of threads for data loader to use')
    parser.add_argument('--seed', type=int, default=123, help='random seed to use. Default=123')
    parser.add_argument('--gpus', default=1, type=int, help='number of gpu')
    parser.add_argument('--input_dir', type=str, default='Input')
    parser.add_argument('--output', default='Results/', help='Location to save checkpoint models')
    parser.add_argument('--test_dataset', type=str, default='Set5_LR_x2')
    parser.add_argument('--model_type', type=str, default='DBPN-RES-MR64-3')
    parser.add_argument('--residual', type=bool, default=False)
    #parser.add_argument('--model', default='weights/DIV2K_train_HR_size160_step800maiDBPN-RES-MR64-3tpami_residual_filter8_epoch_99.pth', help='sr pretrained base model')
    parser.add_argument('--model', default='models/DBPN-RES-MR64-3_2x.pth', help='sr pretrained base model')

But when I test other models like 'DBPNLL' and 'DBPN' with different factors, I get correct results. Could you please help me out? Your early reply will be appreciated. Thank you very much!

Out of memory for upscale factor 2 on 256x256 images

I'm running on a GTX 1080 Ti. I can upscale 128x128 images by 2x, but the results are quite bad (even the colors are all off); on 256x256 images I run out of memory.

    python eval.py --upscale_factor=2 --model_type=DBPN --test_dataset=dset --model=models/DBPN_x2.pth

What are the GPU RAM expectations? If I add another 1080, will it solve the problem?

Thank you
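
One knob worth trying, on the assumption (from its name and the eval Namespace dumps above) that --chop_forward splits the input into patches before inference to bound memory use:

    python eval.py --upscale_factor=2 --model_type=DBPN --test_dataset=dset --model=models/DBPN_x2.pth --chop_forward=True

Note that a second GPU would only help if evaluation splits work across devices; it does not increase the memory available to a single forward pass on one card.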

RuntimeError: cuda runtime error (30) : unknown error at ..\aten\src\THC\THCGeneral.cpp:87

I tried to run eval.py but got this error:

python eval.py
Namespace(chop_forward=False, gpu_mode=True, gpus=1, input_dir='Input', model='models/DBPN_x8.pth', model_type='DBPN', output='Results/', residual=False, seed=123, self_ensemble=False, testBatchSize=1, test_dataset='MySet', threads=1, upscale_factor=8)
===> Loading datasets
===> Building model
THCudaCheck FAIL file=..\aten\src\THC\THCGeneral.cpp line=87 error=30 : unknown error
Traceback (most recent call last):
  File "eval.py", line 64, in <module>
    model = torch.nn.DataParallel(model, device_ids=gpus_list)
  File "C:\Users\127051\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\nn\parallel\data_parallel.py", line 131, in __init__
    _check_balance(self.device_ids)
  File "C:\Users\127051\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\nn\parallel\data_parallel.py", line 18, in _check_balance
    dev_props = [torch.cuda.get_device_properties(i) for i in device_ids]
  File "C:\Users\127051\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\nn\parallel\data_parallel.py", line 18, in <listcomp>
    dev_props = [torch.cuda.get_device_properties(i) for i in device_ids]
  File "C:\Users\127051\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\cuda\__init__.py", line 298, in get_device_properties
    init()  # will define _get_device_properties and _CudaDeviceProperties
  File "C:\Users\127051\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\cuda\__init__.py", line 144, in init
    _lazy_init()
  File "C:\Users\127051\AppData\Local\Programs\Python\Python35\lib\site-packages\torch\cuda\__init__.py", line 162, in _lazy_init
    torch._C._cuda_init()

os : windows 10
python : 3.5.4
torch : 1.0.1
