python train.py --train-file "/BLAH_BLAH/SR/91-image_x3.h5" --eval-file "/BLAH_BLAH/SR/Set5_x3.h5" --outputs-dir "/BLAH_BLAH/model_run/SR_outputs" --scale 2 --lr 1e-3 --batch-size 16 --num-epochs 200 --num-workers 8 --seed 123
epoch: 0/199: 0%| | 0/2688 [00:00<?, ?it/s]
***/code/ESPCN-pytorch/venv/lib/python3.7/site-packages/torch/nn/modules/loss.py:445: UserWarning: Using a target size (torch.Size([16, 1, 51, 51])) that is different to the input size (torch.Size([16, 1, 34, 34])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
Traceback (most recent call last):
  File "train.py", line 79, in <module>
    loss = criterion(preds, labels)
  File "//code/ESPCN-pytorch/venv/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "//code/ESPCN-pytorch/venv/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 445, in forward
    return F.mse_loss(input, target, reduction=self.reduction)
  File "//code/ESPCN-pytorch/venv/lib/python3.7/site-packages/torch/nn/functional.py", line 2647, in mse_loss
    expanded_input, expanded_target = torch.broadcast_tensors(input, target)
  File "/***/code/ESPCN-pytorch/venv/lib/python3.7/site-packages/torch/functional.py", line 65, in broadcast_tensors
    return _VF.broadcast_tensors(tensors)
RuntimeError: The size of tensor a (34) must match the size of tensor b (51) at non-singleton dimension 3
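The two sizes in the error already encode the mismatch. A quick back-of-the-envelope check (plain Python, not part of the repo; the function name is mine):

```python
def implied_label_scale(pred_size: int, label_size: int, scale: int):
    """From the spatial sizes in the MSE error, recover the scale factor
    the stored labels were actually built with.

    pred_size  -- width of the network output (34 in the traceback)
    label_size -- width of the .h5 labels (51 in the traceback)
    scale      -- the --scale flag passed to train.py (2 here)
    """
    lr_patch = pred_size // scale          # low-res patch fed to the model
    return lr_patch, label_size / lr_patch  # implied scale of the labels

lr_patch, label_scale = implied_label_scale(34, 51, 2)
print(lr_patch, label_scale)  # 17 3.0: 17x2=34 predictions, 17x3=51 labels
```

So the network upscales 17-pixel patches by 2 while the labels in the file are 17x3.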
I assume that somewhere in the .h5 file there is a hard-coded image that was created using a scale factor of 3. Is that correct? It seems that some images do not scale properly with the dataset provided in the README.md.
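One way to answer this directly is to inspect the patch sizes stored in the file. A minimal sketch with h5py, assuming the file uses the `lr`/`hr` dataset keys that the repo's prepare script typically writes (check with `h5ls` if yours differ):

```python
import h5py

def implied_scale(h5_path: str, lr_key: str = "lr", hr_key: str = "hr") -> float:
    """Return the HR/LR patch-width ratio stored in an ESPCN-style .h5 file.

    The dataset keys 'lr' and 'hr' are an assumption; adjust them to match
    whatever names your prepare step actually used.
    """
    with h5py.File(h5_path, "r") as f:
        lr_w = f[lr_key].shape[-1]  # width of the low-res patches
        hr_w = f[hr_key].shape[-1]  # width of the high-res labels
    return hr_w / lr_w
```

If this prints 3.0 for 91-image_x3.h5, the file is consistent with its name and the problem is simply that --scale must match the scale the .h5 was built with.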