rajeevyasarla / umsn-face-deblurring
Deblurring Face Images using Uncertainty Guided Multi-Stream Semantic Networks
License: MIT License
Hello, first of all, thank you for your work. I repeated your experiment on a 2080 Ti with 11 GB of memory. The PSNR/SSIM results I obtained on the Helen and CelebA test sets are 26.66/0.804 and 25.61/0.763 respectively, which are quite different from the results in the paper. What is the reason for this, and how can I achieve results similar to your paper's?
This is my training configuration:
Zhang CelebA train_datasets: 10000
batch_size: 4
Epoch: 60
The other parameters are unchanged from the defaults in the code. Looking forward to your reply. Thank you very much.
This makes the Python files fail with binary-related errors.
Hello, during testing I used the Helen test dataset, the network, and the best model you provided, but the final average PSNR I got is only 20.15 dB. To reach the PSNR reported in the paper, is there anything else I need to pay attention to?
Thank you!
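Gaps like this often come down to the evaluation protocol rather than the model. As a point of comparison, here is a minimal PSNR sketch; whether the paper computes it on uint8 RGB, on the Y channel only, or with border cropping is an assumption you would need to verify:

```python
import numpy as np

def psnr(ref, est, data_range=255.0):
    """Peak signal-to-noise ratio in dB. Results shift depending on whether
    it is computed on uint8 RGB, on the Y channel only, or with borders
    cropped; a mismatch there is a common source of dB-level differences."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(est, float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Comparing the same image pair under each variant of the protocol usually pinpoints where the discrepancy comes from.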
Dear Author,
I was recently trying to compare against your paper's approach, but encountered two difficulties along the way.
I hope to get your help; at the same time I want to accurately and faithfully show the performance of prior models, so as to make a small contribution to the deblurring field. Thank you!
PS: I'd love to get the relevant code if possible, as I can't seem to find it on GitHub. My email address is [email protected]
I installed the dependencies, but am not able to run. This is the log:
Traceback (most recent call last):
File "test_face_deblur.py", line 13, in <module>
import torchvision.utils as vutils
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/__init__.py", line 1, in <module>
from torchvision import models
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/__init__.py", line 12, in <module>
from . import detection
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/__init__.py", line 1, in <module>
from .faster_rcnn import *
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/faster_rcnn.py", line 13, in <module>
from .rpn import AnchorGenerator, RPNHead, RegionProposalNetwork
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/rpn.py", line 8, in <module>
from . import _utils as det_utils
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/_utils.py", line 74, in <module>
@torch.jit.script
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/__init__.py", line 364, in script
graph = _script_graph(fn, _frames_up=_frames_up + 1)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/__init__.py", line 359, in _script_graph
ast = get_jit_ast(fn)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 132, in get_jit_ast
return build_def(SourceRangeFactory(source), py_ast.body[0])
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 151, in build_def
build_stmts(ctx, body))
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 123, in build_stmts
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 123, in <listcomp>
stmts = [build_stmt(ctx, s) for s in stmts]
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
return method(ctx, node)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 205, in build_Assign
rhs = build_expr(ctx, stmt.value)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
return method(ctx, node)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 314, in build_Call
func = build_expr(ctx, expr.func)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
return method(ctx, node)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 300, in build_Attribute
value = build_expr(ctx, expr.value)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
return method(ctx, node)
File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 422, in build_Subscript
raise NotSupportedError(base.range(), "slicing multiple dimensions at the same time isn't supported yet")
torch.jit.frontend.NotSupportedError: slicing multiple dimensions at the same time isn't supported yet
proposals (Tensor): boxes to be encoded
"""
# perform some unpacking to make it JIT-fusion friendly
wx = weights[0]
wy = weights[1]
ww = weights[2]
wh = weights[3]
proposals_x1 = proposals[:, 0].unsqueeze(1)
~~~~~~~~~ <--- HERE
proposals_y1 = proposals[:, 1].unsqueeze(1)
proposals_x2 = proposals[:, 2].unsqueeze(1)
proposals_y2 = proposals[:, 3].unsqueeze(1)
reference_boxes_x1 = reference_boxes[:, 0].unsqueeze(1)
reference_boxes_y1 = reference_boxes[:, 1].unsqueeze(1)
reference_boxes_x2 = reference_boxes[:, 2].unsqueeze(1)
reference_boxes_y2 = reference_boxes[:, 3].unsqueeze(1)
It would be a great help if you could update the documentation to something that works directly, or provide a Dockerfile or similar.
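The traceback above is typically a torch/torchvision version mismatch: torchvision's detection models use torch.jit features that older torch builds do not support. A small sketch of a pairing check; the version table here is an assumption taken from the release notes and should be verified for your setup:

```python
# Assumed known-good torch/torchvision pairings (verify against the
# torchvision release notes for your exact versions):
COMPATIBLE = {
    "0.2.2": ("0.4.1", "1.0.0", "1.0.1"),
    "0.3.0": ("1.1.0",),
    "0.4.0": ("1.2.0",),
}

def versions_match(torch_version: str, torchvision_version: str) -> bool:
    """True if the installed pair is one of the known-good combinations."""
    base = lambda v: v.split("+")[0]  # drop local tags like '+cu100'
    return base(torch_version) in COMPATIBLE.get(base(torchvision_version), ())
```

In practice, printing `torch.__version__` and `torchvision.__version__` and reinstalling a matching pair resolves this class of JIT frontend error.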
While running test_face_deblur.py, I see AttributeError: 'module' object has no attribute 'interpolate' at line 236. The problem comes from the function torch.nn.functional.interpolate.
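`F.interpolate` only exists in newer PyTorch releases; older builds expose `F.upsample` instead. A hedged compatibility wrapper is sketched below (the functional module `F` is passed in as a parameter so the sketch stays framework-agnostic; the call signature assumes bilinear resizing as in the reported line):

```python
def resize(feature_map, size, F):
    """Resize a feature map using whichever API the installed
    torch.nn.functional offers: interpolate (newer releases) or the
    older upsample."""
    fn = getattr(F, "interpolate", None) or getattr(F, "upsample")
    return fn(feature_map, size=size, mode="bilinear")
```

Alternatively, upgrading PyTorch to a release that ships `F.interpolate` avoids the shim entirely.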
Hi, I am trying to train with my own data, but I am lost in the process. The documentation says the code creates the blurry images during training, so only sharp images are required, right?
So dataroot and valDataroot are folders with sharp images?
What format do I need to use for these? Does it only work with %05d.png files?
Thank you!
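If the loader really does expect zero-padded %05d.png names (an assumption based on the question; check the dataset class in the repo to confirm), sharp PNGs can be copied into that scheme like this:

```shell
# Copy ./sharp/*.png into ./dataroot/ as 00000.png, 00001.png, ...
# (assumes the images are already PNG; convert them first otherwise)
mkdir -p ./dataroot
i=0
for f in ./sharp/*.png; do
  [ -e "$f" ] || continue   # glob matched nothing
  printf -v name '%05d.png' "$i"
  cp "$f" "./dataroot/$name"
  i=$((i + 1))
done
```

The same loop works for valDataroot with a different target directory.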
Hi, can you provide your SSIM code? I used the SSIM implementation from the scikit-image library, but the SSIM values I get are quite different from what you reported in the paper. Thank you.
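SSIM implementations disagree mainly in windowing and data_range handling: the paper-standard metric uses an 11x11 Gaussian sliding window, while the simplification below uses a single global window, so it is only a sanity check, not a replacement for the authors' exact code:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over the whole image (a simplification; the
    standard metric averages SSIM over an 11x11 Gaussian sliding window,
    which generally yields different, usually lower, values)."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

When comparing against reported numbers, also check whether SSIM was computed per channel or on a grayscale/Y conversion, and whether `data_range` matches the image dtype.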
While testing on blurred images, I have run into some questions.
(1) Should valDataroot contain blurred images or deblurred images?
(2) How is valDataroot different from dataroot, and what is valDataroot used for?
When I trained this network, I found that kernels.mat in the files contains zeros. How should I use this blur kernel?
Thanks for your reply.
Code to Reproduce:
python test_face_deblur.py --dataroot ./facades/github/ --valDataroot ./Testh/ --netG ./pretrained_models/Deblur_epoch_Best.pth
Error:
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
Thank you for sharing the code! I ran into this error and would appreciate any guidance on how to resolve this. Thanks!
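This RuntimeError comes from Python's multiprocessing on platforms that spawn worker processes (Windows in particular): the DataLoader workers re-import the script, so its top-level code must sit behind a main guard. A minimal sketch of the pattern follows; the internal structure of test_face_deblur.py is an assumption here:

```python
def main():
    # Build the dataset and DataLoader (num_workers > 0 is what spawns
    # the child processes), load the model, and run the test loop here.
    pass

if __name__ == '__main__':
    # Executed only in the parent process; spawned workers re-import the
    # module and skip this block, which avoids the bootstrapping error.
    main()
```

Setting num_workers=0 on the DataLoader is a quick workaround if restructuring the script is not an option.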
Hello, could you please upload the training code for the segmentation network? Thank you very much.
Downloaded from GitHub:
input: python test_face_deblur.py --dataroot ./facades/github/ --valDataroot <path_to_test_data> --netG ./pretrained_models/Deblur_epoch_Best.pth
The error is as below:
ImportError: bad magic number in 'myutils': b'\x03\xf3\r\n'
Is anything wrong with my setup? Thanks.
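The magic bytes b'\x03\xf3' are CPython 2.7's bytecode signature, so a stale myutils.pyc compiled under Python 2 is most likely shadowing the source file. Removing the compiled caches forces a clean recompile under your current interpreter (a sketch; run from the repository root):

```shell
# Delete stale bytecode so Python recompiles myutils from source
find . -name '*.pyc' -delete
find . -name '__pycache__' -type d -prune -exec rm -rf {} +
```

If myutils.pyc was shipped without a matching myutils.py, the module would need to be obtained as source instead.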
Hello, I trained with the command "python train_face_deblur.py --dataroot <path_to_train_data> --valDataroot ./facades/github/ --exp ./face_deblur --batchSize 10", using sharp images, for 1000 epochs. Then I used the saved model to test on blurred face images, but the results are very bad: the test output is even blurrier than the input. Is the reason that my number of epochs is not enough, or something else? Thanks.
Hello, thank you for sharing such a good paper on deblurring!
I am a student currently working on ways to optimize face deblurring,
and I would like to refer to the code you use to generate the blur kernels.
Thank you!
Hello, first of all, thank you for providing the code of this paper for us to learn from. Regarding blur kernels, I saw there is a kernels.mat file in your code; after checking, its size is 29x29x25000. Does this .mat file include all the blur kernel sizes ranging from 13x13 to 29x29 mentioned in your paper? If not, could I get the other types of blur kernels? Looking forward to your reply. Thank you very much.
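For reference, applying one kernel from a (29, 29, N) bank such as kernels.mat could look like the sketch below. Loading via scipy.io.loadmat and the variable name inside the .mat file are assumptions; note that kernels smaller than 29x29 stored in such an array would be zero-padded to 29x29, which would also explain rows and columns of zeros around them:

```python
import numpy as np

def blur_with_kernel(img, kernel):
    """Blur a 2-D image with one kernel ('same' output size, edge padding).
    This is cross-correlation, identical to convolution for symmetric
    kernels; apply per channel for RGB images."""
    k = kernel / kernel.sum()          # keep overall brightness unchanged
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode='edge')
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * k).sum()
    return out
```

In practice `scipy.signal.convolve2d` or an FFT-based convolution would replace the explicit loops; the sketch stays dependency-light for clarity.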