umsn-face-deblurring's People

Contributors

gupta-keshav, mowshon, rajeevyasarla

umsn-face-deblurring's Issues

How to train with custom data?

Hi, I am trying to train with my own data, but I am lost in the process. The documentation says that the code creates the blurry images during training, so only sharp images are required, right?
So are dataroot and valDataroot folders of sharp images?
What naming format do I need to use for them? Does it only work with %05d.png files?

Thank you!
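If the loader really does require zero-padded %05d.png names (I have not verified this against the repo's data loader), a quick copy-and-rename pass over an existing image folder gets the data into that shape. `rename_sequential` is a hypothetical helper, not part of the repo:

```python
import pathlib
import shutil

def rename_sequential(src_dir, dst_dir, ext='.png'):
    """Copy every image in src_dir into dst_dir renamed 00000.png, 00001.png, ...

    Hypothetical helper: it only produces the %05d.png pattern asked about
    above; the repo's loader may well accept other names too.
    """
    dst = pathlib.Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for i, p in enumerate(sorted(pathlib.Path(src_dir).glob('*' + ext))):
        shutil.copy(p, dst / ('%05d%s' % (i, ext)))
    return sorted(q.name for q in dst.glob('*' + ext))
```

Copying (rather than renaming in place) keeps the original dataset untouched in case the assumption about the naming scheme is wrong.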

test questions

Hi, can you provide the SSIM code? I used the SSIM implementation from the scikit-image library, but the SSIM values I get are quite different from those reported in the paper. Thank you.
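A frequent cause of SSIM mismatches is the evaluation setup rather than the model: SSIM lives in `skimage.metrics.structural_similarity`, and its value depends on `data_range` (255 vs 1.0), the window size, and whether it is computed on RGB or luminance. As a sanity check for the formula itself, here is a simplified single-window SSIM; it has no sliding Gaussian window, so it will not exactly match the windowed version used in papers:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM over the whole image (illustrative only).

    Papers normally report the windowed variant (e.g. skimage's
    structural_similarity), so expect small numeric differences.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2   # stabilizers from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Passing the wrong `data_range` (e.g. 1.0 for uint8 images) shifts the constants c1/c2 and is one of the most common reasons two SSIM numbers disagree.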

About blur kernels

Hello, first of all, thank you for providing the code for this paper. Regarding blur kernels: I saw a Kernel.mat file in your code, and after checking I found its size is 29x29x25000. Does this .mat file include all the blur kernels with sizes ranging from 13x13 to 29x29 mentioned in your paper? If not, could I get the other types of blur kernels? Looking forward to your reply. Thank you very much.
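For anyone else inspecting the kernel file: a 29x29x25000 shape suggests 25000 kernels each stored on a 29x29 canvas, so smaller kernels (13x13 and so on) would simply be zero-padded to 29x29. A sketch for loading and checking this with scipy; the variable name inside the .mat file is an assumption, so list the keys first:

```python
import numpy as np
import scipy.io

def load_kernels(path, var_name=None):
    """Load a stack of blur kernels from a .mat file.

    var_name is a guess: .mat files store named variables, so inspect the
    returned keys if you do not know the actual name used in Kernel.mat.
    """
    mat = scipy.io.loadmat(path)
    keys = [k for k in mat if not k.startswith('__')]  # drop loadmat metadata
    name = var_name if var_name is not None else keys[0]
    kernels = np.asarray(mat[name])
    return kernels, keys
```

With the real file one could then check, e.g., how many kernels have non-zero support smaller than 29x29 to test the zero-padding hypothesis.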

Question about kernels generation code

Hello, thanks for sharing such a good paper on deblurring!
I am a student currently working on ways to optimize the deblurring of human faces, and I would like to refer to your code for generating the blur kernels.
Thanks!

Train question

Hello, I trained with "python train_face_deblur.py --dataroot <path_to_train_data> --valDataroot ./facades/github/ --exp ./face_deblur --batchSize 10", using sharp images, for 1000 epochs. Then I used the saved model to test a blurry face image, but the result is very bad: the output is even blurrier than the input. Is the reason that 1000 epochs is not enough, or something else? Thanks.

Test results lower than in the paper

Hello, during testing I used the Helen test dataset with the network and the best model you provided, but the final average PSNR I get is only 20.15 dB. To reach the PSNR reported in the paper, is there anything else I need to pay attention to?
Thank you!
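PSNR gaps of several dB often come from evaluation details rather than the model: the assumed data range (255 vs 1.0), computing on RGB versus the Y channel only, and whether image borders are cropped. For reference, the standard definition as a sketch; the paper's exact evaluation protocol may differ:

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float('inf')            # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Comparing your own `psnr` output against the repo's evaluation script on a few images is a quick way to localize where a 20.15 dB vs paper-number discrepancy comes from.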

Blur kernels wrong

When I trained this network, I found that the kernels.mat file in the repository is all zeros. How should I use this blur kernel?
Thanks for your reply.
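An all-zero kernels.mat usually means a corrupted download (for example a Git LFS pointer file fetched instead of the real binary), so re-downloading the file is worth trying first. Once the kernels load as non-zero, applying one to an image is a plain 2-D convolution; a naive grayscale sketch (real code would use FFT or scipy.ndimage for speed):

```python
import numpy as np

def apply_blur(img, kernel):
    """Blur a 2-D grayscale image with a kernel ('same' output size).

    This computes correlation, which equals convolution for the typically
    symmetric blur kernels. Kernels should sum to ~1 to preserve
    brightness; an all-zero kernel produces an all-black output, matching
    the symptom described above.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(np.asarray(img, dtype=np.float64),
                    ((ph, ph), (pw, pw)), mode='edge')
    out = np.empty(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

Checking `kernel.sum()` before use is a cheap guard against exactly this zero-kernel failure mode.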

The training results

Hello, first of all, thank you for your work. I repeated your experiment on a 2080 Ti with 11 GB of memory. My PSNR/SSIM results on the Helen and CelebA test sets are 26.66/0.804 and 25.61/0.763 respectively, which are quite different from the results in the paper. What could be the reason, and how can I achieve results similar to yours?
My training configuration:
Zhang CelebA training set: 10000 images
batch size: 4
epochs: 60
The other parameters are unchanged from the defaults in the code. Looking forward to your reply. Thank you very much.

Test result is 'empty'

Hello, using the default model to test the recommended images, the result is not what we expected; the prediction looks like the image below:
[screenshot]

Please tell me how to resolve the problem, thanks.

RuntimeError when testing

Code to Reproduce:
python test_face_deblur.py --dataroot ./facades/github/ --valDataroot ./Testh/ --netG ./pretrained_models/Deblur_epoch_Best.pth

Error:

RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

Thank you for sharing the code! I ran into this error and would appreciate any guidance on how to resolve this. Thanks!
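This traceback is the standard multiprocessing error on platforms that use the "spawn" start method (notably Windows): DataLoader worker processes re-import the script, so any unguarded top-level code runs again recursively. The usual fix, assuming test_face_deblur.py currently runs everything at module level, is to wrap the script body in a main() guard; setting the DataLoader's num_workers to 0 also sidesteps the problem. A minimal sketch of the pattern:

```python
def main():
    # Everything currently at the top level of test_face_deblur.py would
    # move here: build the dataset, create the DataLoader (num_workers > 0
    # is what triggers the spawn), load netG, and run inference.
    return 'ran inference'

if __name__ == '__main__':
    # Spawned workers re-import this module; the guard keeps them from
    # re-running main() and recursing into process creation.
    main()
```

freeze_support() from the error message is only needed when freezing the script into an executable, as the message itself notes.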

Face recognition score too high, and DeblurGANv2 test metrics too poor

Dear author,
I was recently trying to compare against your paper's approach, but encountered two difficulties along the way.

  1. The face recognition score is too high compared to what your paper reports (Top-1 90%).
    My method: for each of the 8000 restored images, I use ten sharp images as the gallery set (containing the corresponding sharp image), compute distances with the tool, and report Top-1 through Top-5.
    [screenshot]
    [screenshot]
  2. The reproduced DeblurGANv2 results are poor (only 23.06 PSNR on Shen's CelebA_8000 test set).
    My method: resize Shen's 128-resolution CelebA test images to 256, resize back to 128 after restoration, and then compute metrics against gt_128.
    [screenshot]

I hope for your help, and at the same time I hope to accurately and fairly show the performance of the earlier models, so as to make a small contribution to the deblurring field. Thank you!
PS: I'd love to get the relevant code if possible, as I can't seem to find it on GitHub. My email address is [email protected]

Unable to test

I installed the dependencies, but am not able to run. This is the log:

Traceback (most recent call last):
  File "test_face_deblur.py", line 13, in <module>
    import torchvision.utils as vutils
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/__init__.py", line 1, in <module>
    from torchvision import models
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/__init__.py", line 12, in <module>
    from . import detection
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/__init__.py", line 1, in <module>
    from .faster_rcnn import *
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/faster_rcnn.py", line 13, in <module>
    from .rpn import AnchorGenerator, RPNHead, RegionProposalNetwork
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/rpn.py", line 8, in <module>
    from . import _utils as det_utils
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torchvision/models/detection/_utils.py", line 74, in <module>
    @torch.jit.script
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/__init__.py", line 364, in script
    graph = _script_graph(fn, _frames_up=_frames_up + 1)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/__init__.py", line 359, in _script_graph
    ast = get_jit_ast(fn)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 132, in get_jit_ast
    return build_def(SourceRangeFactory(source), py_ast.body[0])
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 151, in build_def
    build_stmts(ctx, body))
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 123, in build_stmts
    stmts = [build_stmt(ctx, s) for s in stmts]
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 123, in <listcomp>
    stmts = [build_stmt(ctx, s) for s in stmts]
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 205, in build_Assign
    rhs = build_expr(ctx, stmt.value)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 314, in build_Call
    func = build_expr(ctx, expr.func)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 300, in build_Attribute
    value = build_expr(ctx, expr.value)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 140, in __call__
    return method(ctx, node)
  File "/home/geekodour/.virtualenvs/h3/lib/python3.7/site-packages/torch/jit/frontend.py", line 422, in build_Subscript
    raise NotSupportedError(base.range(), "slicing multiple dimensions at the same time isn't supported yet")
torch.jit.frontend.NotSupportedError: slicing multiple dimensions at the same time isn't supported yet
        proposals (Tensor): boxes to be encoded
    """

    # perform some unpacking to make it JIT-fusion friendly
    wx = weights[0]
    wy = weights[1]
    ww = weights[2]
    wh = weights[3]

    proposals_x1 = proposals[:, 0].unsqueeze(1)
                   ~~~~~~~~~ <--- HERE
    proposals_y1 = proposals[:, 1].unsqueeze(1)
    proposals_x2 = proposals[:, 2].unsqueeze(1)
    proposals_y2 = proposals[:, 3].unsqueeze(1)

    reference_boxes_x1 = reference_boxes[:, 0].unsqueeze(1)
    reference_boxes_y1 = reference_boxes[:, 1].unsqueeze(1)
    reference_boxes_x2 = reference_boxes[:, 2].unsqueeze(1)
    reference_boxes_y2 = reference_boxes[:, 3].unsqueeze(1)

It would be a great help if you could update the documentation to something that works directly, or provide a Dockerfile.

Hi, I get an AttributeError.

While running test_face_deblur.py, I see AttributeError: 'module' object has no attribute 'interpolate' at line 236. The problem is with the function torch.nn.functional.interpolate.
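F.interpolate was added in PyTorch 0.4.1; older builds only have the (later deprecated) F.upsample, which accepts a compatible signature for this use. So either upgrade torch, or fall back at the failing call site. The fallback pattern, shown here with a stand-in namespace so the sketch runs without torch installed:

```python
from types import SimpleNamespace

def upsample(x, size=None, mode='nearest'):
    # Stand-in for the old torch.nn.functional.upsample, so the pattern
    # below is demonstrable without torch. With torch installed, F would
    # simply be torch.nn.functional.
    return ('resized', size, mode)

F = SimpleNamespace(upsample=upsample)  # pretend this is torch.nn.functional

# The shim: prefer interpolate when the installed version has it,
# otherwise use the legacy name.
resize = getattr(F, 'interpolate', F.upsample)
result = resize('img', size=(128, 128), mode='bilinear')
```

With this shim in test_face_deblur.py, the same call works on both old and new torch versions without further branching.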

Issues request

Downloaded from GitHub and ran:

python test_face_deblur.py --dataroot ./facades/github/ --valDataroot <path_to_test_data> --netG ./pretrained_models/Deblur_epoch_Best.pth

The error is:

ImportError: bad magic number in 'myutils': b'\x03\xf3\r\n'

Is anything wrong with my setup? Thanks.
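A "bad magic number" ImportError means Python found a compiled myutils.pyc built by a different interpreter version (b'\x03\xf3' appears to be Python 2.7's magic number) where it expected importable source. If myutils.py exists alongside it, deleting the stale compiled files and re-running fixes it; a sketch of that cleanup:

```python
import pathlib

def remove_compiled(root='.'):
    """Delete *.pyc files under root so Python recompiles from source."""
    removed = []
    for pyc in sorted(pathlib.Path(root).rglob('*.pyc')):
        pyc.unlink()
        removed.append(pyc.name)
    return removed
```

If the repository ships only the .pyc without a myutils.py source file, deleting it will not help; in that case you would need to run under the Python version that produced the .pyc.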

Test and Train

While testing blurry images, I have some questions.
(1) Should valDataroot contain blurry images or sharp (deblurred) images?
(2) How is valDataroot different from dataroot, and what is valDataroot used for?
