
DeepFuse.pytorch

The re-implementation of ICCV 2017 DeepFuse paper idea


Abstract

Multi-exposure fusion is a critical problem in computer vision. The technique can also be adopted in smartphones to render images with high lighting quality. However, the original authors did not release an official implementation. In this repository, we try to reproduce the idea of DeepFuse [1] and fuse an under-exposed image and an over-exposed image in an appropriate manner.

Result

The above image shows the training result. The leftmost sub-figure is the under-exposed image, and the second is the over-exposed image. The third is the rendered result, and the rightmost is the ground truth, which is computed with the MEF-SSIM loss concept. As you can see, the rough information of both the dark and the bright regions is preserved. The following image is another example.
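The MEF-SSIM "ground truth" mentioned above can be sketched roughly as follows. This is a minimal NumPy illustration of the idea from Ma et al.'s MEF-SSIM as used by DeepFuse, not this repository's actual implementation: each mean-removed patch is decomposed into contrast and structure, the highest contrast is kept, and the structures are combined with power weighting. The exponent `p` and the flattened-patch layout are our assumptions.

```python
import numpy as np

def mef_ssim_target_patch(patches, p=4, eps=1e-8):
    """Construct the desired (pseudo ground-truth) patch of the MEF-SSIM
    concept, sketched after Ma et al. (2015) / DeepFuse.

    patches: array of shape (K, N) -- K exposure patches, flattened.
    Returns the target patch of shape (N,) (the mean intensity is dropped,
    as MEF-SSIM discards the luminance component).
    """
    mu = patches.mean(axis=1, keepdims=True)       # mean intensity per patch
    tilde = patches - mu                           # mean-removed structures
    c = np.linalg.norm(tilde, axis=1)              # contrast of each patch
    c_hat = c.max()                                # highest contrast wins
    s = tilde / (c[:, None] + eps)                 # unit structure vectors
    w = c ** p                                     # power weighting (p assumed)
    s_bar = (w[:, None] * s).sum(axis=0) / (w.sum() + eps)
    s_hat = s_bar / (np.linalg.norm(s_bar) + eps)  # renormalised structure
    return c_hat * s_hat                           # desired patch
```

The network output is then scored against this target patch with the usual SSIM-style statistics, rather than against any single captured exposure.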

Idea

You should notice that this is not the official implementation. There are several differences between this repository and the paper:

  1. Since the dataset the authors used cannot be obtained, we use the HDR-Eye dataset [2], which also covers the multi-exposure fusion problem.
  2. Rather than using a 64*64 patch size, we set the patch size to 256*256.
  3. We only train for 20 epochs (30000 iterations per epoch).
  4. The calculation of y^hat is different. The details can be found here.
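For context on how the final colour image is assembled: the DeepFuse paper fuses only the Y (luminance) channel with the network, while the Cb/Cr channels are blended with a simple weighted average in which pixels further from the neutral chrominance value carry more weight. A minimal sketch of that chrominance rule, assuming tau = 128 as the neutral value per the paper (the function name is ours):

```python
import numpy as np

def fuse_chrominance(c1, c2, tau=128.0, eps=1e-8):
    """Weighted fusion of two chrominance (Cb or Cr) channels: pixels
    further from the neutral value tau carry more colour information
    and therefore receive a larger weight.

    c1, c2: arrays of the same shape with values in [0, 255].
    """
    w1 = np.abs(c1.astype(np.float64) - tau)
    w2 = np.abs(c2.astype(np.float64) - tau)
    return (w1 * c1 + w2 * c2) / (w1 + w2 + eps)
```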

Usage

The details of the parameters can be found here. You can simply use the following command to train DeepFuse:

python3 train.py --folder ./SunnerDataset/HDREyeDataset/images/Bracketed_images --batch_size 8 --epoch 15000 

Or you can download the pre-trained model here. Furthermore, to run inference on two images:

python3 inference.py --image1 <UNDER_EXPOSURE_IMG_PATH> --image2 <OVER_EXPOSURE_IMG_PATH> --model train_result/model/latest.pth --res result.png

Notice

After checking on several machines, we found that the program might get stuck at the cv2.cvtColor function. We infer that the reason is that OpenCV cannot be perfectly embedded in the multiprocessing mechanism provided by PyTorch. As a result, we set num_workers to zero here to avoid the issue. If your machine does not encounter this issue, you can increase the number to accelerate data loading.
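The workaround described above corresponds to constructing the PyTorch DataLoader with `num_workers=0`, so that all loading (including any OpenCV colour conversion) happens in the main process. A minimal sketch, where the dataset class is a hypothetical stand-in for the repository's bracketed-exposure loader:

```python
import torch
from torch.utils.data import DataLoader, Dataset

class PairDataset(Dataset):
    """Illustrative stand-in for the bracketed-exposure dataset."""
    def __init__(self, n=8):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, idx):
        # under- and over-exposed 256x256 patch pair (random tensors here)
        return torch.rand(1, 256, 256), torch.rand(1, 256, 256)

# num_workers=0 loads data in the main process, avoiding the deadlock that
# can occur when cv2.cvtColor runs inside forked worker processes.
loader = DataLoader(PairDataset(), batch_size=4, num_workers=0, shuffle=True)
under, over = next(iter(loader))
```

Increasing `num_workers` on machines that do not hit the deadlock parallelises loading across subprocesses.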

Reference

[1] K. R. Prabhakar, V. S. Srikar, and R. V. Babu. DeepFuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 4724–4732, 2017.
[2] H. Nemoto, P. Korshunov, P. Hanhart, and T. Ebrahimi. Visual attention in LDR and HDR images. In 9th International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), number EPFL-CONF-203873, 2015.

deepfuse.pytorch's People

Contributors

sunnerli


deepfuse.pytorch's Issues

incorrect display

I tested your code with PyTorch 0.4.1 on Windows, but the fusion result does not display correctly. Any idea what caused this issue and how to fix it?
(screenshot attachment: result2)

Possibly incorrect loss function?

hey there, I noticed that your code for the loss function in ssim_loss_function.py is quite similar to the code I found at https://stackoverflow.com/questions/39051451/ssim-ms-ssim-for-tensorflow

However, this seems to be an SSIM loss function instead of an MEF-SSIM loss function. In
K. Ma, K. Zeng, and Z. Wang. Perceptual quality assessment for multi-exposure image fusion. IEEE Transactions on Image Processing, 24(11):3345–3356, 2015, the authors mention that "Direct use of the SSIM algorithm [27], however, is impossible, which requires a single perfect quality reference image."

The DeepFuse paper works exactly on MEF (multi-exposure image fusion), so I would think there should be differences between the SSIM loss and the MEF-SSIM loss; in other words, the loss function in your code may be incorrect?

Looking forward to your reply, thanks
