
qingsenyangit / ahdrnet

Attention-guided Network for Ghost-free High Dynamic Range Imaging

Python 6.03% MATLAB 31.18% M 0.06% Makefile 0.90% C++ 19.64% Cuda 30.75% C 1.07% Shell 1.67% TeX 8.01% CSS 0.29% JavaScript 0.04% HTML 0.38%

ahdrnet's People

Contributors

donggong1, qingsenyangit


ahdrnet's Issues

exposure time

Hi, thanks for your work.

As mentioned in your paper, the training HDR images are generated as LDR^2.2 / exposure_t.
But when I checked the Kalantari dataset, some exposure_t values are 0.0. How did you handle that?

Thanks!
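For reference, one common reading (an assumption here, not necessarily what the authors did) is that the values in the dataset's exposure files are exposure *biases* in stops, so the actual exposure time is 2^bias and a stored value of 0.0 simply means 2^0 = 1, avoiding any division by zero. A minimal sketch of the paper's LDR-to-HDR mapping under that reading:

```python
import numpy as np

def ldr_to_hdr(ldr, exposure_bias, gamma=2.2):
    """Map an LDR image to the HDR domain, as in LDR^2.2 / t.

    Assumption: `exposure_bias` is in stops, so the exposure
    time is 2 ** bias; a bias of 0.0 gives t = 1, not t = 0.
    """
    t = 2.0 ** exposure_bias
    return (ldr ** gamma) / t
```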

Can you give more training details or release the training code?

Hi. Thanks for your great work!
I implemented the training code based on yours, set lr=1e-5, batch_size=8, and trained for 100 epochs, but the PSNR is worse than reported in your paper. Could you share more training details or release the training code? Thanks!

Using the given model, PSNR-μ is lower than reported in the CVPR paper

def range_compressor_tensor(x):
    const_1 = torch.from_numpy(np.array(1.0)).cuda()
    const_5000 = torch.from_numpy(np.array(5000.0)).cuda()
    return (torch.log(const_1 + const_5000 * x)) / torch.log(const_1 + const_5000)

Is this function incorrect?
Thanks
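For comparison, the compressor above matches the μ-law formulation from the paper, T(x) = log(1 + μx) / log(1 + μ) with μ = 5000. A NumPy sketch of the same compressor, plus PSNR-μ computed in the tonemapped domain (function names here are illustrative, not from the released code):

```python
import numpy as np

MU = 5000.0

def range_compressor(x, mu=MU):
    # mu-law tonemapping: T(x) = log(1 + mu*x) / log(1 + mu)
    return np.log1p(mu * x) / np.log1p(mu)

def psnr_mu(pred, gt, peak=1.0):
    # PSNR evaluated on the mu-law tonemapped images
    mse = np.mean((range_compressor(pred) - range_compressor(gt)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

Note that PSNR-μ is sensitive to whether prediction and ground truth are normalized the same way before compression; a mismatch there is a common cause of lower-than-reported numbers.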

Washed-out colors

Hello,
I can't seem to get the kind of beautiful saturated colors from the paper using this code.. I understand that the main contribution of your paper is a ghost-free image, and dealing with motion, not colors. Does this mean you used some additional color mapping or maybe tweaked the colors manually using some image editing software just for the paper? Or am I missing something obvious?

CUDA out of memory

Have you ever run into CUDA out-of-memory errors? I have two Tesla T4 (16 GB) GPUs, and even with the batch size set to 1 the problem still occurs. The traceback follows:
Traceback (most recent call last):
File "script_testing.py", line 63, in
loss = testing_fun(model, testimage_dataset, args)
File "/root/AHDRNet/running_func.py", line 185, in testing_fun
output = model.forward(data1, data2, data3)
File "/root/miniconda2/envs/hdr/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/root/miniconda2/envs/hdr/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/root/miniconda2/envs/hdr/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
raise output
RuntimeError: CUDA out of memory. Tried to allocate 2.86 GiB (GPU 0; 14.76 GiB total capacity; 13.19 GiB already allocated; 873.75 MiB free; 11.17 MiB cached)
Thank you

how to show the attention maps

I want to dump the attention maps shown in the paper.
In the paper they look like 3-channel feature maps.
How do I convert the 64-channel attention maps to 3-channel maps?
Should I just sum them up?
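For visualization, one common choice (an assumption, not necessarily what the authors did) is to average the 64 channels into a single spatial map, min-max normalize it to [0, 1], and replicate it across three channels (or apply a colormap). A minimal NumPy sketch:

```python
import numpy as np

def attention_to_rgb(att):
    """Collapse a (64, H, W) attention tensor to an (H, W, 3) image.

    Averages over channels, then min-max normalizes to [0, 1]
    and replicates the map across RGB for display.
    """
    m = att.mean(axis=0)
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)
    return np.stack([m, m, m], axis=-1)
```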

How to get qualitative results in this paper ?

Hi, I have some questions about the qualitative results (Figure 7 and Figure 8) in your paper.

Were the tone-mapping results in Figure 7 and Figure 8 obtained using Photomatix Pro? If so, can you share your preset (an XMP file) for your tone-mapping scheme?

Thank you very much!

training code

Can you tell me for how many epochs the training was run?
