mohitlamba94 / restoring-extremely-dark-images-in-real-time

The project is the official implementation of our CVPR 2021 paper, "Restoring Extremely Dark Images in Real Time"
License: Other
Hello, may I ask which dataset the model loaded in the demo was trained on, how large the tested original images are, and how well the model generalizes?
Could you fix this problem?
You replied:
MohitLamba94/LLPackNet#4 (comment)
# from https://github.com/MohitLamba94/LLPackNet/blob/master/ablations/LLPackNet.ipynb
class get_data(Dataset):
    """Loads the data."""
    def __init__(self, opt):
        self.train_files = glob.glob('/media/mohit/data/mohit/chen_dark_cvpr_18_dataset/Sony/short/1*_00_0.1s.ARW')
        # self.train_files = self.train_files + glob.glob('/media/mohit/data/mohit/chen_dark_cvpr_18_dataset/Sony/short/2*_00_0.1s.ARW')
        self.gt_files = []
        for x in self.train_files:
            self.gt_files = self.gt_files + glob.glob('/media/mohit/data/mohit/chen_dark_cvpr_18_dataset/Sony/long/*' + x[-17:-12] + '*.ARW')
        self.to_tensor = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
        self.opt = opt
You used only the 0.1s raw data for testing. Why not also test on the data below?
10003_00_0.04s.ARW 10030_00_0.04s.ARW 10045_00_0.04s.ARW 10178_00_0.04s.ARW 10193_00_0.04s.ARW 10217_00_0.04s.ARW
10006_00_0.04s.ARW 10032_00_0.04s.ARW 10054_00_0.04s.ARW 10185_00_0.04s.ARW 10198_00_0.04s.ARW 10226_00_0.04s.ARW
10011_00_0.04s.ARW 10034_00_0.04s.ARW 10055_00_0.04s.ARW 10187_00_0.04s.ARW 10199_00_0.04s.ARW 10227_00_0.04s.ARW
10016_00_0.04s.ARW 10035_00_0.04s.ARW 10068_00_0.04s.ARW 10191_00_0.04s.ARW 10203_00_0.04s.ARW 10228_00_0.04s.ARW
10022_00_0.04s.ARW 10040_00_0.04s.ARW 10069_00_0.04s.ARW 10192_00_0.04s.ARW 10213_00_0.04s.ARW
10178_00_0.033s.ARW 10187_00_0.033s.ARW 10192_00_0.033s.ARW 10198_00_0.033s.ARW 10203_00_0.033s.ARW 10217_00_0.033s.ARW 10227_00_0.033s.ARW
10185_00_0.033s.ARW 10191_00_0.033s.ARW 10193_00_0.033s.ARW 10199_00_0.033s.ARW 10213_00_0.033s.ARW 10226_00_0.033s.ARW 10228_00_0.033s.ARW
which were tested in SID and DID.
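For reference, a broader glob would pick up those 0.04s and 0.033s files as well. A minimal sketch, assuming the standard SID Sony directory layout (the dataset root path here is a placeholder, not the authors' path):

```python
import glob
import os

# Placeholder for the SID Sony short-exposure directory; adjust to your setup.
short_dir = '/path/to/Sony/short/'

# Matching '1*_00_*.ARW' instead of '1*_00_0.1s.ARW' includes every
# exposure time (0.1s, 0.04s, 0.033s) for the 1* test scenes.
test_files = sorted(glob.glob(os.path.join(short_dir, '1*_00_*.ARW')))

def scene_id(path):
    """Extract the 5-digit scene id from an SID-style name like '10003_00_0.04s.ARW'."""
    return os.path.basename(path)[:5]
```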
# from https://github.com/parasmaharjan/DeepImageDenoising/blob/master/Pytorch_EDSR_Test.py#L120
in_files = glob.glob(input_dir + '%05d_00*.ARW' % test_id)
for k in range(len(in_files)):
You said:
"For training I merge the 0* and 2*, both of which belong to the training phase."
But SID and DID only use the 0* data, 161 images in total, whereas you use 181 images (20 more) for training.
# from https://github.com/cchen156/Learning-to-See-in-the-Dark/blob/master/train_Sony.py#L17
train_fns = glob.glob(gt_dir + '0*.ARW')
train_ids = [int(os.path.basename(train_fn)[0:5]) for train_fn in train_fns]
# from https://github.com/parasmaharjan/DeepImageDenoising/blob/master/Pytorch_EDSR.py#L39
train_fns = glob.glob(gt_dir + '0*.ARW')
train_ids = []
for i in range(len(train_fns)):
_, train_fn = os.path.split(train_fns[i])
train_ids.append(int(train_fn[0:5]))
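To check the 161-versus-181 count, one could tally distinct scene ids per filename prefix. A quick sketch, assuming the SID directory layout (`gt_dir` is a placeholder path):

```python
import glob
import os

# Placeholder for the SID Sony long-exposure (ground-truth) directory.
gt_dir = '/path/to/Sony/long/'

def count_scenes(pattern):
    """Count distinct 5-digit scene ids among files matching a glob pattern."""
    files = glob.glob(os.path.join(gt_dir, pattern))
    return len({os.path.basename(f)[:5] for f in files})

# SID/DID train only on the 0* scenes; additionally merging the 2* scenes
# enlarges the training set, which is the discrepancy raised above.
# count_scenes('0*.ARW')                           # SID/DID training scenes
# count_scenes('0*.ARW') + count_scenes('2*.ARW')  # merged training scenes
```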
Can you explain the above two issues?:)
Thank you for the interesting solution.
It would be nice to have it ported to JS, let's say with TFJS.
Hi, you did a really nice job, congratulations! Do you mind sharing how you compute the Ma metric? I am trying to use the code from https://github.com/chaoma99/sr-metric but it seems not very efficient. Is that the same way you compute the Ma metric, by feeding your test images directly into this process?
Very comprehensive and brilliant work! But I have a problem when I try to retrain the network in your project. I found that when you choose the train and test files in the SID Sony dataset, you only select '0*_00_0.1s.ARW', '1*_00_0.1s.ARW', '2*_00_0.1s.ARW' and the corresponding long-exposure raw images as your dataset. As a result, you use only 181 pairs for training and 50 pairs for testing, and ignore the low-light images at other exposure times such as '*0.04s.ARW' and '*0.033s.ARW'.
LDC: "Learning to restore low-light images via decomposition-and-enhancement", CVPR 2020.
I cannot find its code, but you say that LDC [65] has publicly available code.
Can you point out its URL or offer your reimplementation?
Thanks a lot!
The three scales 2x, 8x, and 32x in network.py are executed sequentially in the forward pass of the same network, not in parallel.
The test image format is raw; how can I test with JPEG or PNG images?
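There is no official path for this in the repository, since the network consumes packed linear Bayer data rather than processed sRGB. As a rough workaround one could undo the display gamma before feeding an 8-bit image in. A sketch of that approximation only (not the authors' pipeline, and the 2.2 gamma is a simplification of the sRGB curve):

```python
import numpy as np

def srgb_to_linear(img):
    """Approximately invert the display gamma, mapping [0,1] sRGB-like
    pixels back toward linear light (a crude stand-in for raw data)."""
    return np.power(img, 2.2)

def prepare_input(img, amp=1.0):
    """Linearize an image, apply an amplification factor, and clip to [0,1].
    Results will differ from true raw input: JPEG/PNG files have already
    been demosaicked, white-balanced, and tone-mapped by the camera."""
    return np.clip(srgb_to_linear(img) * amp, 0.0, 1.0)
```

Expect degraded quality compared to genuine raw input, since the model never saw tone-mapped data during training.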
Thanks a lot for your great work and shared code.
I tested your code on a Canon camera raw image and found that the amplifier module works poorly, while the camera's final RGB image is very nice.
This is the output of your model with external amp = 1.0 (the outputs for amp = 2 and 5 are far too overexposed).
Do you have suggestions on how to improve the amplifier?
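For SID-style data, the amplification factor is conventionally tied to the exposure ratio between the ground-truth (long) and input (short) captures, which the Sony filenames encode. A sketch of deriving it from filenames (the parsing assumes SID-style naming, which a Canon capture may not follow):

```python
import re

def exposure_seconds(name):
    """Parse the exposure time out of an SID-style filename,
    e.g. '10003_00_0.1s.ARW' -> 0.1."""
    m = re.search(r'_(\d+(?:\.\d+)?)s\.', name)
    if m is None:
        raise ValueError(f'no exposure time found in {name!r}')
    return float(m.group(1))

def amplification(short_name, long_name):
    """Exposure-ratio amplification: ground-truth (long) exposure
    divided by the dark input (short) exposure."""
    return exposure_seconds(long_name) / exposure_seconds(short_name)
```

For a camera outside the training distribution, the exposure ratio from the EXIF data (or a sweep of fractional amp values between 1 and 2) is a reasonable first experiment before retraining.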
I have a question about how you train the DCE model with the SID dataset: did you postprocess the low-light raw data with rawpy to get RGB data, and then train the DCE model on that?