fnzhan / emlight
[AAAI 2021] EMLight: Lighting Estimation via Spherical Distribution Approximation; [TIP] GMLight: Lighting Estimation via Geometric Distribution Approximation
Hi, fnzhan.
Did you reproduce the paper "Fast Spatially-Varying Indoor Lighting Estimation" yourself as the reference, or did you obtain its predictions some other way?
Hi, could you please let me know where the code is that generates the Gaussian map from the lighting distribution, intensity, and ambient term? Thanks!
Hi again!
Could you please provide the pre-trained models (for reproduction of your results)?
Thanks!
The luminance of the HDR panoramas in the Laval dataset may be on different scales. Did you bring the HDR panoramas to the same scale? How did you do that?
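On the scale question, the paper does not spell out a normalization, but one common way to bring HDR panoramas onto a comparable scale is to divide each by a fixed luminance percentile. This is a sketch of that idea, not the authors' procedure; the function name and percentile choice are assumptions:

```python
import numpy as np

def normalize_hdr(pano: np.ndarray, percentile: float = 90.0) -> np.ndarray:
    """Rescale an HDR panorama so its chosen luminance percentile equals 1.

    One plausible normalization for panoramas with very different absolute
    luminance; the paper does not specify the exact scheme.
    """
    # Per-pixel luminance (Rec. 709 weights).
    lum = 0.2126 * pano[..., 0] + 0.7152 * pano[..., 1] + 0.0722 * pano[..., 2]
    scale = np.percentile(lum, percentile)
    return pano / max(float(scale), 1e-8)
```

After this, the chosen percentile of every panorama's luminance sits at 1.0, so intensities become comparable across scenes.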
Hi! I was wondering if you could share the weights for your generator network on your training set?
Thank you!
Hi!
In the paper, you mention that 'Detailed network structure of neural projector and the training settings are provided in the supplementary file.', but I cannot find the supplementary file. Could you tell me where it is?
Also, I tried to train on my own dataset with your code for comparison, but I don't know the expected file structure, so I ran into some problems. Could you describe the original directory structure and the format of the images in each folder?
Thank you very much!
Hello!
I found that you use a TonemapHDR(...) function to tone-map the images in the '\crop' folder.
Does this mean the images in '\crop' are simply cropped from the HDR panoramas, without any tone mapping applied?
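For context, tone-mapping helpers named TonemapHDR in this line of work usually apply a gamma curve with a scale chosen so that a given luminance percentile lands at a target LDR value. This is a hedged sketch of that pattern, not the repository's actual implementation; all parameter names and defaults are assumptions:

```python
import numpy as np

def tonemap_hdr(hdr, gamma=2.0, percentile=50, target=0.5):
    """Gamma tone map: pick alpha so the given percentile of hdr**(1/gamma)
    maps to `target`, then clip to [0, 1]. Returns (ldr, alpha)."""
    power = hdr ** (1.0 / gamma)
    ref = np.percentile(power, percentile)
    alpha = target / max(float(ref), 1e-8)
    return np.clip(alpha * power, 0.0, 1.0), alpha
```

The returned alpha can be stored alongside the crop so the inverse mapping (back to HDR intensities) stays recoverable.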
hi,
i have a question about the evaluation.
The paper mentions that the scenes used in the evaluations consist of three spheres with different materials: diffuse gray, matte silver, and mirror silver.
Were they built with Blender?
If so, could you provide the material parameters of the three spheres?
Hi again!
Could you please share your running environment so I can run the code with little to no modification?
I'm facing some small problems like conversions from numpy to torch, which I suspect happen because you use a different PyTorch version than mine; otherwise you would hit the same errors. One example of the error I'm getting:
$ Illumination-Estimation/RegressionNetwork/train.py
Thanks!
Hi,
congrats for the great work!
I'm trying to run the training code and I'm getting the following error:
(base) root@bd14969643f5:~/codes/Illumination-Estimation/RegressionNetwork# CUDA_VISIBLE_DEVICES=2 python train.py
+ Number of params: 9.50M
0 optim: 0.001
Traceback (most recent call last):
File "train.py", line 68, in <module>
for i, para in enumerate(dataloader):
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/root/codes/Illumination-Estimation/RegressionNetwork/data.py", line 68, in __getitem__
training_pair['depth'] = torch.from_numpy(gt['depth']).float()
KeyError: 'depth'
I generated the dataset using distribution_representation.py, but I could not find anywhere that 'depth' is added to the saved records.
Thanks for the help!
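For what it's worth, the KeyError above suggests data.py expects a 'depth' entry that distribution_representation.py never writes. A defensive loader that keeps only the keys actually present in the .pkl record is one hypothetical workaround, not the authors' fix; key names other than 'depth' are guesses based on the paper's lighting parameters:

```python
import numpy as np

# Keys the training pair might carry; only those present are converted.
EXPECTED_KEYS = ('distribution', 'intensity', 'rgb_ratio', 'ambient', 'depth')

def build_training_pair(gt: dict) -> dict:
    """Convert one ground-truth record (as loaded from a .pkl) to float32
    arrays, silently skipping keys the record lacks.

    In the real data.py each array would then go through
    torch.from_numpy(...).float(); numpy is used here to keep the
    sketch dependency-free.
    """
    return {k: np.asarray(gt[k], dtype=np.float32)
            for k in EXPECTED_KEYS if k in gt}
```

With this guard, records written without a 'depth' field load cleanly instead of raising KeyError, at the cost of the training loop having to tolerate a missing depth tensor.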
I tried both the train and test code on the Laval Indoor dataset, and from the test code I get a result like the image shown above. It seems to be the Gaussian map mentioned in the paper, and I wonder how to reconstruct the environment map from this image.
Thank you so much for your excellent work and I look forward to any reply.
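For reference, the paper's representation is a set of anchor directions on the unit sphere with a distribution weight per anchor, a global intensity, and an ambient term, so one plausible reconstruction splats a spherical Gaussian around each anchor. The sharpness `kappa` and all function names below are my guesses; the exact reconstruction is in the paper's supplementary material:

```python
import numpy as np

def sphere_dirs(h, w):
    """Unit direction vector for every pixel of an equirectangular map."""
    theta = (np.arange(h) + 0.5) / h * np.pi        # polar angle
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi      # azimuth
    t, p = np.meshgrid(theta, phi, indexing='ij')
    return np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)           # (h, w, 3)

def reconstruct_envmap(anchors, weights, intensity, ambient,
                       h=128, w=256, kappa=40.0):
    """Splat a spherical Gaussian around each anchor direction.

    anchors: (N, 3) unit vectors; weights: (N,) distribution summing to 1;
    intensity, ambient: scalars. kappa controls lobe sharpness and is
    an assumed value, not taken from the paper.
    """
    dirs = sphere_dirs(h, w)                        # (h, w, 3)
    cos = dirs @ anchors.T                          # (h, w, N)
    env = (np.exp(kappa * (cos - 1.0)) * weights).sum(-1)  # SG mixture
    return intensity * env + ambient
```

Feeding the regressed distribution/intensity/ambient into something of this shape should give a low-frequency environment map; tuning kappa trades lobe sharpness against smoothness.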
Different scales of the input images convey different amounts of scene information.
For example, one image may contain just a single object, say a chair, while another contains the same chair plus the environment around it; each image covers a different region, one nested inside the other.
How does the model balance this? Or what did you do to handle it?
Thanks for any reply~
hi ,
I saw that your code in data.py has a ['depth'] term, but it is not among the parameters saved into the .pkl by distribution_representation.py.
Is 'depth' used in training? And how can I obtain the 'depth' information?
thanks!
hi,
I have some questions about the evaluation metrics.
The metric in the paper computes the loss of rendering the three different-material spheres under the predicted lighting.
How do you render the three different-material spheres with the HDR lighting?
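One standard way to realize this (not necessarily the authors' pipeline, which appears to use Blender) is to integrate the environment map against a clamped cosine lobe for the diffuse sphere; the mirror sphere is just a reflected lookup. A numpy sketch for the diffuse case, where the orthographic setup and all names are my assumptions:

```python
import numpy as np

def render_diffuse_sphere(env, res=32, albedo=0.5):
    """Render an orthographic diffuse sphere lit by an equirectangular
    environment map, by summing radiance * cos * solid-angle per texel."""
    h, w, _ = env.shape
    # Light directions and solid angles of every env-map texel.
    theta = (np.arange(h) + 0.5) / h * np.pi
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi
    t, p = np.meshgrid(theta, phi, indexing='ij')
    L = np.stack([np.sin(t) * np.cos(p),
                  np.sin(t) * np.sin(p),
                  np.cos(t)], -1).reshape(-1, 3)          # (h*w, 3)
    dw = (np.sin(t) * (np.pi / h) * (2 * np.pi / w)).reshape(-1, 1)
    radiance = env.reshape(-1, 3) * dw                    # pre-weighted
    # Sphere normals on an orthographic pixel grid.
    ys, xs = np.meshgrid(np.linspace(-1, 1, res),
                         np.linspace(-1, 1, res), indexing='ij')
    mask = xs ** 2 + ys ** 2 <= 1.0
    z = np.sqrt(np.clip(1.0 - xs ** 2 - ys ** 2, 0.0, None))
    N = np.stack([xs, ys, z], -1).reshape(-1, 3)          # (res*res, 3)
    # Clamped-cosine integration: Lambertian BRDF is albedo / pi.
    cos = np.clip(N @ L.T, 0.0, None)                     # (res*res, h*w)
    img = (albedo / np.pi) * cos @ radiance               # (res*res, 3)
    return img.reshape(res, res, 3) * mask[..., None]
```

Under a uniform white environment of radiance 1, the hemisphere integral of the cosine is pi, so every sphere pixel converges to the albedo, which makes a convenient sanity check.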
Traceback (most recent call last):
File "distribution_representation.py", line 145, in
para, map = extractor.compute(hdr)
File "distribution_representation.py", line 94, in compute
hdr = self.steradian * hdr
ValueError: operands could not be broadcast together with shapes (128,256,1) (1024,2048,3)
When I run distribution_representation.py, I encounter the above error. How can I solve it? Thank you!
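The shape mismatch suggests the steradian weight map was precomputed at 128x256 while the panorama loads at 1024x2048x3. One fix, assuming the source dimensions are integer multiples of the target as they are here, is to average-pool the panorama down before weighting; rebuilding the steradian map at the panorama's native resolution would work equally well. The helper name is mine:

```python
import numpy as np

def downsample_to(hdr, h, w):
    """Average-pool an equirectangular HDR image down to (h, w).

    Assumes the source height/width are integer multiples of the target,
    e.g. 1024x2048 -> 128x256.
    """
    H, W, C = hdr.shape
    fh, fw = H // h, W // w
    return hdr.reshape(h, fh, w, fw, C).mean(axis=(1, 3))

# The failing line then broadcasts cleanly:
#   hdr_small = downsample_to(hdr, 128, 256)   # (128, 256, 3)
#   weighted = steradian * hdr_small           # (128, 256, 1) * (128, 256, 3)
```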
Thanks for your wonderful work. I wonder why you deleted the sigmoid in the output layers of DenseNet.py in the later version? I also guess it would be reasonable for self.fc_dist(out) to be followed by a softmax, since the ground-truth distribution sums to one. (https://github.com/fnzhan/Illumination-Estimation/blob/master/RegressionNetwork/DenseNet.py)
Dear Zhan:
I am training the regression network, and I have run into a problem: the intensity output of the '*.pth' model is positive, but after converting to an ONNX model the output is negative. Could you help me solve this?
Thank you very much for sharing your work, but the dataset link is broken. Could you update it? Thank you!
Hi fnzhan,
In your qualitative evaluation stage, how do you insert the virtual objects into a real-scene LDR photo?
Did you implement this method yourself or follow some existing method?
Best wishes
Hello, fnzhan:
Thanks for releasing the code for needlets, but the code to generate 'SN_Matrix3.npy' mentioned in gt_jen_j3.py was not found. Would you mind providing more details about extracting the needlet coefficients from an HDR panorama and reconstructing the environment map from the needlet coefficients?
hi, your work is so beautiful! And thanks for your code.
I want to know what I should do to use the result in Blender.
Thanks for any reply.
Hi!
I found that the intensity term is difficult to converge during training. Is that a common phenomenon or my own problem?
Also, I found that you divide intensity by 500 in data.py before training. Why do that?
EMLight/RegressionNetwork/data.py
Line 71 in 3a52b3a
Best wishes!
Hi, @fnzhan
I downloaded the Virtual Object Relighting (VOR) dataset and I do not know how to evaluate the results. The dataset includes .blend, .blend1, and .jpg files; what are they? I know Blender a little bit. I read the related issues and know how to replace the illumination maps. Do we need to save the rendered images one by one? After obtaining the rendered images (for both the ground-truth and predicted illumination maps), did you release the scripts for the RMSE, si-RMSE, Angular Error, and AMT metrics? It would be nice if you could provide more details about the evaluation. Thank you!
Hi, fnzhan:
It is mentioned in the article that, when extracting crops from a panorama, you apply the same image warping as in [1] to each image.
Since there is no open-source code for [1], did you implement this method yourself or ask the authors for the source code? Or did you skip this step while training the network?
Best wishes.
[1]Learning to predict indoor illumination from a single image. In: SIGGRAPH Asia (2017)
Hi~ I have some questions about RMSE and si-RMSE.
I know the metrics, but I don't know which quantity they are computed on. Is it the illumination map?
I also want to ask how the Angular Error is calculated.
thanks!
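Since these metrics come up repeatedly in the issues, here is one common way they are defined for images. Whether they are applied to the illumination maps or to the rendered probe spheres is exactly the open question above; the definitions below are standard, not confirmed as the authors' exact protocol:

```python
import numpy as np

def rmse(pred, gt):
    """Root-mean-square error over all pixels and channels."""
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def si_rmse(pred, gt):
    """Scale-invariant RMSE: apply the single scalar s that minimizes
    ||s * pred - gt||^2 in the least-squares sense, then take RMSE."""
    s = np.sum(pred * gt) / max(float(np.sum(pred ** 2)), 1e-12)
    return rmse(s * pred, gt)

def angular_error(pred, gt, eps=1e-8):
    """Mean per-pixel angle (degrees) between RGB vectors; measures
    color-cast error independently of brightness."""
    dot = np.sum(pred * gt, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1)
    cos = np.clip(dot / np.maximum(norm, eps), -1.0, 1.0)
    return float(np.degrees(np.mean(np.arccos(cos))))
```

Note that si-RMSE is unchanged under a global rescaling of the prediction, and the angular error is zero for any prediction that is a per-pixel positive multiple of the ground truth.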
I have been trying to get the code to generate a rough HDRI from an image input, but have had no luck so far. I have downloaded the pretrained model but am not entirely sure where to start. Can anyone help me?
Hi fnzhan:
Could you please tell me which illumination-map parameters you used as input when training the Neural Projector of EMLight: the parameters predicted by the regression network, or the ground-truth parameters?
Forgive me if this is a basic question.
The input to the first stage of the model is a local crop, so its output should be the local lighting,
but from the code the output seems to be global, and there is no position information. How is this achieved?
Dear authors,
Thanks for open-sourcing your code; it is really helpful. I have a question about the train/test split of the Laval HDR dataset from the paper. It says that 19,556 training pairs are generated from 2,100 HDR panoramas, and 200 images are randomly selected as the test set. I am not sure whether the 200 images are selected from the 2,100 HDR panoramas or from the 19,556 generated images. If they come from the 19,556 images, how do you make sure their corresponding HDR panoramas are not in the training set, i.e. not seen during training?
Best regards,
Hao XU
Hi everyone! Just wanted to share a video I made about this paper. Is very introductory but it may help someone to get a better idea about the problem and the proposed method.
https://www.youtube.com/watch?v=D4YcqkK2rCs&t=6s
Hope you enjoy it :)