
tensoir's People

Contributors

haian-jin


tensoir's Issues

How to get nice Lego geometry?

Thank you first for the great work.
I tried the model on the NeRFactor dataset (512x512 with a low-resolution environment map). The results on the Lego scene confuse me.

Result in the paper:
[image]

Result with the standard Lego config; the result is blurry and has an unexpected 'bottom' part:
[image]

Result with the 'tricky' Lego config (those three lines enabled); the result looks better, but it still has (1) incorrect normal directions in some parts, (2) noisy normals on flat surfaces, and (3) incorrect surface reconstruction inside the digging bucket:
[image]

As can be seen, training with either the standard Lego config or the tricky one cannot produce results as good as those in the paper. One reason may be that I trained with a smaller image size (512x512, versus 800x800 in the paper). Do you have any suggestions for getting better geometry?

[Code] Enable gradient in renderer.py

Amazing work, congrats! I have a question about the code and hope for a reply when convenient. In renderer.py, line 37, the TensoIR model is wrapped in torch.enable_grad(), so I wonder: why do we need the gradient here?
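
For anyone else wondering, my guess (an assumption, not confirmed by the authors) is that quantities derived from the fields, such as density-gradient normals, need autograd even when the surrounding evaluation runs under torch.no_grad(). A minimal sketch of the pattern:

```python
import torch

# density() is a stand-in for the model's density query, not TensoIR code.
def density(x):
    return (x ** 2).sum(dim=-1)

def normals_from_density(x):
    with torch.enable_grad():              # re-enable autograd locally
        x = x.clone().requires_grad_(True)
        sigma = density(x)
        grad, = torch.autograd.grad(sigma.sum(), x)
    return -torch.nn.functional.normalize(grad, dim=-1)

with torch.no_grad():                      # typical inference context
    pts = torch.rand(8, 3)
    normals = normals_from_density(pts)    # works only because of enable_grad
```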

Results on unbounded scenes?

Hello,
thanks for your great work and for open-sourcing your code. It is really a nice paper.

For my use case, I need to learn (and relight accurately) an unbounded scene (such as the ones in Mip-NeRF 360, although with multiple lights). Have you tried running your algorithm on such scenes? Did it perform well, or should I look at other works?
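
For reference, the standard device that unbounded-scene methods add is a scene contraction; below is the Mip-NeRF 360 contraction (my own sketch, not something TensoIR ships): points outside the unit ball are squashed into a ball of radius 2 so a fixed grid can cover an unbounded capture.

```python
import torch

# Mip-NeRF 360 scene contraction: identity inside the unit ball, radial
# squashing into radius 2 outside it.
def contract(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    norm = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.where(norm <= 1.0, x, (2.0 - 1.0 / norm) * (x / norm))
```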

Limitation on glossy materials

Thank you again for this insightful research project and for providing us with the source code.

I was trying to track down an issue related to glossy estimation. I did a simple test where I optimize a glossy scene and then render novel views with the roughness overridden (after volume rendering is done) so that every pixel of the 800x800 render has a roughness of 0. I tried other values as well, but values lower than 0.5 seem to cause issues. Below, I show results for different r values:

I did an ablation for r=0.2; the strange highlights seem to come from the directly lit specular component:
[image]

Do you know why there are artefacts for r=0.2, while r=0 looks very matte? Let me know if I overlooked something.
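
A possible explanation (my assumption, based on standard microfacet behavior rather than the TensoIR code): as roughness goes to 0 the GGX lobe approaches a delta, so a fixed set of sampled environment-light directions almost never hits it (matte look), while the moderately narrow r=0.2 lobe aliases against the discretized lights into isolated highlights. A tiny numeric illustration:

```python
import math

# GGX normal distribution; the peak height explodes as roughness -> 0.
def ggx_ndf(cos_nh, roughness):
    a2 = max(roughness, 1e-4) ** 2
    denom = cos_nh * cos_nh * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

for r in (0.5, 0.2, 0.0):
    print(r, ggx_ndf(1.0, r))   # peak value of the specular lobe
# r=0.5 -> ~1.3, r=0.2 -> ~8, r=0 (clamped) -> ~3e7: the lobe becomes a
# near-delta that discrete light samples essentially never land on.
```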

Troubleshooting Unbounded Scene

Hello, I am trying to experiment with applying TensoIR to an unbounded scene.

I started by trying the well-known "Garden" scene from the Mip-NeRF 360 dataset.

I'm not sure if this is the only issue, but one problem I have run into is that the scene geometry seems to be limited to a bounding box much smaller than the full scene. This also seems to lead to incorrect optimization of the geometry/normals:

[image]

Do you have any ideas as to what I may do to resolve this bounding box issue and improve results?

Thanks very much!
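
For anyone debugging the same thing, the first place I would look (an assumption based on TensoRF-style loaders, from which TensoIR is derived; all names below are illustrative, not the actual TensoIR code): the dataset class typically hard-codes the scene AABB, roughly [-1.5, 1.5]^3 for synthetic scenes, and an unbounded capture like "Garden" needs a much larger box or a contraction as discussed in the unbounded-scenes issue above.

```python
import torch

# Illustrative sketch of a TensoRF-style dataset class pinning the
# reconstruction volume to a fixed axis-aligned box.
class GardenDataset:
    def __init__(self, radius: float = 8.0):
        self.scene_bbox = torch.tensor([[-radius, -radius, -radius],
                                        [ radius,  radius,  radius]],
                                       dtype=torch.float32)
```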

Artifacts observed on synthetic hotdog with single lighting

Hi, thank you very much for making your work publicly available.

Unfortunately, while trying to reproduce your experimental results on the synthetic hotdog, I observed a strange artifact in a specific area of the scene, as shown in the albedo and normal renderings below.
[albedo image]
[normal image]

I followed your instructions for setting up the Python environment and used the "configs/single_light/hotdog.txt" configuration provided in the codebase. Running the experiment multiple times or setting a different random seed doesn't resolve this issue, so I was wondering if I could get some help here.

Thanks in advance!

The NeRF synthetic dataset cannot be trained

Thank you very much for your excellent work. I get an error when I train on the NeRF synthetic dataset:
[image]
The format of the NeRF synthetic dataset is as follows:
[image]
Looking at the error, it seems to ask for a metadata.json file.
But when I look at the metadata.json file in the dataset you provided, it looks like HDR is required:
[image]
Is HDR a required input?

Problems generating my new dataset

Hello! Thanks for your great work. We are really impressed by the reconstruction quality it achieves.
We followed 'Generating your own synthetic dataset' to generate a new scene with a single small cube and tried to reconstruct it with this project (using train_tensoIR.py).
However, it doesn't work and the result is pretty blurry. We made the background transparent, like the armadillo in TensoIR-Synthetic.
After diving into the image-generation code and this project, we found that the Blender script for generating new pictures has some inconsistencies with this project, especially in the camera parameters written to the JSON file (camera_angle_x, rotation, rgba_xxsscene_000.png, etc.). I fixed some of them, but some inconsistencies remain.
Could you give me some advice on importing my own scene descriptions into this project (the orientation of the Z-axis, and is 'rotation' used in the JSON)? Or could you share your latest Blender dataset-generation code? (The code in 'Generating your own synthetic dataset' doesn't seem to be the latest.)
Thanks!
(Sorry, I may not be a good English speaker; if anything above is confusing, I can switch to Chinese.)
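
For reference, here is the camera convention I believe the loader expects (an assumption: TensoIR's synthetic loader appears to follow the original nerf_synthetic transforms format; the snippet below is my own sketch, to run inside Blender, not the project's script):

```python
import bpy
import json

cam = bpy.context.scene.camera
frame = {
    "file_path": "./train/r_0",  # image path, conventionally without extension
    "rotation": 0.0,             # legacy field from the original Blender script
    # Blender's matrix_world is camera-to-world with the camera looking down
    # -Z and +Y up, i.e. the OpenGL-style convention NeRF loaders expect.
    "transform_matrix": [list(row) for row in cam.matrix_world],
}
meta = {
    "camera_angle_x": cam.data.angle_x,  # horizontal FoV in radians
    "frames": [frame],
}
with open("transforms_train.json", "w") as f:
    json.dump(meta, f, indent=2)
```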

Can't log in to the dataset website

Thanks for your code!!
I am a student at another university, interested in NeRF and computer graphics, and I want to follow your work. But I have no ZJU email and can't register.
Could you share the dataset some other way?
Thanks!!

Question about the rescale ratio for diffuse color

Hi,

I am a student with a keen interest in this topic. While exploring your repository, I came across the relight_importance.py script and noticed a compute_rescale_ratio function.
It appears that this computed ratio is intended to rescale the diffuse color.
I am curious about the specific rationale behind this rescaling; in my view, it might be possible to achieve the desired relighting results using the original, unaltered diffuse color.
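
For concreteness, here is my understanding of what such a rescale typically does (a sketch based on common inverse-rendering practice, e.g. NeRFactor-style evaluation, not the verbatim TensoIR code): albedo is only recoverable up to a per-channel scale, so the prediction is aligned to the ground truth with one global RGB ratio before relighting and metrics.

```python
import torch

# pred_albedo, gt_albedo: (N, 3); mask: (N,) boolean foreground mask.
# Returns one RGB ratio that aligns the predicted albedo to the GT scale.
def compute_rescale_ratio(pred_albedo, gt_albedo, mask):
    pred = pred_albedo[mask]
    gt = gt_albedo[mask]
    return gt.median(dim=0).values / pred.median(dim=0).values.clamp_min(1e-6)
```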

Thanks for your amazing work; I look forward to your reply.

How to extract texture maps for use in UE?

Great job. I have seen that it is possible to export a mesh, but I would like to know how to export texture maps so that they can be used effectively in the UE rendering engine. Thanks a lot.
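
A sketch of the workflow I have in mind (an assumed approach, not a documented TensoIR feature; model.query_brdf is a hypothetical name for whatever function returns albedo and roughness at 3D points): once the mesh is exported, PBR maps can be baked by querying the learned BRDF fields at surface points.

```python
import torch

def bake_vertex_brdf(model, vertices):
    # vertices: (V, 3) mesh vertex positions inside the scene bounding box
    pts = torch.as_tensor(vertices, dtype=torch.float32)
    albedo, roughness = model.query_brdf(pts)   # hypothetical API
    # Per-vertex attributes can then be transferred to a UV atlas (e.g. with
    # xatlas) and saved as base-color / roughness textures for UE.
    return albedo, roughness
```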

Doubts about the failure of the environment map estimation

Hello, thank you for your excellent work.
I would like to add an SDF field to your work to improve the reconstruction. I used the geometric representation of NeuS while retaining the density tensor, and tried it on the "hotdog" scene of the Blender dataset.
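
The SDF-to-opacity conversion follows the standard NeuS formulation; the sketch below is my own summary of what "using the geometric representation of NeuS" means here, not TensoIR code (s is the learned sharpness of the logistic CDF):

```python
import torch

# NeuS-style alpha from the SDF values at consecutive samples along a ray.
def neus_alpha(sdf_prev, sdf_next, s):
    cdf_prev = torch.sigmoid(sdf_prev * s)
    cdf_next = torch.sigmoid(sdf_next * s)
    return ((cdf_prev - cdf_next) / (cdf_prev + 1e-6)).clamp(0.0, 1.0)
```
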
(1) The estimated environment map turns out to be unreasonable, and the estimated albedo is severely darkened. Here is the environment map:
[image]

(2) Besides, the estimated RGB image (novel-view synthesis with the radiance field) seems to show grid artifacts; it is not smooth and continuous:
[image]

Could you give me some suggestions about it? Any reply will be appreciated.

Out of memory when training on custom blender scene

Hi, thank you for this exciting paper.

I was trying to run training on a custom Blender scene. I used the utility files to render the full dataset from a user-defined Blender file. While I managed to train the full 80k steps on the single-light armadillo experiment, I am unable to do so on my custom dataset because of a memory issue (the GPU also has 11 GB). This happens at the 10k mark; from what I understand, 10k is when the vector and matrix sizes start being upsampled. Just after the 10k steps, training fails with an out-of-memory error. What I don't understand is why this happens on my custom scene while the provided scene works correctly.

Is this related to the fact that the alpha mask is computed at that point?
Do you have any pointers as to what could be causing this issue?
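
In case it helps others, the usual workaround if the post-upsample OOM comes from evaluating a dense grid (e.g. for the alpha mask) or oversized batches in one shot is to process points in chunks. This is an assumption about the cause; query_density is a stand-in name, not the actual TensoIR function:

```python
import torch

def eval_in_chunks(query_density, pts, chunk=64 ** 3):
    out = []
    with torch.no_grad():
        for i in range(0, pts.shape[0], chunk):
            out.append(query_density(pts[i:i + chunk]))
    return torch.cat(out)
```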

release date

Thanks for your great work!
Will the code be released today? Really looking forward to it!

Questions about the radiance field color loss

Excellent work! Here is my question:
Is the radiance field color (computed by volume rendering) that is used as the indirect lighting the same as the radiance field color used in the loss L_RF? If so, why does the indirect lighting need to be consistent with the GT color?
I apologize for any error in my understanding. Looking forward to your answer, and thank you for your excellent work!
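
To make the question concrete, here is a sketch of my understanding of the pipeline (assumed names, not the actual code): the same radiance field that L_RF supervises against GT pixels is re-queried along secondary rays to approximate the indirect incoming light at surface points, which is why the two colors appear to be tied together.

```python
import torch

def indirect_lighting(radiance_field, surface_pts, light_dirs):
    # surface_pts: (N, 3); light_dirs: (N, L, 3) sampled incoming directions
    origins = surface_pts.unsqueeze(1).expand_as(light_dirs)
    # volume-render the field from each surface point along each direction
    return radiance_field.render(origins.reshape(-1, 3),
                                 light_dirs.reshape(-1, 3))  # hypothetical API
```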
