
b1ueber2y / dist-renderer

DIST: Rendering Deep Implicit Signed Distance Function with Differentiable Sphere Tracing (CVPR 2020).

Home Page: http://b1ueber2y.me/projects/DIST-Renderer/

License: MIT License

Languages: Python 99.20%, Shell 0.80%

dist-renderer's People

Contributors

b1ueber2y, pengsongyou

dist-renderer's Issues

A bug in core/utils/decoder_utils.py

Hi, thanks for the excellent work.

I'm using your code for some experiments and found a bug in core/utils/decoder_utils.py:

On line 84, the grad_outputs argument should receive torch.ones_like(sdf) instead of torch.ones_like(points_batch), since grad_outputs must match the shape of the outputs (N, 1), not of the inputs (N, 3).

So the final line should be:

```python
grad_tensor = torch.autograd.grad(outputs=sdf, inputs=points_batch,
                                  grad_outputs=torch.ones_like(sdf),
                                  create_graph=True, retain_graph=True)
```

The original code raises the following error during a test:

```
Traceback (most recent call last):
  File "run_multi_pmodata.py", line 115, in <module>
    main()
  File "run_multi_pmodata.py", line 100, in main
    shape_code, optimizer_latent = optimize_multi_view(sdf_renderer, evaluator, shape_code, optimizer_latent, imgs, cameras, weight_list, num_views_per_round=num_views_per_round, num_iters=20, num_sample_points=num_sample_points, visualizer=visualizer, points_gt=points_gt, vis_dir=vis_dir, vis_flag=args.visualize, full_flag=args.full)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/inv_optimizer/optimize_multi.py", line 76, in optimize_multi_view
    visualizer=visualizer)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/inv_optimizer/loss_multi.py", line 27, in compute_loss_color_warp
    render_output = sdf_renderer.render_warp(shape_code, R1, T1, R2, T2, view1, view2, no_grad_normal=True)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/sdfrenderer/renderer_warp.py", line 131, in render_warp
    normal1 = self.render_normal(latent, R1, T1, Zdepth1, valid_mask1, no_grad=no_grad_normal, clamp_dist=clamp_dist, MAX_POINTS=100000)  # (3, H*W)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/sdfrenderer/renderer.py", line 896, in render_normal
    gradient = decode_sdf_gradient(self.decoder, latent, points.transpose(1,0), clamp_dist=clamp_dist, no_grad=no_grad, MAX_POINTS=MAX_POINTS)  # (N, 3)
  File "/home/disk/diske/yudeng/DIST-Renderer/core/utils/decoder_utils.py", line 84, in decode_sdf_gradient
    grad_tensor = torch.autograd.grad(outputs=sdf, inputs=points_batch, grad_outputs=torch.ones_like(points_batch), create_graph=True, retain_graph=True)
  File "/home/yudeng/anaconda3/envs/siren/lib/python3.6/site-packages/torch/autograd/__init__.py", line 151, in grad
    grad_outputs = _make_grads(outputs, grad_outputs)
  File "/home/yudeng/anaconda3/envs/siren/lib/python3.6/site-packages/torch/autograd/__init__.py", line 30, in _make_grads
    + str(out.shape) + ".")
RuntimeError: Mismatch in shape: grad_output[0] has a shape of torch.Size([2006, 3]) and output[0] has a shape of torch.Size([2006, 1]).
```
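
For reference, a minimal standalone repro (with a toy decoder, not the repo's network; every name here is a placeholder) confirms that grad_outputs must match the shape of the outputs, not of the inputs:

```python
import torch

# Toy stand-in for the SDF decoder: maps (N, 3) points to (N, 1) SDF values.
decoder = torch.nn.Linear(3, 1)

points_batch = torch.randn(2006, 3, requires_grad=True)
sdf = decoder(points_batch)  # shape (2006, 1)

# Buggy: grad_outputs shaped like the input raises the mismatch error above.
try:
    torch.autograd.grad(outputs=sdf, inputs=points_batch,
                        grad_outputs=torch.ones_like(points_batch),
                        create_graph=True, retain_graph=True)
except RuntimeError as e:
    print(e)  # Mismatch in shape: ... [2006, 3] ... [2006, 1]

# Fixed: grad_outputs shaped like the output.
grad_tensor = torch.autograd.grad(outputs=sdf, inputs=points_batch,
                                  grad_outputs=torch.ones_like(sdf),
                                  create_graph=True, retain_graph=True)
print(grad_tensor[0].shape)  # torch.Size([2006, 3]): one gradient per point
```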

Questions about paper

Hi @B1ueber2y @pengsongyou, thanks for the great work! I have two questions regarding your paper:

  1. For video-sequence-supervised reconstruction, you mention that you did not use masks. Why, then, does the final predicted shape contain only the foreground object? I assume the shape is random at initialization and the photometric loss is applied to every pixel of each pair of ground-truth images.
  2. It seems that you always optimize the latent code rather than the model. To my understanding, this means every new object requires its own optimization (see the sketch below). Is there a reason you didn't optimize the model itself and use an image encoder to condition the SDF network on an input image?
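
For concreteness, here is how I understand the per-object optimization, as a minimal sketch under my own assumptions (the decoder, renderer, and target image are dummy placeholders, not your actual code):

```python
import torch

# Frozen pretrained decoder (placeholder): only the latent code is trainable.
decoder = torch.nn.Sequential(torch.nn.Linear(256 + 3, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 1))
for p in decoder.parameters():
    p.requires_grad_(False)  # the network weights are never updated

def render(latent):
    # Dummy stand-in for differentiable sphere tracing: any differentiable
    # map from the latent code to an image keeps the loop well-defined.
    pts = torch.rand(32 * 32, 3)
    sdf = decoder(torch.cat([latent.expand(pts.shape[0], -1), pts], dim=1))
    return sdf.view(32, 32)

target = torch.zeros(32, 32)                      # observed image (dummy)
latent = torch.zeros(1, 256, requires_grad=True)  # one code per object
optimizer = torch.optim.Adam([latent], lr=1e-2)   # optimizes only the latent

for it in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(render(latent), target)  # photometric
    loss.backward()
    optimizer.step()
```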

Thank you!

How did you train the DeepSDF models?

Hi, thanks for the great work!

Following the DeepSDF repo (https://github.com/facebookresearch/DeepSDF), I preprocessed the ShapeNetV2 data and trained the network on the Car category.

Surprisingly, compared with the pretrained models you provide, the meshes I get from generate_training_meshes.py are much worse.
Did you just follow the DeepSDF code and run it as-is?

Another minor issue: the pretrained latent codes you provide have a different length than the training data (2788 vs. 2807). If I understand correctly, the latent codes should be in one-to-one correspondence with the training shapes. A quick way to check is sketched below.
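
This is only my assumption about the checkpoint layout (standard DeepSDF saves latent codes under LatentCodes/ and splits under examples/splits/; adjust the paths and keys if yours differ), but the count mismatch can be checked like this:

```python
import json
import torch

# Count the saved latent codes (DeepSDF-style checkpoint; key/path assumed).
ckpt = torch.load("LatentCodes/latest.pth", map_location="cpu")
latents = ckpt["latent_codes"]
num_codes = (latents["weight"].shape[0] if isinstance(latents, dict)
             else latents.shape[0])

# Count the shapes listed in the training split.
with open("examples/splits/sv2_cars_train.json") as f:
    split = json.load(f)
num_shapes = sum(len(ids) for classes in split.values()
                 for ids in classes.values())

print(num_codes, num_shapes)  # I get 2788 vs. 2807 here
```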

Thanks!

Confused about normal map rendering

Thanks for sharing your work with the community!

I have a quick question: in the original paper, in Eq. 3, you mention that normal maps are computed using finite differences on the deep signed distance function.

However, digging through your code, it seems that normals are instead computed either analytically from the deep signed distance field (use_depth2normal = False) or by finite differences on the depth map (use_depth2normal = True).

I'm curious to hear your insight on these three alternative ways of computing the same quantity! I sketch my understanding of the first two below.
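
To make sure I'm reading the code right, here is a minimal sketch of the first two variants on a toy sphere SDF (f stands in for the deep network; the depth-map variant would instead finite-difference neighboring pixels of the rendered depth):

```python
import torch

def f(x):
    # Toy SDF standing in for the deep network: a unit sphere.
    return x.norm(dim=-1, keepdim=True) - 1.0

x = torch.tensor([[0.0, 0.0, 2.0]], requires_grad=True)  # query point

# (a) Analytic gradient of the SDF via autograd (use_depth2normal = False).
sdf = f(x)
n_analytic = torch.autograd.grad(sdf, x, grad_outputs=torch.ones_like(sdf))[0]
n_analytic = torch.nn.functional.normalize(n_analytic, dim=-1)

# (b) Central finite differences on the SDF (Eq. 3 in the paper).
eps = 1e-4
offsets = eps * torch.eye(3)
with torch.no_grad():
    n_fd = torch.stack([(f(x + o) - f(x - o)).squeeze() / (2 * eps)
                        for o in offsets], dim=-1)
    n_fd = torch.nn.functional.normalize(n_fd, dim=-1)

print(n_analytic, n_fd)  # both ~ [0, 0, 1]
```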

Failed while running DeepSDF's preprocess_data.py

I failed while running preprocess_data.py from the DeepSDF project. There are 22 open issues on that repo, and it seems that none of the askers solved their problems. Have you ever run into the output data folder being empty? It's very strange. I'd appreciate it if you could tell me how you got the results successfully!
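
In case it helps, the first thing I would check (a guess on my part: preprocess_data.py shells out to compiled C++ binaries, which can fail silently if the build step didn't succeed; paths below assume the default DeepSDF build layout):

```python
import os
import subprocess

# Verify the preprocessing executables exist and at least start up; running
# them without arguments should print a usage message rather than crash.
for exe in ["bin/PreprocessMesh", "bin/SampleVisibleMeshSurface"]:
    if not os.path.isfile(exe):
        print(f"missing executable: {exe} -- did the C++ build succeed?")
        continue
    proc = subprocess.run([exe], capture_output=True, text=True)
    print(exe, "exit code:", proc.returncode)
    print((proc.stderr or proc.stdout).strip()[:200])
```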
