
neuralgraph's Introduction

Neural Deformation Graphs


Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction
Aljaž Božič, Pablo Palafox, Michael Zollhöfer, Justus Thies, Angela Dai, Matthias Nießner
CVPR 2021 (Oral Presentation)

This repository contains the code for the CVPR 2021 paper Neural Deformation Graphs, a novel approach for globally-consistent deformation tracking and 3D reconstruction of non-rigid objects.

Specifically, we implicitly model a deformation graph via a deep neural network and impose per-frame viewpoint consistency as well as inter-frame graph and surface consistency constraints in a self-supervised fashion.

This results in a differentiable construction of a deformation graph that can handle the deformations present across the whole sequence.

Install all dependencies

  • Download the latest version of conda here.

  • To create a conda environment with all the required packages, run the following command:

conda env create -f resources/env.yml

The above command creates a conda environment with the name ndg.

  • Compile the external dependencies inside the external directory by executing:
conda activate ndg
./build_external.sh

The external dependencies are PyMarchingCubes, gaps and Eigen.
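Once the build finishes, a quick smoke test can confirm that the compiled PyMarchingCubes bindings are importable. This is a minimal sketch only; the module name marching_cubes follows the upstream PyMarchingCubes README and is worth double-checking against your build:

# Minimal import check for the compiled PyMarchingCubes bindings.
import numpy as np
import marching_cubes as mcubes  # module name per the PyMarchingCubes README (assumption)

# Extract the zero level set of a small synthetic SDF (a sphere of radius 0.3).
grid = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
sdf = np.sqrt((grid ** 2).sum(axis=0)) - 0.3
vertices, triangles = mcubes.marching_cubes(sdf, 0.0)
print(vertices.shape, triangles.shape)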

Generate data for visualization & training

In our experiments we use depth inputs from 4 camera views. These depth maps were captured with 4 Kinect Azure sensors. For quantitative evaluation we also used synthetic data, where 4 depth views were rendered from ground-truth meshes. In both cases, screened Poisson reconstruction (implemented in MeshLab) was used to obtain meshes for data generation. An example mesh sequence for the synthetic doozy character can be downloaded here.
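
For orientation, a similar reconstruction step can be sketched with Open3D's Poisson reconstruction. The snippet below is an illustrative substitute for the authors' MeshLab pipeline, not their actual setup, and the file names are placeholders:

# Illustrative stand-in for the MeshLab screened Poisson step (not the authors' pipeline).
import open3d as o3d

pcd = o3d.io.read_point_cloud("fused_depth_points.ply")  # placeholder: fused 4-view depth points
pcd.estimate_normals()
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("reconstructed.ply", mesh)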

To generate training data from these meshes, place them into the directory out/meshes/doozy. The following script then runs data generation, producing data samples in out/dataset/doozy:

./generate_data.sh
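
The resulting layout looks roughly like this (the file names inside the directories are illustrative):

out/
  meshes/
    doozy/     <- input mesh sequence goes here (e.g. 00000.ply, 00001.ply, ...)
  dataset/
    doozy/     <- generated data samples appear here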

Visualize neural deformation graphs using pre-trained models

After data generation you can already inspect the neural deformation graph estimation using a pre-trained model checkpoint. Place the checkpoint into the out/models directory and run the visualization:

./viz.sh

Reconstruction visualization can take a while; if you want to inspect only the graphs, you can uncomment the --viz_only_graph argument in viz.sh.

Within the Open3D viewer, you can toggle different display settings using these keys (a sketch of how such key bindings are typically registered in Open3D follows the list):

  • N: toggle graph nodes and edges
  • G: toggle ground truth
  • D: show the next frame
  • A: show the previous frame
  • S: toggle smooth shading
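
For orientation, this is roughly how such key callbacks can be registered with Open3D's VisualizerWithKeyCallback; the geometry and key below are placeholders, not the repository's actual viz.py code:

# Minimal sketch of Open3D key-callback registration (placeholder geometry and key).
import open3d as o3d

def toggle_wireframe(vis):
    opt = vis.get_render_option()
    opt.mesh_show_wireframe = not opt.mesh_show_wireframe
    return True  # request a redraw

vis = o3d.visualization.VisualizerWithKeyCallback()
vis.create_window()
mesh = o3d.geometry.TriangleMesh.create_sphere()
mesh.compute_vertex_normals()
vis.add_geometry(mesh)
vis.register_key_callback(ord("T"), toggle_wireframe)  # "T" is a placeholder key
vis.run()
vis.destroy_window()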

Train a model from scratch

You can train a model from scratch using the train_graph.sh and train_shape.sh scripts, in that order. Model checkpoints and TensorBoard stats are stored in out/experiments.

Optimize graph

To estimate a neural deformation graph from input observations, you need to specify the dataset to be used (inside out/dataset; it should be generated beforehand). Training can then be started with the following script:

./train_graph.sh

We ran all our experiments on an NVIDIA 2080 Ti GPU, for about 500k iterations. After the model has converged, you can visualize the optimized neural deformation graph using the viz.sh script.

To monitor convergence, you can visualize the loss curves with TensorBoard by running the following inside the out/experiments directory:

tensorboard --logdir=.

Optimize shape

To optimize shape, you need to initialize the graph with a pre-trained graph model. Inside train_shape.sh, set graph_model_path to the converged checkpoint of the graph model (the graph model usually converges at around 500k iterations). The multi-MLP model can then be optimized to reconstruct the shape geometry by running:

./train_shape.sh

Similar to graph optimization, shape optimization converges in about 500k iterations.
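
Before wiring graph_model_path into train_shape.sh, it can help to sanity-check the checkpoint file. The sketch below is illustrative only; the path is a placeholder and the stored keys depend on the training code:

# Illustrative inspection of a graph checkpoint (placeholder path; keys are an assumption).
import torch

checkpoint = torch.load("out/experiments/graph_run/checkpoints/500000.pt", map_location="cpu")
print(sorted(checkpoint.keys()))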

Citation

If you find our work useful in your research, please consider citing:

@article{bozic2021neuraldeformationgraphs,
title={Neural Deformation Graphs for Globally-consistent Non-rigid Reconstruction},
author={Bo{\v{z}}i{\v{c}}, Alja{\v{z}} and Palafox, Pablo and Zollh{\"o}fer, Michael and Dai, Angela and Thies, Justus and Nie{\ss}ner, Matthias},
journal={CVPR},
year={2021}
}

Related work

Some other related works on non-rigid reconstruction by our group:

License

The code from this repository is released under the MIT license, except where otherwise stated (i.e., Eigen).


neuralgraph's Issues

Evaluation code

Hi, thanks for sharing this great work!

I'm writing a paper related to your work and would really like a quantitative comparison with it. However, I can't find the code that computes the metrics for Table 1 in the paper. Could you also release the evaluation metric code?

Thanks!

leg switching problem

Hi Aljaz, I started testing the method on a new dataset. I trained the graph only, for 540,000 iterations, and I get good but also some bad deformations. The data is a person walking for 100 frames, and I deform the ground-truth mesh from frame 0 to all other 99 frames (I'm using code similar to the surface consistency loss and the newly added dense tracking). On several frames the legs are deformed such that they get switched from left to right; I see the graph nodes also switch. I suspect this is due to the symmetry and the harmonic movement: some pairs in the batch have the right and left legs close together, which drives the graph into this bad local minimum. I'm thinking about a possible solution by adding a collision / mesh stretch (ARAP) / semantic loss to prevent this; I'll try to contribute and add it.
I can share the data if needed, and I attached a PLY with the problem for close inspection:
mesh_trans.zip
[Screenshots: frame 0 deformed to frame 30 (still good); frame 0 deformed to frame 31 (leg switch), also zoomed and zoomed with the graph; a zoomed good frame where the pink, light green, and yellow nodes are back on the left.]

Comparison dynamic fusion

Hi there,

Thanks for this great work.
I have seen a few implementations of DynamicFusion on GitHub.
Could you say which implementation of DynamicFusion you used for the comparison with NeuralGraph in your paper?
OpenCV 4.5.2 also includes DynamicFusion, but it is not working.

Regards,
Yaqub

SDF grid acquisition different in code and in paper

Hi, thank you for sharing the code; the results look amazing.

I just have one question: in Section 4, Evaluation on Synthetic Data, it is said that the SDF is computed from 4 fixed-view depth images.

To mimic our real data capture setup, we render 4 fixed depth views for every frame of synthetic animation, and generate SDF grids from these 4 views.

However, in the code for SDF grid generation, it seems the SDF is computed directly from the mesh without any depth rendering: https://github.com/tomfunkhouser/gaps/blob/f9c51cb706444953c87ed991a2601ffb9173be94/apps/msh2grd/msh2grd.cpp#L74

So does this method require a perfect ground-truth SDF to train the neural graph?

tracking a keyframe

Hi, first of all, thanks for this amazing work.
I have a question about tracking: how can I deform a specific keyframe (with its mesh topology) to another frame in the sequence, or recreate the consistent coloring shown in the paper?
My guess is that I can sample the keyframe vertices and deform them using the NDGs of other frames (instead of the uniform grid sampling in viz.py).

Different results when retraining doozy dataset

Hi, I have retrained the graph model for doozy from scratch for about 600k iterations. I get different, somewhat worse results compared to the given checkpoint: the graph affinity changes on each frame, the legs are sometimes connected, the transforms are also different, and the mesh has some badly deformed parts (see below).
For deforming a source frame to a target frame I use the same code as in surfaceConsistencyLoss (see my fork: https://github.com/maorp/NeuralGraph/blob/maorp-transformCloud/node_sampler/loss.py).

[Screenshots: the retrained graph with legs connected; the loss curve; frame 30 deformed to frame 0.]
