
simongiebenhain / nphm

171 stars · 8 forks · 14 issues · 215.75 MB

[CVPR'23] Learning Neural Parametric Head Models

Home Page: https://simongiebenhain.github.io/NPHM/

License: Other

Languages: Python 99.49% · GLSL 0.36% · Shell 0.15%
Topics: 3d-reconstruction, neural-fields, 3d-deep-learning, 3d-face-reconstruction, cvpr-2023, implicit-representations, morphable-model, parametric

nphm's Introduction

My Simple Personal Website

Adapted from "Allan Lab" Website Template

This website is powered by Jekyll and some Bootstrap and Bootswatch. We tried to make it simple yet adaptable, so that it is easy for you to use it as a template. Please feel free to copy and modify it for your own purposes. You don't have to link to us or mention us (but of course we appreciate it).

Go to this website for more information.

Copyright Allan Lab and Simon Giebenhain. Code released under the MIT License.

nphm's People

Contributors

simongiebenhain


nphm's Issues

Training on Facescape Dataset

Hi, this is fantastic work!

I have a question about training NPHM on the FaceScape dataset. First, I prepared the neutral-expression data to train the identity network, and it obtains a nice result, as shown in the image below.
[image]

facescape_neutral.zip

Then I used the registered meshes of the FaceScape dataset, ran scripts/data_processing/sample_deformation_field.py to prepare samples, and trained the forward deformation fields. But the results on the training set seem to be wrong. Could you please help me figure out the problem?

[images]

facescape_expression.zip

Thanks!

RGB Images

Hi,

Thank you very much for the great dataset.

In the paper, you mentioned that, "as part of our dataset we also provide the RGB frames captured during the 3D scanning process". I am wondering where I can find the RGB images per scan.

BR

How to get a registration mesh from scan data?

Hi, this is fantastic work!
I have a question about how to get the registration mesh when preparing the data: for example, how do you estimate the identity and expression parameters of FLAME from an input scan, and how do you upsample the vertices?
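Registration pipelines of this kind typically start by optimizing FLAME's identity and expression parameters directly against the scan. Below is a minimal sketch of that idea, assuming the smplx package's FLAME wrapper and a simple nearest-neighbor point loss; it is not the authors' actual pipeline, which, as far as the paper describes, also involves landmarks and non-rigid refinement.

```python
import torch
import smplx  # pip install smplx; the FLAME model files must be obtained separately

# Hypothetical setup: adjust the model path to wherever your FLAME files live.
flame = smplx.create('models/', model_type='flame', gender='neutral')

scan_pts = torch.rand(10000, 3)  # placeholder: points sampled from your scan

shape = torch.zeros(1, 10, requires_grad=True)  # identity coefficients (betas)
expr = torch.zeros(1, 10, requires_grad=True)   # expression coefficients
opt = torch.optim.Adam([shape, expr], lr=1e-2)

for _ in range(500):
    opt.zero_grad()
    verts = flame(betas=shape, expression=expr).vertices[0]
    # one-sided chamfer: each scan point to its nearest template vertex
    dists = torch.cdist(scan_pts, verts).min(dim=1).values
    loss = dists.mean() + 1e-3 * (shape.square().sum() + expr.square().sum())
    loss.backward()
    opt.step()
```

Upsampling the registered template is commonly done by mesh subdivision (e.g. trimesh.remesh.subdivide), followed by a non-rigid refinement against the scan.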

Fitting and model latent code question

Hi,

Thanks for your great work. I have one question: to fit a head point cloud, do I only need the global identity latent code and the expression latent code? Is the local identity latent code not needed?
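For background: fitting in auto-decoder models of this family generally means freezing the network weights and optimizing the latent codes so that the observed points evaluate to roughly zero signed distance. A rough sketch of that loop, with all names and dimensions illustrative rather than the repo's actual API:

```python
import torch

def fit_latents(model, points, dim_id=512, dim_ex=200, iters=1000):
    """Optimize global identity/expression codes for a frozen SDF network
    so that `points` (N, 3) lie on its zero level set. Illustrative only."""
    z_id = torch.zeros(1, dim_id, requires_grad=True)
    z_ex = torch.zeros(1, dim_ex, requires_grad=True)
    opt = torch.optim.Adam([z_id, z_ex], lr=5e-3)
    for _ in range(iters):
        opt.zero_grad()
        sdf = model(points, z_id, z_ex)  # should be ~0 on the observed surface
        loss = sdf.abs().mean() \
             + 1e-4 * (z_id.square().sum() + z_ex.square().sum())
        loss.backward()
        opt.step()
    return z_id.detach(), z_ex.detach()
```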

Can it export a 3D OBJ?

Hello, thank you for sharing your repo.
I have three questions about your repo. First, can it output an OBJ or PLY file? Second, can your model produce texture? And finally, can I use a single RGB image of my own as input?
Thanks.
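On the export question, for what it's worth: implicit SDF models are usually meshed by evaluating the field on a regular grid and running marching cubes, after which OBJ/PLY export is straightforward. A minimal sketch; the model call, latent codes, and the [-1, 1] bounding box are placeholders:

```python
import numpy as np
import torch
import trimesh
from skimage.measure import marching_cubes

def export_mesh(model, z_id, z_ex, res=256, path='head.obj'):
    """Mesh the zero level set of an SDF network and export it.
    `model`, `z_id`, `z_ex` stand in for the fitted network and codes."""
    axis = np.linspace(-1.0, 1.0, res)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), axis=-1)
    pts = torch.from_numpy(grid.reshape(-1, 3)).float()
    with torch.no_grad():  # evaluate in chunks if the grid is large
        sdf = model(pts, z_id, z_ex).reshape(res, res, res).cpu().numpy()
    verts, faces, _, _ = marching_cubes(sdf, level=0.0)
    verts = verts / (res - 1) * 2.0 - 1.0  # grid indices -> [-1, 1] coordinates
    trimesh.Trimesh(vertices=verts, faces=faces).export(path)  # format by extension
```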

How do you fit the model into a 3090 GPU during training?

Hi,

Thank you for your code and dataset. In the supplementary material, you mention that training was done on a 3090 GPU. However, when I train with my 4090, memory usage exceeds 24 GB. How did you fit the model on a 3090?

Besides, do you have any suggestions for reducing GPU memory usage? I tried reducing the batch size in nphm.yaml from 32 to 8, but it didn't help.
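Not specific to this codebase, but two generic PyTorch levers often recover this much memory: mixed precision and gradient accumulation. Also note that in point-sampling pipelines memory frequently scales with the number of samples drawn per item rather than the batch size, so a separate sample-count setting in the config may matter more. A hedged sketch (loader, model, compute_loss, and optimizer are illustrative):

```python
import torch

scaler = torch.cuda.amp.GradScaler()
accum = 4  # effective batch = micro-batch size * accum

for step, batch in enumerate(loader):
    with torch.cuda.amp.autocast():     # half-precision activations
        loss = compute_loss(model, batch) / accum
    scaler.scale(loss).backward()       # gradients accumulate across micro-batches
    if (step + 1) % accum == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```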

Problems with FaceScape data

Hello, dear authors. I've read your paper and downloaded the data; great job!
I ran the demo with the publishable FaceScape data, feeding fitting_pointclouds.py an MVS point cloud that had been aligned with the NPHM coordinate system.
The neutral-expression result looks good, yet the other expressions seem to be wrong.
I followed the steps described in the section "Fitting Point Clouds" and used your pretrained model. Could you please help me figure out the problem? Thank you very much!
[image: facescpae_344_6_jaw_right]

facescpae_344_6_jaw_right.zip

Point Cloud - Custom Dataset

Love the work; great job. Could you suggest methodologies for obtaining a point cloud from monocular video?

Identity with glasses

Have you tried testing any identity who wears glasses?
Do you have any identities with glasses in your training dataset?

I would like to run such a test, but it is not clear how to pre-process it.

[image]

The paper claims that the method takes a point cloud as input, but there is no example of this in the documentation or code. The test/dummy data uses prepared PLY meshes: FLAME, scan, and registration. But what is the use of the project if one already has such good meshes? How to get a mesh from a point cloud is still unclear.

linting note: __pycache__

Install-generated folders such as __pycache__/ do not need to be checked in. That, build/, and *.egg-info/ could be added to .gitignore, as in the snippet below.
Not important... just linting.
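A minimal .gitignore covering exactly the entries mentioned above:

```
__pycache__/
build/
*.egg-info/
```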

Straight long hair has bad results

As the title says, I tested straight long hair, and the output is always curly hair no matter what the parameters are. The face shape and expression are also not good enough.
Input: [screenshot]
NPHM outputs: [screenshot]

Additionally, I find that NPHM cannot deal with long bangs.
Do you have any suggestions for handling this kind of situation?

I have uploaded the test scan PLY in case you need it to check.
scan.zip

FLAME model

How do you get the FLAME model? Which solution was used?

Is it enough to just scale the FLAME model by 4, without any rotation or translation?

And how did you align the point cloud with the FLAME model?
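The coordinate-frame relationship is best confirmed by the authors, but if NPHM space really is FLAME space under a uniform scale of 4, as the question assumes, the transform itself is trivial. A sketch using trimesh, with hypothetical file names:

```python
import trimesh

mesh = trimesh.load('flame_fit.ply', process=False)  # hypothetical FLAME-space mesh
mesh.vertices = mesh.vertices * 4.0                  # FLAME -> NPHM, per this thread
mesh.export('flame_fit_nphm.ply')
```

If the input point cloud is not already in FLAME's canonical frame, a rigid alignment (e.g. from facial landmarks, or ICP against a FLAME fit) would be needed first.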

How do you make point clouds using a depth map?

Hi, after reading your appendix, I want to generate custom point clouds from depth maps captured with my depth camera. How do you convert a depth map to a point cloud, so that I can test my own data?

Like this:
[image]

Could you point out which script or method you used for this conversion? Thanks for your time!
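For anyone after the standard recipe rather than the authors' exact script: back-projecting the depth map through the pinhole intrinsics yields a camera-frame point cloud. A minimal sketch, assuming depth in meters and known intrinsics fx, fy, cx, cy:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (N, 3) camera-frame point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading
```

Open3D provides the same operation as open3d.geometry.PointCloud.create_from_depth_image, if you'd rather not write it by hand.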
