simongiebenhain / nphm
[CVPR'23] Learning Neural Parametric Head Models
Home Page: https://simongiebenhain.github.io/NPHM/
License: Other
Hi,
Thank you very much for the great dataset.
In the paper, you mentioned that, "as part of our dataset we also provide the RGB frames captured during the 3D scanning process". I am wondering where I can find the RGB images per scan.
BR
Hi, it's a fantastic work!
I have a question about training NPHM on the FaceScape dataset. First, I prepared the neutral-expression data to train the identity network, and it obtains a nice result, as shown in the image below.
I then used the registered meshes of the FaceScape dataset and ran scripts/data_processing/sample_deformation_field.py to prepare samples, then trained the forward deformation field. However, the results on the training set seem to be wrong. Could you please help me figure out the problem?
Thanks!
Hi, it's a fantastic work!
I have a question about how to obtain the registration mesh when preparing the data: for example, how do you estimate the FLAME identity and expression parameters from an input scan, and how do you upsample the vertices?
As the title mentions, I tested a scan with straight long hair, but the output is always curly hair no matter what the parameters are. The face shape and expression are also not good enough.
Input:
NPHM outputs:
Additionally, I find that NPHM cannot handle long bangs.
Do you have some suggestions to handle this kind of situation?
I upload the test scan ply if you need it to check.
scan.zip
Super cool implementation, thanks for that! How would you go about the inverse fitting process (from Neural Parametric Head Models to photorealistic portrait)?
Hello, dear authors, I've read your paper and downloaded the data, great job!
I ran the demo with the publishable FaceScape data. I fed fitting_pointclouds.py an MVS point cloud that had been aligned with the NPHM coordinate system.
The neutral-expression result looks good, yet the other expressions seem to be wrong...
I followed the steps described in the section Fitting Point Clouds and used your pretrained models. Could you please help me figure out the problem? Thank you very much!
As the title mentions, are there methods to obtain the 51 FLAME landmarks? Many evaluations need landmark results.
Hi,
Thank you for your code and dataset. In your supplementary material, you mention that you trained on a 3090 GPU. However, when I trained on my 4090, the memory usage exceeded 24 GB. How did you fit the model onto a 3090?
Also, do you have any suggestions for reducing GPU memory usage? I tried reducing the batch size in the nphm.yaml file from 32 to 8, but it didn't help.
Hi, after reading your appendix, I want to generate custom point clouds from depth maps captured with my own depth camera. How do you convert a depth map to a point cloud so that I can test my own data?
Can you point out which script or method you used for this conversion? Thanks for your time!
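For reference while waiting for an answer: the standard way to back-project a depth map is the pinhole camera model using the camera's intrinsics. This is a generic numpy sketch, not the script the authors used; the intrinsics fx, fy, cx, cy are assumed to come from your depth camera's calibration.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) to a 3D point cloud
    via the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat plane 1 m away, seen through a 4x4 camera
depth = np.ones((4, 4))
pts = depth_to_pointcloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(pts.shape)  # (16, 3)
```

Note that the resulting cloud lives in the camera frame; it would still need to be aligned to the NPHM coordinate system before fitting.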
The generated __pycache__/ folders do not need to be checked in. Those, along with build/ and *.egg-info/, could be added to .gitignore.
Not important... just linting.
Hello, thank you for sharing your repo.
I have three questions about your repo. First, can it output .obj or .ply files? Second, can your model handle texture? And finally, can I use a single RGB image of my own as input?
Thanks.
Hi,
Thanks for your great work. I have one question: when fitting a head point cloud, do I only need to obtain the global identity latent code and the expression latent code, with the local identity latent codes not needed?
Love the work; great job. Could you suggest methodologies for obtaining a point cloud from monocular video?
How do you obtain the FLAME model fit? Which solution did you use?
Is simply scaling the FLAME model by a factor of 4 enough, without any rotation or translation?
And how did you align the point cloud with the FLAME model?
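On the alignment question: if corresponding 3D landmarks are available on both the point cloud and the FLAME mesh, a similarity transform (scale, rotation, translation) can be estimated with Umeyama's method. This is a generic numpy sketch under that assumption, not the repo's actual alignment code; the function name and toy data are illustrative.

```python
import numpy as np

def similarity_transform(src, dst):
    """Estimate scale s, rotation R, translation t minimizing
    ||dst - (s * src @ R.T + t)||^2 (Umeyama's method)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)            # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))    # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    var_s = (xs ** 2).sum() / len(src)    # total variance of the source
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t

# Toy check: a cloud scaled by 4 with a known shift is recovered exactly
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
dst = 4.0 * src + np.array([0.1, -0.2, 0.3])
s, R, t = similarity_transform(src, dst)
print(round(s, 3))  # 4.0
```

If the NPHM frame really is FLAME scaled by a factor of 4 as the question suggests, the recovered scale should come out near 4 after landmark alignment.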
Have you tried to test any identity who wears glasses?
Do you have any identity with glasses in your training dataset?
I would like to run such test, but it is not clear how to pre-process it.
The paper claims that the method takes a point cloud as input, but there is no example of that in the documentation or code. The test/dummy data consists of already-prepared .ply meshes: FLAME, scan, and registration. But what is the use of the project if it already requires such good meshes? How to get a mesh from a point cloud is still unclear.
Great work!
As the title mentions, the scores of the pretrained models reported in the paper are much better than those of the models released on GitHub.
Do you plan to release the models trained on all identities?
Thanks.
Hi, congratulations. I want to test the inference code with a point cloud as input. Could you provide some advice? Thanks very much.