soubhiksanyal / now_evaluation

This is the official repository for evaluation on the NoW Benchmark Dataset. The goal of the NoW benchmark is to introduce a standard evaluation metric to measure the accuracy and robustness of 3D face reconstruction methods from a single image under variations in viewing angle, lighting, and common occlusions.
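For intuition, the evaluation first rigidly aligns each predicted mesh to the ground-truth scan using a small set of landmarks before measuring distances. Below is a minimal sketch of the standard closed-form similarity alignment (Umeyama's method) commonly used for such landmark-based pre-alignment; it is an illustration under that assumption, not the repository's exact implementation:

```python
import numpy as np

def similarity_align(pred_lmks, gt_lmks):
    """Closed-form similarity alignment (Umeyama) of predicted landmarks
    onto ground-truth landmarks. Both inputs are (N, 3) arrays; returns
    scale s, rotation R, translation t such that gt ~ s * R @ pred + t."""
    pred = np.asarray(pred_lmks, dtype=float)
    gt = np.asarray(gt_lmks, dtype=float)
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    pc, gc = pred - mu_p, gt - mu_g
    # Cross-covariance between the centered point sets.
    M = gc.T @ pc
    U, S, Vt = np.linalg.svd(M)
    # Reflection guard: force det(R) = +1.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt
    s = (S * np.diag(D)).sum() / (pc ** 2).sum()
    t = mu_g - s * R @ mu_p
    return s, R, t
```

Given `s, R, t`, the predicted vertices would be mapped by `s * verts @ R.T + t` before any scan-to-mesh distance is computed.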

Home Page: https://now.is.tue.mpg.de/

Language: Python 100.00%

Topics: python, python3, computer-vision, 3d-face-alignment, 3d-reconstruction, face-alignment, benchmark-datasets, single-image-reconstruction, 2d-3d, face-reconstruction

now_evaluation's People

Contributors

neelays, radekd91, soubhiksanyal, timobolkart


now_evaluation's Issues

metrical evaluation

Hi, does this code compute the non-metrical evaluation? If so, how can the metrical evaluation be computed?

NotImplementedError: Unknown mesh file format.

Traceback (most recent call last):
  File "check_predictions.py", line 144, in <module>
    main(sys.argv)
  File "check_predictions.py", line 138, in main
    check_mesh_alignment(pred_mesh_filename, pred_lmk_filename, gt_mesh_filename, gt_lmk_filename)
  File "check_predictions.py", line 99, in check_mesh_alignment
    groundtruth_scan = Mesh(filename=gt_mesh_filename)
  File "/home/ly/anaconda3/envs/now_evaluation/lib/python3.7/site-packages/psbody/mesh/mesh.py", line 67, in __init__
    self.load_from_file(filename)
  File "/home/ly/anaconda3/envs/now_evaluation/lib/python3.7/site-packages/psbody/mesh/mesh.py", line 461, in load_from_file
    serialization.load_from_file(self, filename)
  File "/home/ly/anaconda3/envs/now_evaluation/lib/python3.7/site-packages/psbody/mesh/serialization/serialization.py", line 423, in load_from_file
    raise NotImplementedError("Unknown mesh file format.")
NotImplementedError: Unknown mesh file format.
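psbody-mesh raises this error when the file extension is not one its serialization code recognizes (in practice .obj and .ply; the exact supported set below is an assumption). A small pre-flight check can surface the offending extension with a clearer message before the Mesh constructor is ever called:

```python
from pathlib import Path

# Formats the psbody-mesh loader is known to parse; treat this set as an
# assumption and extend it if your installation supports more.
SUPPORTED_MESH_FORMATS = {".obj", ".ply"}

def assert_supported_mesh(path):
    """Raise early, naming the offending extension, instead of hitting the
    generic 'Unknown mesh file format.' deep inside the serialization code."""
    ext = Path(path).suffix.lower()
    if ext not in SUPPORTED_MESH_FORMATS:
        raise ValueError(f"Unsupported mesh extension {ext!r} for {path}")
    return ext
```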

Installation script

# Clone the evaluation code
git clone https://github.com/soubhiksanyal/now_evaluation.git

# Create and activate a Python virtual environment
python3 -m venv now_env
source now_env/bin/activate
pip install -U pip

# Install the Python dependencies
cd now_evaluation
pip install -r requirements.txt
cd ..

# Build the MPI-IS mesh library (requires the Boost headers)
sudo apt-get install libboost-dev
git clone https://github.com/MPI-IS/mesh.git
cd mesh
BOOST_INCLUDE_DIRS=/usr/include/boost make all
cd ..

# Copy the flame-fitting modules used by the evaluation
git clone https://github.com/Rubikplayer/flame-fitting.git
cp -r flame-fitting/smpl_webuser now_evaluation/smpl_webuser
cp -r flame-fitting/sbody now_evaluation/sbody

# Fetch Eigen 3.4.0 and build the scan-to-mesh distance extension
git clone https://gitlab.com/libeigen/eigen.git
cd eigen
git checkout 3.4.0
cd ..
cp -r eigen now_evaluation/sbody/alignment/mesh_distance/eigen
cd now_evaluation/sbody/alignment/mesh_distance
make

cumulative_errors.py

What is the expected format of the output file list and the corresponding list of method names?

Only neutral scans provided?

I found that the validation scans only provide a neutral scan for each subject, but validation_list includes all the challenges: expression, neutral, occlusion, etc.
Can the other challenges be evaluated against only neutral scans?
Is there another way to get the full scans?

rigidly aligned error

I followed the install guide and everything was correct.

In step 2, I ran check_predictions.py to check the landmarks. The script ran and produced output, but the result was not correct, even though I believe the 7 landmarks I provided are right.

The method I used for reconstruction was Deep3D (PyTorch); the 3DMM was the BFM model.

I have drawn the landmarks on the 2D image for comparison (the red points are the 7 landmarks):

[images omitted]

The result I got was not aligned:

[images omitted]

Then I ran the error-computation script, and the distance error I got was much larger than this method should produce:

[image omitted]

So I wonder whether the scan-to-mesh version was wrong, or something else. By the way, I used the master branch of flame-fitting and Eigen 3.4.

Does the head mesh affect the performance?

Hi, I am trying to evaluate the performance of some reconstruction methods on NoW. I noticed that some methods (e.g., DECA) output a complete head mesh, while the GT scans only contain the face region. As I am not clear about the details of the ScanToMesh function in the code, I am not sure whether it is necessary to output only the face-region mesh. Does it matter?
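For intuition (this is not the repository's exact implementation): scan-to-mesh style errors are typically measured from each ground-truth scan point to the closest point on the predicted mesh, so predicted regions with no nearby scan points (e.g., the back of the head) do not directly add to the error. A crude point-to-vertex sketch in plain NumPy illustrates the asymmetry:

```python
import numpy as np

def approx_scan_to_mesh_error(scan_points, mesh_vertices):
    """Distance from each scan point to its nearest predicted vertex.
    A rough stand-in for the true point-to-surface distance: extra mesh
    vertices far from the scan never become any point's nearest neighbour,
    so they do not inflate the error."""
    scan = np.asarray(scan_points, dtype=float)
    verts = np.asarray(mesh_vertices, dtype=float)
    # Full pairwise distance matrix (fine for small inputs;
    # use a KD-tree or true surface distance at scale).
    d = np.linalg.norm(scan[:, None, :] - verts[None, :, :], axis=-1)
    return d.min(axis=1)
```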

Issues with eigen

Hi,

While running make in now_evaluation/sbody/alignment/mesh_distance, I get many errors related to Eigen. Am I supposed to use a specific version of Eigen for the installation?

What's the meaning of the output?

When I run python check_predictions.py ./test/IMG_0045_detail.obj.obj ./test/IMG_0045.txt ./test/natural_head_rotation.000001.obj ./test/natural_head_rotation.000001_picked_points.pp, the output is:
1.56e+08 | dist: 1.56e+08 | s_reg: 0.00e+00
1.40e+08 | dist: 1.40e+08 | s_reg: 0.00e+00
1.13e+08 | dist: 1.13e+08 | s_reg: 0.00e+00
1.00e+08 | dist: 1.00e+08 | s_reg: 0.00e+00
9.64e+07 | dist: 9.64e+07 | s_reg: 0.00e+00
9.64e+07 | dist: 9.64e+07 | s_reg: 0.00e+00

What does this mean?
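Each printed line appears to be one iteration of the rigid-alignment optimization: the first number is the current objective value, dist the scan-to-mesh distance term, and s_reg a scale regularizer. Values around 1e8 usually mean the optimization is far from converged, often because the prediction and the scan are in different units. A minimal sketch of rescaling before alignment (the factor 1000.0 is purely an assumption; check your method's actual output units):

```python
import numpy as np

def rescale_mesh_vertices(vertices, factor=1000.0):
    """Convert predicted vertices between units before alignment, e.g.
    meters -> millimeters. The factor 1000.0 is a hypothetical example;
    verify the actual units of your reconstruction method."""
    return np.asarray(vertices, dtype=float) * factor
```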

Performing evaluation with DECA

Hello guys, thank you for releasing the code for your amazing research!
I have been trying to perform the alignment with results obtained with the DECA technique.
However, I have stumbled into issues regarding the alignment with the ground-truth scans.

We were using the landmarks available with the face model; however, they do not seem to be the landmarks you used during the evaluation of DECA. How did you manage the quantitative evaluation with DECA, and which landmarks did you use? Do you have a file with the vertex indices?

evaluation dataset

Could someone provide the images corresponding to the eval/test lists?

The full dataset is too big to download.

Performing evaluation with Deng et al

Hello guys, thanks again for releasing this benchmark! I have been trying to reproduce the results you obtained in the evaluation and I have stumbled into some issues.

The Deng et al. technique uses a facial landmark extractor. We tried dlib; however, in some cases it fails to extract landmarks. Did you use dlib, or another technique?

And in the cases where the landmark extractor failed, did your team eliminate the image from the evaluation process?

I tried dlib; it failed on 70 images of the test set. We excluded those images from the evaluation, and the error we obtained was much higher than the one reported on the evaluation set. We are trying to find the source of the error.

Again, thank you for your attention!

expression coefficients set to zero?

Thanks for releasing the code for your amazing research!

I notice that the scans only show people in neutral expressions.

So when I use Deep3D with BFM to reconstruct the mesh, should I set the expression coefficients to zero, given that the validation scans only have neutral expressions? And what about the pose coefficients?

Thank you very much!

rigid_scan_2_mesh_alignment(scan, mesh) very slow

Hi, I am trying NoW with the evaluation code. I generated the BFM shape model and landmarks from the validation image set and passed them to the evaluation code. It shows something like this:

5.60e+07 | dist: 5.60e+07 | s_reg: 0.00e+00
4.75e+07 | dist: 4.75e+07 | s_reg: 0.00e+00
2.92e+07 | dist: 2.92e+07 | s_reg: 0.00e+00
7.28e+06 | dist: 7.28e+06 | s_reg: 0.00e+00
3.22e+06 | dist: 3.22e+06 | s_reg: 0.00e+00
1.67e+06 | dist: 1.67e+06 | s_reg: 0.00e+00
4.79e+05 | dist: 4.79e+05 | s_reg: 0.00e+00
1.87e+05 | dist: 1.87e+05 | s_reg: 0.00e+00
1.15e+05 | dist: 1.15e+05 | s_reg: 0.00e+00
8.80e+04 | dist: 8.80e+04 | s_reg: 0.00e+00
7.32e+04 | dist: 7.32e+04 | s_reg: 0.00e+00

It takes a few minutes to print each new line of the above output, and no result is returned.
Could you please help me with it?

The evaluation speed is slow

Dear authors, thank you for providing the code.
My issue is that it seems quite time-consuming to run compute_error.py: it took me about 1 to 2 hours to process one image. Is this normal?

Thank you in advance for your help.

About the reply time for the NoW test

Thanks for your great work!

I would like to ask how long it usually takes to reply to the email for the NoW benchmark test-set evaluation. I may have some urgent needs, thank you!

Hello, about BFM model evaluation

Thank you for your work. I want to test methods based on the BFM model, such as 3DDFA, deepfacecon, or SynergyNet.
What is the index correspondence of the landmarks?
Should I set the mean expression and the expression coefficients to zero, and keep the influence of the rotation and translation matrices?
Looking forward to your answers!
