
expose's Introduction

ExPose: Monocular Expressive Body Regression through Body-Driven Attention


[Project Page] [Paper] [Supp. Mat.]

SMPL-X Examples

[Short Video] [Long Video]

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use the ExPose data, model and software, (the "Data & Software"), including 3D meshes, images, videos, textures, software, scripts, and animations. By downloading and/or using the Data & Software (including downloading, cloning, installing, and any other use of the corresponding github repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Data & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

Description

EXpressive POse and Shape rEgression (ExPose) is a method that estimates 3D body pose and shape, hand articulation and facial expression of a person from a single RGB image. For more details, please see our ECCV paper Monocular Expressive Body Regression through Body-Driven Attention. This repository contains:

  • A PyTorch demo to run ExPose on images.
  • An inference script for the supported datasets.

Installation

To install the necessary dependencies, run the following command:

    pip install -r requirements.txt

The code has been tested with two configurations: a) with Python 3.7, CUDA 10.1, CuDNN 7.5 and PyTorch 1.5 on Ubuntu 18.04, and b) with Python 3.6, CUDA 10.2 and PyTorch 1.6 on Ubuntu 18.04.

Preparing the data

First, you should head to the project website and create an account. If you want to stay informed, please opt in to email communication, and we will reach out with any updates on the project. Once you have your account, log in and head to the download section to get the pre-trained ExPose model. Create a folder named data and extract the downloaded zip there. You should now have a folder with the following structure:

data
├── checkpoints
├── all_means.pkl
├── conf.yaml
├── shape_mean.npy
├── SMPLX_to_J14.pkl

For more information on the data, please read the data documentation. If you don't already have an account on the SMPL-X website, please register to be able to download the model. Afterward, extract the SMPL-X model zip inside the data folder you created above.

data
├── models
│   ├── smplx

You are now ready to run the demo and inference scripts.

Demo

We provide a script to run ExPose directly on images. To get you started, we provide a sample folder, taken from Pexels, which can be processed with the following command:

    python demo.py --image-folder samples \
    --exp-cfg data/conf.yaml \
    --show=False \
    --output-folder OUTPUT_FOLDER \
    --save-params [True/False] \
    --save-vis [True/False] \
    --save-mesh [True/False]

The script will use a Keypoint R-CNN from torchvision to detect people in the images and then produce a SMPL-X prediction for each using ExPose. You should see the following output for the sample image:
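
For reference, the detection step in isolation looks roughly like the following sketch (the image path is a placeholder; the demo's actual code wires this into its own data loader):

    import torch
    import torchvision
    from PIL import Image
    from torchvision.transforms.functional import to_tensor

    # Same detector family as the demo: Keypoint R-CNN pre-trained on COCO.
    detector = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
    detector.eval()

    img = to_tensor(Image.open('samples/example.jpg').convert('RGB'))
    with torch.no_grad():
        detections = detector([img])[0]

    # Keep confident person detections; each box becomes a crop from which
    # ExPose regresses SMPL-X parameters.
    boxes = detections['boxes'][detections['scores'] > 0.8]
    print(boxes)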

Sample HD Overlay

Inference

The inference script can be used to run inference on one of the supported datasets. For example, if you have a folder with images and OpenPose keypoints with the following structure:

folder
├── images
│   ├── img0001.jpg
│   └── img0002.jpg
├── keypoints
│   ├── img0001_keypoints.json
│   └── img0002_keypoints.json
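
Each *_keypoints.json file is expected to follow the standard OpenPose output format, i.e. a flat [x, y, confidence] list per person. A minimal reader sketch (the helper name is hypothetical):

    import json
    import numpy as np

    def load_openpose_keypoints(path):
        """Return one (25, 3) [x, y, confidence] array per detected person."""
        with open(path) as f:
            data = json.load(f)
        people = []
        for person in data['people']:
            body = np.array(person['pose_keypoints_2d']).reshape(-1, 3)
            people.append(body)
        return people

    keypoints = load_openpose_keypoints('folder/keypoints/img0001_keypoints.json')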

Then you can use the following command to run ExPose for each person:

python inference.py --exp-cfg data/conf.yaml \
           --datasets openpose \
           --exp-opts datasets.body.batch_size B datasets.body.openpose.data_folder folder \
           --show=[True/False] \
           --output-folder OUTPUT_FOLDER \
           --save-params [True/False] \
           --save-vis [True/False] \
           --save-mesh [True/False]

You can select if you want to save the estimated parameters, meshes, and renderings by setting the corresponding flags.

Citation

If you find this Model & Software useful in your research, we would kindly ask you to cite:

@inproceedings{ExPose:2020,
    title = {Monocular Expressive Body Regression through Body-Driven Attention},
    author = {Choutas, Vasileios and Pavlakos, Georgios and Bolkart, Timo and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {European Conference on Computer Vision (ECCV)},
    year = {2020},
    url = {https://expose.is.tue.mpg.de}
}
@inproceedings{SMPL-X:2019,
    title = {Expressive Body Capture: 3D Hands, Face, and Body from a Single Image},
    author = {Pavlakos, Georgios and Choutas, Vasileios and Ghorbani, Nima and Bolkart, Timo and Osman, Ahmed A. A. and Tzionas, Dimitrios and Black, Michael J.},
    booktitle = {Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}

Acknowledgments

We thank Haiwen Feng for the FLAME fits, Nikos Kolotouros, Muhammed Kocabas and Nikos Athanasiou for helpful discussions, Sai Kumar Dwivedi and Lea Muller for proofreading, Mason Landry and Valerie Callaghan for video voiceovers.

Contact

The code of this repository was implemented by Vassilis Choutas.

For questions, please contact [email protected].

For commercial licensing (and all related questions for business applications), please contact [email protected].

expose's People

Contributors

dimtzionas, vchoutas


expose's Issues

how to translate the smplx param to smpl

First of all, thanks for your great work!
As the title says, I am feeding the ExPose results into the SMPL library. After solving for the pose and betas, the trans parameter confuses me.
What is the trans in SMPL equal to in SMPL-X?
Thanks

What if I only have hand 2D keypoints

Hi, thanks for the nice work on the NETWORK version of SMPL-X; I have been following this series of projects for a long time.
Here is my question: if I only have hand labels (left/right) and hand 2D keypoints, can I fit just the hand part of your network to a hand-only image and get the hand parameters (s, R, t and MANO parameters)?

About CPU inference

Thank you for open sourcing this amazing piece of work.
I tried running the code on Windows 10 without a GPU, with use_cuda: false and some CUDA checks removed, but it threw the following error:

    return func(*args, **kwargs)
  File "demo.py", line 281, in main
    full_imgs_list, body_imgs, body_targets = batch
TypeError: cannot unpack non-iterable MemoryPinning object

I also tried running it on my Linux machine with a GPU, using both use_cuda: true and use_cuda: false. The inference time was almost the same, so I guess the use_cuda flag is not working.

Can we do the inference on the CPU?

How can I make use of the param.npz files?

Thanks for your remarkable work.
I have run demo.py on my own dataset and obtained the output parameter files XXX_params.npz. My question is: how can I get the rendered hd_image from the .npz files? In other words, can I use params like 'body_pose' and 'hand_pose' to get a rendered image? Do you have an API that solves this problem?
Thanks!
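
For reference, a minimal sketch of the usual route: load the saved parameters, push them through an SMPL-X layer from the smplx package, and hand the resulting vertices to a renderer. The key names below are assumptions; inspect params.files for the actual contents of your file.

    import numpy as np
    import torch
    import smplx

    params = np.load('XXX_params.npz')
    print(params.files)  # inspect what the demo actually saved

    model = smplx.build_layer('data/models', model_type='smplx')

    def get(name):
        # Hypothetical key names -- adjust to whatever params.files shows.
        if name in params.files:
            return torch.from_numpy(np.asarray(params[name])).float()
        return None

    output = model(betas=get('betas'),
                   global_orient=get('global_orient'),
                   body_pose=get('body_pose'))
    vertices = output.vertices.detach().numpy().squeeze()
    # vertices together with model.faces can be fed to any mesh renderer
    # (e.g. pyrender) to reproduce an overlay.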

Output_Folder only contains an empty folder

When I set --show to true, the following errors occur:
OpenGL.error.GLError: GLError(err = 12289,
baseOperation = eglInitialize,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7f198c31fa70>,
c_long(0),
c_long(0),
),
result = 0
)

I think it is related to OpenGL. I have OpenGL version 3.3; can you tell me which version you use?

OpenGL.error.GLError: GLError( err = 12289, baseOperation = eglInitialize, cArguments = ( <OpenGL._opaque.EGLDisplay_pointer object at 0x7f7565840040>, c_long(0), c_long(0), ), result = 0 ) libEGL warning: DRI2: failed to create dri screen libEGL warning: Not allowed to force software rendering when API explicitly selects a hardware device.

I get the following error when running it for a folder with only one image:

(expose) mona@goku:~/research/code/expose$ python demo.py --image-folder ~/Downloads/sample1     --exp-cfg data/conf.yaml     --show=True     --output-folder ~/Downloads/sample_out     --save-params True     --save-vis True     --save-mesh True
INFO - 2021-01-29 16:33:41,267 - acceleratesupport - No OpenGL_accelerate module loaded: No module named 'OpenGL_accelerate'
Processing with R-CNN: 100%|█████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.74s/it]
2021-01-29 16:33:46.356 | INFO     | __main__:main:241 - Saving results to: /home/mona/Downloads/sample_out
2021-01-29 16:33:46.359 | WARNING  | expose.models.attention.predictor:__init__:91 - Apply hand network on body: True
2021-01-29 16:33:46.359 | WARNING  | expose.models.attention.predictor:__init__:93 - Apply hand network on hands: True
2021-01-29 16:33:46.360 | WARNING  | expose.models.attention.predictor:__init__:95 - Predict hands: True
2021-01-29 16:33:46.360 | WARNING  | expose.models.attention.predictor:__init__:102 - Predict head: True
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:109 - Condition hand on body: True
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:112 - Condition hand wrist pose on body: True
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:117 - Condition hand finger pose on body: True
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:121 - Condition hand shape on body shape: False
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:169 - Condition head on body: True
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:170 - Condition expression on body: True
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:172 - Condition shape on body: False
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:173 - Condition neck pose on body: False
2021-01-29 16:33:46.360 | INFO     | expose.models.attention.predictor:__init__:175 - Condition jaw pose on body: True
2021-01-29 16:33:46.987 | INFO     | expose.models.attention.predictor:__init__:252 - Body model: SMPLXLayer(
  Gender: NEUTRAL
  Number of joints: 55
  Betas: 10
  Number of PCA components: 6
  Flat hand mean: False
  Number of Expression Coefficients: 10
  (vertex_joint_selector): VertexJointSelector()
)
2021-01-29 16:33:47.343 | INFO     | expose.models.backbone.hrnet:init_weights:487 - => init weights from normal distribution
2021-01-29 16:33:47.980 | WARNING  | expose.models.backbone.hrnet:load_weights:519 - => please download pre-trained models first!
2021-01-29 16:33:47.980 | WARNING  | expose.models.backbone.hrnet:load_weights:520 - data/network_weights/hrnet/imagenet/hrnet_w48-8ef0771d.pth does not exist!
2021-01-29 16:33:48.000 | INFO     | expose.models.common.networks:__init__:267 - Building iterative regressor with 3 stages
2021-01-29 16:33:48.001 | INFO     | expose.models.common.networks:__init__:282 - Detach mean: False
2021-01-29 16:33:48.134 | INFO     | expose.models.backbone.resnet:resnet18:113 - Loading pretrained ResNet-18
2021-01-29 16:33:48.258 | INFO     | expose.models.common.networks:__init__:267 - Building iterative regressor with 3 stages
2021-01-29 16:33:48.258 | INFO     | expose.models.common.networks:__init__:282 - Detach mean: False
2021-01-29 16:33:48.260 | INFO     | expose.models.attention.head_predictor:__init__:81 - Building head predictor with 3 stages
2021-01-29 16:33:48.398 | INFO     | expose.models.backbone.resnet:resnet18:113 - Loading pretrained ResNet-18
2021-01-29 16:33:48.471 | INFO     | expose.models.common.networks:__init__:267 - Building iterative regressor with 3 stages
2021-01-29 16:33:48.471 | INFO     | expose.models.common.networks:__init__:282 - Detach mean: False
2021-01-29 16:33:48.472 | INFO     | expose.models.attention.predictor:__init__:490 - 2D Head crop keyps loss: KeypointLoss(Norm type: L1)
2021-01-29 16:33:48.472 | INFO     | expose.models.attention.predictor:__init__:502 - 2D Left hand crop keyps loss: KeypointLoss(Norm type: L1)
2021-01-29 16:33:48.473 | INFO     | expose.models.attention.predictor:__init__:515 - 2D Left hand crop keyps loss: KeypointLoss(Norm type: L1)
2021-01-29 16:33:48.473 | INFO     | expose.models.common.smplx_loss_modules:__init__:48 - Stages to penalize: [-1]
2021-01-29 16:33:48.473 | INFO     | expose.models.common.smplx_loss_modules:__init__:400 - Stages to regularize: [-1]
2021-01-29 16:33:48.676 | INFO     | expose.utils.checkpointer:__init__:44 - Creating directory data/checkpoints
2021-01-29 16:33:48.676 | INFO     | expose.utils.checkpointer:load_checkpoint:90 - Load pretrained: False
2021-01-29 16:33:48.676 | WARNING  | expose.utils.checkpointer:load_checkpoint:93 - Loading checkpoint from data/checkpoints/model.ckpt!
2021-01-29 16:33:49.287 | WARNING  | expose.utils.checkpointer:load_checkpoint:121 - The following keys were not found: ['smplx.head_idxs', 'smplx.body_model.left_hand_components', 'smplx.body_model.right_hand_components', 'smplx.body_model.left_hand_mean', 'smplx.body_model.right_hand_mean', 'smplx.body_model.pose_mean', 'smplx.body_model.dynamic_lmk_bary_coords']
2021-01-29 16:33:49.287 | WARNING  | expose.utils.checkpointer:load_checkpoint:124 - The following keys were not expected: ['smplx.body_model.source_idxs', 'smplx.body_model.target_idxs', 'smplx.body_model.extra_joint_regressor', 'smplx.body_model.dynamic_lmk_b_coords', 'smplx.hand_predictor.hand_offset', 'smplx.hand_predictor.hand_model.extra_joints_idxs', 'smplx.hand_predictor.hand_model.faces_tensor', 'smplx.hand_predictor.hand_model.v_template', 'smplx.hand_predictor.hand_model.shapedirs', 'smplx.hand_predictor.hand_model.J_regressor', 'smplx.hand_predictor.hand_model.posedirs', 'smplx.hand_predictor.hand_model.parents', 'smplx.hand_predictor.hand_model.lbs_weights', 'smplx.hand_predictor.pca_decoder.pca_basis', 'smplx.hand_predictor.pca_decoder.inv_pca_basis', 'smplx.hand_predictor.pca_decoder.mean', 'smplx.head_predictor.head_offset', 'smplx.head_predictor.head_vertices_ids', 'smplx.head_predictor.head_model.faces_tensor', 'smplx.head_predictor.head_model.v_template', 'smplx.head_predictor.head_model.shapedirs', 'smplx.head_predictor.head_model.expr_dirs', 'smplx.head_predictor.head_model.J_regressor', 'smplx.head_predictor.head_model.posedirs', 'smplx.head_predictor.head_model.parents', 'smplx.head_predictor.head_model.lbs_weights', 'smplx.head_predictor.head_model.lmk_faces_idx', 'smplx.head_predictor.head_model.lmk_bary_coords', 'smplx.head_predictor.head_model.dynamic_lmk_faces_idx', 'smplx.head_predictor.head_model.dynamic_lmk_b_coords', 'smplx.head_predictor.head_model.neck_kin_chain', 'smplx.body_loss.edge_loss.gt_connections', 'smplx.body_loss.edge_loss.est_connections', 'smplx.hand_loss.edge_loss.gt_connections', 'smplx.hand_loss.edge_loss.est_connections', 'smplx.head_loss.edge_loss.gt_connections', 'smplx.head_loss.edge_loss.est_connections']
libEGL warning: DRI2: failed to create dri screen
libEGL warning: Not allowed to force software rendering when API explicitly selects a hardware device.
libEGL warning: DRI2: failed to create dri screen
Traceback (most recent call last):
  File "demo.py", line 554, in <module>
    main(
  File "/home/mona/venv/expose/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "demo.py", line 271, in main
    hd_renderer = HDRenderer(img_size=body_crop_size)
  File "/home/mona/research/code/expose/expose/utils/plot_utils.py", line 735, in __init__
    super(HDRenderer, self).__init__(**kwargs)
  File "/home/mona/research/code/expose/expose/utils/plot_utils.py", line 576, in __init__
    super(OverlayRenderer, self).__init__(faces=faces, img_size=img_size)
  File "/home/mona/research/code/expose/expose/utils/plot_utils.py", line 418, in __init__
    self.renderer = pyrender.OffscreenRenderer(
  File "/home/mona/venv/expose/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in __init__
    self._create()
  File "/home/mona/venv/expose/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
    self._platform.init_context()
  File "/home/mona/venv/expose/lib/python3.8/site-packages/pyrender/platforms/egl.py", line 177, in init_context
    assert eglInitialize(self._egl_display, major, minor)
  File "/home/mona/venv/expose/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 402, in __call__
    return self( *args, **named )
  File "/home/mona/venv/expose/lib/python3.8/site-packages/OpenGL/error.py", line 228, in glCheckError
    raise GLError(
OpenGL.error.GLError: GLError(
	err = 12289,
	baseOperation = eglInitialize,
	cArguments = (
		<OpenGL._opaque.EGLDisplay_pointer object at 0x7f7565840040>,
		c_long(0),
		c_long(0),
	),
	result = 0
)

Also, check https://stackoverflow.com/questions/65962222/solving-opengl-error-glerror-glerror-err-12289-opengl-opaque-egldisplay for more details

Cannot find "smplx_flip_correspondences.npz"

Hello, in "curated_fittings.py", variable "vertex_flip_correspondences" needs "data/smplx_flip_correspondences.npz" file, but I cannot find it in the folder. What does it use for? If possible, could you please upload it? Thanks!

bad result

Hi, does the shape parameter vary over such a narrow distribution? When I try moderately overweight and heavily overweight people, the results are far from reality. Is there a parameter I should set to resolve this, or is this the best the model can do?
(attached: hd_imgs, hd_orig_overlay and hd_overlay renderings for two examples)

How did you get the GT data from EHF?

In EHF.zip there are:

  • filename: "*_img.jpg" - RGB full-body image in JPG format.
  • filename: "*_img.png" - RGB full-body image in PNG format.
  • filename: "*_scan.obj" - 3D scan for this frame (multi-view stereo reconstruction).
  • filename: "*_align.ply" - Alignment of SMPL-X to the 3D scan, in the form of a 3D mesh (used as pseudo ground truth).
  • filename: "*_2Djnt.json" - 2D joints estimated with OpenPose (from monocular RGB).
  • filename: "*_2Djnt.png" - Visualization of OpenPose 2D joints.

Also, the length of 'v' (vertices) is 128343 and the length of 'f' (faces) is 255832.

How did you preprocess EHF?

about the curated_fits

Did you get the curated_fits with SMPLify-X? I'm running SMPLify-X on the LSP images that are included in your val.npz, using the keypoints provided in your curated_fits, but many images give bad results: some of them look good for the first 3 or 4 stages and get twisted after that. Can you help me solve this?
(attached: im1001 and screenshots of the twisted results)

smpl parameters

Hi, how can I get the SMPL parameters (pose and betas) from an image?

Rotation of SMPL-generated mesh

Hi, I'm trying to render the ExPose output in a web renderer for outreach purposes.
I'm facing an alignment problem between the original image and the generated model. Basically, I want to reproduce the rendered output image you produce, but building the whole 3D scene (body mesh + background image) in the browser.

I'm generating the body mesh based on the SMPL body_pose returned by ExPose. Then I feed this pose into VPoser, find the embedding and generate the mesh for that pose. The problem is that the generated pose no longer has the correct rotation: it captures the same body pose as detected by ExPose, but misses the correct orientation.
I'm trying to fit this mesh to the image by playing with the 'global_orient' value returned by ExPose, but I have not been able to find the correct positioning.

Am I doing something wrong? Am I correct in the assumption that global_orient + body_pose = the same mesh as the one returned by ExPose?

Thanks again for the help; the answer to the other issue I opened here helped me a lot.
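
For reference, a minimal sanity-check sketch with the smplx package: the same body_pose combined with a different global_orient gives the same articulated pose, rigidly rotated about the root, so both must be applied together to reproduce the ExPose mesh.

    import torch
    import smplx

    model = smplx.build_layer('data/models', model_type='smplx')

    # (1, 21, 3, 3) body_pose and (1, 1, 3, 3) global_orient as rotation
    # matrices; identities here stand in for ExPose's actual predictions.
    body_pose = torch.eye(3).repeat(1, 21, 1, 1)
    global_orient = torch.eye(3).view(1, 1, 3, 3)

    out = model(global_orient=global_orient, body_pose=body_pose)
    vertices = out.vertices  # change global_orient and the whole mesh rotates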

How can I get the joints?

@vchoutas your work is wonderful, but I have a question about the SMPL-X joints. Can I get the joints of the body, face and hands in a file? Thank you very much!
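
For reference, the smplx package exposes the full joint set (body, jaw/eyes and fingers) directly on its output object; a minimal sketch:

    import smplx

    model = smplx.build_layer('data/models', model_type='smplx')
    out = model()                   # neutral pose, zero shape

    joints = out.joints.detach()    # (1, J, 3) 3D joints in model space
    print(joints.shape)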

Retrain for Close Ups

ExPose seems to break on close-ups. Is there any way to retrain it to handle face close-ups? I know I could use FLAME, but FLAME doesn't properly anchor the neck.


behaviour on invalid image

If the supplied image does not contain a single human, will the model go haywire and generate a nonsensical body model?

Some warnings when I run demo.py

2020-09-07 18:47:29.548 | WARNING | expose.utils.checkpointer:load_checkpoint:122 - The following keys were not found: ['smplx.head_idxs', 'smplx.body_model.left_hand_components', 'smplx.body_model.right_hand_components', 'smplx.body_model.left_hand_mean', 'smplx.body_model.right_hand_mean', 'smplx.body_model.pose_mean', 'smplx.body_model.dynamic_lmk_bary_coords'] 2020-09-07 18:48:50.447 | WARNING | expose.utils.checkpointer:load_checkpoint:125 - The following keys were not expected: ['smplx.body_model.source_idxs', 'smplx.body_model.target_idxs', 'smplx.body_model.extra_joint_regressor', 'smplx.body_model.dynamic_lmk_b_coords', 'smplx.hand_predictor.hand_offset', 'smplx.hand_predictor.hand_model.extra_joints_idxs', 'smplx.hand_predictor.hand_model.faces_tensor', 'smplx.hand_predictor.hand_model.v_template', 'smplx.hand_predictor.hand_model.shapedirs', 'smplx.hand_predictor.hand_model.J_regressor', 'smplx.hand_predictor.hand_model.posedirs', 'smplx.hand_predictor.hand_model.parents', 'smplx.hand_predictor.hand_model.lbs_weights', 'smplx.hand_predictor.pca_decoder.pca_basis', 'smplx.hand_predictor.pca_decoder.inv_pca_basis', 'smplx.hand_predictor.pca_decoder.mean', 'smplx.head_predictor.head_offset', 'smplx.head_predictor.head_vertices_ids', 'smplx.head_predictor.head_model.faces_tensor', 'smplx.head_predictor.head_model.v_template', 'smplx.head_predictor.head_model.shapedirs', 'smplx.head_predictor.head_model.expr_dirs', 'smplx.head_predictor.head_model.J_regressor', 'smplx.head_predictor.head_model.posedirs', 'smplx.head_predictor.head_model.parents', 'smplx.head_predictor.head_model.lbs_weights', 'smplx.head_predictor.head_model.lmk_faces_idx', 'smplx.head_predictor.head_model.lmk_bary_coords', 'smplx.head_predictor.head_model.dynamic_lmk_faces_idx', 'smplx.head_predictor.head_model.dynamic_lmk_b_coords', 'smplx.head_predictor.head_model.neck_kin_chain', 'smplx.body_loss.edge_loss.gt_connections', 'smplx.body_loss.edge_loss.est_connections', 'smplx.hand_loss.edge_loss.gt_connections', 'smplx.hand_loss.edge_loss.est_connections', 'smplx.head_loss.edge_loss.gt_connections', 'smplx.head_loss.edge_loss.est_connections']

Is it safe to ignore these warnings?

Generate Female / Male meshes

Hello and thank you for this awesome work!

Can you please tell me if it's possible to generate models that use male or female shapes? I tried enforcing this through the SMPLX_FEMALE.pkl and SMPLX_FEMALE.npz files, but it still generated neutral meshes.

Best regards!
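
For reference: in the smplx package, gender is chosen when the model object is built, not at call time; a minimal sketch, assuming the gendered model files sit next to the neutral one. Note that the demo logs in other issues below show the body model ExPose itself loads is the NEUTRAL SMPL-X, so its predicted betas are shape parameters in the neutral model's space.

    import smplx

    # 'data/models' should contain models/smplx/SMPLX_FEMALE.npz etc.
    female = smplx.create('data/models', model_type='smplx', gender='female')
    male = smplx.create('data/models', model_type='smplx', gender='male')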

Bent knees of reconstructed model

The reconstructed 3D model has bent knees, although the person in the image is standing straight. Any ideas?
MeshLab is used for visualization of the model.

(screenshot attached)

Evaluation codes

Hi,

Could you tell me how to evaluate the model to get the metrics reported in the paper?

In addition, I notice that in the code of the EHF dataset you use a file named 'gt_keyps.npz', but I did not find it on the website of the EHF dataset. Could you tell me how to get this file?

Thanks!

demo error

Hi, when I run the demo I get the error below:
Traceback (most recent call last):
  File "demo.py", line 32, in <module>
    import open3d as o3d
  File "/home/melih/anaconda3/envs/pose/lib/python3.6/site-packages/open3d/__init__.py", line 56, in <module>
    _CDLL(next((_Path(__file__).parent / 'cpu').glob('pybind*')))
  File "/home/melih/anaconda3/envs/pose/lib/python3.6/ctypes/__init__.py", line 348, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by /home/melih/anaconda3/envs/pose/lib/python3.6/site-packages/open3d/cpu/pybind.cpython-36m-x86_64-linux-gnu.so)

Issues Generating STL Mesh

I followed all the instructions for the demo, installed the dependencies, and have the pre-trained ExPose and SMPL-X models in the data folder. I initially ran the demo with --save-mesh True, but it didn't export the mesh nor throw any errors. Subsequent attempts are for some reason giving me errors with Qt, and I'm not sure if this is related to it not generating the mesh.

qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "~/.local/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.

I set QT_QPA_PLATFORM=minimal and it gets farther, but now I'm getting an error in Torch:

Traceback (most recent call last):
  File "demo.py", line 554, in <module>
    main(
  File "~/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "demo.py", line 461, in main
    axes[0, 1].imshow(out_img['rgb'][idx])
KeyError: 'rgb'

I'm running Python 3.8.2 on Linux Mint 20 Ulyana. Is there something I'm missing?

Calculating point-to-surface (p2s) distance

Hi,

Thanks for sharing this wonderful code.

I have a question about how to calculate the point-to-surface (p2s) error used in Table 3.

Would you mind giving me some clues on this?

Thanks!
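
For reference, a generic point-to-surface sketch with trimesh (not necessarily the paper's exact evaluation protocol, which may also include a rigid alignment step first); the file names are placeholders:

    import numpy as np
    import trimesh

    pred = trimesh.load('pred_aligned.obj', force='mesh')  # predicted mesh
    scan = trimesh.load('gt_scan.obj', force='mesh')       # ground-truth scan

    # For every scan vertex, distance to the closest point on the predicted surface.
    _, dist, _ = trimesh.proximity.closest_point(pred, np.asarray(scan.vertices))
    print(f'mean p2s: {dist.mean() * 1000:.2f} mm')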

Is flat_hand_mean True or False when using the curated_fits dataset and training the network?

Hi, thanks for your great work!

I am confused about the flat_hand_mean variable of the SMPL-X model: it seems that when using the curated_fits dataset, flat_hand_mean should be set to True (see image below), but in the code and in the conf.yaml file, flat_hand_mean is False.
Is flat_hand_mean True or False when you use the curated_fits dataset and train the network?

Looking forward to your reply. Thanks a lot!

(image attached)

Fail to crop the dataset

Hi, this is really nice work. However, when I tried to use the dataset and crop the data, I got an error like this:
"ValueError: could not broadcast input array from shape (57,97,3) into shape (57,25,3)". It seems the annotations are not properly aligned with the corresponding images. Or am I missing something? Thanks.

How did you use the FreiHAND dataset?

As far as I know, there are only right hands in FreiHAND.
How did you train the network with both hands?

Also, when I fed the pose parameters of FreiHAND to the SMPL-X layer, I got an awkward hand mesh. I converted the axis-angle pose parameters of FreiHAND to the rotation-matrix representation, and I used the SMPLX layer of body_models.py with flat_hand_mean and use_pca set to False.
(first attached image: the correct hand mesh produced with manopth; second: the result from the SMPL-X layer)

Video inference

Thanks for the interesting work! Is it possible to provide code for performing inference on video?

About v_template in checkpoint

Hi,
I'm trying to recover an SMPL-X model from the saved params output by ExPose and overlay it on the original image.
However, I noticed a shift between the rendered image and the original person.
After checking the code, I found that this problem is caused by the 'v_template' field loaded from the checkpoint. To be specific, the 'v_template' in SMPLX_NEUTRAL.pkl is a bit different from the one in the checkpoint file (the name of v_template in the checkpoint is smplx.body_model.v_template).
Could you explain why this difference exists? Thanks a lot!

Training Code ?

Thanks for your wonderful work, but will you release the training code?

resource

TODO: add resource to requirements.txt

How can I use the mean pose parameters in all_means.pkl?

I am trying to train a model with the fits you provided, and I need to set the mean pose parameters.
In the all_means.pkl you provided, there are

  • 21*3*1 values for the body
  • 15*3*1 values for each hand,

so 51*3*1 in total. But the curated fits you provided contain the SMPL-X pose vector in axis-angle format, which is 55*3, so how can I set the mean pose parameters? I think some specific entries need to be set to 0, but I don't know which ones. Can you help me, please? Thank you. (A sketch of one plausible layout follows below.)
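
For reference, a sketch of one plausible assembly, assuming the standard SMPL-X joint order [global orient (1), body (21), jaw (1), eyes (2), left hand (15), right hand (15)] = 55 joints. The pickle key names below are assumptions, so inspect the file first:

    import pickle
    import numpy as np

    with open('data/all_means.pkl', 'rb') as f:
        means = pickle.load(f)  # hypothetical keys below -- print(means) to check

    full_mean = np.zeros((55, 3))
    full_mean[1:22] = np.asarray(means['body']).reshape(21, 3)         # body joints
    full_mean[25:40] = np.asarray(means['left_hand']).reshape(15, 3)   # left hand
    full_mean[40:55] = np.asarray(means['right_hand']).reshape(15, 3)  # right hand
    # global orient, jaw and eye rotations are left at zero.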

Mesh from just openpose keypoints

Hi,
Is it possible to generate a mesh with just the OpenPose keypoints (and not the images folder)?

I am wondering if there is a speaker-agnostic way to get these mesh renders; currently they rely on both the speaker's image and the OpenPose keypoints.

Looking forward to your reply.

@vchoutas @dimtziwnas

R-CNN or OpenPose?

Hi, it's a great job.
May I ask: when you trained your model, did you use OpenPose or the R-CNN to crop the hands and head? And which do you think is the better way?

question about dataset

Hello, thanks for your devotion to this wonderful work! I checked your code /data/datasets/ehf.py and I want to know how to create the ehf.npz file. Could you share the preprocessing code for this dataset with me?

Number of smpl params?

Hi, I'm trying to use VPoser to encode the SMPL pose generated from an image.

As far as I've seen, VPoser takes a 63-dimensional (21*3) vector (the SMPL pose vector?) and encodes it into a vector in its latent space (of length 32).
But using ExPose, the closest thing to an SMPL pose vector I can get is model_output.get('body')["final"]["body_pose"], which is a (1, 21, 3, 3) tensor, so it has three times more parameters than VPoser expects.

What is the reason for this?
Thank you!
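
For reference: the (1, 21, 3, 3) tensor holds one 3x3 rotation matrix per body joint, while VPoser consumes the axis-angle form (3 numbers per joint, 21 x 3 = 63). A minimal conversion sketch with SciPy; the identity matrices stand in for ExPose's actual body_pose output:

    import numpy as np
    import torch
    from scipy.spatial.transform import Rotation

    body_pose = torch.eye(3).repeat(1, 21, 1, 1)    # stand-in for ExPose output
    rotmats = body_pose.detach().cpu().numpy().reshape(-1, 3, 3)

    aa = Rotation.from_matrix(rotmats).as_rotvec()  # (21, 3) axis-angle
    pose63 = aa.reshape(1, -1)                      # (1, 63), what VPoser expects
    print(pose63.shape)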

Report error "ValueError: Unknown model type , exiting"

System: Ubuntu 16.04, Python 3.7.9
After installing all the prerequisites and putting the models and data in the right place, I tried to run the demo Python script, and it shows the error below.
$ python demo.py --image-folder samples \
    --exp-cfg /p300/dataset/data/conf.yaml \
    --show=False \
    --output-folder ./result \
    --save-params False \
    --save-vis False \
    --save-mesh False
Processing with R-CNN:   0%| | 0/1 [00:00<?, ?it/s]/root/anaconda3/envs/ap_expose/lib/python3.7/site-packages/torchvision/ops/boxes.py:101: UserWarning: This overload of nonzero is deprecated:
    nonzero()
Consider using one of the following signatures instead:
    nonzero(*, bool as_tuple) (Triggered internally at /opt/conda/conda-bld/pytorch_1595629403081/work/torch/csrc/utils/python_arg_parser.cpp:766.)
  keep = keep.nonzero().squeeze(1)
Processing with R-CNN: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.43s/it]
2020-09-22 09:49:14.633 | INFO | __main__:main:241 - Saving results to: ./result
2020-09-22 09:49:14.641 | WARNING | expose.models.attention.predictor:__init__:92 - Apply hand network on body: True
2020-09-22 09:49:14.641 | WARNING | expose.models.attention.predictor:__init__:94 - Apply hand network on hands: True
2020-09-22 09:49:14.641 | WARNING | expose.models.attention.predictor:__init__:95 - Predict hands: True
2020-09-22 09:49:14.641 | WARNING | expose.models.attention.predictor:__init__:102 - Predict head: True
2020-09-22 09:49:14.641 | INFO | expose.models.attention.predictor:__init__:109 - Condition hand on body: True
2020-09-22 09:49:14.642 | INFO | expose.models.attention.predictor:__init__:113 - Condition hand wrist pose on body: True
2020-09-22 09:49:14.642 | INFO | expose.models.attention.predictor:__init__:118 - Condition hand finger pose on body: True
2020-09-22 09:49:14.642 | INFO | expose.models.attention.predictor:__init__:122 - Condition hand shape on body shape: False
2020-09-22 09:49:14.642 | INFO | expose.models.attention.predictor:__init__:169 - Condition head on body: True
2020-09-22 09:49:14.642 | INFO | expose.models.attention.predictor:__init__:171 - Condition expression on body: True
2020-09-22 09:49:14.642 | INFO | expose.models.attention.predictor:__init__:172 - Condition shape on body: False
2020-09-22 09:49:14.643 | INFO | expose.models.attention.predictor:__init__:174 - Condition neck pose on body: False
2020-09-22 09:49:14.643 | INFO | expose.models.attention.predictor:__init__:176 - Condition jaw pose on body: True
Traceback (most recent call last):
  File "demo.py", line 565, in <module>
    rcnn_batch=rcnn_batch,
  File "/root/anaconda3/envs/ap_expose/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "demo.py", line 244, in main
    model = SMPLXNet(exp_cfg)
  File "/p300/audiopose/expose/expose/models/smplx_net.py", line 39, in __init__
    self.smplx = build_attention_head(exp_cfg)
  File "/p300/audiopose/expose/expose/models/attention/build.py", line 23, in build_attention_head
    return SMPLXHead(cfg)
  File "/p300/audiopose/expose/expose/models/attention/predictor.py", line 251, in __init__
    **body_model_cfg)
  File "/root/anaconda3/envs/ap_expose/lib/python3.7/site-packages/smplx/body_models.py", line 2310, in build_layer
    raise ValueError(f'Unknown model type {model_type}, exiting!')
ValueError: Unknown model type , exiting!

I wonder where the error is and how to fix this problem, thank you!

File "/home/mona/research/code/expose/expose/utils/plot_utils.py", line 831, in __call__ valid_mask = (color[3] > 0)[np.newaxis] IndexError: index 3 is out of bounds for axis 0 with size 3

Two questions:

  1. Is acceleratesupport - No OpenGL_accelerate module loaded: No module named 'OpenGL_accelerate' fine?
  2. How do I resolve the following error, and why is it happening?
(expose) mona@goku:~/research/code/expose$ python demo.py --image-folder ~/Downloads/sample1     --exp-cfg data/conf.yaml     --show=True     --output-folder ~/Downloads/sample_out     --save-params True     --save-vis True     --save-mesh True
INFO - 2021-02-04 16:06:35,899 - acceleratesupport - No OpenGL_accelerate module loaded: No module named 'OpenGL_accelerate'
Processing with R-CNN: 100%|█████████████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00,  1.94s/it]
2021-02-04 16:06:41.101 | INFO     | __main__:main:241 - Saving results to: /home/mona/Downloads/sample_out
2021-02-04 16:06:41.105 | WARNING  | expose.models.attention.predictor:__init__:91 - Apply hand network on body: True
2021-02-04 16:06:41.105 | WARNING  | expose.models.attention.predictor:__init__:93 - Apply hand network on hands: True
2021-02-04 16:06:41.105 | WARNING  | expose.models.attention.predictor:__init__:95 - Predict hands: True
2021-02-04 16:06:41.105 | WARNING  | expose.models.attention.predictor:__init__:102 - Predict head: True
2021-02-04 16:06:41.105 | INFO     | expose.models.attention.predictor:__init__:109 - Condition hand on body: True
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:112 - Condition hand wrist pose on body: True
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:117 - Condition hand finger pose on body: True
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:121 - Condition hand shape on body shape: False
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:169 - Condition head on body: True
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:170 - Condition expression on body: True
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:172 - Condition shape on body: False
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:173 - Condition neck pose on body: False
2021-02-04 16:06:41.106 | INFO     | expose.models.attention.predictor:__init__:175 - Condition jaw pose on body: True
2021-02-04 16:06:41.718 | INFO     | expose.models.attention.predictor:__init__:252 - Body model: SMPLXLayer(
  Gender: NEUTRAL
  Number of joints: 55
  Betas: 10
  Number of PCA components: 6
  Flat hand mean: False
  Number of Expression Coefficients: 10
  (vertex_joint_selector): VertexJointSelector()
)
2021-02-04 16:06:42.066 | INFO     | expose.models.backbone.hrnet:init_weights:487 - => init weights from normal distribution
2021-02-04 16:06:42.686 | WARNING  | expose.models.backbone.hrnet:load_weights:519 - => please download pre-trained models first!
2021-02-04 16:06:42.687 | WARNING  | expose.models.backbone.hrnet:load_weights:520 - data/network_weights/hrnet/imagenet/hrnet_w48-8ef0771d.pth does not exist!
2021-02-04 16:06:42.707 | INFO     | expose.models.common.networks:__init__:267 - Building iterative regressor with 3 stages
2021-02-04 16:06:42.707 | INFO     | expose.models.common.networks:__init__:282 - Detach mean: False
2021-02-04 16:06:42.839 | INFO     | expose.models.backbone.resnet:resnet18:113 - Loading pretrained ResNet-18
2021-02-04 16:06:42.909 | INFO     | expose.models.common.networks:__init__:267 - Building iterative regressor with 3 stages
2021-02-04 16:06:42.909 | INFO     | expose.models.common.networks:__init__:282 - Detach mean: False
2021-02-04 16:06:42.910 | INFO     | expose.models.attention.head_predictor:__init__:81 - Building head predictor with 3 stages
2021-02-04 16:06:43.062 | INFO     | expose.models.backbone.resnet:resnet18:113 - Loading pretrained ResNet-18
2021-02-04 16:06:43.133 | INFO     | expose.models.common.networks:__init__:267 - Building iterative regressor with 3 stages
2021-02-04 16:06:43.133 | INFO     | expose.models.common.networks:__init__:282 - Detach mean: False
2021-02-04 16:06:43.135 | INFO     | expose.models.attention.predictor:__init__:490 - 2D Head crop keyps loss: KeypointLoss(Norm type: L1)
2021-02-04 16:06:43.135 | INFO     | expose.models.attention.predictor:__init__:502 - 2D Left hand crop keyps loss: KeypointLoss(Norm type: L1)
2021-02-04 16:06:43.135 | INFO     | expose.models.attention.predictor:__init__:515 - 2D Left hand crop keyps loss: KeypointLoss(Norm type: L1)
2021-02-04 16:06:43.135 | INFO     | expose.models.common.smplx_loss_modules:__init__:48 - Stages to penalize: [-1]
2021-02-04 16:06:43.136 | INFO     | expose.models.common.smplx_loss_modules:__init__:400 - Stages to regularize: [-1]
2021-02-04 16:06:43.329 | INFO     | expose.utils.checkpointer:__init__:44 - Creating directory data/checkpoints
2021-02-04 16:06:43.329 | INFO     | expose.utils.checkpointer:load_checkpoint:90 - Load pretrained: False
2021-02-04 16:06:43.329 | WARNING  | expose.utils.checkpointer:load_checkpoint:93 - Loading checkpoint from data/checkpoints/model.ckpt!
2021-02-04 16:06:43.947 | WARNING  | expose.utils.checkpointer:load_checkpoint:121 - The following keys were not found: ['smplx.head_idxs', 'smplx.body_model.left_hand_components', 'smplx.body_model.right_hand_components', 'smplx.body_model.left_hand_mean', 'smplx.body_model.right_hand_mean', 'smplx.body_model.pose_mean', 'smplx.body_model.dynamic_lmk_bary_coords']
2021-02-04 16:06:43.947 | WARNING  | expose.utils.checkpointer:load_checkpoint:124 - The following keys were not expected: ['smplx.body_model.source_idxs', 'smplx.body_model.target_idxs', 'smplx.body_model.extra_joint_regressor', 'smplx.body_model.dynamic_lmk_b_coords', 'smplx.hand_predictor.hand_offset', 'smplx.hand_predictor.hand_model.extra_joints_idxs', 'smplx.hand_predictor.hand_model.faces_tensor', 'smplx.hand_predictor.hand_model.v_template', 'smplx.hand_predictor.hand_model.shapedirs', 'smplx.hand_predictor.hand_model.J_regressor', 'smplx.hand_predictor.hand_model.posedirs', 'smplx.hand_predictor.hand_model.parents', 'smplx.hand_predictor.hand_model.lbs_weights', 'smplx.hand_predictor.pca_decoder.pca_basis', 'smplx.hand_predictor.pca_decoder.inv_pca_basis', 'smplx.hand_predictor.pca_decoder.mean', 'smplx.head_predictor.head_offset', 'smplx.head_predictor.head_vertices_ids', 'smplx.head_predictor.head_model.faces_tensor', 'smplx.head_predictor.head_model.v_template', 'smplx.head_predictor.head_model.shapedirs', 'smplx.head_predictor.head_model.expr_dirs', 'smplx.head_predictor.head_model.J_regressor', 'smplx.head_predictor.head_model.posedirs', 'smplx.head_predictor.head_model.parents', 'smplx.head_predictor.head_model.lbs_weights', 'smplx.head_predictor.head_model.lmk_faces_idx', 'smplx.head_predictor.head_model.lmk_bary_coords', 'smplx.head_predictor.head_model.dynamic_lmk_faces_idx', 'smplx.head_predictor.head_model.dynamic_lmk_b_coords', 'smplx.head_predictor.head_model.neck_kin_chain', 'smplx.body_loss.edge_loss.gt_connections', 'smplx.body_loss.edge_loss.est_connections', 'smplx.hand_loss.edge_loss.gt_connections', 'smplx.hand_loss.edge_loss.est_connections', 'smplx.head_loss.edge_loss.gt_connections', 'smplx.head_loss.edge_loss.est_connections']
  0%|                                                                                                        | 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "demo.py", line 554, in <module>
    main(
  File "/home/mona/venv/expose/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "demo.py", line 361, in main
    hd_orig_overlays = hd_renderer(
  File "/home/mona/venv/expose/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
    return func(*args, **kwargs)
  File "/home/mona/research/code/expose/expose/utils/plot_utils.py", line 831, in __call__
    valid_mask = (color[3] > 0)[np.newaxis]
IndexError: index 3 is out of bounds for axis 0 with size 3

Adjusting shape parameters individually

I've found a way to change the shape parameters ('Betas'), but I don't really know which parameters I have to adjust to make the model look the way I want. Is there any info about the effect of each parameter on the model?

Like:

  • Beta[0] >> height
  • Beta[1] >> weight
  • ...

Thanks for reading it.
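
For reference: the betas are PCA shape components learned from body scans, so they are not labeled semantically (the first components roughly capture the largest shape variations, such as overall size). A practical way to find out what each one does is to sweep them one at a time and inspect the exported meshes; a sketch with the smplx and trimesh packages:

    import torch
    import smplx
    import trimesh

    model = smplx.create('data/models', model_type='smplx')

    for i in range(3):                       # probe the first three components
        for value in (-2.0, 2.0):
            betas = torch.zeros(1, 10)
            betas[0, i] = value
            out = model(betas=betas)
            verts = out.vertices.detach().numpy().squeeze()
            trimesh.Trimesh(verts, model.faces).export(f'beta{i}_{value:+.0f}.obj')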

Problem While running demo code

I can't run the demo code. I have installed all the requirements. I get the following error:

Traceback (most recent call last):
  File "demo.py", line 565, in <module>
    rcnn_batch=rcnn_batch,
  File "/home/mkhan/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "demo.py", line 238, in main
    image_folder, exp_cfg, batch_size=rcnn_batch, device=device)
  File "demo.py", line 114, in preprocess_images
    output = rcnn_model(batch['images'])
  File "/home/mkhan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/mkhan/.local/lib/python3.6/site-packages/torchvision/models/detection/generalized_rcnn.py", line 98, in forward
    proposals, proposal_losses = self.rpn(images, features, targets)
  File "/home/mkhan/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/mkhan/.local/lib/python3.6/site-packages/torchvision/models/detection/rpn.py", line 493, in forward
    boxes, scores = self.filter_proposals(proposals, objectness, images.image_sizes, num_anchors_per_level)
  File "/home/mkhan/.local/lib/python3.6/site-packages/torchvision/models/detection/rpn.py", line 410, in filter_proposals
    keep = box_ops.batched_nms(boxes, scores, lvl, self.nms_thresh)
  File "/home/mkhan/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1599, in wrapper
    compiled_fn = script(wrapper.__original_fn)
  File "/home/mkhan/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1550, in script
    fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
  File "/home/mkhan/.local/lib/python3.6/site-packages/torch/jit/_recursive.py", line 583, in try_compile_fn
    return torch.jit.script(fn, _rcb=rcb)
  File "/home/mkhan/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1550, in script
    fn = torch._C._jit_script_compile(qualified_name, ast, _rcb, get_default_args(obj))
RuntimeError:
object has no attribute nms:
  File "/home/mkhan/.local/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 40
        by NMS, sorted in decreasing order of scores
        """
        return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
               ~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
'nms' is being compiled since it was called from 'batched_nms'
  File "/home/mkhan/.local/lib/python3.6/site-packages/torchvision/ops/boxes.py", line 82
        offsets = idxs.to(boxes) * (max_coordinate + torch.tensor(1).to(boxes))
        boxes_for_nms = boxes + offsets[:, None]
        keep = nms(boxes_for_nms, scores, iou_threshold)
               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
        return keep

Inaccurate Prediction for Overweight People

With the demo code, I found that the estimated mesh for overweight people is highly inaccurate. Even for the image of a slightly overweight person, the estimated mesh around the tummy area is much smaller:
Example Image 1
For the image of a more overweight person, the estimation accuracy degrades even further in both the tummy and the chest area:
Example Image 2

This problem exists for both side-view and front-view images; I was never able to estimate the 3D shape accurately for overweight people.

Is this something that can be fixed by changing parameters in the code? Or, is this a limitation on the SMPL-X model?

Thanks.

Bus error (core dumped)

Run the script: python3 demo.py --image-folder samples --exp-cfg data/conf.yaml --show=False --output-folder OUTPUT_FOLDER --save-params=False --save-mesh=True --save-vis=False

Got the error: Bus error (core dumped)

Environment: Ubuntu 18, Python 3.6.9, torch 1.7, CUDA 10.1

pip install -r requirements.txt throws an error

$ pip install -r requirements.txt 
Collecting fvcore>=0.1.1.post20200716
  Downloading fvcore-0.1.2.post20210128.tar.gz (32 kB)
Collecting loguru>=0.5.1
  Downloading loguru-0.5.3-py3-none-any.whl (57 kB)
     |████████████████████████████████| 57 kB 846 kB/s 
Collecting matplotlib>=3.3.1
  Downloading matplotlib-3.3.4-cp38-cp38-manylinux1_x86_64.whl (11.6 MB)
     |████████████████████████████████| 11.6 MB 2.6 MB/s 
Collecting numpy>=1.19.1
  Using cached numpy-1.19.5-cp38-cp38-manylinux2010_x86_64.whl (14.9 MB)
Collecting open3d>=0.10.0.0
  Using cached open3d-0.12.0-cp38-cp38-manylinux2014_x86_64.whl (188.5 MB)
Collecting opencv-python>=3.4.3
  Using cached opencv_python-4.5.1.48-cp38-cp38-manylinux2014_x86_64.whl (50.4 MB)
Collecting Pillow>=7.2.0
  Using cached Pillow-8.1.0-cp38-cp38-manylinux1_x86_64.whl (2.2 MB)
Collecting pyrender>=0.1.43
  Using cached pyrender-0.1.43-py3-none-any.whl (1.2 MB)
Collecting smplx>=0.1.21
  Using cached smplx-0.1.26-py3-none-any.whl (29 kB)
Collecting threadpoolctl>=2.1.0
  Using cached threadpoolctl-2.1.0-py3-none-any.whl (12 kB)
Collecting torch>=1.6.0
  Using cached torch-1.7.1-cp38-cp38-manylinux1_x86_64.whl (776.8 MB)
Collecting torchvision>=0.7.0+cu101
  Using cached torchvision-0.8.2-cp38-cp38-manylinux1_x86_64.whl (12.8 MB)
Collecting tqdm>=4.48.2
  Using cached tqdm-4.56.0-py2.py3-none-any.whl (72 kB)
Collecting trimesh>=3.8.1
  Using cached trimesh-3.9.1-py3-none-any.whl (628 kB)
Collecting iopath>=0.1.2
  Using cached iopath-0.1.3.tar.gz (10 kB)
Collecting pyyaml>=5.1
  Downloading PyYAML-5.4.1-cp38-cp38-manylinux1_x86_64.whl (662 kB)
     |████████████████████████████████| 662 kB 13.6 MB/s 
Collecting tabulate
  Using cached tabulate-0.8.7-py3-none-any.whl (24 kB)
Processing /home/mona/.cache/pip/wheels/a0/16/9c/5473df82468f958445479c59e784896fa24f4a5fc024b0f501/termcolor-1.1.0-py3-none-any.whl
Collecting yacs>=0.1.6
  Using cached yacs-0.1.8-py3-none-any.whl (14 kB)
Collecting python-dateutil>=2.1
  Using cached python_dateutil-2.8.1-py2.py3-none-any.whl (227 kB)
Collecting pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3
  Using cached pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
Collecting kiwisolver>=1.0.1
  Using cached kiwisolver-1.3.1-cp38-cp38-manylinux1_x86_64.whl (1.2 MB)
Collecting cycler>=0.10
  Using cached cycler-0.10.0-py2.py3-none-any.whl (6.5 kB)
Processing /home/mona/.cache/pip/wheels/22/0b/40/fd3f795caaa1fb4c6cb738bc1f56100be1e57da95849bfc897/sklearn-0.0-py2.py3-none-any.whl
Collecting widgetsnbextension
  Using cached widgetsnbextension-3.5.1-py2.py3-none-any.whl (2.2 MB)
Collecting notebook
  Using cached notebook-6.2.0-py3-none-any.whl (9.5 MB)
Collecting pandas
  Using cached pandas-1.2.1-cp38-cp38-manylinux1_x86_64.whl (9.7 MB)
Collecting addict
  Using cached addict-2.4.0-py3-none-any.whl (3.8 kB)
Collecting plyfile
  Using cached plyfile-0.7.2-py3-none-any.whl (39 kB)
Collecting ipywidgets
  Using cached ipywidgets-7.6.3-py2.py3-none-any.whl (121 kB)
Collecting pyglet>=1.4.10
  Using cached pyglet-1.5.14-py3-none-any.whl (1.1 MB)
Collecting six
  Using cached six-1.15.0-py2.py3-none-any.whl (10 kB)
Collecting imageio
  Using cached imageio-2.9.0-py3-none-any.whl (3.3 MB)
Collecting freetype-py
  Using cached freetype_py-2.2.0-py3-none-manylinux1_x86_64.whl (890 kB)
Collecting PyOpenGL==3.1.0
  Using cached PyOpenGL-3.1.0.tar.gz (1.2 MB)
Collecting networkx
  Using cached networkx-2.5-py3-none-any.whl (1.6 MB)
Collecting scipy
  Using cached scipy-1.6.0-cp38-cp38-manylinux1_x86_64.whl (27.2 MB)
Collecting torchgeometry>=0.1.2
  Using cached torchgeometry-0.1.2-py2.py3-none-any.whl (42 kB)
Collecting typing-extensions
  Using cached typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Requirement already satisfied: setuptools in /home/mona/venv/expose/lib/python3.8/site-packages (from trimesh>=3.8.1->-r requirements.txt (line 14)) (44.0.0)
Collecting portalocker
  Downloading portalocker-2.1.0-py2.py3-none-any.whl (13 kB)
Collecting scikit-learn
  Downloading scikit_learn-0.24.1-cp38-cp38-manylinux2010_x86_64.whl (24.9 MB)
     |████████████████████████████████| 24.9 MB 305 kB/s 
Collecting nbconvert
  Using cached nbconvert-6.0.7-py3-none-any.whl (552 kB)
Collecting terminado>=0.8.3
  Using cached terminado-0.9.2-py3-none-any.whl (14 kB)
Collecting tornado>=6.1
  Using cached tornado-6.1-cp38-cp38-manylinux2010_x86_64.whl (427 kB)
Collecting jupyter-core>=4.6.1
  Using cached jupyter_core-4.7.0-py3-none-any.whl (82 kB)
Collecting pyzmq>=17
  Downloading pyzmq-22.0.2-cp38-cp38-manylinux2010_x86_64.whl (1.1 MB)
     |████████████████████████████████| 1.1 MB 1.1 MB/s 
Collecting ipython-genutils
  Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting prometheus-client
  Using cached prometheus_client-0.9.0-py2.py3-none-any.whl (53 kB)
Collecting jinja2
  Using cached Jinja2-2.11.2-py2.py3-none-any.whl (125 kB)
Collecting ipykernel
  Using cached ipykernel-5.4.3-py3-none-any.whl (120 kB)
Collecting nbformat
  Using cached nbformat-5.1.2-py3-none-any.whl (113 kB)
Collecting Send2Trash>=1.5.0
  Using cached Send2Trash-1.5.0-py3-none-any.whl (12 kB)
Collecting argon2-cffi
  Using cached argon2_cffi-20.1.0-cp35-abi3-manylinux1_x86_64.whl (97 kB)
Collecting jupyter-client>=5.3.4
  Using cached jupyter_client-6.1.11-py3-none-any.whl (108 kB)
Collecting traitlets>=4.2.1
  Using cached traitlets-5.0.5-py3-none-any.whl (100 kB)
Collecting pytz>=2017.3
  Using cached pytz-2020.5-py2.py3-none-any.whl (510 kB)
Collecting jupyterlab-widgets>=1.0.0; python_version >= "3.6"
  Using cached jupyterlab_widgets-1.0.0-py3-none-any.whl (243 kB)
Collecting ipython>=4.0.0; python_version >= "3.3"
  Using cached ipython-7.19.0-py3-none-any.whl (784 kB)
Collecting decorator>=4.3.0
  Using cached decorator-4.4.2-py2.py3-none-any.whl (9.2 kB)
Collecting joblib>=0.11
  Using cached joblib-1.0.0-py3-none-any.whl (302 kB)
Collecting mistune<2,>=0.8.1
  Using cached mistune-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting testpath
  Using cached testpath-0.4.4-py2.py3-none-any.whl (163 kB)
Collecting defusedxml
  Using cached defusedxml-0.6.0-py2.py3-none-any.whl (23 kB)
Collecting bleach
  Using cached bleach-3.2.3-py2.py3-none-any.whl (146 kB)
Collecting pygments>=2.4.1
  Using cached Pygments-2.7.4-py3-none-any.whl (950 kB)
Collecting entrypoints>=0.2.2
  Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB)
Processing /home/mona/.cache/pip/wheels/fc/39/52/8d6f3cec1cca4ceb44d658427c35711b19d89dbc4914af657f/pandocfilters-1.4.3-py3-none-any.whl
Collecting jupyterlab-pygments
  Using cached jupyterlab_pygments-0.1.2-py2.py3-none-any.whl (4.6 kB)
Collecting nbclient<0.6.0,>=0.5.0
  Using cached nbclient-0.5.1-py3-none-any.whl (65 kB)
Collecting ptyprocess; os_name != "nt"
  Using cached ptyprocess-0.7.0-py2.py3-none-any.whl (13 kB)
Collecting MarkupSafe>=0.23
  Using cached MarkupSafe-1.1.1-cp38-cp38-manylinux1_x86_64.whl (32 kB)
Collecting jsonschema!=2.5.0,>=2.4
  Using cached jsonschema-3.2.0-py2.py3-none-any.whl (56 kB)
Collecting cffi>=1.0.0
  Using cached cffi-1.14.4-cp38-cp38-manylinux1_x86_64.whl (411 kB)
Collecting backcall
  Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Collecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
  Using cached prompt_toolkit-3.0.14-py3-none-any.whl (359 kB)
Collecting pexpect>4.3; sys_platform != "win32"
  Using cached pexpect-4.8.0-py2.py3-none-any.whl (59 kB)
Collecting jedi>=0.10
  Using cached jedi-0.18.0-py2.py3-none-any.whl (1.4 MB)
Collecting pickleshare
  Using cached pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting packaging
  Using cached packaging-20.8-py2.py3-none-any.whl (39 kB)
Collecting webencodings
  Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Collecting async-generator
  Using cached async_generator-1.10-py3-none-any.whl (18 kB)
Collecting nest-asyncio
  Using cached nest_asyncio-1.5.1-py3-none-any.whl (5.0 kB)
Processing /home/mona/.cache/pip/wheels/3d/22/08/7042eb6309c650c7b53615d5df5cc61f1ea9680e7edd3a08d2/pyrsistent-0.17.3-cp38-cp38-linux_x86_64.whl
Collecting attrs>=17.4.0
  Using cached attrs-20.3.0-py2.py3-none-any.whl (49 kB)
Collecting pycparser
  Using cached pycparser-2.20-py2.py3-none-any.whl (112 kB)
Collecting wcwidth
  Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting parso<0.9.0,>=0.8.0
  Using cached parso-0.8.1-py2.py3-none-any.whl (93 kB)
Building wheels for collected packages: fvcore, iopath, PyOpenGL
  Building wheel for fvcore (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/mona/venv/expose/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-mqhx1x0z/fvcore/setup.py'"'"'; __file__='"'"'/tmp/pip-install-mqhx1x0z/fvcore/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-zfh593mc
       cwd: /tmp/pip-install-mqhx1x0z/fvcore/
  Complete output (6 lines):
  usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
     or: setup.py --help [cmd1 cmd2 ...]
     or: setup.py --help-commands
     or: setup.py cmd --help
  
  error: invalid command 'bdist_wheel'
  ----------------------------------------
  ERROR: Failed building wheel for fvcore
  Running setup.py clean for fvcore
  Building wheel for iopath (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/mona/venv/expose/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-mqhx1x0z/iopath/setup.py'"'"'; __file__='"'"'/tmp/pip-install-mqhx1x0z/iopath/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-f0zm3nw8
       cwd: /tmp/pip-install-mqhx1x0z/iopath/
  Complete output (6 lines):
  usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
     or: setup.py --help [cmd1 cmd2 ...]
     or: setup.py --help-commands
     or: setup.py cmd --help
  
  error: invalid command 'bdist_wheel'
  ----------------------------------------
  ERROR: Failed building wheel for iopath
  Running setup.py clean for iopath
  Building wheel for PyOpenGL (setup.py) ... error
  ERROR: Command errored out with exit status 1:
   command: /home/mona/venv/expose/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-mqhx1x0z/PyOpenGL/setup.py'"'"'; __file__='"'"'/tmp/pip-install-mqhx1x0z/PyOpenGL/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-ekp37855
       cwd: /tmp/pip-install-mqhx1x0z/PyOpenGL/
  Complete output (6 lines):
  usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
     or: setup.py --help [cmd1 cmd2 ...]
     or: setup.py --help-commands
     or: setup.py cmd --help
  
  error: invalid command 'bdist_wheel'
  ----------------------------------------
  ERROR: Failed building wheel for PyOpenGL
  Running setup.py clean for PyOpenGL
Failed to build fvcore iopath PyOpenGL
Installing collected packages: Pillow, portalocker, tqdm, iopath, numpy, pyyaml, tabulate, termcolor, yacs, fvcore, loguru, six, python-dateutil, pyparsing, kiwisolver, cycler, matplotlib, joblib, scipy, threadpoolctl, scikit-learn, sklearn, ipython-genutils, traitlets, mistune, MarkupSafe, jinja2, testpath, defusedxml, packaging, webencodings, bleach, pygments, entrypoints, pyrsistent, attrs, jsonschema, jupyter-core, nbformat, pandocfilters, jupyterlab-pygments, async-generator, tornado, pyzmq, jupyter-client, nest-asyncio, nbclient, nbconvert, ptyprocess, terminado, prometheus-client, decorator, backcall, wcwidth, prompt-toolkit, pexpect, parso, jedi, pickleshare, ipython, ipykernel, Send2Trash, pycparser, cffi, argon2-cffi, notebook, widgetsnbextension, pytz, pandas, addict, plyfile, jupyterlab-widgets, ipywidgets, open3d, opencv-python, trimesh, pyglet, imageio, freetype-py, PyOpenGL, networkx, pyrender, typing-extensions, torch, torchgeometry, smplx, torchvision
    Running setup.py install for iopath ... done
    Running setup.py install for fvcore ... done
    Running setup.py install for PyOpenGL ... done
Successfully installed MarkupSafe-1.1.1 Pillow-8.1.0 PyOpenGL-3.1.0 Send2Trash-1.5.0 addict-2.4.0 argon2-cffi-20.1.0 async-generator-1.10 attrs-20.3.0 backcall-0.2.0 bleach-3.2.3 cffi-1.14.4 cycler-0.10.0 decorator-4.4.2 defusedxml-0.6.0 entrypoints-0.3 freetype-py-2.2.0 fvcore-0.1.2.post20210128 imageio-2.9.0 iopath-0.1.3 ipykernel-5.4.3 ipython-7.19.0 ipython-genutils-0.2.0 ipywidgets-7.6.3 jedi-0.18.0 jinja2-2.11.2 joblib-1.0.0 jsonschema-3.2.0 jupyter-client-6.1.11 jupyter-core-4.7.0 jupyterlab-pygments-0.1.2 jupyterlab-widgets-1.0.0 kiwisolver-1.3.1 loguru-0.5.3 matplotlib-3.3.4 mistune-0.8.4 nbclient-0.5.1 nbconvert-6.0.7 nbformat-5.1.2 nest-asyncio-1.5.1 networkx-2.5 notebook-6.2.0 numpy-1.19.5 open3d-0.12.0 opencv-python-4.5.1.48 packaging-20.8 pandas-1.2.1 pandocfilters-1.4.3 parso-0.8.1 pexpect-4.8.0 pickleshare-0.7.5 plyfile-0.7.2 portalocker-2.1.0 prometheus-client-0.9.0 prompt-toolkit-3.0.14 ptyprocess-0.7.0 pycparser-2.20 pyglet-1.5.14 pygments-2.7.4 pyparsing-2.4.7 pyrender-0.1.43 pyrsistent-0.17.3 python-dateutil-2.8.1 pytz-2020.5 pyyaml-5.4.1 pyzmq-22.0.2 scikit-learn-0.24.1 scipy-1.6.0 six-1.15.0 sklearn-0.0 smplx-0.1.26 tabulate-0.8.7 termcolor-1.1.0 terminado-0.9.2 testpath-0.4.4 threadpoolctl-2.1.0 torch-1.7.1 torchgeometry-0.1.2 torchvision-0.8.2 tornado-6.1 tqdm-4.56.0 traitlets-5.0.5 trimesh-3.9.1 typing-extensions-3.7.4.3 wcwidth-0.2.5 webencodings-0.5.1 widgetsnbextension-3.5.1 yacs-0.1.8
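
Note: the "invalid command 'bdist_wheel'" errors above typically mean the wheel package is missing from the virtual environment; pip then falls back to plain `setup.py install`, which is why the installation still succeeds. A minimal sketch of the cleaner route, assuming the same venv as above:

    # Install wheel first so pip can build proper wheels for source packages.
    pip install wheel
    # Re-run the source builds that previously fell back to setup.py install.
    pip install --force-reinstall fvcore iopath PyOpenGL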

system info:

$ lsb_release -a
LSB Version:	core-11.1.0ubuntu2-noarch:security-11.1.0ubuntu2-noarch
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04.1 LTS
Release:	20.04
Codename:	focal

$ python
Python 3.8.5 (default, Jul 28 2020, 12:59:40) 
[GCC 9.3.0] on linux

$ pip --version
pip 20.0.2 from /home/mona/venv/expose/lib/python3.8/site-packages/pip (python 3.8)

Adding more features to the SMPL-X model

When initializing the SMPLXLayer class to add more features, I change False to True in the create_* flags below:

    class SMPLXLayer(SMPLX):
        def __init__(
            self,
            *args,
            **kwargs
        ) -> None:
            super(SMPLXLayer, self).__init__(
                create_global_orient=False,
                create_body_pose=False,
                create_left_hand_pose=False,
                create_right_hand_pose=False,
                create_jaw_pose=False,
                create_leye_pose=False,
                create_reye_pose=False,
                create_betas=False,
                create_expression=False,
                create_transl=False,
                *args, **kwargs,
            )
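
Presumably the modification in question looks like the following (an assumption based on the description above, not code from the repository); flipping each create_* flag to True asks smplx to allocate a default tensor for that parameter inside the layer:

    class SMPLXLayer(SMPLX):
        def __init__(self, *args, **kwargs) -> None:
            # Hypothetical modified version: every create_* flag flipped
            # from False to True.
            super(SMPLXLayer, self).__init__(
                create_global_orient=True,
                create_body_pose=True,
                create_left_hand_pose=True,
                create_right_hand_pose=True,
                create_jaw_pose=True,
                create_leye_pose=True,
                create_reye_pose=True,
                create_betas=True,
                create_expression=True,
                create_transl=True,
                *args, **kwargs,
            )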

I am getting the following error:

    Traceback (most recent call last):
      File "expose-master/demo.py", line 578, in <module>
        rcnn_batch=rcnn_batch,
      File "/home/anaconda3/envs/temp/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
        return func(*args, **kwargs)
      File "/expose-master/demo.py", line 245, in main
        model = SMPLXNet(exp_cfg)
      File "expose-master/expose/models/smplx_net.py", line 39, in __init__
        self.smplx = build_attention_head(exp_cfg)
      File "expose-master/expose/models/attention/build.py", line 23, in build_attention_head
        return SMPLXHead(cfg)
      File "expose-master/expose/models/attention/predictor.py", line 251, in __init__
        **body_model_cfg)
      File "/home/anaconda3/envs/temp/lib/python3.7/site-packages/smplx/body_models.py", line 2342, in build_layer
        return SMPLXLayer(model_path, **kwargs)
      File "/home/anaconda3/envs/temp/lib/python3.7/site-packages/smplx/body_models.py", line 1310, in __init__
        *args, **kwargs,
      File "/home/anaconda3/envs/temp/lib/python3.7/site-packages/smplx/body_models.py", line 991, in __init__
        **kwargs)
      File "/home/anaconda3/envs/temp/lib/python3.7/site-packages/smplx/body_models.py", line 596, in __init__
        use_compressed=use_compressed, dtype=dtype, ext=ext, **kwargs)
      File "/home/anaconda3/envs/temp/lib/python3.7/site-packages/smplx/body_models.py", line 206, in __init__
        global_orient, dtype=dtype)
    TypeError: must be real number, not CfgNode

Process finished with exit code 1

The error happens while executing:

    default_global_orient = torch.tensor(
        global_orient, dtype=dtype)

The value of global_orient is: CfgNode({'param_type': 'cont_rot_repr'})

The same applies to the rest of the features (create_body_pose, create_left_hand_pose, ...).
Thank you.
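
The TypeError is consistent with ExPose forwarding its body_model_cfg entries (e.g. global_orient=CfgNode({'param_type': 'cont_rot_repr'})) through **kwargs into the smplx constructors: once create_global_orient=True, smplx calls torch.tensor(global_orient, dtype=dtype) on a CfgNode instead of a numeric value. A minimal sketch of one possible workaround, assuming the kwargs come from body_model_cfg as the traceback suggests (clean_body_model_cfg and POSE_PARAM_KEYS are hypothetical helpers, not part of ExPose or smplx):

    # Hypothetical helper: strip the ExPose parameterization sub-configs
    # before they reach the smplx constructor, so that create_*=True falls
    # back to smplx's numeric defaults (zero tensors) for each parameter.
    POSE_PARAM_KEYS = [
        'global_orient', 'body_pose', 'left_hand_pose', 'right_hand_pose',
        'jaw_pose', 'leye_pose', 'reye_pose', 'betas', 'expression',
    ]

    def clean_body_model_cfg(body_model_cfg):
        return {key: value for key, value in body_model_cfg.items()
                if key not in POSE_PARAM_KEYS}

One would then call smplx.build_layer(model_path, **clean_body_model_cfg(body_model_cfg)) at the site shown in the traceback (expose/models/attention/predictor.py, line 251).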
