
RSC-Net: 3D Human Pose, Shape and Texture from Low-Resolution Images and Videos

Implementation for "3D Human Pose, Shape and Texture from Low-Resolution Images and Videos", TPAMI 2021

Conference version: "3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning", ECCV 2020

Project page

What is new?

  • RSC-Net:

    • Resolution-aware structure
    • Self-supervised learning
    • Contrastive learning
  • Temporal post-processing for video input

  • TexGlo: Global module for 3D texture reconstruction

Brief introduction

(demo figure)

Video

(video link)

Code

Packages

Make sure you have gcc 5.x installed for building the packages. Then run:

bash install_environment.sh

If you are running the code without a screen, please install OSMesa and the corresponding PyOpenGL. Then uncomment the 2nd line of "utils/renderer.py".
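For headless use, PyOpenGL selects its rendering backend from the PYOPENGL_PLATFORM environment variable, and the variable must be set before pyrender is imported anywhere. A minimal sketch (the variable name is PyOpenGL's documented switch; "egl" is an alternative on GPU servers):

```python
import os

# PyOpenGL chooses its rendering backend at import time, so this must run
# before pyrender (or anything else that imports PyOpenGL) is imported.
os.environ["PYOPENGL_PLATFORM"] = "osmesa"  # or "egl" on a GPU server

# import pyrender  # import only after the variable is set
```

This is what the commented-out line in "utils/renderer.py" accomplishes; setting the variable in the shell before launching Python works equally well.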

Data preparation

  • Download the meta data, and unzip it into "./data".

  • Download the datasets, and unzip them into "./datasets_pkl".

Note that all paths are set in "config.py".

Demo

python demo.py --checkpoint=./pretrained/RSC-Net.pt --img_path=./examples/im1.png

  • Note: if you have trouble using Pyrender, please try "demo_nr.py":

python demo_nr.py --checkpoint=./pretrained/RSC-Net.pt --img_path=./examples/im1.png

If your neural-renderer has errors, please re-install the package from source.

Evaluation

python eval.py --checkpoint=./pretrained/RSC-Net.pt 

Training

python train.py --name=RSC-Net 


If you find this work helpful in your research, please cite our paper:

@article{xu20213d,
  title={3D Human Pose, Shape and Texture from Low-Resolution Images and Videos},
  author={Xu, Xiangyu and Chen, Hao and Moreno-Noguer, Francesc and Jeni, Laszlo A and De la Torre, Fernando},
  journal={TPAMI},
  year={2021}
}

@inproceedings{xu20203d,
  title={3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning},
  author={Xu, Xiangyu and Chen, Hao and Moreno-Noguer, Francesc and Jeni, Laszlo A and De la Torre, Fernando},
  booktitle={ECCV},
  year={2020}
}


rsc-net's Issues

How many epochs will it take to get results similar to your pretrained model? Also, is this warning while training during epoch 3 expected?

Loading latest checkpoint [/content/drive/MyDrive/RSC_NET/RSC-Net-master/logs/RSC-Net/checkpoints/2023_05_08-22_37_47.pt]
75% 3/4 [00:00<?, ?it/s]
Epoch 3: 32% 9620/30272 [00:01<?, ?it/s]
/usr/local/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:117: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
100% 4/4 [00:01<00:00, 1.03s/it]
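The warning in the log above is about call order: since PyTorch 1.1, optimizer.step() should be called before lr_scheduler.step() in each iteration, otherwise the first value of the schedule is skipped. A minimal sketch of the expected order, using generic SGD/StepLR stand-ins rather than this repo's actual training loop:

```python
import torch

model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

for epoch in range(2):
    # ... forward pass and loss.backward() over the epoch's batches ...
    optimizer.step()   # update the weights first
    scheduler.step()   # then advance the learning-rate schedule

print(optimizer.param_groups[0]["lr"])  # 0.1 -> 0.05 -> 0.025
```

With the correct order the warning disappears; in the order the warning complains about, the optimizer would never see the initial learning rate.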

Missing source file and GL error.

It seems that some source files are still missing. Would you please update them? Thanks.
Also, there seems to be a problem with pyrender, and I get an error when running the program.

Traceback (most recent call last):
File "demo.py", line 54, in <module>
img_renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl_neutral.faces)
File "/home/RSC-Net/utils/renderer.py", line 17, in __init__
point_size=1.0)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in __init__
self._create()
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyrender/platforms/pyglet_platform.py", line 52, in init_context
width=1, height=1)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/window/xlib/__init__.py", line 173, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/window/__init__.py", line 606, in __init__
context = config.create_context(gl.current_context)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 204, in create_context
return XlibContextARB(self, share)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 314, in __init__
super(XlibContext13, self).__init__(config, share)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 218, in __init__
raise gl.ContextException('Could not create GL context')
pyglet.gl.ContextException: Could not create GL context

Will you release the training code?

Hi @xuxy09, thanks for your code. Applying contrastive learning in human mesh recovery is interesting. Do you plan to release the implementation of this part? Thanks again.

Performance gap

Hi, I trained your model using your released code and datasets, but the performance (~357 mm MPJPE and ~121 mm reconstruction error) is much worse than that of your pretrained model. Are there any config settings I did not notice? (I didn't change the default settings in your code.)

joints/vertices detected

Hi,
I used your model to scan a low-resolution image of a person for joints.
It returned the following joints:

print(joints)
[[-6.04124069e-02 -7.58821130e-01 -1.36030287e-01]
[ 6.64194077e-02 -7.22911358e-01 -1.27670541e-03]
[-7.00186044e-02 -6.70804858e-01 8.52521062e-02]
[-1.71616733e-01 -4.77660120e-01 9.67647880e-02]
[-1.14487618e-01 -5.75384080e-01 -9.08450782e-02]
[ 2.02057883e-01 -6.65608585e-01 -8.79856050e-02]
[ 3.34627986e-01 -4.87401307e-01 -1.36859328e-01]
[ 1.40230328e-01 -4.98071045e-01 -2.29404062e-01]
[-4.13627969e-03 -2.36514777e-01 1.82570945e-02]
[-7.99493119e-02 -1.72764868e-01 4.19234037e-02]
[-3.04055244e-01 -4.19897437e-02 -1.56800151e-01]
[-1.95208192e-01 1.82137489e-01 5.87285310e-02]
[ 1.65932719e-02 -1.39060929e-01 -1.17385648e-02]
[-1.18877657e-01 4.24312949e-02 -2.33635023e-01]
[-1.03444241e-01 2.48754814e-01 2.34296471e-02]
[-7.78085738e-02 -8.01025391e-01 -9.89653692e-02]
[-3.36936563e-02 -8.10274601e-01 -1.45270944e-01]
[-4.43344414e-02 -8.19930434e-01 -7.00204819e-03]
[ 5.31038344e-02 -8.41723680e-01 -1.07568718e-01]
[-2.00364143e-01 3.71141195e-01 -5.49159050e-02]
[-1.26811206e-01 3.74223828e-01 -3.79313231e-02]
[-6.84206337e-02 2.40866959e-01 7.32095838e-02]
[-2.68898785e-01 3.15093815e-01 -3.10784578e-02]
[-2.90248752e-01 2.87368000e-01 4.05772030e-02]
[-1.62859082e-01 1.79714262e-01 1.14852905e-01]
[-1.95208192e-01 1.82137489e-01 5.87285310e-02]
[-3.04055244e-01 -4.19897437e-02 -1.56800151e-01]
[-8.81563351e-02 -2.82877415e-01 8.54171440e-02]
[ 1.01012930e-01 -2.23942950e-01 -3.15033905e-02]
[-1.18877657e-01 4.24312949e-02 -2.33635023e-01]
[-1.03444241e-01 2.48754814e-01 2.34296471e-02]
[-1.14487618e-01 -5.75384080e-01 -9.08450782e-02]
[-1.71616733e-01 -4.77660120e-01 9.67647880e-02]
[-7.00186044e-02 -6.70804858e-01 8.52521062e-02]
[ 2.02057883e-01 -6.65608585e-01 -8.79856050e-02]
[ 3.34627986e-01 -4.87401307e-01 -1.36859328e-01]
[ 1.40230328e-01 -4.98071045e-01 -2.29404062e-01]
[ 5.61497957e-02 -7.20549762e-01 -1.31271025e-02]
[-5.81591874e-02 -9.40906048e-01 -9.45677161e-02]
[ 1.30820684e-02 -2.47979790e-01 4.07510623e-02]
[ 6.94963410e-02 -6.79624557e-01 -8.00156035e-04]
[ 7.99441785e-02 -4.94875371e-01 3.82016413e-02]
[-1.68672577e-03 -7.64190912e-01 -8.66440311e-02]
[-2.71644648e-02 -8.91248465e-01 -7.28784502e-02]
[-6.04124069e-02 -7.58821130e-01 -1.36030287e-01]
[-3.36936563e-02 -8.10274601e-01 -1.45270944e-01]
[-7.78085738e-02 -8.01025391e-01 -9.89653692e-02]
[ 5.31038344e-02 -8.41723680e-01 -1.07568718e-01]
[-4.43344414e-02 -8.19930434e-01 -7.00204819e-03]]

My doubt is: which of these coordinates corresponds to which body part (right_knee, left_knee, elbow, etc.)?
Thank you!
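The 49 rows printed above match the joint convention used by SPIN-style mesh recovery models: 25 OpenPose keypoints followed by 24 additional joints. The name list below follows that convention but is an assumption; verify it against the joint definitions in this repo's "constants.py" before relying on it. A sketch pairing each row index with a name:

```python
# Hypothetical name list following the SPIN 49-joint convention
# (25 OpenPose joints + 24 extra joints); check against constants.py.
OPENPOSE_JOINTS = [
    "OP Nose", "OP Neck", "OP RShoulder", "OP RElbow", "OP RWrist",
    "OP LShoulder", "OP LElbow", "OP LWrist", "OP MidHip", "OP RHip",
    "OP RKnee", "OP RAnkle", "OP LHip", "OP LKnee", "OP LAnkle",
    "OP REye", "OP LEye", "OP REar", "OP LEar", "OP LBigToe",
    "OP LSmallToe", "OP LHeel", "OP RBigToe", "OP RSmallToe", "OP RHeel",
]
EXTRA_JOINTS = [
    "Right Ankle", "Right Knee", "Right Hip", "Left Hip", "Left Knee",
    "Left Ankle", "Right Wrist", "Right Elbow", "Right Shoulder",
    "Left Shoulder", "Left Elbow", "Left Wrist", "Neck (LSP)",
    "Top of Head (LSP)", "Pelvis (MPII)", "Thorax (MPII)",
    "Spine (H36M)", "Jaw (H36M)", "Head (H36M)", "Nose",
    "Left Eye", "Right Eye", "Left Ear", "Right Ear",
]
JOINT_NAMES = OPENPOSE_JOINTS + EXTRA_JOINTS  # 49 names, one per printed row

# Pair each row of the printed joints array with its (assumed) name.
for idx, name in enumerate(JOINT_NAMES):
    print(idx, name)
```

Under this convention, e.g. index 10 would be the right knee in OpenPose ordering and index 26 the right knee in the extra-joint ordering.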

Training Error

After executing the training command, it only shows "Killed" after some time, nothing else. Any fix for this issue?

bug in eval.py

There seems to be a bug in eval.py; it no longer works.

error:

Traceback (most recent call last):
File "eval.py", line 196, in <module>
run_evaluation(hmr_model, ds, eval_size=args.eval_size, batch_size=args.batch_size, num_workers=args.num_workers)
File "eval.py", line 143, in run_evaluation
global_orient=pred_rotmat[:, 0].unsqueeze(1), pose2rot=False)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "D:\dev\projects\football AR\RSC-Net-master\RSC-Net-master\models\smpl.py", line 23, in forward
smpl_output = super(SMPL, self).forward(*args, **kwargs)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\smplx\body_models.py", line 376, in forward
self.lbs_weights, pose2rot=pose2rot, dtype=self.dtype)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\smplx\lbs.py", line 205, in lbs
J_transformed, A = batch_rigid_transform(rot_mats, J, parents, dtype=dtype)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\smplx\lbs.py", line 347, in batch_rigid_transform
rel_joints.view(-1, 3, 1)).view(-1, joints.shape[1], 4, 4)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
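As the error message suggests, .view() requires a tensor whose memory layout is compatible with the target shape, while .reshape() copies the data when it is not; a common workaround is to change the failing .view(...) call in the installed smplx package to .reshape(...). The difference can be reproduced in isolation (assuming PyTorch is installed):

```python
import torch

x = torch.arange(24).reshape(2, 3, 4)
y = x.permute(0, 2, 1)  # non-contiguous: strides no longer match a flat layout

try:
    y.view(-1, 3)  # fails: view cannot express this size/stride combination
except RuntimeError as e:
    print("view failed:", e)

z = y.reshape(-1, 3)  # reshape copies when necessary, so it succeeds
print(z.shape)
```

Calling .contiguous() before .view() is an equivalent fix, at the cost of the same copy.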

No module named 'utils.data_loader'

I have already followed all the setup instructions.
when I ran this command: python demo.py --checkpoint=./pretrained/RSC-Net.pt --img_path=./examples/im1.png
I got the output: No module named 'utils.data_loader'

texture network

Hi @xuxy09 , thanks for your code. Isn't the training network for texture estimation public?

IndexError: list index out of range

Hi, thanks for your implementation.
When I'm training, the code gives the following error in trainer.py :
img_renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl_neutral.faces)
self.renderer = pyrender.OffscreenRenderer(viewport_width=img_res, viewport_height=img_res, point_size=1.0)

IndexError: list index out of range


How to solve this problem please?
