
xuxy09 / rsc-net

Implementation for "3D human pose, shape and texture from low-resolution images and videos", TPAMI 2021

Python 99.63% Shell 0.37%
deep-learning 3d 3d-human-shape-and-pose-estimation 3d-human-mesh weakly-supervised-learning contrastive-learning computer-vision

rsc-net's Issues

No ckpt file in the Dropbox

Hi, I tried to download the pretrained model with the link provided in the README, but there is no file inside the Dropbox folder. Is there any way to get the ckpt file so I don't have to train the entire model from scratch?

Thank you

bug in eval.py

There seems to be a bug in eval.py; it no longer works.

Error:

Traceback (most recent call last):
File "eval.py", line 196, in
run_evaluation(hmr_model, ds, eval_size=args.eval_size, batch_size=args.batch_size, num_workers=args.num_workers)
File "eval.py", line 143, in run_evaluation
global_orient=pred_rotmat[:, 0].unsqueeze(1), pose2rot=False)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "D:\dev\projects\football AR\RSC-Net-master\RSC-Net-master\models\smpl.py", line 23, in forward
smpl_output = super(SMPL, self).forward(*args, **kwargs)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\smplx\body_models.py", line 376, in forward
self.lbs_weights, pose2rot=pose2rot, dtype=self.dtype)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\smplx\lbs.py", line 205, in lbs
J_transformed, A = batch_rigid_transform(rot_mats, J, parents, dtype=dtype)
File "C:\Users\Asus\anaconda3\envs\RSC-net\lib\site-packages\smplx\lbs.py", line 347, in batch_rigid_transform
rel_joints.view(-1, 3, 1)).view(-1, joints.shape[1], 4, 4)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
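
A hedged note, not an author-confirmed fix: newer PyTorch versions refuse to call .view() on non-contiguous tensors, which is what the slices of pred_rotmat are by the time they reach the installed smplx package. Editing the reported line in smplx/lbs.py to use .reshape(...) (as the error message itself suggests), or calling .contiguous() on the rotation matrices in eval.py before the SMPL forward pass, should both work. A self-contained illustration of the failure mode:

import torch

# A transposed tensor is non-contiguous: .view() fails on it, while
# .reshape() (which copies when needed) and .contiguous().view() succeed.
a = torch.arange(24.0).reshape(2, 3, 4).transpose(1, 2)

try:
    a.view(-1, 3)
except RuntimeError as err:
    print('view failed:', err)

print(a.reshape(-1, 3).shape)            # torch.Size([8, 3])
print(a.contiguous().view(-1, 3).shape)  # torch.Size([8, 3])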

IndexError: list index out of range

Hi, thanks for your implementation.
When I'm training, the code raises the following error in trainer.py:
img_renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl_neutral.faces)
self.renderer = pyrender.OffscreenRenderer(viewport_width=img_res, viewport_height=img_res, point_size=1.0)

IndexError: list index out of range


How can I solve this problem?
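
A hedged workaround, not confirmed by the authors: this IndexError typically comes from pyrender failing to enumerate a display on a headless machine. Selecting a headless rendering backend before pyrender is first imported often resolves it; the viewport size of 224 here is an assumption about constants.IMG_RES.

import os

# Must be set before pyrender is imported; 'egl' needs GPU EGL support,
# 'osmesa' is a pure-software fallback.
os.environ['PYOPENGL_PLATFORM'] = 'egl'

import pyrender

# The same call that fails in trainer.py, with img_res assumed to be 224.
renderer = pyrender.OffscreenRenderer(viewport_width=224,
                                      viewport_height=224,
                                      point_size=1.0)
print('offscreen renderer created')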

No module named 'utils.data_loader'

I have already followed all the setup instructions.
When I ran this command: python demo.py --checkpoint=./pretrained/RSC-Net.pt --img_path=./examples/im1.png
I got the output: No module named 'utils.data_loader'
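
A hedged diagnostic, assuming the standard repo layout: the message means Python cannot import utils/data_loader.py, either because the script was not run from the repo root or because the file is genuinely absent from the release (see the next issue about missing source files).

import importlib.util
import os

print(os.path.exists('utils/data_loader.py'))  # is the file in the checkout?
try:
    print(importlib.util.find_spec('utils.data_loader'))  # can Python import it?
except ModuleNotFoundError as err:
    print('not importable:', err)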

Missing source file and GL error.

It seems that some source files are still missing. Would you please update the repo? Thanks.
Also, there seems to be a problem with pyrender; I get an error when running the program.

Traceback (most recent call last):
File "demo.py", line 54, in <module>
img_renderer = Renderer(focal_length=constants.FOCAL_LENGTH, img_res=constants.IMG_RES, faces=smpl_neutral.faces)
File "/home/RSC-Net/utils/renderer.py", line 17, in __init__
point_size=1.0)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in __init__
self._create()
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyrender/platforms/pyglet_platform.py", line 52, in init_context
width=1, height=1)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/window/xlib/__init__.py", line 173, in __init__
super(XlibWindow, self).__init__(*args, **kwargs)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/window/__init__.py", line 606, in __init__
context = config.create_context(gl.current_context)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 204, in create_context
return XlibContextARB(self, share)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 314, in __init__
super(XlibContext13, self).__init__(config, share)
File "/home/anaconda3/envs/rsc-net/lib/python3.7/site-packages/pyglet/gl/xlib.py", line 218, in __init__
raise gl.ContextException('Could not create GL context')
pyglet.gl.ContextException: Could not create GL context
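
A hedged workaround, under the same assumption as for the IndexError above (a headless or X-less environment): pyrender's default pyglet platform needs to open a window, so switching to the OSMesa software backend, or running under a virtual display, usually avoids the ContextException.

import os

# OSMesa is a pure-software GL implementation, so no X server or GPU
# display is needed; it must be selected before pyrender is imported.
os.environ['PYOPENGL_PLATFORM'] = 'osmesa'

import pyrender

renderer = pyrender.OffscreenRenderer(viewport_width=224,
                                      viewport_height=224,
                                      point_size=1.0)
print('GL context created without a display')

# Alternative without code changes: run under a virtual X server, e.g.
#   xvfb-run -a python demo.py --checkpoint=./pretrained/RSC-Net.pt --img_path=./examples/im1.png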

texture network

Hi @xuxy09, thanks for your code. Is the training code for the texture estimation network not public?

performance gap

Hi, I trained your model using your released code and datasets, but the gap to your pretrained model seems very large (~357 mm MPJPE and ~121 mm reconstruction error). Are there any config settings I missed? (I didn't change the default settings in your code.)

Will you release the training code?

Hi @xuxy09, thanks for your code. Applying contrastive learning in human mesh recovery is interesting. Do you plan to release the implementation of this part? Thanks again.

Training Error

After executing the training command, it only shows "Killed" after some time, nothing else. Is there a fix for this issue?

How many epochs will it take to get results similar to your pretrained model? Also, is this warning during epoch 3 expected?

Loading latest checkpoint [/content/drive/MyDrive/RSC_NET/RSC-Net-master/logs/RSC-Net/checkpoints/2023_05_08-22_37_47.pt]
75% 3/4 [00:00<?, ?it/s]
Epoch 3: 32% 9620/30272 [00:01<?, ?it/s]
/usr/local/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:117: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
100% 4/4 [00:01<00:00, 1.03s/it]
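
Two hedged notes, not author-confirmed: a bare "Killed" with no traceback is usually the Linux out-of-memory killer terminating the process, so reducing the batch size or the number of data-loader workers may help. The lr_scheduler warning is benign on resumed runs; PyTorch (>= 1.1.0) simply expects optimizer.step() before scheduler.step(), as in this minimal self-contained sketch (the model and optimizer are dummies, not the repo's):

import torch

# Dummy model/optimizer purely to demonstrate the expected call order.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

for epoch in range(3):
    for _ in range(5):  # stand-in for the real data loader
        optimizer.zero_grad()
        loss = model(torch.randn(8, 4)).pow(2).mean()
        loss.backward()
        optimizer.step()   # weight update first (PyTorch >= 1.1.0)
    scheduler.step()       # then advance the learning-rate schedule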

joints/vertices detected

Hi,
I used your model to scan a low-resolution image of a person for joints.
It returned the following joints:

print(joints)
[[-6.04124069e-02 -7.58821130e-01 -1.36030287e-01]
[ 6.64194077e-02 -7.22911358e-01 -1.27670541e-03]
[-7.00186044e-02 -6.70804858e-01 8.52521062e-02]
[-1.71616733e-01 -4.77660120e-01 9.67647880e-02]
[-1.14487618e-01 -5.75384080e-01 -9.08450782e-02]
[ 2.02057883e-01 -6.65608585e-01 -8.79856050e-02]
[ 3.34627986e-01 -4.87401307e-01 -1.36859328e-01]
[ 1.40230328e-01 -4.98071045e-01 -2.29404062e-01]
[-4.13627969e-03 -2.36514777e-01 1.82570945e-02]
[-7.99493119e-02 -1.72764868e-01 4.19234037e-02]
[-3.04055244e-01 -4.19897437e-02 -1.56800151e-01]
[-1.95208192e-01 1.82137489e-01 5.87285310e-02]
[ 1.65932719e-02 -1.39060929e-01 -1.17385648e-02]
[-1.18877657e-01 4.24312949e-02 -2.33635023e-01]
[-1.03444241e-01 2.48754814e-01 2.34296471e-02]
[-7.78085738e-02 -8.01025391e-01 -9.89653692e-02]
[-3.36936563e-02 -8.10274601e-01 -1.45270944e-01]
[-4.43344414e-02 -8.19930434e-01 -7.00204819e-03]
[ 5.31038344e-02 -8.41723680e-01 -1.07568718e-01]
[-2.00364143e-01 3.71141195e-01 -5.49159050e-02]
[-1.26811206e-01 3.74223828e-01 -3.79313231e-02]
[-6.84206337e-02 2.40866959e-01 7.32095838e-02]
[-2.68898785e-01 3.15093815e-01 -3.10784578e-02]
[-2.90248752e-01 2.87368000e-01 4.05772030e-02]
[-1.62859082e-01 1.79714262e-01 1.14852905e-01]
[-1.95208192e-01 1.82137489e-01 5.87285310e-02]
[-3.04055244e-01 -4.19897437e-02 -1.56800151e-01]
[-8.81563351e-02 -2.82877415e-01 8.54171440e-02]
[ 1.01012930e-01 -2.23942950e-01 -3.15033905e-02]
[-1.18877657e-01 4.24312949e-02 -2.33635023e-01]
[-1.03444241e-01 2.48754814e-01 2.34296471e-02]
[-1.14487618e-01 -5.75384080e-01 -9.08450782e-02]
[-1.71616733e-01 -4.77660120e-01 9.67647880e-02]
[-7.00186044e-02 -6.70804858e-01 8.52521062e-02]
[ 2.02057883e-01 -6.65608585e-01 -8.79856050e-02]
[ 3.34627986e-01 -4.87401307e-01 -1.36859328e-01]
[ 1.40230328e-01 -4.98071045e-01 -2.29404062e-01]
[ 5.61497957e-02 -7.20549762e-01 -1.31271025e-02]
[-5.81591874e-02 -9.40906048e-01 -9.45677161e-02]
[ 1.30820684e-02 -2.47979790e-01 4.07510623e-02]
[ 6.94963410e-02 -6.79624557e-01 -8.00156035e-04]
[ 7.99441785e-02 -4.94875371e-01 3.82016413e-02]
[-1.68672577e-03 -7.64190912e-01 -8.66440311e-02]
[-2.71644648e-02 -8.91248465e-01 -7.28784502e-02]
[-6.04124069e-02 -7.58821130e-01 -1.36030287e-01]
[-3.36936563e-02 -8.10274601e-01 -1.45270944e-01]
[-7.78085738e-02 -8.01025391e-01 -9.89653692e-02]
[ 5.31038344e-02 -8.41723680e-01 -1.07568718e-01]
[-4.43344414e-02 -8.19930434e-01 -7.00204819e-03]]

My question is: which of these coordinates corresponds to which body part (e.g. right_knee, left_knee, elbow)?
Thank you!
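
A hedged pointer, assuming RSC-Net inherits the joint convention of SPIN, on which it is built: the 49 rows would be 25 OpenPose-style keypoints followed by 24 additional regressed joints, and the authoritative name-to-index mapping would be the JOINT_NAMES list in the repo's constants.py (the same module demo.py imports). Verify against your checkout before relying on it:

import constants  # the repo's constants.py; JOINT_NAMES is assumed to exist, as in SPIN

# `joints` is the (49, 3) array printed above.
for idx, name in enumerate(constants.JOINT_NAMES):
    print(f'{idx:2d} {name:25s} -> {joints[idx]}')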
