mkocabas / pare
Code for ICCV2021 paper PARE: Part Attention Regressor for 3D Human Body Estimation
License: Other
Your work is excellent! I want to retrain your model, and I would appreciate it if you could share your training instructions. Best wishes!
Which version of neural_renderer are you using?
Is it https://github.com/daniilidis-group/neural_renderer?
Thanks for the great work!
I wanted to report an issue I hit while running the evaluation code.
Line 39 in fa90aff
num_workers=-1
raises an error on my machine as below: ValueError: num_workers option should be non-negative; use num_workers=0 to disable multiprocessing.
num_workers=0
resolves the issue.
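For reference, here is a minimal sketch of the constraint PyTorch enforces (using a toy TensorDataset as a stand-in for the repo's dataset):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).float())

# num_workers must be >= 0; -1 raises the ValueError above.
# num_workers=0 loads batches in the main process (no worker subprocesses).
loader = DataLoader(dataset, batch_size=4, num_workers=0)
for (batch,) in loader:
    print(batch)
```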
In order to use the jpeg4py module smoothly, I had to install libturbojpeg using the following command:
sudo apt-get install libturbojpeg
You might not have noticed this dependency yet, since it's not a Python module.
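A quick way to check that the dependency is picked up (a hedged sketch; "example.jpg" is a placeholder for any JPEG on disk):

```python
import jpeg4py as jpeg

# jpeg4py decodes through the TurboJPEG C library installed above.
img = jpeg.JPEG("example.jpg").decode()  # HxWx3 uint8 numpy array
print(img.shape)
```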
I encountered a problem: undefined symbol: _ZTVN5torch3jit6MethodE
I solved it by
!pip install torchtext==0.7
Hi, great work!
I wanted to ask: I need the part features before the heatmaps.
Although, from the code, I see that the final step for the part features is the _get_part_attention_map function.
Afterwards, I see there is an if statement, elif self.use_heatmaps == 'part_segm', under which output['pred_segm_mask'] = heatmaps.
In this case, is pred_segm_mask the body part segmentation you show in the appendix of the paper?
(see attached picture :-) )
Thanks for the exciting work.
Can you provide the 3DPW-OCC dataset mentioned in the paper?
It would be appreciated if you could provide a 3DPW-OCC annotation file or the video sequence names.
Thank you.
In VIBE, there are some operations during SMPL processing that perform fitting in the training loop, but PARE doesn't seem to have them.
Why does it still perform better than VIBE?
Thanks for your good work.
Could you provide the preprocessed datasets?
I think it may be due to an EGL issue.
Hi, thanks for your great work!
In your code, orig_cam is converted from pred_cam. However, during 2D projection a perspective camera with a given focal_length is assumed, which means pred_cam is actually used with a perspective camera.
So I want to ask:
(1) How should I use orig_cam? Should I use it in a weak-perspective or a perspective way?
(2) My end goal: if I assume a focal_length, can I also compute the perspective camera translation in the original image space using orig_cam?
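For context, here is a minimal sketch of the SPIN/VIBE-style conversion from a weak-perspective camera [s, tx, ty] (predicted in crop space) to a perspective translation. The defaults focal_length=5000 and img_res=224 are assumptions borrowed from those codebases and may differ from PARE's config:

```python
import numpy as np

def weak_perspective_to_translation(pred_cam, focal_length=5000.0, img_res=224):
    """Convert a weak-perspective camera [s, tx, ty] into a 3D translation
    for a perspective camera with the given focal length (crop space)."""
    s, tx, ty = pred_cam
    # Similar triangles: a subject rendered at scale s inside an img_res crop
    # sits at depth tz = 2 * f / (img_res * s).
    tz = 2.0 * focal_length / (img_res * s + 1e-9)
    return np.array([tx, ty, tz])

print(weak_perspective_to_translation([0.9, 0.1, -0.05]))
```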
When I run eval.py and occlusion_analysis.py, I get this error:
WARNING: You are using a SMPL model, with only 10 shape coefficients.
WARNING: You are using a SMPL model, with only 10 shape coefficients.
WARNING: You are using a SMPL model, with only 10 shape coefficients.
libEGL warning: DRI2: failed to create dri screen
libEGL warning: DRI2: failed to create dri screen
Traceback (most recent call last):
File "/home/ywk/Paper/PARE/scripts/occlusion_analysis.py", line 265, in
run_dataset(args, hparams)
File "/home/ywk/Paper/PARE/scripts/occlusion_analysis.py", line 129, in run_dataset
model = PARETrainer(hparams=hparams).to(device)
File "/home/ywk/Paper/PARE/scripts/pare/core/trainer.py", line 223, in init
mesh_color=self.hparams.DATASET.MESH_COLOR,
File "/home/ywk/Paper/PARE/scripts/pare/utils/renderer.py", line 43, in init
point_size=1.0
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in init
self._create()
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/offscreen.py", line 134, in _create
self._platform.init_context()
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/platforms/egl.py", line 177, in init_context
assert eglInitialize(self._egl_display, major, minor)
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 415, in call
return self( *args, **named )
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/OpenGL/error.py", line 234, in glCheckError
baseOperation = baseOperation,
OpenGL.raw.EGL._errors.EGLError: EGLError(
err = EGL_NOT_INITIALIZED,
baseOperation = eglInitialize,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7fa447ad8680>,
c_long(0),
c_long(0),
),
result = 0
)
Process finished with exit code 1
Can you tell me how to solve it? Thanks.
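One commonly reported workaround (a hedged sketch, not from this repo) is to force pyrender onto a software GL platform before it is imported, since the EGL display fails to initialize here:

```python
import os

# pyrender reads PYOPENGL_PLATFORM at import time. 'osmesa' uses software
# rendering (requires OSMesa to be installed); 'egl' requires working GPU/EGL
# drivers, which the "DRI2: failed to create dri screen" warning suggests are
# not available on this machine.
os.environ["PYOPENGL_PLATFORM"] = "osmesa"

import pyrender  # import only after setting the platform

renderer = pyrender.OffscreenRenderer(viewport_width=224, viewport_height=224)
```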
Hi, I ran into this issue when trying to run demo.py on Windows. Here is the log:
Traceback (most recent call last):
File "D:\工程文件\Python\PARE\scripts\demo.py", line 242, in <module>
main(args)
File "D:\工程文件\Python\PARE\scripts\demo.py", line 69, in main
input_image_folder, num_frames, img_shape = video_to_images(
File "D:\工程文件\Python\PARE\pare\utils\demo_utils.py", line 195, in video_to_images
subprocess.call(command)
File "C:\Users\Freeman\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 340, in call
with Popen(*popenargs, **kwargs) as p:
File "C:\Users\Freeman\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 854, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "C:\Users\Freeman\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 1307, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
FileNotFoundError: [WinError 2] 系统找不到指定的文件。
The Chinese sentence means "The system cannot find the specified file." It seems this error occurred when the process tried to launch a subprocess to convert the video to images, but I can't figure out exactly which file is missing, nor do I understand why such an error would occur. Has anyone run into a similar problem?
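For what it's worth, WinError 2 from subprocess.Popen usually means the executable itself was not found, not a data file. Assuming video_to_images shells out to ffmpeg (an assumption, but consistent with the traceback), a quick check would be:

```python
import shutil
import subprocess

# WinError 2 means CreateProcess could not find the program to launch.
if shutil.which("ffmpeg") is None:
    raise RuntimeError("ffmpeg is not on PATH; install it and reopen the shell")

subprocess.call(["ffmpeg", "-version"])  # should print ffmpeg's version banner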
Hi @mkocabas, PARE is an interesting work. The analyses of the influence of occlusion are meaningful. Could you please tell me how to get the heatmap results for deeper analysis?
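A minimal visualization sketch, assuming you can grab a (J, H, W) attention or segmentation map from the model output (the random array below is a placeholder for a real tensor such as output['pred_segm_mask']):

```python
import matplotlib.pyplot as plt
import numpy as np

heatmaps = np.random.rand(24, 56, 56)  # placeholder for a real model output
joint = 0
plt.imshow(heatmaps[joint], cmap="jet")
plt.title(f"Attention map, joint {joint}")
plt.colorbar()
plt.savefig("heatmap_joint0.png")
```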
Hi, how many joints do you use to compute PA-MPJPE on 3DPW?
In ProHMR it is 14.
I think a different joint count produces different results.
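For reference, a minimal NumPy sketch of the standard metric (orthogonal Procrustes alignment followed by mean per-joint error); evaluation code in this area typically uses a 14-joint LSP-style subset, and the joint count does affect the number:

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE for (J, 3) predicted and ground-truth joints."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g
    # Optimal similarity transform via SVD (orthogonal Procrustes).
    U, S, Vt = np.linalg.svd(X.T @ Y)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # fix a possible reflection
        Vt[-1] *= -1
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (X ** 2).sum()
    aligned = scale * X @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()

pred = np.random.rand(14, 3)
gt = pred + 0.01 * np.random.randn(14, 3)
print(pa_mpjpe(pred, gt))
```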
I cloned the repo and ran:
source scripts/install_pip.sh
source scripts/prepare_data.sh
python scripts/demo.py --vid_file data/sample_video.mp4 --output_folder logs/demo
Then I got this error:
"(PARE) ywk@hello-Precision-3640-Tower:~/桌面/Paper/PARE$ python scripts/demo.py --vid_file data/sample_video.mp4 --output_folder logs/demo
2021-11-01 21:05:45.270 | INFO | main:main:65 - Frames are already extracted in "logs/demo/sample_video_/tmp_images"
2021-11-01 21:05:45.389 | INFO | main:main:97 - Demo options:
Namespace(batch_size=16, beta=1.0, cfg='data/pare/checkpoints/pare_w_3dpw_config.yaml', ckpt='data/pare/checkpoints/pare_w_3dpw_checkpoint.ckpt', detector='yolo', display=False, draw_keypoints=False, exp='', image_folder=None, min_cutoff=0.004, mode='video', no_render=False, no_save=False, output_folder='logs/demo', save_obj=False, sideview=False, smooth=False, staf_dir='/home/mkocabas/developments/openposetrack', tracker_batch_size=12, tracking_method='bbox', vid_file='data/sample_video.mp4', wireframe=False, yolo_img_size=416)
2021-11-01 21:05:46.038 | INFO | pare.models.backbone.hrnet:init_weights:530 - => init weights from normal distribution
2021-11-01 21:05:46.231 | WARNING | pare.models.backbone.hrnet:init_weights:558 - IMPORTANT WARNING!! Please download pre-trained models if you are in TRAINING mode!
2021-11-01 21:05:46.231 | INFO | pare.models.head.pare_head:init:125 - "Keypoint Attention" should be activated to be able to use part segmentation
2021-11-01 21:05:46.231 | INFO | pare.models.head.pare_head:init:126 - Overriding use_keypoint_attention
2021-11-01 21:05:46.253 | INFO | pare.models.head.pare_head:init:327 - Keypoint attention is active
WARNING: You are using a SMPL model, with only 10 shape coefficients.
2021-11-01 21:05:58.125 | INFO | pare.core.tester:_load_pretrained_model:113 - Loading pretrained model from data/pare/checkpoints/pare_w_3dpw_checkpoint.ckpt
2021-11-01 21:05:58.365 | WARNING | pare.utils.train_utils:load_pretrained_model:45 - Removing "model." keyword from state_dict keys..
2021-11-01 21:05:58.749 | INFO | pare.core.tester:_load_pretrained_model:116 - Loaded pretrained weights from "data/pare/checkpoints/pare_w_3dpw_checkpoint.ckpt"
2021-11-01 21:05:58.753 | INFO | main:main:103 - Input video number of frames 3080
Downloading files from https://raw.githubusercontent.com/mkocabas/yolov3-pytorch/master/yolov3/config/yolov3.cfg
--2021-11-01 21:05:58-- https://raw.githubusercontent.com/mkocabas/yolov3-pytorch/master/yolov3/config/yolov3.cfg
Connecting to 127.0.0.1:8889... connected.
Proxy request sent, awaiting response... 200 OK
Length: 8338 (8.1K) [text/plain]
Saving to: "/home/ywk/.torch/config/yolov3.cfg"
yolov3.cfg 100%[===================>] 8.14K --.-KB/s in 0s
2021-11-01 21:05:59 (32.7 MB/s) - "/home/ywk/.torch/config/yolov3.cfg" saved [8338/8338]
Running Multi-Person-Tracker
100%|█████████████████████████████████████████| 257/257 [01:23<00:00, 3.09it/s]
Finished. Detection + Tracking FPS 37.06
2021-11-01 14:54:27.210 | INFO | pare.core.tester:run_on_video:287 - Running PARE on each tracklet...
0%| | 0/278 [00:00<?, ?it/s]2021-11-07 14:54:28.564 | INFO | pare.core.tester:run_on_video:362 - Converting smpl keypoints 2d to original image coordinate
0%|▏ | 1/278 [00:01<06:12, 1.34s/it]2021-11-07 14:54:30.089 | INFO | pare.core.tester:run_on_video:362 - Converting smpl keypoints 2d to original image coordinate
1%|▎ | 2/278 [00:02<06:26, 1.40s/it]2021-11-07 14:54:31.578 | INFO | pare.core.tester:run_on_video:362 - Converting smpl keypoints 2d to original image coordinate
1%|▍ | 3/278 [00:04<06:32, 1.43s/it]
................
100%|█████████████████████████████████████████| 278/278 [03:13<00:00, 1.44it/s]
2021-11-01 21:10:36.733 | INFO | main:main:115 - PARE FPS: 15.92
2021-11-01 21:10:36.733 | INFO | main:main:117 - Total time spent: 277.98 seconds (including model loading time).
2021-11-01 21:10:36.733 | INFO | main:main:118 - Total FPS (including model loading time): 11.08.
2021-11-01 21:10:36.734 | INFO | main:main:121 - Saving output results to "logs/demo/sample_video_/pare_output.pkl".
WARNING: You are using a SMPL model, with only 10 shape coefficients.
libEGL warning: DRI2: failed to create dri screen
libEGL warning: DRI2: failed to create dri screen
Traceback (most recent call last):
File "scripts/demo.py", line 238, in
main(args)
File "scripts/demo.py", line 126, in main
orig_width, orig_height, num_frames)
File "./pare/core/tester.py", line 392, in render_results
wireframe=self.args.wireframe
File "./pare/utils/vibe_renderer.py", line 66, in init
point_size=1.0
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/offscreen.py", line 31, in init
self._create()
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/offscreen.py", line 134, in _create
self._platform.init_context()
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/pyrender/platforms/egl.py", line 177, in init_context
assert eglInitialize(self._egl_display, major, minor)
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 415, in call
return self( *args, **named )
File "/home/ywk/anaconda3/envs/PARE/lib/python3.7/site-packages/OpenGL/error.py", line 234, in glCheckError
baseOperation = baseOperation,
OpenGL.raw.EGL._errors.EGLError: EGLError(
err = EGL_NOT_INITIALIZED,
baseOperation = eglInitialize,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7f9a2e003710>,
c_long(0),
c_long(0),
),
result = 0
)"
Can you tell me how to solve this error?
I really appreciate you sharing this state-of-the-art model.
I am trying to export a .fbx file (via the VIBE implementation), but there seem to be some differences between VIBE and PARE in the shape of the SMPL pose parameters:
the shape of pose in VIBE is (n_frames, 72),
but in PARE it is (n_frames, 24, 3, 3).
How can I export a .fbx from the .pkl file generated by PARE?
Best regards,
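Regarding the shape mismatch above, here is a hedged sketch that converts per-joint rotation matrices back to the flat axis-angle format the VIBE exporter expects, assuming pred_pose holds rotations in SMPL joint order:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotmats_to_smpl_pose(pred_pose):
    """(N, 24, 3, 3) rotation matrices -> (N, 72) axis-angle SMPL pose."""
    n = pred_pose.shape[0]
    # as_rotvec() returns axis-angle vectors (axis * angle, in radians).
    aa = Rotation.from_matrix(pred_pose.reshape(-1, 3, 3)).as_rotvec()
    return aa.reshape(n, 72)

identity_pose = np.tile(np.eye(3), (2, 24, 1, 1))  # 2 frames, all joints at rest
print(rotmats_to_smpl_pose(identity_pose).shape)   # (2, 72)
```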
Hi, hope you are doing well.
I'm looking to use PARE for a project.
The idea is to pass a multi-person video dataset through the model, have the humanoids take on the gestures, and save just the humanoids in a separate video.
I then want to attach some parameters to the result, like age, gender, race, etc., so that in the next step, humanizing the "humanoid", I can "dress" it with provided body and face images.
Please let me know how to separate the results into different video datasets of the "humanoids" replicating the movements.
I'd appreciate it!
When I use import pare.core.tester in my program, I get a "Segmentation fault (core dumped)" error. How can I fix this?
Hello, are the COCO-EFT, MPII-EFT, and LSPET-EFT datasets mentioned in your paper collections of images? I want to use them in my training, following your methods. I found some JSON files from the EFT method proposed by FAIR, but I don't understand how you used them in your training. Thank you very much for answering my question.
The page is not available.
No module named 'multi_person_tracker'
I'm trying to fit SMPL to scans. Currently, I render each scan from different views and choose the PARE prediction with the lowest Chamfer distance. Is there any feasible way to improve view consistency?
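For reference, a minimal sketch of the selection criterion described above (symmetric Chamfer distance between two point clouds, via SciPy KD-trees):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N,3) and b (M,3).

    Run it between the scan vertices and each candidate SMPL mesh's vertices,
    then keep the candidate with the lowest value.
    """
    da = cKDTree(b).query(a)[0]  # nearest-neighbor distances a -> b
    db = cKDTree(a).query(b)[0]  # nearest-neighbor distances b -> a
    return da.mean() + db.mean()

a = np.random.rand(1000, 3)
b = a + 0.001 * np.random.randn(1000, 3)
print(chamfer_distance(a, b))
```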
How do I convert the .pkl output to .fbx or .bvh? I tried using VIBE's exporter, but it still doesn't work. Looking forward to your reply, thanks!
Hi, could you please share the required data via Google Drive? I cannot download it from Dropbox. Thanks! @mkocabas
Hi @mkocabas,
Thanks for sharing this great work with helpful inspiration. I noticed that this repo contains some of the code to train PARE, but some data is missing (e.g., 3dpw_train.npz, 3dpw_all_test_with_mmpose.npz). May I kindly ask what the status of the training code is, and whether it is possible to provide the missing training data listed above? It would be great if so. Thanks again for sharing ideas and code with the community.
Hello, excellent work! Is there a Colab demo? The "Open in Colab" badge links to a 404 Not Found page.
Hello, has anyone tried MobileNetV2 as the backbone model?
Hi,
I'm trying to train a model, but the training instructions do not exist.
When can you upload the training instructions?
Thank you.
Hello, thanks for your excellent work! You mention in the paper that more details are provided in the Sup. Mat., but I haven't found it. Where is the Sup. Mat.?
Hi,
Congratulations on such great work!
When I run PARE on an image folder, I get an output pkl file that doesn't match what you specify in README.md.
For example, you specify an output key:
pose (n_frames, 72) # SMPL pose parameters
But in the output from an image folder I get this:
pred_pose (1, 24, 3, 3)
I guess you are transforming the 72 parameters into 24 joint rotation matrices, but I don't know the exact format of these rotations.
Would it be possible to also get the original pose parameters?
Another question is about the joints output, which includes 49 elements,
but the SMPL skeleton has only 24 joints.
How do I relate these 49 positions to the original 24 joints?
Thank you in advance.
Sincerely,
Alejandro Beacco
Thanks for sharing this great and interesting work!
When will you release the training code?
Hi, after running the image-folder demo, I can't find joints2d in the output file. Would it be possible to share the STAF directory? The path in demo.py is '/home/mkocabas/developments/openposetrack', which I can't find. Thank you very much.
Hi,
I'm trying to run the demo, but the file pare-github-data.zip at https://www.dropbox.com/s/aeulffqzb3zmh8x/pare-github-data.zip cannot be downloaded.
Is there any other way I can get it?
Thank you so much.
Hello, thanks for your excellent work! Could you give me some advice on how to get the Human3.6M MoShed data for training?
Hello @mkocabas,
Thank you for the work.
I ran demo.py on my folder of images and got the individual pkl files. Upon checking the shapes of all the dict keys,
I expected pred_pose to be (1, 72), since I have one pkl file per frame,
but I have (1, 24, 3, 3). I am aware that 24*3 would be the total 72 pose parameters, but why the extra 3 at the end?
Can you tell me how I can transform it into a (1, 72) shape?
Thanks for your work! Could you release the preprocessed data and training instructions, please? Thanks!
I found that the format of your 3DOH annotations in data/dataset_extras differs from the original. Could you share your preprocessing code?