
human_dynamics's People

Contributors

akanazawa, jasonyzhang, pannaf


human_dynamics's Issues

Human3.6M mosh_data

Could you please provide the mosh_data, including the pose and shape parameters separately? It is critical for reproducing the best performance mentioned in your paper. Thanks.

Results are wrong

The predicted parameters are totally wrong (see attached image). What could be the problem? Can you tell me?

A problem in read_human36m.py

Dear author

When I tried to run read_human36m.py, it gave the following error. I have already set up the directories and CDF. Could you help with this issue? Thanks in advance.

Regards


read_human36m.py:184: DeprecationWarning: This method will be removed in future versions.  Use 'list(elem)' or iteration over elem instead.
  for tr in child.getchildren():
read_human36m.py:185: DeprecationWarning: This method will be removed in future versions.  Use 'list(elem)' or iteration over elem instead.
  if tr.getchildren()[0].text == str(myactionno):
read_human36m.py:186: DeprecationWarning: This method will be removed in future versions.  Use 'list(elem)' or iteration over elem instead.
  if tr.getchildren()[1].text == str(trialno):
read_human36m.py:187: DeprecationWarning: This method will be removed in future versions.  Use 'list(elem)' or iteration over elem instead.
  return tr.getchildren()[2 + sbj_id - 1].text
Sub: 1, action 1,  trial 1, cam 1
Orig seq_name Directions 1, new_seq_name Directions_0
Saving to /home/seamanj/Software/Dataset/human36m_25fps/S1/Directions_0/cam_0
Writing /home/seamanj/Software/Dataset/human36m_25fps/S1/Directions_0/cam_0/camera_wext.pkl
Traceback (most recent call last):
  File "read_human36m.py", line 466, in <module>
    main(raw_data_root, output_root, frame_skip)
  File "read_human36m.py", line 397, in main
    pose3d_paths[cam_id - 1], is_3d=True, joint_ids=joint_ids)
IndexError: list index out of range
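
As an aside, the DeprecationWarnings are straightforward to silence: Element.getchildren() was deprecated in Python's xml.etree.ElementTree, and iterating over the element (or calling list(elem)) is the supported replacement. A minimal sketch of the pattern, using a hypothetical table layout:

import xml.etree.ElementTree as ET

# Hypothetical metadata table, just to show the replacement pattern.
root = ET.fromstring(
    '<mapping><tr><td>1</td><td>1</td><td>S1_Directions</td></tr></mapping>')

for tr in list(root):         # was: for tr in child.getchildren()
    cells = list(tr)          # was: tr.getchildren()
    if cells[0].text == '1' and cells[1].text == '1':
        print(cells[2].text)  # -> S1_Directions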

Issues running demo

Hi,

I have followed the demo instructions from the README, but I can't seem to run the demo without encountering this error.

[screenshot of the error]

I have all requirements installed and am running the correct versions of CUDA and Python.

Any suggestions about how to fix this?

fine-tune the model, cannot load mocap_neutrMosh

My goal is to finetune the published model on my 2D pose dataset.

As a starting point, I am trying to create the simplest training script without Human3.6M. I run:
python -m src.main --pretrained_model_path ./models/hmr_noS5.ckpt-642561 --data_dir ./human_dynamics/tf_dataset --batch_size=8 --datasets penn_action --log_dir logs_release --num_conv_layers 3 --T 20 --mocap_datasets CMU,jointLim --use_3d_label

I created the mocap dataset for CMU,jointLim in /tf_dataset/mocap_neutrMosh/neutrSMPL_ and was able to load the created tf_records manually.

I get the following error:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Feature: shape (data type: float) is required but could not be found.
[[Node: input_smpl_loader/read_smpl_data/ParseSingleExample/ParseSingleExample = ParseSingleExample[Tdense=[DT_FLOAT, DT_FLOAT], dense_keys=["pose", "shape"], dense_shapes=[[72], [10]], num_sparse=0, sparse_keys=[], sparse_types=[], _device="/job:localhost/replica:0/task:0/device:CPU:0"](input_smpl_loader/read_smpl_data/ReaderReadV2:1, input_smpl_loader/read_smpl_data/ParseSingleExample/Const, input_smpl_loader/read_smpl_data/ParseSingleExample/Const)]]
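
For what it's worth, the parse error says the SMPL reader expects every record to carry dense float features named "pose" (length 72) and "shape" (length 10). A minimal sketch (TF 1.x API, hypothetical file name) of writing an example that matches that schema:

import numpy as np
import tensorflow as tf

def float_feature(values):
    return tf.train.Feature(float_list=tf.train.FloatList(value=values))

# Dummy parameters, just to illustrate the expected schema.
example = tf.train.Example(features=tf.train.Features(feature={
    'pose': float_feature(np.zeros(72, dtype=np.float32)),
    'shape': float_feature(np.zeros(10, dtype=np.float32)),
}))

with tf.python_io.TFRecordWriter('neutrSMPL_sample.tfrecord') as writer:
    writer.write(example.SerializeToString())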

Bounding Box Processing penn_action

When using visualize_tf_records on the penn_action dataset, it seems that the pre-processing jitters the person bounding box around, introducing large temporal variability that doesn't exist in the original video.
In the example below, from Frame 0 to Frame 3 there was no major change in the person's location or joints, yet the pre-processed images are considerably different.
Do you think smoothing the human bounding box over time would be beneficial, or is it an advantage to generate this jitter?

penn_action/train/penn_action_00_copy00_hmr_noS5.ckpt-642561.tfrecord
Frame 0: [image]
Frame 1: [image]
Frame 3: [image]

Compute_neutral_shape for 3dpw dataset

Hi, I am having some trouble preprocessing and converting the datasets into tfrecords. Specifically, for the 3DPW dataset, the compute_neutral_shape.py file needs a neutral SMPL model named neutral_smpl_with_cocoplustoesankles_reg.pkl. Could you provide the pickle file or give me a download link?

Thanks in advance.

Can't install pytorch==0.4.0

On Windows:
I can't install pytorch==0.4.0; the oldest version I can find at https://pytorch.org/get-started/previous-versions/ is 0.4.1.

Collecting torch==0.4.0 (from -r requirements.txt (line 1))
ERROR: Could not find a version that satisfies the requirement torch==0.4.0 (from -r requirements.txt (line 1)) (from versions: 0.1.2, 0.1.2.post1)
ERROR: No matching distribution found for torch==0.4.0 (from -r requirements.txt (line 1))

Edit: works well on Ubuntu 16.04, CUDA 9.0.

why “num_fill = count * B * T - N” ?

# Need margin on both sides. Num good frames = T - 2 * margin.
margin = (self.fov - 1) // 2
g = self.sequence_length - 2 * margin
count = np.ceil(N / (g * B)).astype(int)
num_fill = count * B * T - N
images_padded = np.concatenate((
    np.zeros((margin, H, W, 3)),  # Front padding.
    all_images,
    np.zeros((num_fill + margin, H, W, 3)),  # Back padding.
), axis=0)
images_batched = []
# [ m ][ g ][ m ]  Slide over by g every time.
#       [ m ][ g ][ m ]
for i in range(count * B):
    images_batched.append(images_padded[i * g : i * g + T])
images_batched = np.reshape(images_batched, (count, B, T, H, W, 3))

Is this the proper way to add margins for every batch?
Suppose N = len(all_images) == 101, B = 1, T = 20; then count == 13,
and num_fill == 159!

T >> margin would yield a reasonable num_fill; alternatively, one could add a margin to each frame sequence separately. The way the margin is added in the code seems faster.
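
For concreteness, the reporter's numbers can be reproduced in a few lines (assuming fov = 13, so margin = 6 and g = 8, which is what makes count come out to 13):

import numpy as np

N, B, T = 101, 1, 20
margin = 6                         # (fov - 1) // 2 with fov = 13 (assumed)
g = T - 2 * margin                 # 8 usable frames per window
count = int(np.ceil(N / (g * B)))  # ceil(101 / 8) = 13
num_fill = count * B * T - N       # 13 * 20 - 101 = 159
print(count, num_fill)             # 13 159

# The last window only needs (count * B - 1) * g + T = 116 padded frames,
# while margin + N + num_fill + margin = 272, so most of the 159 fill
# frames are never sliced into any window.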

NBA dataset unavailable

I have seen "Unless otherwise indicated, all models are trained with Human3.6M, Penn Action, and NBA." in your paper, but I can't find any clues about the NBA dataset in this repo. Can you give any pointers to it? I wish to reproduce your experiments correctly.

A question about call_hmr_ief

Dear author:

Thanks for your wonderful work. I have a small question about the call_hmr_ief() function.

In this function, hmr_ief is called three times: once for estimating the current pose, and twice for the past and the future. However, I found that the parameters for the current and the past are almost the same. So how can we estimate the past and the current pose with the same function and parameters? I am a little confused. Thanks in advance.

Regards

Neural renderer function call fails

File "G:\firestrike2020\human_dynamics-master\src\util\render\nmr_renderer.py", line 58, in init
img_size, camera_mode='look_at', perspective=False)
TypeError: init() got an unexpected keyword argument 'camera_mode'

in file nmr_render.py, it failed to call nr.Renderer(), feeds three parameters
self.renderer = nr.Renderer(
img_size, camera_mode='look_at', perspective=False)

but the nr.Renderer's definition as follow:
class Renderer(object):
def init(self):

it accepts no parameter, it's a wired error
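
This looks like a mismatch between the installed neural_renderer and the build the repo expects, whose Renderer constructor takes img_size, camera_mode, and perspective. A quick sketch to check which build is installed:

import inspect
import neural_renderer as nr

# If this prints a bare (self), the installed package is not the build that
# the repo's nmr_renderer.py was written against.
print(inspect.signature(nr.Renderer.__init__))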

error with transpose in neural renderer

Hello, I get the following error in the visualization stage. It seems pretty basic; could this be a numpy version issue, maybe?

Running Anaconda3 on Windows 7 with numpy 1.13.3.

Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\human_dynamics\human_dynamics-master\demo_video.py", line 246, in <module>
    main(model_hmmr)
  File "D:\human_dynamics\human_dynamics-master\demo_video.py", line 234, in main
    run_on_video(model, config.vid_path, trim_length)
  File "D:\human_dynamics\human_dynamics-master\demo_video.py", line 216, in run_on_video
    trim_length=trim_length
  File "D:\human_dynamics\human_dynamics-master\demo_video.py", line 190, in predict_on_tracks
    trim_length=trim_length,
  File "D:\human_dynamics\human_dynamics-master\src\evaluation\run_video.py", line 161, in render_preds
    rotated_view=True,
  File "D:\human_dynamics\human_dynamics-master\src\util\render\nmr_renderer.py", line 416, in visualize_img_orig
    no_text=no_text,
  File "D:\human_dynamics\human_dynamics-master\src\util\render\nmr_renderer.py", line 306, in visualize_img
    rend_img = renderer(vert, cam=cam, img=input_img, color_name=mesh_color)
  File "D:\human_dynamics\human_dynamics-master\src\util\render\nmr_renderer.py", line 154, in __call__
    rend = rend.data.cpu().numpy().transpose((0, 2, 3, 1))
AttributeError: 'tuple' object has no attribute 'data'
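
One guess (an assumption, not confirmed): some neural_renderer builds return a (rgb, depth, alpha) tuple from the render call, while this code path expects a single RGB tensor, which would produce exactly this AttributeError regardless of the numpy version. A defensive sketch:

def to_numpy_images(rend):
    # Some neural_renderer builds return (rgb, depth, alpha); others return
    # just the rgb tensor. Normalize before converting to HWC numpy images.
    if isinstance(rend, tuple):
        rend = rend[0]  # keep the RGB tensor (assumption about tuple order)
    return rend.data.cpu().numpy().transpose((0, 2, 3, 1))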

Root position

May I ask how to get the root position/transform?

stuck at restoring parameters while running demo

I am running the demo with the following command: python -m demo_video --vid_path demo_data/penn_action-2278.mp4 --load_path models/hmmr_model.ckpt-1119816
However, it starts running and then gets stuck after these terminal logs:

Restoring resnet vars from models/hmr_noS5.ckpt-642561
INFO:tensorflow:Restoring parameters from models/hmr_noS5.ckpt-642561
Restoring checkpoint  models/hmmr_model.ckpt-1119816
INFO:tensorflow:Restoring parameters from models/hmmr_model.ckpt-1119816

I have waited for around half an hour, but it shows no activity.

no metadata.xml

I ran read_human36m.py, but there is no metadata.xml. I have camera.mat, human36m_big.mat, part_cmap.mat, and skel.mat, but I do not know how to produce metadata.xml. Maybe I am missing something?

List index out of range when running the demo

Hi author! I'm sorry to bother you.

I ran python -m demo_video --vid_path demo_data/penn_action-2278.mp4 --load_path models/hmmr_model.ckpt-1119816 on CUDA 9.0 and torch 0.4, and encountered an error:

WARNING:tensorflow:From /home/chensien/venv_hmmr/lib/python3.6/site-packages/tensorflow/contrib/slim/python/slim/nets/resnet_v2.py:224: calling reduce_mean (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
2019-12-23 20:26:08.335596: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-23 20:26:08.715735: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:03:00.0
totalMemory: 10.91GiB freeMemory: 7.90GiB
2019-12-23 20:26:09.091559: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1356] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.645
pciBusID: 0000:05:00.0
totalMemory: 10.91GiB freeMemory: 2.18GiB
2019-12-23 20:26:09.091729: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1435] Adding visible gpu devices: 0, 1
2019-12-23 20:26:10.527439: I tensorflow/core/common_runtime/gpu/gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-23 20:26:10.527511: I tensorflow/core/common_runtime/gpu/gpu_device.cc:929] 0 1
2019-12-23 20:26:10.527521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 0: N Y
2019-12-23 20:26:10.527543: I tensorflow/core/common_runtime/gpu/gpu_device.cc:942] 1: Y N
2019-12-23 20:26:10.528594: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6702 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0, compute capability: 6.1)
2019-12-23 20:26:10.657091: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 6703 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:05:00.0, compute capability: 6.1)
2019-12-23 20:26:10.660406: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 6.55G (7029050368 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.662843: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 5.89G (6326145024 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.665144: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 5.30G (5693530624 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.667553: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 4.77G (5124177408 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.670259: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 4.29G (4611759616 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.674016: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 3.87G (4150583552 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.676184: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 3.48G (3735525120 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.678594: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 3.13G (3361972480 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.681124: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 2.82G (3025775104 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.683562: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 2.54G (2723197440 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-12-23 20:26:10.686430: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 2.28G (2450877696 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
Restoring resnet vars from models/hmr_noS5.ckpt-642561
INFO:tensorflow:Restoring parameters from models/hmr_noS5.ckpt-642561
Restoring checkpoint models/hmmr_model.ckpt-1119816
INFO:tensorflow:Restoring parameters from models/hmmr_model.ckpt-1119816
Computing tracks on demo_data/penn_action-2278.mp4.
Writing frames to file: done!
Per-frame detection: done!
Tracking: done!
Not all frames have people detected in it.
Traceback (most recent call last):
  File "/home/chensien/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/chensien/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/chensien/human_dynamics/demo_video.py", line 250, in <module>
    main(model_hmmr)
  File "/home/chensien/human_dynamics/demo_video.py", line 238, in main
    run_on_video(model, config.vid_path, trim_length)
  File "/home/chensien/human_dynamics/demo_video.py", line 220, in run_on_video
    trim_length=trim_length
  File "/home/chensien/human_dynamics/demo_video.py", line 132, in predict_on_tracks
    all_kps = get_labels_poseflow(poseflow_path, len(im_paths))
  File "/home/chensien/human_dynamics/demo_video.py", line 87, in get_labels_poseflow
    if frame_ids[0] != 0:
IndexError: list index out of range

Could you please give me a hand with this? Thanks!
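
The log line "Not all frames have people detected in it." suggests the PoseFlow output contains no usable track, so frame_ids comes back empty and frame_ids[0] raises IndexError. A guard along these lines (a sketch, not the repo's code) would at least fail with a clearer message:

def first_tracked_frame(frame_ids):
    # frame_ids is the list of frames in which the chosen track was
    # detected; it can be empty if PoseFlow found no people.
    if not frame_ids:
        raise ValueError('PoseFlow returned no tracked frames; check the '
                         'AlphaPose/PoseFlow json output for this video.')
    return frame_ids[0]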

MemoryError sometimes when there is more than one person in the video

I sometimes get a MemoryError when there is more than one person in the video; the error is below.
Also, I used --track_id 1 hoping to get only the largest tracked person, but it frequently isn't. Is there any way to choose only the largest or most confident person?

Start loading json file...

100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:16<00:00, 3.12it/s]
Start pose tracking...

100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49/49 [04:09<00:00, 5.09s/it]
This video contains 27 people.
Export tracking results to json...

100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [00:00<00:00, 126258.40it/s]
PoseFlow successfully ran!

Total number of PoseFlow tracks: 19
Processing track_id: 1

Preprocessing frames.

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/yeo/Desktop/venv_hmmr/human_dynamics-master/demo_video.py", line 246, in <module>
    main(model_hmmr)
  File "/home/yeo/Desktop/venv_hmmr/human_dynamics-master/demo_video.py", line 232, in main
    run_on_video(model, vid_path, trim_length)
  File "/home/yeo/Desktop/venv_hmmr/human_dynamics-master/demo_video.py", line 216, in run_on_video
    trim_length=trim_length
  File "/home/yeo/Desktop/venv_hmmr/human_dynamics-master/demo_video.py", line 150, in predict_on_tracks
    bbox_param=bbox_params_smooth[i],
  File "/home/yeo/Desktop/venv_hmmr/human_dynamics-master/src/evaluation/run_video.py", line 81, in process_image
    mode='edge'
  File "/home/yeo/Desktop/venv_hmmr/lib/python3.5/site-packages/numpy/lib/arraypad.py", line 1381, in pad
    newmat = _prepend_edge(newmat, pad_before, axis)
  File "/home/yeo/Desktop/venv_hmmr/lib/python3.5/site-packages/numpy/lib/arraypad.py", line 175, in _prepend_edge
    axis=axis)
MemoryError

If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at [email protected]

You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.

Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
%config Application.verbose_crash=True

3D information output

Hello. I find the project very impressive and want to use it in my own project.

I have now successfully run the code, but I do not know where to get the 3D information (maybe (x, y, z) coordinate data?).

Could you give some more explanation of this?
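
In case it helps: the demo writes its predictions into the output directory (an hmmr_output folder is mentioned in another issue above). A hedged sketch of inspecting it; the path and key names here are assumptions, so print the keys first:

import pickle

# Hypothetical path; adjust to wherever your demo run wrote its output.
with open('demo_output/penn_action-2278/hmmr_output/hmmr_output.pkl', 'rb') as f:
    preds = pickle.load(f)

print(preds.keys())  # inspect what is actually stored before relying on it
# The 3D information should be the per-frame SMPL outputs, e.g. mesh
# vertices (T x 6890 x 3) and 3D joints -- key names vary, hence the print.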

How long will it take to convert the datasets into TFRecords?

Dear author

I am trying to train the model, but first it has to convert the datasets into TFRecords. For a time estimate, I was wondering how long it takes to convert each dataset into TFRecords, for example the Penn Action dataset and Human3.6M, and how long it takes to train the model on them?

Thanks in advance

How to accelerate prediction with the hallucinator?

Hi! I used YOLO to get the bounding box and ran inference on a single image with the hallucinator model, but the hmmr prediction takes over 600 ms to produce the joints on my 1080 GPU machine. Is there anything I can do to speed up this process? I would be very grateful if you could provide some pointers.

About the config file

First of all, I think the model is very cool, so I really want to run the provided model myself.
But when I run the command "python -m demo_video --vid_path demo_data/penn_action-2278.mp4", there is an error: "[!] You need to specify load_path to load a pretrained model".
I think it may need a config file. Would you mind providing the config file? Thanks very much.
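
(For reference, the demo invocations quoted in other issues above pass the checkpoint on the command line rather than through a config file, e.g. python -m demo_video --vid_path demo_data/penn_action-2278.mp4 --load_path models/hmmr_model.ckpt-1119816.)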

Question about 3DPW

Thank you for your great work!

I have a question about 3DPW. Why did you compute the 3D joints from the SMPL model instead of using the 'jointPositions' provided in the 3DPW pkl file?

a question about read_human36m.py

@akanazawa https://github.com/akanazawa/human_dynamics/blob/master/src/datasets/h36/read_human36m.py

# Link the mp4 with the new name to the old name.
action_name = action_names[action_id - 1]
out_video_name = 'S{}_{}_{}_cam_{}.mp4'.format(
    sbj_id, action_name, trial_id - 1, cam_id - 1)
out_video_path = join(output_dir, out_video_name)
if not exists(out_video_path):
    orig_vid_path = video_paths[cam_id - 1].replace(" ", "\ ")
    cmd = 'ln -s {} {}'.format(orig_vid_path, out_video_path)
    ret = system(cmd)
    if ret > 0:
        print('something went wrong!')
        import ipdb
        ipdb.set_trace()

I don't know what this code does, and I hit an error when I ran read_human36m.py: the system(cmd) call fails with ret = 1. Have you met this issue, and how did you solve it? Thanks!
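
For context, that block just symlinks each original Human3.6M mp4 to a normalized file name; ret = 1 means the ln -s shell command failed (for example, bad quoting of spaces, or a link that already exists). An equivalent that avoids the shell entirely, sketched with illustrative paths:

import os

orig_vid_path = '/data/h36m/S1/Videos/Directions 1.54138969.mp4'  # illustrative
out_video_path = '/data/h36m_out/S1_Directions_0_cam_0.mp4'       # illustrative

if not os.path.exists(out_video_path):
    # os.symlink needs no shell quoting, so spaces in the source path are fine.
    os.symlink(orig_vid_path, out_video_path)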

Created meshgrid gets distorted very fast

I have run the demo as instructed. The estimated human mesh is accurate for the first 3-4 frames, but then it gets distorted for the rest of the frames and no longer resembles a human shape.
I have tried running the code on the provided videos and on other videos, but I always get 3-4 accurate frames and the rest of the frames are distorted.
What could be the reason for that? Is that supposed to happen?
Thanks,

Problem in testing my own video

Hi @akanazawa, first of all, thanks for this kind work; it's really helpful. I'm facing the following problem...

  1. This works fine with your provided videos using the pre-trained weights. But when I run it on a random video (single person), the whole process runs successfully up to "PoseFlow successfully ran!", then gets stuck at "Processing frames."
  2. I checked the demo_output folder --> video folder and found only two folders inside it: AlphaPose_output and video_frames. There are no other folders such as hmmr_output and hmmr_output_crop.
  3. In the AlphaPose_output folder there are image frames of the video along with the json files.
  4. When I run it with your videos, the image frames are not stored in the AlphaPose_output folder and all files go into separate folders, as specified in the project code. I'm unable to understand what the problem is with a wild video; the code is fine, and this problem occurs only with a new video.
  5. Moreover, I checked the json files generated from your videos and from my video; the json files seem to be OK.

Getting errors while training

Dear author

While training, I got the following errors. Do you think they are fatal? What should I do about them? Thanks.

2019-12-04 14:20:13.054281: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-12-04 14:20:13.090140: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2208000000 Hz
2019-12-04 14:20:13.091755: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x562da46dafc0 executing computations on platform Host. Devices:
2019-12-04 14:20:13.091783: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /home/seamanj/Software/anaconda3/envs/human_dynamics/lib/python3.7/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
loading from /home/seamanj/Software/human_dynamics/models/hmr_noS5.ckpt-642561
2019-12-04 14:20:27.351456: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_66. Error: Pack node (smpl_main_4/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:27.351497: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_68. Error: Pack node (smpl_main_5/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:27.353487: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot_1/strided_slice. Error: Pack node (smpl_main_3/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:27.354405: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_32. Error: Pack node (smpl_main_1/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:27.354423: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_34. Error: Pack node (smpl_main_2/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:27.356630: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot/strided_slice. Error: Pack node (smpl_main/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:32.819601: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_66. Error: Pack node (smpl_main_4/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:32.819636: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_68. Error: Pack node (smpl_main_5/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:32.821490: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot_1/strided_slice. Error: Pack node (smpl_main_3/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:32.822380: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_32. Error: Pack node (smpl_main_1/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:32.822398: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_34. Error: Pack node (smpl_main_2/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:32.824279: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot/strided_slice. Error: Pack node (smpl_main/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:39.175761: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_66. Error: Pack node (smpl_main_4/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:39.175816: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_68. Error: Pack node (smpl_main_5/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:39.180563: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot_1/strided_slice. Error: Pack node (smpl_main_3/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:39.182307: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_32. Error: Pack node (smpl_main_1/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:39.182335: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_34. Error: Pack node (smpl_main_2/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:20:39.185923: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot/strided_slice. Error: Pack node (smpl_main/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:21:11.057179: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_66. Error: Pack node (smpl_main_4/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:21:11.057221: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_68. Error: Pack node (smpl_main_5/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:21:11.060505: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot_1/strided_slice. Error: Pack node (smpl_main_3/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:21:11.062146: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_32. Error: Pack node (smpl_main_1/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:21:11.062165: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node strided_slice_34. Error: Pack node (smpl_main_2/stack_1) axis attribute is out of bounds: 2
2019-12-04 14:21:11.064580: W ./tensorflow/core/grappler/optimizers/graph_optimizer_stage.h:241] Failed to run optimizer ArithmeticOptimizer, stage RemoveStackStridedSliceSameAxis node batch_orth_proj_idrot/strided_slice. Error: Pack node (smpl_main/stack_1) axis attribute is out of bounds: 2

Could you please post my fork link in your README?

I've forked your hmmr project and successfully run it on Windows. I've made small changes (e.g. writing predicts.pose into json) and reused them in Unity. Could you please post my fork link in your README? (https://github.com/Zju-George/human_dynamics) I believe people really want the results to be usable in their own applications, and I really want to contribute!

If you can't do that, I understand and that's okay; just mention it. Thank you, guys, you have done an amazing job! @jasonyzhang @pannaf @akanazawa

a question about smpl_tfrecords.py

Dear author

When I tried to convert the SMPL data to tfrecords for adversarial prior training, I noticed a flag named "temporal" whose default value is true. Its corresponding output directory is mocap_neutrMosh_temporal. However, during training, data_loader_sequence.py loads data from the mocap_neutrMosh folder. So my question is: do we really need mocap_neutrMosh_temporal for training? What is it used for? Many thanks.

image test

Can I run inference on a single image? How do I use the code for that?

How to use hallucinator to predict with single image?

Hi! I really appreciate seeing this brilliant work with code provided. The project runs successfully on my machine. I am wondering how to use the hallucinator for prediction, since the demo script says only the pred mode is supported for now, and I have found 'hal' among the predefined modes. Looking forward to further updates, thanks!

Training code

Is there an ETA for the training code? Thanks for the great work.

the input of the hallucinator?

Dear author

I am a little bit confused about the hallucinator.

The paper (Figure 2) says, "We also train a hallucinator h that takes a single image feature phi_t and learns to hallucinate its temporal representation". However, in the source code, the input is the whole sequence of image features. Is this a contradiction? Thanks in advance.

def fc2_res(phi, name='fc2_res'):
    """
    Converts pretrained (fixed) resnet features phi into movie strip.

    This applies 2 fc then add it to the orig as residuals.

    Args:
        phi (B x T x 2048): Image feature.
        name (str): Scope.

    Returns:
        Phi (B x T x 2048): Hallucinated movie strip.
    """
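
The apparent contradiction resolves if the two fc layers act only on the feature axis, which the B x T x 2048 -> B x T x 2048 signature suggests: each time step is transformed independently, so each frame's phi_t is still hallucinated on its own, and the sequence dimension is just a batching convenience. A toy numpy sketch of that shape behavior (weights and shapes are illustrative, not the trained model):

import numpy as np

B, T, D = 2, 20, 2048
phi = np.random.randn(B, T, D)      # per-frame image features
W1 = np.random.randn(D, D)
W2 = np.random.randn(D, D)

hidden = np.maximum(phi @ W1, 0)    # fc + relu, applied per (b, t) slice
movie_strip = phi + hidden @ W2     # residual add, as the docstring says

# No information crosses the T axis: each movie_strip[b, t] depends only on
# phi[b, t], matching "takes a single image feature phi_t".
assert movie_strip.shape == (B, T, D)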

Model fails on demo video

Hi,

I installed the model following the instructions, and I am using the indicated version of the neural renderer.

However, when I test the model on the demo videos, only the first few frames are correct. In the remaining frames, the model produces random output, such as:
https://drive.google.com/open?id=1v9FzSCawfBX_S_3LPtFAG3ZyjvBARAYG

Any suggestions on how to fix this? I checked the intermediate results, and both AlphaPose and PoseFlow produce correct output.
