nghorbani / amass
Data preparation and loader for AMASS
Home Page: https://amass.is.tue.mpg.de/
License: Other
Hello,
I have followed the AMASS_DNN tutorial.
I have downloaded all npz files from your website and tried both splits you suggested:
```python
amass_splits = {'vald': ['SFU'], 'test': ['SSM_synced'], 'train': ['MPI_Limits']}
```
vs
```python
amass_splits = {
    'vald': ['HumanEva', 'MPI_HDM05', 'SFU', 'MPI_mosh'],
    'test': ['Transitions_mocap', 'SSM_synced'],
    'train': ['CMU', 'MPI_Limits', 'TotalCapture', 'Eyes_Japan_Dataset',
              'KIT', 'BML', 'EKUT', 'TCD_handMocap', 'ACCAD']
}
```
However, for both splits I receive the same dataset sizes: only 1182 samples for training, 854 for validation, and 56 for testing. I was under the impression that I could generate a much larger dataset of 3D point clouds. Am I missing anything?
Hi there,
please help me understand what "scale factor" or something else I am missing. Here is my experiment:
1. Load ./amass/TotalCapture/s1/acting1_poses.npz, iterate through the first 250 frames, and store the left wrist positions into left_wrist_amass_trj.
2. Load acting1_BlenderZXY_YmZ.bvh (ground truth data from TotalCapture), convert it from centimeters into meters, iterate through the first 250 frames, and store the LeftHand joint positions into left_wrist_total_trj.
3. Plot left_wrist_amass_trj in blue and left_wrist_total_trj in orange in the coordinate system of TotalCapture, and obtain this plot:
The two trajectories describe the first 250 points through which the left wrist passes, relative to the global coordinate system of TotalCapture. They seem to have the same curvature, but the orange trajectory (loaded from the *.bvh file) seems to be downscaled with respect to the blue trajectory (transformed after loading from AMASS).
I overlooked the fact that the ground truth from the *.bvh files is in inches. After converting it into centimeters and then into meters, the plots of left_wrist_amass_trj and left_wrist_total_trj look more similar, though they still don't overlap.
Could you please help me figure out why they don't overlap closely? I think the offset between them is still too big.
Many thanks in advance!
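For reference, here is a minimal sketch of how the AMASS-side trajectory could be extracted (the human_body_prior BodyModel, the model path, and joint index 20 for the left wrist are assumptions on my part, not something the repo prescribes):

```python
import numpy as np
import torch
from human_body_prior.body_model.body_model import BodyModel

bdata = np.load('./amass/TotalCapture/s1/acting1_poses.npz')
bm = BodyModel(bm_fname='body_models/smplh/male/model.npz')  # hypothetical model path

left_wrist_amass_trj = []
for fid in range(250):
    body = bm(root_orient=torch.Tensor(bdata['poses'][fid:fid + 1, :3]),
              pose_body=torch.Tensor(bdata['poses'][fid:fid + 1, 3:66]),
              trans=torch.Tensor(bdata['trans'][fid:fid + 1]))
    # joint 20 is the left wrist in the SMPL-H joint ordering (an assumption)
    left_wrist_amass_trj.append(body.Jtr[0, 20].detach().numpy())
left_wrist_amass_trj = np.array(left_wrist_amass_trj)  # (250, 3), in meters
```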
Hi,
My kernel crashes when importing MeshViewer from human_body_prior while trying to run the 01-AMASS_Visualization notebook.
Do you know why this might be happening?
Thank you!
Michaela
I am trying to get valid parameters for the basic SMPL model. In your documentation you mention that this should be possible (https://github.com/nghorbani/amass/blob/master/notebooks/03-AMASS_Visualization_Advanced.ipynb). However, as far as I understand, the SMPL model uses 72 pose parameters and 10 betas, whereas the notebook uses only 63 body pose parameters for the SMPL-X model. I was wondering which AMASS body parameters correspond to the SMPL parameters?
Hi, thanks for providing this repository!
I can't find the MPI_Limits dataset on the AMASS dataset page; do you know if it goes by another name there, or whether it has been pulled?
Specifically, it is referenced in the example notebook 02-AMASS_DNN, in code block 5:
```python
amass_splits = {
    'vald': ['HumanEva', 'MPI_HDM05', 'SFU', 'MPI_mosh'],
    'test': ['Transitions_mocap', 'SSM_synced'],
    'train': ['CMU', 'MPI_Limits', 'TotalCapture', 'Eyes_Japan_Dataset',
              'KIT', 'BML', 'EKUT', 'TCD_handMocap', 'ACCAD']
}
amass_splits['train'] = list(set(amass_splits['train']).difference(set(amass_splits['test'] + amass_splits['vald'])))
```
Hello,
Both links in the Body Models section of the README are broken (550 error).
Hi,
How are the coordinate axes of the dataset (bdata['poses']) oriented? For example, does AMASS use a right-handed, Y-up coordinate system?
Thank you,
Fabian
Thanks for the great work!
I am now trying to relate the 3D surface shape data to the video data of the original datasets. It seems hard to determine the correspondences from the AMASS file names. Is there any documentation?
Hi there,
I am trying to use the root_orient [1] and trans [2] fields to position and orient the SMPL model relative to the global reference frame.
When I instantiate the SMPL model with the following code:
```python
# pose_id goes along frames stored in "./ACCAD/Male2Walking_c3d/B17 - Walk to hop to walk_poses.npz"
root_orient = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, :3]).to(computing_device)    # controls the global root orientation
pose_body = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 3:66]).to(computing_device)    # controls the body
pose_hand = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 66:]).to(computing_device)     # controls the finger articulation
betas = torch.Tensor(smpl_poses['betas'][:10][np.newaxis]).to(computing_device)                # controls the body shape
dmpls = torch.Tensor(smpl_poses['dmpls'][pose_id:pose_id+1]).to(computing_device)              # controls soft tissue dynamics
root_trans = torch.Tensor(smpl_poses['trans'][pose_id:pose_id+1]).to(computing_device)         # controls the global root translation

smpl_in_pose_id = smpl_model(
    pose_body=pose_body, pose_hand=pose_hand,
    betas=betas, dmpls=dmpls,
    trans=root_trans, root_orient=root_orient)
```
and plot the trajectories of the SMPL vertices with indexes 412 (head) and 3021 (pelvis), then I get this plot:
When I instantiate the SMPL model with the following code:
```python
# pose_id goes along frames stored in "./ACCAD/Male2Walking_c3d/B17 - Walk to hop to walk_poses.npz"
root_orient = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, :3]).to(computing_device)    # controls the global root orientation
pose_body = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 3:66]).to(computing_device)    # controls the body
pose_hand = torch.Tensor(smpl_poses['poses'][pose_id:pose_id+1, 66:]).to(computing_device)     # controls the finger articulation
betas = torch.Tensor(smpl_poses['betas'][:10][np.newaxis]).to(computing_device)                # controls the body shape
dmpls = torch.Tensor(smpl_poses['dmpls'][pose_id:pose_id+1]).to(computing_device)              # controls soft tissue dynamics
root_trans = torch.Tensor(smpl_poses['trans'][pose_id:pose_id+1]).to(computing_device)         # controls the global root translation

smpl_in_pose_id = smpl_model(pose_body=pose_body, betas=betas)
```
then manually assemble the transformation matrix from the root_trans vector and the root_orient rotation matrix, apply it to the same SMPL vertices with indexes 412 (head) and 3021 (pelvis), and plot their trajectories, then I get this plot:
Could you please help me to correctly use the values stored in smpl_poses['poses'][pose_id:pose_id+1, :3] and smpl_poses['trans'][pose_id]? Thank you very much!
kind regards,
[1] root_orient
[2] trans
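A minimal sketch of how the manual transform could be assembled (assuming SciPy, and assuming root_orient pivots about the shaped root joint rather than the world origin, which would explain an offset between the two plots):

```python
import numpy as np
from scipy.spatial.transform import Rotation

# body posed without root_orient / trans, as in the second snippet above
body = smpl_model(pose_body=pose_body, betas=betas)
verts = body.v.detach().cpu().numpy()[0]        # (num_verts, 3)
j_root = body.Jtr.detach().cpu().numpy()[0, 0]  # root joint of the shaped body

R = Rotation.from_rotvec(smpl_poses['poses'][pose_id, :3]).as_matrix()
t = smpl_poses['trans'][pose_id]

# rotate about the root joint, then translate (assumed pivot convention)
verts_global = (verts - j_root) @ R.T + j_root + t
```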
Is AMASS compatible with STAR? If not, is there any way to convert it to make it compatible?
How can I filter out the stairs and ramps scenes?
Thanks for your data, but the data has a global rotation relative to what the video shows. How can I get the first 3 pose parameters to match the rotated body? What is the rotation matrix?
Hi there,
I am looking at the files ./Male2_bvh/Male2_B15_WalkTurnAround.bvh [1] from the ACCAD dataset and ./ACCAD/Male2Walking_c3d/B15 - Walk turn around_poses.npz [2] from AMASS_ACCAD. The Frame Time of [1] is 0.033 and the mocap_framerate of [2] is 120.0.
- Why doesn't the Frame Time of [1] (0.033 s per frame, about 30 fps) correspond to the mocap_framerate entry of [2]?
- When aligning the two files, should I rely on the mocap_framerate entry of [2]?

Thanks a lot!
kind regards,
Hi there,
I would like to compute the trajectory of a vertex belonging to the SMPL model relative to the global coordinate system. More specifically, I would like to double-check my understanding of the first 3 values of the poses entry (the global root orientation) and of the trans entry. These are my questions:
- What does the global root orientation describe? The orientation of SMPL's root joint relative to the global coordinate system?
- What does trans describe? The translation of SMPL's root joint relative to the global coordinate system?

Many thanks in advance!
```
AssertionError                            Traceback (most recent call last)
<ipython-input> in <module>
      1 from amass.prepare_data import prepare_amass
----> 2 prepare_amass(amass_splits, amass_dir, work_dir, logger=logger)

~/.local/lib/python3.6/site-packages/amass/prepare_data.py in prepare_amass(amass_splits, amass_dir, work_dir, logger)
    156     outpath = makepath(os.path.join(stageI_outdir, split_name, 'pose.pt'), isfile=True)
    157     if os.path.exists(outpath): continue
--> 158     dump_amass2pytroch(datasets, amass_dir, outpath, logger=logger)
    159
    160     logger('Stage II: augment the data and save into h5 files to be used in a cross framework scenario.')

~/.local/lib/python3.6/site-packages/amass/prepare_data.py in dump_amass2pytroch(datasets, amass_dir, out_posepath, logger, rnd_seed, keep_rate)
     97     data_gender.extend([gdr2num[str(cdata['gender'].astype(np.str))] for _ in cdata_ids])
     98
---> 99     assert len(data_pose) != 0
    100
    101     torch.save(torch.tensor(np.asarray(data_pose, np.float32)), out_posepath)

AssertionError:
```
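A quick sanity check (a hypothetical snippet, not part of the package): the assertion fires when no npz files were collected, so it is worth verifying that amass_dir follows the expected amass_dir/<DatasetName>/<subject>/*_poses.npz layout:

```python
import glob
import os

amass_dir = 'path/to/amass_npz_files'  # the same directory passed to prepare_amass
for ds in ['MPI_Limits', 'SFU', 'SSM_synced']:
    n_files = len(glob.glob(os.path.join(amass_dir, ds, '*', '*_poses.npz')))
    print(ds, n_files)  # zero here would explain the empty data_pose list
```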
Dear authors,
thanks for releasing this awesome code for using the AMASS dataset!
I'm struggling with how to get the X-Y coordinates of the Jtr joints in the image. Jtr has the same type as the mesh vertices. In the tutorial you visualize them by plotting spheres. Is there any way to check their X-Y coordinates after projecting them onto the image?
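One way to get pixel coordinates is a plain pinhole projection of the joints once they are expressed in camera coordinates; the snippet below is a generic sketch (the intrinsics and the camera-space assumption are mine, not from the repo):

```python
import numpy as np

def project_points(points_3d, fx, fy, cx, cy):
    """Project (N, 3) camera-space points (z > 0) to (N, 2) pixel coordinates."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

# e.g., for a 400x400 render with an assumed focal length of 500 px:
# uv = project_points(c2c(body.Jtr[0]), fx=500.0, fy=500.0, cx=200.0, cy=200.0)
```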
Hi,
I would like to use this but only with the hands/feet; is it possible to work with only these parts of the body?
Thanks!
I have downloaded the SMPL-X body data from the AMASS website. However, I have found that all the left and right hand poses are equal, namely the mean pose of the MANO model. I have also checked the BMLrub, CMU and ACCAD SMPL-H body data; the hand poses are the same there. I want to know which sequences in the dataset have real hand pose parameters. Thank you very much!
When clicking the SMPL+H G download links for the BMLrub and CMU datasets, the download does not start.
Hi,
First, thanks for the amazing dataset.
I would like to create a dataset from AMASS where all the meshes face the same direction: is there a way to do this easily? I think I can do this with the root_orient attribute, but I am not really sure if it is possible.
Thanks
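One possible approach is to remove the per-frame heading from root_orient before posing the body; this is a rough sketch (the z-up assumption and the 'zyx' Euler order are guesses on my part, not an official utility):

```python
import numpy as np
from scipy.spatial.transform import Rotation

bdata = np.load('some_sequence_poses.npz')  # hypothetical sequence
poses = bdata['poses'].copy()

R = Rotation.from_rotvec(poses[:, :3])
euler = R.as_euler('zyx')  # heading about the assumed z (up) axis comes first
euler[:, 0] = 0.0          # zero the heading so every frame faces the same way
poses[:, :3] = Rotation.from_euler('zyx', euler).as_rotvec()
```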
Hello, it is mentioned in #2 that ["poses"][:, :3] gives the orientation of the root joint in the global frame. However, for KIT/425/walking_slow09_poses.npz (a basic forward walk without turning) these entries correspond to the following plot.
I'm struggling to make sense of these root_orient entries, as there is a ~15 deg rotational range on all 3 axes although the render depicts minimal rotation. Moreover, I can't make sense of the nonzero pitch and yaw.
Is this an error in the KIT dataset, or am I following a flawed method to extract the roll, pitch and yaw of the root joint? If so, could you please point out the correct way to extract them? Thanks!
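For reference, one way to extract roll/pitch/yaw from the axis-angle root_orient (the 'xyz' Euler order is an arbitrary choice here and may not match the dataset's convention):

```python
import numpy as np
from scipy.spatial.transform import Rotation

bdata = np.load('KIT/425/walking_slow09_poses.npz')
# axis-angle -> Euler angles in degrees; the 'xyz' order is an assumption
rpy = Rotation.from_rotvec(bdata['poses'][:, :3]).as_euler('xyz', degrees=True)
```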
Hi,
When I try to run the tutorial, I get an error stating:
```
TypeError: __init__() missing 1 required positional argument: 'model_type'
```
Could you help me rectify this error?
The tutorial states: "AMASS uses an extended version of SMPL+H with DMPLs. Here we show how to load different components and visualize a body model with AMASS data."
Do all AMASS data come with hand pose parameters?
Hi, @nghorbani
Thanks for your AMASS dataset for the community. I downloaded the SMPL-H G and SMPL-X G data from the official website. When I compared the SMPL-H data with the SMPL-X data, I found that the frame rates of the two formats are not consistent. Some cases:
In TCD_handMocap/ExperimentDatabase/FingerTP_poses.npz the frame rate is 150, while in TCD_handMocap/ExperimentDatabase/FingerTP_stageii.npz it is 120.
In TotalCapture/s1/walking1_poses.npz the frame rate is 60, while in TotalCapture/s1/walking1_stageii.npz it is 120.
I found about 5,000 such inconsistencies in the data.
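A hypothetical snippet for reproducing one such case (the key names are an assumption: the SMPL-H files appear to store mocap_framerate, while the SMPL-X stageii files appear to store mocap_frame_rate):

```python
import numpy as np

fps_h = np.load('TCD_handMocap/ExperimentDatabase/FingerTP_poses.npz')['mocap_framerate']
fps_x = np.load('TCD_handMocap/ExperimentDatabase/FingerTP_stageii.npz')['mocap_frame_rate']
print(fps_h, fps_x)  # 150.0 vs 120.0 in this case
```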
Hi, how can I convert motions that use a skeleton (e.g. recorded with a Kinect) to AMASS? If I understand correctly, one can use MoSh to convert from mocap markers, but can it also be used for skeletal representations? Thanks in advance :)
Hi! I was wondering why the betas parameters are constant across frames?
Dear authors,
thanks for releasing this awesome code for using the AMASS dataset!
I found a very small bug: in 01-AMASS_Visualization.ipynb, vis_body_joints does not work with the very recent body_visualizer package.
```python
joints_mesh = points_to_spheres(joints, vc=colors['red'], radius=0.005)
```
should be changed into
```python
joints_mesh = points_to_spheres(joints, point_color=colors['red'], radius=0.005)
```
The same naming issue in body_visualizer also shows up when using VPoser's (human_body_prior) ik_engine.py and ik_example_joints.py.
Thanks a lot for this repository and have a nice day :)
I read your paper, and in it you say that you collected motion from 96 subjects of the CMU MoCap dataset. However, the original CMU MoCap has 104 subjects. Is there any reason you decided not to include these subjects in your dataset?
Thanks.
Hi,
Thanks for the amazing dataset and easy-to-use library. When I animate bodies with AMASS motions, I use the same betas for the entire motion, which results in a "rigid" motion of the body. I understand this is due to the fact that I'm not using the DMPL vectors (which are added to the betas for dynamics). However, I can't find code snippets where you use those vectors. Do you have an idea how to incorporate them along with the betas? betas have size 16 and dmpls have size 8, so I have no idea how to use them together.
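A minimal sketch following the pattern of the 04-AMASS_DMPL notebook (the paths are hypothetical and the argument names assume a recent human_body_prior): the DMPLs are a separate input to the body model rather than something appended to the betas:

```python
import numpy as np
import torch
from human_body_prior.body_model.body_model import BodyModel

bdata = np.load('some_sequence_poses.npz')  # hypothetical sequence
bm = BodyModel(bm_fname='body_models/smplh/male/model.npz',  # hypothetical paths
               num_betas=16,
               num_dmpls=8,
               dmpl_fname='body_models/dmpls/male/model.npz')

T = len(bdata['poses'])
body = bm(pose_body=torch.Tensor(bdata['poses'][:, 3:66]),
          betas=torch.Tensor(np.tile(bdata['betas'][:16], (T, 1))),
          dmpls=torch.Tensor(bdata['dmpls'][:, :8]))  # per-frame soft-tissue dynamics
```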
Hello nghorbani! Thanks for your great work!
When visualizing some sequences, there is some obvious flicker. For example, in DFaust_67/50002/50002_one_leg_jump_poses.npz, the 329th and 330th frames change rapidly. Is this normal?
```
raise ImportError("Unable to load EGL library", *err.args)
ImportError: ('Unable to load EGL library', "Could not find module 'EGL' (or one of its dependencies). Try using the full path with constructor syntax.", 'EGL', None)
```
Hello, I've read that the SMPL-H and SMPL shape parameters are equivalent; I've also seen that for both cases the betas come in sizes of 10 or 16. Since the betas represent a PCA space, would it be correct to grab the first 10 elements from AMASS datasets that provide 16 beta parameters when we are using a SMPL model with just 10?
Hi, I have tried to follow the instructions to install the amass lib, but unfortunately I am not able to run the notebooks.
First, the README says:
"AMASS uses MoSh++ pipeline to fit SMPL+H body model to human optical marker based motion capture (mocap) data. In the paper we use SMPL+H with extended shape space, i.e. 16 betas, and 8 DMPLs. Please download the models and place them in the body_models folder of this repository after you obtain the code from GitHub."
but in my clone of the repository there is no body_models folder.
This is the folder structure:
```
.
├── LICENSE.txt
├── README.md
├── notebooks
│   ├── 01-AMASS_Visualization.ipynb
│   ├── 02-AMASS_DNN.ipynb
│   ├── 03-AMASS_Visualization_Advanced.ipynb
│   ├── 04-AMASS_DMPL.ipynb
│   ├── README.md
│   └── __init__.py
├── requirements.txt
├── setup.py
├── src
│   ├── __init__.py
│   └── amass
│       ├── __init__.py
│       ├── data
│       │   ├── __init__.py
│       │   ├── dfaust_synthetic_mocap.py
│       │   ├── prepare_data.py
│       │   └── ssm_all_marker_placements.json
│       └── tools
│           ├── __init__.py
│           ├── make_teaser_image.py
│           ├── notebook_tools.py
│           └── teaser.gif
└── support_data
    └── github_data
        ├── amass_sample.npz
        ├── datasets_preview.png
        ├── dmpl_sample.npz
        └── teaser.gif
```
I would like to ask why the joints rendered in SMPL-H mode (pose_body + pose_hand) number 52, consisting of 20 for the body and 16 for each hand, while a MANO hand should have 21 joints. I also noticed that the 5 missing joints are the fingertips. Am I misunderstanding something? Thank you very much.
Hi,
I am trying to visualize an AMASS example using its body_pose, root_orient and translation parameters. The video I get from the rendering tool appears to contain camera movement: it is not just the body that is moving in space, but also the camera. I feed all three parameters as items in the body_parms dictionary.
Is it normal to see camera movement here?
```python
def render_smpl_params(bm, body_parms, trans=None, rot_body=None, bg_color='white', body_color='neutral'):
    '''
    :param bm: pytorch body model with batch_size 1
    :param pose_body: Nx21x3
    :param trans: Nx3
    :param betas: Nxnum_betas
    :return: N x 400 x 400 x 3
    '''
    imw, imh = 400, 400
    base_trans = [0, 0.5, 3.0]

    mv = MeshViewer(width=imw, height=imh, use_offscreen=True)
    mv.set_cam_trans(base_trans)
    mv.set_background_color(color=bg_color)

    faces = c2c(bm.f)
    v = c2c(bm(**body_parms).v)
    T, num_verts = v.shape[:-1]

    images = []
    for fIdx in range(T):
        verts = v[fIdx]
        if rot_body is not None:
            verts = rotateXYZ(verts, rot_body)

        color_type = body_color
        color = np.ones_like(verts) * np.array(colors[color_type])[None, :]
        mesh = trimesh.base.Trimesh(verts, faces, vertex_colors=num_verts * colors['grey'])

        mv.set_meshes([mesh], 'static')
        rendered = mv.render()
        images.append(rendered)

    return np.array(images).reshape(T, imw, imh, 3)
```
Thank you so much for your great work on AMASS.
The paper claims that it covers both hands and body, but I find that some data always have the hand pose set to the canonical pose.
So may I know how we can identify which datasets come with real hand pose parameters (not the canonical pose)? e.g. by name, TCDHANDs.
Thanks!
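A rough heuristic (my own, not an official flag): a sequence whose hand parameters never change over time is most likely using the canonical/mean hand pose:

```python
import numpy as np

bdata = np.load('some_sequence_poses.npz')  # hypothetical path
hand_pose = bdata['poses'][:, 66:]          # SMPL-H hand parameters
# near-zero variation over the whole sequence suggests the mean/canonical pose
has_real_hands = bool(hand_pose.std(axis=0).max() > 1e-6)
```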
Thanks for your nice work. However, it seems your download link or server is broken, and I can't connect to the server with your link.
I tried to wget your link, and here is the detailed error log:
```
$ wget https://download.is.tue.mpg.de/download.php\?domain\=amass\&resume\=1\&sfile\=amass_per_dataset/smplx/gender_specific/mosh_results/ACCAD.tar.bz2
--2023-12-15 15:15:54--  https://download.is.tue.mpg.de/download.php?domain=amass&resume=1&sfile=amass_per_dataset/smplx/gender_specific/mosh_results/ACCAD.tar.bz2
Connecting to 127.0.0.1:7890... connected.
Unable to establish SSL connection.
```
I also noticed that all of the download URLs under Body Data | Render are broken. Is something wrong with the server (psfiles.is.tuebingen.mpg.de)?
I was wondering whether the data in AMASS has action labels, or any attributes for each sample?
Well, I have downloaded the whole project file and it seems that it does not include sampled shape parameters (betas). Would it be possible for me to get the betas?
Thanks!
I went through the examples in the repo, but I think I am still missing how to extract the SMPL pose parameters from the SMPL-H parameters in the provided npz archives.
The paper says that AMASS uses 52 joints, where 22 joints are for the body and 30 belong to the hands. On the other hand, SMPL has 24 joints (including the root orientation), which is corroborated by Figure 3 in the AMASS paper.
So I am not sure how to close this gap. I assume the missing 2 joints are the ones for the hands. Should I take the first 22x3 body parameters from the 'poses' dictionary entry and somehow append 2x3 parameters from the last 30 hand joints? I'd be happy if anyone could shed some more light on this. Thanks!
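A common conversion sketch (an assumption on my part, not an official mapping): SMPL's 72 parameters are the root (3) plus 23 joints (69); the first 21 body joints coincide with SMPL-H, and the two SMPL-specific hand joints are usually just zeroed (flat hands):

```python
import numpy as np

bdata = np.load('some_sequence_poses.npz')  # hypothetical SMPL-H sequence
poses_smplh = bdata['poses']                # (N, 156): 3 root + 63 body + 90 hands
N = len(poses_smplh)

# keep root + 21 body joints, zero the 2 SMPL hand joints
poses_smpl = np.concatenate([poses_smplh[:, :66], np.zeros((N, 6))], axis=1)  # (N, 72)
betas_smpl = bdata['betas'][:10]            # first 10 of the 16 betas
```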