
pose_to_smpl's Introduction

Pose_to_SMPL

Fitting SMPL Parameters by 3D-pose Key-points

The repository provides a tool to fit SMPL parameters from 3D-pose datasets that contain key points of the human body.

The SMPL human body layer for Pytorch is from the smplpytorch repository.

Setup

1. The smplpytorch package

  • Run without installing: You will need to install the dependencies listed in environment.yml:

    • conda env update -f environment.yml in an existing environment, or
    • conda env create -f environment.yml, for a new smplpytorch environment
  • Install: To import SMPL_Layer in another project with from smplpytorch.pytorch.smpl_layer import SMPL_Layer, do one of the following.

    • Option 1: This should automatically install the dependencies.
      git clone https://github.com/gulvarol/smplpytorch.git
      cd smplpytorch
      pip install .
    • Option 2: You can install smplpytorch from PyPI. Additionally, you might need to install chumpy.
      pip install smplpytorch

2. Download SMPL pickle files

  • Download the models from the SMPL website by choosing "SMPL for Python users". Note that you need to comply with the SMPL model license.
  • Extract and copy the models folder into the smplpytorch/native/ folder (or set the model_root parameter accordingly).
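Once the models are in place, constructing the layer looks roughly like this. This is a minimal sketch: the model_root path and constructor arguments follow smplpytorch's documented defaults, and the helper name build_smpl_layer is hypothetical.

```python
import os

MODEL_ROOT = "smplpytorch/native/models"  # default location; adjust to your model_root

def build_smpl_layer(model_root=MODEL_ROOT, gender="neutral"):
    """Return an SMPL_Layer if the SMPL pickle files are present, else None."""
    if not os.path.isdir(model_root):
        return None
    from smplpytorch.pytorch.smpl_layer import SMPL_Layer
    return SMPL_Layer(center_idx=0, gender=gender, model_root=model_root)

layer = build_smpl_layer()
print("SMPL layer ready" if layer is not None else "SMPL models not found; download them first")
```

The guard keeps the import local, so the snippet degrades gracefully when the pickle files have not been downloaded yet.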

3. Download Dataset

Fitting

1. Executing Code

You can start the fitting procedure with the following command; the configuration file in fit/configs corresponding to dataset_name will be loaded (dataset_path can also be set in the configuration file):

python fit/tools/main.py --dataset_name [DATASET NAME] --dataset_path [DATASET PATH]

2. Output

  • Directory: The output SMPL parameters will be stored in fit/output

  • Format: The outputs are .pkl files, and the data format is:

    {
    	"label": [The label of the action],
    	"pose_params": pose parameters of SMPL (shape = [frame_num, 72]),
    	"shape_params": shape parameters of SMPL (shape = [frame_num, 10]),
    	"Jtr": key-point coordinates of the SMPL model (shape = [frame_num, 24, 3])
    }
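As a quick sanity check, the documented format can be reproduced with a dummy dict. The file name, label, and values below are fabricated purely for illustration.

```python
import pickle
import numpy as np

# Build a dict in the documented output format and round-trip it through a .pkl file.
frame_num = 5  # arbitrary number of frames for the demo
sample = {
    "label": "walking",                         # hypothetical action label
    "pose_params": np.zeros((frame_num, 72)),   # SMPL pose parameters per frame
    "shape_params": np.zeros((frame_num, 10)),  # SMPL shape parameters per frame
    "Jtr": np.zeros((frame_num, 24, 3)),        # 24 SMPL joint coordinates per frame
}
with open("sample_output.pkl", "wb") as f:
    pickle.dump(sample, f)

with open("sample_output.pkl", "rb") as f:
    data = pickle.load(f)

assert data["pose_params"].shape == (frame_num, 72)
assert data["shape_params"].shape == (frame_num, 10)
assert data["Jtr"].shape == (frame_num, 24, 3)
```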
    

pose_to_smpl's People

Contributors

dou-yiming, gulvarol


pose_to_smpl's Issues

Failed to interpret file

Hello. In main.py, the line:
target = torch.from_numpy(transform(args.dataset_name, load(args.dataset_name,
os.path.join(root, file)))).float()
and in load.py, the lines:
elif name == "NTU":
    return np.load(path, allow_pickle=True)
So I can't read the NTU dataset. Did you preprocess the skeleton data? Maybe np.load can't open the raw .skeleton data.
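For what it's worth, the error is consistent with np.load being pointed at NTU's plain-text .skeleton files. A minimal sketch (file names fabricated for the demo) reproduces the symptom and shows that a preprocessed .npy array loads fine:

```python
import numpy as np

# np.load only understands .npy/.npz binaries (or pickles); pointing it at a
# raw plain-text .skeleton file fails with "Failed to interpret file ...".
with open("fake.skeleton", "w") as f:
    f.write("1\n25\n0.1 0.2 0.3\n")  # stand-in for NTU's plain-text format

def can_np_load(path):
    """Return True if np.load can read the file at `path`."""
    try:
        np.load(path, allow_pickle=True)
        return True
    except Exception:
        return False

print(can_np_load("fake.skeleton"))  # raw text file cannot be loaded

np.save("fake.npy", np.zeros((2, 25, 3)))  # skeletons preprocessed into an array
print(can_np_load("fake.npy"))             # the binary .npy loads without error
```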

How to adjust the model's pose correctly, and how to apply customized texture to the SMPL model?

I am currently working on applying my generated 3D coordinate human joint key points to the SMPL model.

I am confused about how to:
1. adjust the model's pose
2. apply customized texture to the SMPL model.

The images below depict my generated key points. Currently, using a modified demo.py, I am trying to turn 102 dimensions (34 joint key points) into 72 dimensions (24 joint key points). My goal is to generate the SMPL model to match the desired pose shown in Image 1.

I am now able to transform the body key points' coordinates and apply them to the SMPL model, with the result shown in Image 4. But as you can see, its pose looks really different from that shown in Image 1.

Image 1: Generated body key points

Image 2: The difference between the SMPL coordinate directions and those of my original coordinates.

The handwritten joint numbers are the original key point numbers, shown in Image 3.

Image 3: the SMPL body joint with corresponding original key points.

Since the (x, y, z) coordinates of my original key points differ from the SMPL coordinate system, I have to transform the body joint coordinates to SMPL coordinates first. Below is the code snippet I wrote for the transformation:

def change_motion_format(motion_frame):
    new_24_joint = np.zeros((24, 3))

    new_24_joint[15] = np.array([-(motion_frame[0]+motion_frame[3])/2, -(motion_frame[2]+motion_frame[5])/2, -(motion_frame[1]+motion_frame[4])/2])
    new_24_joint[12] = np.array([-motion_frame[18], motion_frame[20], motion_frame[19]])
    new_24_joint[3] = np.array([-(motion_frame[54]+motion_frame[57])/2, (motion_frame[56]+motion_frame[59])/2, (motion_frame[55]+motion_frame[58])/2])
    new_24_joint[9] = np.array([-motion_frame[21], motion_frame[23], motion_frame[22]])
    new_24_joint[6] = np.average([motion_frame[3], motion_frame[9]], axis=0)
    check = np.absolute(new_24_joint[6] - new_24_joint[3])
    new_24_joint[0] = new_24_joint[3] - check

    new_24_joint[14] = np.array([(motion_frame[24]+motion_frame[15])/2, (motion_frame[26]+motion_frame[17])/2, (motion_frame[25]+motion_frame[16])/2])
    new_24_joint[17] = np.array([-motion_frame[24], motion_frame[26], motion_frame[25]])
    new_24_joint[19] = np.array([-motion_frame[27], motion_frame[29], motion_frame[28]])
    new_24_joint[21] = np.array([-(motion_frame[30]+motion_frame[33])/2, (motion_frame[32]+motion_frame[35])/2, (motion_frame[31]+motion_frame[34])/2])
    new_24_joint[23] = np.array([-motion_frame[36], motion_frame[38], motion_frame[37]])
    
    new_24_joint[13] = np.array([(motion_frame[39]+motion_frame[15])/2, -(motion_frame[41]+motion_frame[17])/2, (motion_frame[40]+motion_frame[16])/2])
    new_24_joint[16] = np.array([-motion_frame[39], -motion_frame[41], motion_frame[40]])
    new_24_joint[18] = np.array([-motion_frame[42], -motion_frame[44], motion_frame[43]])
    new_24_joint[20] = np.array([-(motion_frame[45]+motion_frame[48])/2, -(motion_frame[47]+motion_frame[50])/2, (motion_frame[46]+motion_frame[49])/2])
    new_24_joint[22] = np.array([-motion_frame[51], -motion_frame[53], motion_frame[52]])

    new_24_joint[2] = np.array([(-motion_frame[54]+motion_frame[66])/4, (motion_frame[56]+motion_frame[68])/4, (motion_frame[55]+motion_frame[67])/4])
    new_24_joint[5] = np.array([-motion_frame[66], motion_frame[68], motion_frame[67]])
    new_24_joint[8] = np.array([-motion_frame[69], motion_frame[71], motion_frame[70]])
    new_24_joint[11] = np.array([-motion_frame[72], motion_frame[74], motion_frame[73]])
    #26:75, 76, 77
    new_24_joint[1] = np.array([-(motion_frame[57]+motion_frame[78])/4, (motion_frame[59]+motion_frame[80])/4, (motion_frame[58]+motion_frame[79])/4])
    new_24_joint[4] = np.array([-motion_frame[78], motion_frame[80], motion_frame[79]])
    new_24_joint[7] = np.array([-motion_frame[81], motion_frame[83], motion_frame[82]])
    new_24_joint[10] = np.array([-motion_frame[84], motion_frame[86], motion_frame[85]])

    # Translate all joints so that key point 7 becomes the origin.
    # Copy the reference row first; subtracting in place would zero
    # new_24_joint[7] at i == 7 and leave points 8-23 unshifted.
    origin = new_24_joint[7].copy()
    for i in range(24):
        new_24_joint[i] -= origin

    return new_24_joint.flatten()

The mapping from the 34 original key points to the serial numbers of the 102-dimensional array is shown in the table below:

key point (original 34-joint number)    x    y    z
1                                       0    1    2
2                                       3    4    5
3                                       6    7    8
…                                       …    …    …
34                                      99   100  101
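The mapping above can be sketched in a few lines (purely illustrative; motion_frame is a stand-in for one 102-dimensional frame):

```python
import numpy as np

# Key point k (1-based) occupies flat indices 3(k-1), 3(k-1)+1, 3(k-1)+2,
# so a simple reshape recovers the (34, 3) per-joint layout.
motion_frame = np.arange(102, dtype=float)  # stand-in for one frame of key points
joints = motion_frame.reshape(34, 3)

assert np.array_equal(joints[0], [0.0, 1.0, 2.0])        # key point 1 -> indices 0, 1, 2
assert np.array_equal(joints[33], [99.0, 100.0, 101.0])  # key point 34 -> 99, 100, 101
```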

Image 4 shows the pose after the transformation to the SMPL coordinates.

Image 4: The result of the transformation to the SMPL coordinates (eye-level view and top view).

Here are the key point coordinate values for reproducing this problem:

The original 34 joint key points value:

tensor([ 6.8808e-02,  1.3655e-01,  9.9387e-01,  5.5250e-03,  9.6803e-02,
         1.0144e+00,  1.1453e-01,  6.7033e-02,  9.9475e-01,  3.5132e-02,
         2.9885e-02,  1.0087e+00,  6.6972e-02, -1.2279e-02,  8.9883e-01,
         7.5143e-02, -4.6622e-02,  7.3468e-01,  5.9664e-02,  7.5100e-02,
         8.5105e-01,  4.9040e-02,  1.2250e-01,  7.1263e-01,  1.6473e-01,
         4.9161e-02,  8.8110e-01,  2.3987e-01,  1.5138e-01,  7.5461e-01,
         1.0578e-01,  1.9731e-01,  7.3396e-01,  1.2650e-01,  2.3269e-01,
         7.2024e-01,  7.3923e-02,  2.1867e-01,  7.3476e-01, -5.9331e-02,
         2.0182e-02,  8.7149e-01, -1.0383e-01,  4.6070e-02,  6.8880e-01,
        -1.6089e-01,  1.5032e-01,  7.6275e-01, -1.2791e-01,  1.7847e-01,
         7.5362e-01, -1.8034e-01,  1.8982e-01,  7.7512e-01,  1.5030e-01,
         1.1482e-01,  5.7148e-01, -2.8307e-02,  8.1120e-02,  5.6636e-01,
         1.2305e-01, -2.7885e-02,  5.9340e-01,  5.0053e-02, -4.2568e-02,
         5.9018e-01,  1.8195e-01,  7.5649e-02,  2.7299e-01,  1.5969e-01,
        -1.1603e-02, -7.8823e-03,  1.8093e-01,  1.1385e-01, -1.1609e-02,
         1.8845e-01,  2.3969e-02,  2.9635e-03, -2.8350e-02,  4.0743e-02,
         2.7095e-01,  3.9562e-02, -1.9456e-02, -8.8135e-03, -2.8389e-02,
         9.0526e-02, -1.3981e-02,  4.4523e-04,  3.0309e-04,  9.0401e-05,
        -2.3316e-01,  1.7788e-01,  8.1466e-01,  1.0221e-02,  1.0220e-01,
         8.8138e-01, -1.1811e-01,  1.2274e-01,  1.0000e+00,  7.3125e-02,
         2.3952e-01,  6.4932e-01])

The pose tensor transformed to SMPL coordinate values:

tensor([[-0.1028,  0.0291,  0.0398,  0.0537,  0.2181,  0.0499,  0.0475,  0.2199,
          0.0671, -0.0214,  0.5777,  0.1174,  0.0679,  0.2798,  0.0602, -0.1424,
          0.2818,  0.0951,  0.0599,  0.0291,  0.0398,  0.0000,  0.0000,  0.0000,
         -0.1597, -0.0079, -0.0116, -0.0490,  0.7126,  0.1225,  0.0284, -0.0140,
          0.0905, -0.1809, -0.0116,  0.1139, -0.0597,  0.8511,  0.0751,  0.0079,
         -0.8031, -0.0132,  0.1199,  0.8079,  0.0013, -0.0372, -1.0041, -0.1167,
          0.0593, -0.8715,  0.0202, -0.1647,  0.8811,  0.0492,  0.1038, -0.6888,
          0.0461, -0.2399,  0.7546,  0.1514,  0.1444, -0.7582,  0.1644, -0.1161,
          0.7271,  0.2150,  0.1803, -0.7751,  0.1898, -0.0739,  0.7348,  0.2187]])

Thank you for taking the time to read my question.
I appreciate your assistance and welcome any advice you may have. If there is an alternative approach to achieve this, I would greatly appreciate your insights and suggestions.

Shape_params values are ZERO

Hello,

After running the code for the CMU_Mocap and Human3.6M datasets on the 3D joints, I got the pose_params values, but the shape_params are zero all the way.

Why could this be the case? Can you provide any insight into this?

Is this because of early stopping? Not a single time was the model trained to the 1000 epochs stated in the JSON files.

Query for CMU_Mocap Output Generated

Hello @Dou-Yiming,

I wanted to fit the CMU_Mocap dataset,

I did the following: downloaded the dataset, set the path in the config files, and then ran the command:

(smplpytorch) [ndip@dst-jyk Pose_to_SMPL]$ python fit/tools/main.py --exp cmu --dataset_name CMU_Mocap --dataset_path /mnt/data/ndip/mocap/mocap_3djoints/
2023-05-23 14:40:23,060 - main - INFO - Start print log
2023-05-23 14:40:23,060 - main - INFO - using device: cpu
2023-05-23 14:40:23,248 - main - INFO - Fitting finished! Average loss: 0.000000000

No output (SMPL parameters) was generated by the command beyond the log above.

Here is how my JSON for CMU_Mocap looks (see screenshot).

I guess I made some mistake; could you please tell me what it is? Do let me know, as I wish to do the same for all the datasets.

Cost time

What FPS do you get on your GPU when fitting the SMPL pose from 24 joints?
