
naman-ntc / pytorch-human-pose-estimation

Implementation of various human pose estimation models in pytorch on multiple datasets (MPII & COCO) along with pretrained models

License: MIT License

Python 95.99% C++ 0.11% Cuda 3.90%
human-pose-estimation pose-estimation hourglass-network stacked-hourglass-networks deeppose mpii coco keypoints-detector pytorch pytorch-implmention

pytorch-human-pose-estimation's Introduction

Pytorch-Human-Pose-Estimation

This repository provides implementations, with training/testing code, of various human pose estimation architectures in PyTorch.

Authors: Naman Jain and Sahil Shah

Some visualizations from pretrained models:


(see 3.png and 42.png)

Networks Implemented

Upcoming Networks

Datasets

Requirements

  • pytorch == 0.4.1
  • torchvision == 0.2.0
  • scipy
  • configargparse
  • progress
  • json_tricks
  • Cython

Installation & Setup

pip install -r requirements.txt

For setting up the MPII dataset, please follow this link and update the dataDir parameter in the mpii.defconf configuration file. Also, please download and unzip this folder and update the paths for worldCoors & headSize in the config file.

For setting up the COCO dataset, please follow this link and update the dataDir parameter in coco.defconf.
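
For orientation, the defconf files are read by configargparse, which accepts simple key = value lines. A hypothetical fragment of conf/datasets/mpii.defconf covering only the parameters mentioned above (all paths are placeholders, and the real files define many more options):

```
# hypothetical fragment of conf/datasets/mpii.defconf -- paths are placeholders
dataDir = /path/to/mpii
worldCoors = /path/to/annot/worldCoors.mat
headSize = /path/to/annot/headSize.mat
```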

Usage

Two parameters are required for running: DataConfig and ModelConfig. Config files for both datasets (MPII & COCO) are provided in the conf/datasets folder, and config files for all implemented models are provided in the conf/models folder.

To train a model please use

python main.py -DataConfig conf/datasets/[DATA].defconf -ModelConfig conf/models/[MODEL_NAME].defconf
-ModelConfig config file for the model to use
-DataConfig config file for the dataset to use

To continue training a pretrained model please use

python main.py -DataConfig conf/datasets/[DATA].defconf -ModelConfig conf/models/[MODEL_NAME].defconf --loadModel [PATH_TO_MODEL]
-ModelConfig config file for the model to use
-DataConfig config file for the dataset to use
--loadModel path to the .pth file for the model (containing state dicts of model, optimizer and epoch number)
(use [-test] to run only the test epoch)

Further options can (and should!) be tweaked from the model and data config files (in the conf folder).
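
For reference, a minimal sketch of how a --loadModel checkpoint could be restored; the key names ('model', 'optimizer', 'epoch') are assumptions based on the description above, not necessarily the repository's exact field names:

```python
import torch

# Hypothetical sketch of resuming from a --loadModel checkpoint.
# `model` and `optimizer` stand for objects built from the ModelConfig;
# the checkpoint key names are assumptions.
def resume(model, optimizer, path):
    checkpoint = torch.load(path)
    model.load_state_dict(checkpoint['model'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    return checkpoint['epoch']          # epoch to continue training from
```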

The training window shows a live-updating progress bar (see progress.png).

To download the pretrained-models please use this link.

PreTrained Models

Model               Dataset   Performance
ChainedPredictions  MPII      PCKh: 81.8
StackedHourGlass    MPII      PCKh: 87.6
DeepPose            MPII      PCKh: 54.2
ChainedPredictions  COCO      PCK: 82
StackedHourGlass    COCO      PCK: 84.7
DeepPose            COCO      PCK: 70.4
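
PCKh counts a predicted joint as correct when it falls within 0.5 × the head-segment length of the ground truth, which is what the headSize data from the setup section is used for. A rough numpy sketch of the metric follows; the array layout and visibility handling are assumptions, not this repository's evaluation code:

```python
import numpy as np

def pckh(preds, gts, head_sizes, thr=0.5):
    """preds, gts: [N, nJoints, 2] pixel coordinates; head_sizes: [N].
    Hypothetical layout -- the repo's own evaluation may differ."""
    dists = np.linalg.norm(preds - gts, axis=-1)              # [N, nJoints]
    visible = (gts > 0).all(axis=-1)                          # crude visibility mask
    correct = (dists <= thr * head_sizes[:, None]) & visible  # within 0.5 * head size
    return 100.0 * correct.sum() / max(visible.sum(), 1)
```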

Acknowledgements

We took help from various open-source implementations. We would like to thank Microsoft Human Pose Estimation for providing the COCO dataloader, and Xingyi Zhou's 3D Hourglass Repo for the MPII dataloader and the Hourglass PyTorch codebase. We would also like to thank PyraNet & Attention-HourGlass for open-sourcing their code in Lua.

To Do

  • Implement code for showing the mAP performance on the COCO dataset
  • Add visualization code
  • Add more models
  • Add visdom support

We plan (and will try) to complete these very soon!!

pytorch-human-pose-estimation's People

Contributors: naman-ntc, sahil00199

pytorch-human-pose-estimation's Issues

Performance of Pyranet on MPII

Hello, impressive work. I was wondering whether you could post the PyraNet results on MPII. If possible, would you mind posting the pre-trained model as well? Thanks.

One bug about "python main.py"

Thank you so much for your code. @Naman-ntc I have code for the Stacked Hourglass Net, but it uses Lua, not Python, so your code is very important to me.
However, I recently hit a problem when running python main.py; it does not work. The last line of the error is ImportError: Building module datasets.COCO.nms.gpu_nms failed: ['ImportError: /home/dev0/.pyxbld/lib.linux-x86_64-3.6/datasets/COCO/nms/gpu_nms.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _nms\n'].
I did not have this bug at first; it appeared the second time. When the first bug came up, I used import pyximport; pyximport.install(), installed pyxbld, and copied some files from ./datasets/COCO/nms/ elsewhere, and that worked. But this bug makes me sad, so I really need your help.
PS. My command is python main.py -DataConfig ./conf/datasets/coco.defconf -ModelConfig ./conf/models/StackedHourGlass.defconf, and the error is shown in the image above.
Waiting in hope for your reply. Thanks so much.

Information about worldCoors.mat

Hi,
I was wondering what worldCoors are and how they are made. I know the links are not working and you are no longer working on this repo.
I managed to understand how the head-box MATLAB file is created, but I could not understand the logic of worldCoors.
If you can recall what worldCoors does with the image, it would be very helpful for my project.

Thanks and regards
sathish ram

Visualization

Hi, I want to visualize some specific pictures. Could you please tell me how to achieve this?

How can I train DeepPose with multiple GPUs?

I want to train DeepPose with multiple GPUs; what should I do? I tried changing the GPU id to 0, 1, 2, 3, but it is not available. Thank you so much if you can help me!
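
For reference, one common way to spread a batch across GPUs in PyTorch 0.4-era code is nn.DataParallel; the sketch below is not this repository's actual API, and `model` stands for whatever the config builds (e.g. DeepPose):

```python
import torch
import torch.nn as nn

def to_multi_gpu(model, device_ids=(0, 1, 2, 3)):
    # Wrap the model so each batch is split across the listed GPUs.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model, device_ids=list(device_ids))
    return model.cuda()
```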

No module named 'datasets.COCO.nms.cpu_nms'

I followed the instructions and tried to run a test.
However, something seems to be wrong with the COCO API.
I have installed the COCO API before and it was working; I have used it in other work.

Traceback (most recent call last):
File "main.py", line 2, in <module>
import builder
File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/builder.py", line 5, in <module>
import dataloaders as dataloaders
File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/dataloaders.py", line 1, in <module>
import datasets
File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/datasets/__init__.py", line 2, in <module>
from datasets.coco import *
File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/datasets/coco.py", line 2, in <module>
from datasets.COCO.coco import COCODataset
File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/datasets/COCO/__init__.py", line 11, in <module>
from .coco import COCODataset as coco
File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/datasets/COCO/coco.py", line 24, in <module>
from .nms.nms import oks_nms
File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/datasets/COCO/nms/nms.py", line 13, in <module>
from .cpu_nms import cpu_nms
ModuleNotFoundError: No module named 'datasets.COCO.nms.cpu_nms'
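
This error typically means the Cython nms extensions were never compiled into importable modules. A hypothetical minimal build script is sketched below, assuming the usual cpu_nms.pyx source sits in datasets/COCO/nms/; the repository may ship its own build step, so treat this only as an illustration (run with python setup.py build_ext --inplace):

```python
# setup.py -- hypothetical sketch for compiling the nms Cython source in place
import numpy as np
from setuptools import setup
from Cython.Build import cythonize

setup(
    name='nms_ext',
    ext_modules=cythonize(['datasets/COCO/nms/cpu_nms.pyx']),
    include_dirs=[np.get_include()],   # cpu_nms typically uses numpy headers
)
```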

Inference with CPU

Hi.
I read issue #3 on using pretrained models, so I tried to do as you mentioned in the issue:

Screenshot from 2019-06-25 21-20-30

However, I get this error:

Screenshot from 2019-06-25 21-22-58

Since I cannot use a GPU, I was wondering if inference is possible with CPU only.
If so, I would really appreciate it if you could explain where I am going wrong.
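
For reference, PyTorch can remap a GPU-saved checkpoint onto the CPU at load time; a minimal sketch (the checkpoint key name is an assumption, and `model` stands for the model built from the ModelConfig):

```python
import torch

def load_for_cpu(model, path):
    # map_location='cpu' remaps tensors saved on GPU so no CUDA device is needed.
    checkpoint = torch.load(path, map_location='cpu')
    model.load_state_dict(checkpoint['model'])   # key name assumed
    model.eval()
    return model
```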

Issue with Training

I have followed the steps as your README file explains. However, when I am running the model, I am getting the following errors. Could you please help?

 The Model chosen for training is : DeepPose
==> initializing 2D train data.
/localhome/prathmeshmadhu/work/2019/Pytorch-Human-Pose-Estimation/data/mpii/worldCoodstrain.mat
/localhome/prathmeshmadhu/work/2019/Pytorch-Human-Pose-Estimation/data
Loaded 2D train 22246 samples
==> initializing 2D val data.
/localhome/prathmeshmadhu/work/2019/Pytorch-Human-Pose-Estimation/data/mpii/worldCoodstrain.mat
/localhome/prathmeshmadhu/work/2019/Pytorch-Human-Pose-Estimation/data
Loaded 2D val 2958 samples
==>Traceback (most recent call last):
  File "main.py", line 37, in <module>
    Trainer.train(TrainDataLoader, ValDataLoader, Epoch, opts.nEpochs)
  File "/localhome/prathmeshmadhu/work/2019/Pytorch-Human-Pose-Estimation/trainer.py", line 27, in train
    train = self._epoch(traindataloader, epoch)
  File "/localhome/prathmeshmadhu/work/2019/Pytorch-Human-Pose-Estimation/trainer.py", line 77, in _epoch
    output = model(data)
  File "/localhome/prathmeshmadhu/.virtualenvs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/localhome/prathmeshmadhu/work/2019/Pytorch-Human-Pose-Estimation/models/DeepPose.py", line 16, in forward
    return self.resnet(x)
  File "/localhome/prathmeshmadhu/.virtualenvs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/localhome/prathmeshmadhu/.virtualenvs/dl/lib/python3.6/site-packages/torchvision/models/resnet.py", line 162, in forward
    x = self.fc(x)
  File "/localhome/prathmeshmadhu/.virtualenvs/dl/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/localhome/prathmeshmadhu/.virtualenvs/dl/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 67, in forward
    return F.linear(input, self.weight, self.bias)
  File "/localhome/prathmeshmadhu/.virtualenvs/dl/lib/python3.6/site-packages/torch/nn/functional.py", line 1352, in linear
    ret = torch.addmm(torch.jit._unwrap_optional(bias), input, weight.t())
RuntimeError: size mismatch, m1: [16 x 512], m2: [2048 x 32] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:266
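
The shapes in the error suggest a 512-dimensional backbone feature (ResNet-18/34) being fed into a final layer sized for 2048 inputs (ResNet-50). The sketch below is not a fix taken from this repository, just an illustration of how the final layer has to match the chosen trunk:

```python
import torch.nn as nn
import torchvision.models as models

nJoints = 16                               # MPII has 16 joints
resnet = models.resnet34(pretrained=True)  # ResNet-18/34 emit 512-dim features
resnet.fc = nn.Linear(512, nJoints * 2)    # a ResNet-50 trunk would need in_features=2048
```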


worldCoors and headSize

Hi,
I was wondering what worldCoors and headSize represent and how they are made.
Thanks.

where is train.h5?

Thanks for your code!!
I can't find train.h5. Where can I find it? Is it in your model.zip?

Question on DeepPose

Your implementation includes only stage 1 of DeepPose and leaves the cascade part out, doesn't it?
I read the paper but did not quite understand how training is done for stage 2 and onwards.
How is the training data for stage 2 generated? Do we need to first wait for stage 1 to be trained, then run stage 1 on the training data to produce a coarse estimation and use that estimation to generate patches for stage 2?
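
For context, the paper's cascade works roughly that way: stage 1 is trained first and run over the training images, and each later-stage sample is a crop around the previous stage's joint estimate with the remaining offset to the ground truth as the regression target. A conceptual sketch (from the DeepPose paper, not this repository's code; crop_around is a hypothetical helper that extracts a fixed-size patch centred on a point):

```python
# Generate one stage-2 training sample for a single joint (sketch only).
def make_stage2_sample(image, gt_xy, stage1_xy, crop_size=64):
    patch = crop_around(image, stage1_xy, crop_size)    # hypothetical crop helper
    target = (gt_xy - stage1_xy) / (crop_size / 2.0)    # offset, normalized to the patch
    return patch, target
```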

Time consuming in pose attention

Thanks for your excellent work. However, the pose attention is time-consuming compared with the Lua Torch version, more specifically the AttentionIter(). Is there something wrong?

Predicted result for multiple people in graph

Hi,

I have noted that the MPII dataset labels contain only one entry for each joint.

I am wondering how you label 2 or more people in one image. How do you represent this in the label?

Also, if possible, could you please advise how you evaluate your model's accuracy?

Thanks

Demo

Hi there,

Thanks for the amazing work and I really appreciate that! I was wondering if you are going to post a demo like using pictures, videos or web_cam? If so, that would be awesome!

Thank you again!

What does nModules mean?

Thank you very much for your contribution! What does nModules mean? Why is nModules = 2? Looking forward to your answer!

Where is the implementation of the T correction steps?

In the paper "Human Pose Estimation with Iterative Error Feedback", T is the number of correction steps taken by the model. I could not find this detail in the code. Please help me, thanks very much.

How to use pretrained models?

Hi, your README doesn't explain how to load the pretrained models and run inference on a test image. Can you update it or show an example here?
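
For reference, a rough sketch of feeding one test image through an already-loaded model; the input resolution and the lack of normalization here are assumptions, and the real preprocessing values live in the data config files:

```python
import torch
import torchvision.transforms as T
from PIL import Image

def infer_single(model, image_path, size=256):
    # Resolution and preprocessing are placeholders; check the dataset config.
    transform = T.Compose([T.Resize((size, size)), T.ToTensor()])
    img = transform(Image.open(image_path).convert('RGB')).unsqueeze(0)  # [1, 3, H, W]
    model.eval()
    with torch.no_grad():
        return model(img)   # heatmaps or coordinates, depending on the model
```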

Reproducing Results - StackedHourGlass on COCO

Hi,

I tried reproducing the results for the StackedHourGlass model on the COCO dataset using the pretrained weights, but the PCK I got on COCO's validation set is 68.5.

Am I missing something?

Missing mpii/pureannot/train.h5

After following the instructions, I ran the command
python main.py -DataConfig conf/datasets/mpii.defconf -ModelConfig conf/models/ChainedPredictions.defconf

Got error:

File "/scratch/liu.shu/codesPool/Pytorch-Human-Pose-Estimation/datasets/mpii.py", line 22, in init
f = H.File('{}/mpii/pureannot/{}.h5'.format(self.opts.dataDir, split), 'r')
File "/home/liu.shu/.conda/envs/pch1.5/lib/python3.8/site-packages/h5py/_hl/files.py", line 406, in init
...
File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = '/scratch/liu.shu/datasets/MPII/mpii/pureannot/train.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)

I didn't see instructions on how to get mpii/pureannot/train.h5.
Could you provide it?

How to evaluate multi-person?

I think your code is for single-person pose estimation. Is there any other evaluation code for multi-person pose estimation?
