This tutorial series provides step-by-step instructions for performing human pose estimation in Unity with the Barracuda inference library. We’ll use a pretrained PoseNet model to estimate the 2D locations of key points on a person’s body.
Part 1: This first post covers how to set up a video player in Unity. We'll use the video player to visually check the accuracy of the PoseNet model's predictions.
Part 2: This post covers how to implement the preprocessing steps for the PoseNet model.
Part 2.5: This post covers how to view preprocessed images during runtime.
Part 3: This post covers how to perform inference with the PoseNet model.
Part 4: This post covers how to process the output of the PoseNet model.
Part 5: This post covers how to map the key point locations to GameObjects.
Part 6: This post covers how to create a pose skeleton by drawing lines between key points.
Part 7: This post covers how to use a webcam feed as input for the PoseNet model.
Part 8: This post covers how to handle video input with different aspect ratios.