A Deep Learning Approach to 2D Pose Estimation from Video for Motion Capture Animation
As one of the most fundamental and challenging problems in computer vision, vision-based monocular human pose estimation seeks to localize the parts of the human body in images or video. Rapid advances in deep learning techniques have driven notable progress in human pose estimation. We propose a fast and efficient approach to detecting a human's 2D pose from video. The approach uses an Affinity Vector Field (AVF), a representation that learns how body parts are related to one another. The architecture encodes global context, enabling a bottom-up parsing step that retains high accuracy in real time. The architecture uses two branches of a sequential prediction process: one branch learns part locations, and the other learns the associations that link parts together. We evaluate our approach on the MPII human pose dataset.
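To illustrate how an affinity vector field can associate two detected body parts, the sketch below scores a candidate limb by sampling the field along the segment between two part candidates and measuring its alignment with the limb's direction. This is a minimal, self-contained illustration of the association idea only; the function name `limb_score`, the sampling count, and the toy field are assumptions, not the paper's implementation.

```python
import numpy as np

def limb_score(avf, p1, p2, n_samples=10):
    """Score a candidate limb connecting part candidates p1 and p2.

    avf: H x W x 2 array; avf[y, x] is the 2D affinity vector at pixel (x, y).
    Returns the average dot product between the field and the unit vector
    pointing from p1 to p2, sampled along the segment (hypothetical helper).
    """
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    d = p2 - p1
    norm = np.linalg.norm(d)
    if norm == 0:
        return 0.0
    u = d / norm  # unit direction of the candidate limb
    total = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        # Sample the field at evenly spaced points along the segment.
        x, y = (p1 + t * d).round().astype(int)
        total += avf[y, x] @ u  # alignment of the field with the limb direction
    return total / n_samples

# Toy field: every vector points in the +x direction.
avf = np.zeros((11, 11, 2))
avf[..., 0] = 1.0

aligned = limb_score(avf, (0, 5), (10, 5))       # limb parallel to the field
perpendicular = limb_score(avf, (5, 0), (5, 10))  # limb orthogonal to the field
```

A horizontal candidate limb is perfectly aligned with this field (score 1.0), while a vertical one is not (score 0.0), so the aligned pairing would win during bottom-up part association.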