Estimating 3D human pose from monocular images is a challenging problem due to the variety and complexity of human poses and the inherent ambiguity of recovering depth from a single view. Recent deep-learning-based methods show promising results by using supervised learning on datasets with 3D pose annotations. However, the lack of large-scale 3D-annotated training data makes 3D pose estimation difficult in the wild. Embodiments of the present disclosure provide a method that can effectively predict 3D human poses from only 2D poses in a weakly-supervised manner, using both ground-truth 3D poses and ground-truth 2D poses, with re-projection error minimization serving as a constraint on the predicted 3D joint locations. The method may further apply geometric constraints on the reconstructed body parts to regularize the pose in 3D, alongside minimizing the re-projection error, thereby improving the accuracy of the estimated 3D pose.
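For illustration only, the sketch below shows one plausible way such a weakly-supervised objective could be assembled: a re-projection term against ground-truth 2D joints, an optional fully supervised 3D term when annotations exist, and a geometric regularizer on reconstructed body parts (here a left/right bone-length symmetry penalty). The weak-perspective camera model, the joint indices, the limb pairs, and the loss weights are assumptions for the example and do not describe the disclosed implementation.

```python
# Minimal sketch (not the disclosed implementation) of a weakly-supervised
# 3D pose loss combining: (i) a re-projection term against ground-truth 2D,
# (ii) an optional supervised 3D term when ground-truth 3D is available, and
# (iii) a geometric bone-length symmetry regularizer on reconstructed limbs.
import numpy as np

# Hypothetical left/right limb pairs, each given as (parent, child) joint
# indices; the actual skeleton definition is dataset-specific.
LIMB_PAIRS = [((11, 12), (14, 15)),   # left/right upper arm
              ((12, 13), (15, 16)),   # left/right forearm
              ((1, 2), (4, 5)),       # left/right thigh
              ((2, 3), (5, 6))]       # left/right shin

def weak_perspective_project(joints_3d, scale, trans_2d):
    """Project 3D joints (J, 3) to 2D with an assumed weak-perspective camera."""
    return scale * joints_3d[:, :2] + trans_2d

def bone_symmetry_loss(joints_3d):
    """Penalize length differences between the assumed left/right limb pairs."""
    loss = 0.0
    for (l_p, l_c), (r_p, r_c) in LIMB_PAIRS:
        left_len = np.linalg.norm(joints_3d[l_c] - joints_3d[l_p])
        right_len = np.linalg.norm(joints_3d[r_c] - joints_3d[r_p])
        loss += (left_len - right_len) ** 2
    return loss / len(LIMB_PAIRS)

def weakly_supervised_loss(pred_3d, gt_2d, cam_scale, cam_trans,
                           gt_3d=None, w_reproj=1.0, w_geo=0.1):
    """Combine re-projection error, optional 3D supervision, and geometry."""
    reproj = weak_perspective_project(pred_3d, cam_scale, cam_trans)
    loss = w_reproj * np.mean(np.sum((reproj - gt_2d) ** 2, axis=-1))
    if gt_3d is not None:  # full 3D supervision, applied only when annotated
        loss += np.mean(np.sum((pred_3d - gt_3d) ** 2, axis=-1))
    loss += w_geo * bone_symmetry_loss(pred_3d)
    return loss

# Toy usage with a 17-joint skeleton and random data (illustrative only).
rng = np.random.default_rng(0)
pred = rng.normal(size=(17, 3))
gt2d = rng.normal(size=(17, 2))
print(weakly_supervised_loss(pred, gt2d, cam_scale=1.1, cam_trans=np.zeros(2),
                             gt_3d=rng.normal(size=(17, 3))))
```

In such a formulation, the re-projection and geometric terms let images with only 2D annotations contribute to training, while the 3D term is applied on the subset of data where ground-truth 3D poses exist.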