Motion+: Robust 3D Motion Tracking using Depth Estimation in a Single Camera Setup

Accurately converting the camera-space translations of a 3D motion-tracking marker into a reliable displacement vector is essential for applications in biomechanics, robotics, and virtual reality. However, existing motion-tracking approaches often suffer from significant inaccuracies caused by calibration errors, marker occlusions, and sensor noise, which reduce the reliability of motion analysis and downstream applications.

This project evaluates current state-of-the-art methods for 3D displacement estimation and systematically identifies their key limitations. To address these challenges, a novel hybrid algorithm is proposed that combines depth mapping, calibrated camera constants, and quaternion-based rotation to improve both accuracy and robustness. The integration of these components allows precise transformation from camera space to world space while minimizing accumulated error during motion tracking.
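The camera-to-world pipeline described above can be sketched as follows. This is a minimal illustration of the general technique (pinhole back-projection with calibrated intrinsics, then a quaternion rotation), not the project's actual implementation; the intrinsic values `fx`, `fy`, `cx`, `cy`, the quaternion, and the example pixel are placeholder assumptions.

```python
import math

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift a pixel (u, v) with a metric depth value into camera-space XYZ
    using the pinhole model and calibrated camera constants (intrinsics)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z), i.e. v' = q v q*."""
    w, qx, qy, qz = q
    # Efficient form: t = 2 * (q_vec x v); v' = v + w*t + q_vec x t
    tx = 2.0 * (qy * v[2] - qz * v[1])
    ty = 2.0 * (qz * v[0] - qx * v[2])
    tz = 2.0 * (qx * v[1] - qy * v[0])
    return (
        v[0] + w * tx + qy * tz - qz * ty,
        v[1] + w * ty + qz * tx - qx * tz,
        v[2] + w * tz + qx * ty - qy * tx,
    )

def camera_to_world(point_cam, q_wc, t_wc):
    """Transform a camera-space point into world space: p_w = R(q_wc) p_c + t_wc."""
    r = quat_rotate(q_wc, point_cam)
    return tuple(ri + ti for ri, ti in zip(r, t_wc))

# Example: a marker seen at the principal point with 2 m of depth
# (placeholder intrinsics), with the camera rotated 90 degrees about z.
p_cam = backproject(320, 240, 2.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
s = math.sqrt(0.5)
p_world = camera_to_world(p_cam, (s, 0.0, 0.0, s), (0.0, 0.0, 0.0))
```

Using a unit quaternion for the camera pose avoids the gimbal-lock and drift issues of Euler-angle representations, which is one way the accumulated error mentioned above can be kept small.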

The proposed method is validated through extensive simulation-based experiments under varying conditions, including noise and partial-occlusion scenarios. Results demonstrate that the algorithm consistently delivers accurate and stable 3D displacement vectors, highlighting its potential for high-precision motion analysis in advanced biomechanical, robotic, and virtual reality systems.
