Paper Title

Trajectory Optimization for Physics-Based Reconstruction of 3D Human Pose from Monocular Video

Authors

Erik Gärtner, Mykhaylo Andriluka, Hongyi Xu, Cristian Sminchisescu

Abstract

We focus on the task of estimating a physically plausible articulated human motion from monocular video. Existing approaches that do not consider physics often produce temporally inconsistent output with motion artifacts, while state-of-the-art physics-based approaches have either been shown to work only in controlled laboratory conditions or consider simplified body-ground contact limited to feet. This paper explores how these shortcomings can be addressed by directly incorporating a fully-featured physics engine into the pose estimation process. Given an uncontrolled, real-world scene as input, our approach estimates the ground-plane location and the dimensions of the physical body model. It then recovers the physical motion by performing trajectory optimization. The advantage of our formulation is that it readily generalizes to a variety of scenes that might have diverse ground properties and supports any form of self-contact and contact between the articulated body and scene geometry. We show that our approach achieves competitive results with respect to existing physics-based methods on the Human3.6M benchmark, while being directly applicable without re-training to more complex dynamic motions from the AIST benchmark and to uncontrolled internet videos.
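The abstract's pipeline — roll out motion in a physics simulator, then adjust the controls so the simulated trajectory matches the kinematic estimate — can be illustrated with a deliberately tiny sketch. The snippet below is NOT the paper's method (which uses a fully-featured physics engine and an articulated body model): it replaces the simulator with a 1-D point mass under gravity and the optimizer with simple hill-climbing random search, purely to show the shape of simulation-in-the-loop trajectory optimization. All names (`simulate`, `trajectory_optimization`) are illustrative, not from the paper.

```python
import numpy as np

def simulate(controls, x0=0.0, v0=0.0, dt=0.1, g=-9.81):
    """Toy 1-D 'physics engine': each control is an upward force (unit mass)
    applied for one timestep; returns the resulting position trajectory."""
    x, v = x0, v0
    traj = []
    for u in controls:
        v += (u + g) * dt   # semi-implicit Euler integration
        x += v * dt
        traj.append(x)
    return np.array(traj)

def trajectory_optimization(target, iters=200, sigma=0.5, seed=0):
    """Hill-climbing random search over the control sequence: perturb the
    controls, re-simulate, and keep perturbations that reduce the tracking
    error to the target (kinematic) trajectory."""
    rng = np.random.default_rng(seed)
    controls = np.zeros(len(target))
    best_err = np.linalg.norm(simulate(controls) - target)
    for _ in range(iters):
        candidate = controls + rng.normal(0.0, sigma, size=controls.shape)
        err = np.linalg.norm(simulate(candidate) - target)
        if err < best_err:      # accept only improving perturbations
            best_err, controls = err, candidate
    return controls, best_err

# Recover controls that reproduce a target trajectory:
target = simulate(np.full(5, 12.0))          # "ground-truth" motion
controls, err = trajectory_optimization(target)
```

The real system replaces the point mass with an articulated humanoid in a physics engine (so self-contact and body-scene contact come for free) and uses a far more capable optimizer, but the accept-if-tracking-error-drops loop is the same basic idea.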
