Title
LiDAR-aid Inertial Poser: Large-scale Human Motion Capture by Sparse Inertial and LiDAR Sensors
Authors
Abstract
We propose a multi-sensor fusion method for capturing challenging 3D human motions with accurate consecutive local poses and global trajectories in large-scale scenarios, using only a single LiDAR and 4 IMUs, which are convenient to set up and lightweight to wear. Specifically, to fully utilize the global geometry information captured by the LiDAR and the local dynamic motions captured by the IMUs, we design a two-stage pose estimator in a coarse-to-fine manner, where point clouds provide the coarse body shape and IMU measurements optimize the local actions. Furthermore, considering the translation deviation caused by the view-dependent partial point cloud, we propose a pose-guided translation corrector. It predicts the offset between the captured points and the real root location, which makes the consecutive movements and trajectories more precise and natural. Moreover, we collect a LiDAR-IMU multi-modal mocap dataset, LIPD, with diverse human actions in long-range scenarios. Extensive quantitative and qualitative experiments on LIPD and other open datasets demonstrate the capability of our approach for compelling motion capture in large-scale scenarios, outperforming other methods by a clear margin. We will release our code and captured dataset to stimulate future research.
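The abstract's pipeline (coarse pose from LiDAR points, refinement from IMU dynamics, then a pose-guided root-translation correction) can be sketched at a very high level as below. This is a minimal illustrative skeleton only: all function names, shapes, and the placeholder computations are assumptions, not the paper's actual networks or API.

```python
import numpy as np

def coarse_pose_from_points(point_cloud):
    """Stage 1 (illustrative): derive a coarse body estimate from the
    LiDAR point cloud. A learned model would run here; the centroid is
    a stand-in for the coarse estimate."""
    return point_cloud.mean(axis=0)  # (3,) coarse body position

def refine_with_imu(coarse_pose, imu_measurements):
    """Stage 2 (illustrative): refine the coarse estimate using local
    IMU dynamics. Averaging the IMU channels is a placeholder for the
    paper's learned refinement of local actions."""
    return coarse_pose + imu_measurements.mean(axis=0)

def pose_guided_translation_corrector(pose, captured_root):
    """Predict an offset between the root implied by the view-dependent
    partial point cloud and the true root location. The linear map here
    is purely a placeholder for a pose-conditioned predictor."""
    predicted_offset = 0.1 * pose  # hypothetical learned offset
    return captured_root + predicted_offset

# Toy usage with dummy data (100 LiDAR points, 4 IMU readings):
points = np.random.rand(100, 3)
imus = np.random.rand(4, 3)
coarse = coarse_pose_from_points(points)
refined = refine_with_imu(coarse, imus)
root = pose_guided_translation_corrector(refined, captured_root=np.zeros(3))
```

The two-stage split mirrors the abstract's reasoning: point clouds constrain global geometry but are partial and view-dependent, while IMUs supply local motion detail, so each modality corrects the other's weakness.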