Paper Title

Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation

Paper Authors

Qihao Liu, Weichao Qiu, Weiyao Wang, Gregory D. Hager, Alan L. Yuille

Paper Abstract

We propose an unsupervised vision-based system to estimate the joint configurations of a robot arm from a sequence of RGB or RGB-D images without knowing the model a priori, and then adapt it to the task of category-independent articulated object pose estimation. We combine a classical geometric formulation with deep learning and extend the use of the epipolar constraint to multi-rigid-body systems. Given a video sequence, optical flow is estimated to obtain pixel-wise dense correspondences; the 6D pose of each rigid part is then computed by a modified PnP algorithm. The key idea is to leverage geometric constraints, in particular the constraints between multiple frames. Furthermore, we build a synthetic dataset with different kinds of robots and multi-joint articulated objects for research on vision-based robot control and robotic vision. We demonstrate the effectiveness of our method on three benchmark datasets and show that it achieves higher accuracy than state-of-the-art supervised methods in estimating the joint angles of robot arms and articulated objects.
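The pipeline the abstract describes (dense optical flow for correspondences, then a PnP-style solve for each rigid part) can be sketched with off-the-shelf tools. Below is a minimal Python illustration for one rigid link of an RGB-D sequence; the Farneback flow, the per-link mask `mask`, the intrinsics `K`, and the RANSAC PnP call are assumed stand-ins for the paper's learned flow network, multi-rigid-body handling, and modified PnP, not the authors' actual implementation.

```python
import cv2
import numpy as np

def pose_from_flow_rgbd(img0, img1, depth0, K, mask):
    """Sketch of the correspondence -> pose step for ONE rigid link.

    img0, img1 : consecutive grayscale frames, uint8 arrays of shape (H, W)
    depth0     : depth map aligned with img0, in meters, shape (H, W)
    K          : 3x3 camera intrinsics matrix
    mask       : boolean (H, W) mask selecting pixels on this link
                 (hypothetical input; the paper handles multi-rigid-body
                 scenes, a given per-link mask is assumed here)
    """
    # 1) Dense optical flow gives pixel-wise correspondences between frames.
    #    (Classical Farneback flow as a stand-in for a learned flow network.)
    flow = cv2.calcOpticalFlowFarneback(
        img0, img1, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    # Pixels on the link with valid depth, and where flow maps them in img1.
    ys, xs = np.nonzero(mask & (depth0 > 0))
    pts2d = np.stack([xs + flow[ys, xs, 0],
                      ys + flow[ys, xs, 1]], axis=1).astype(np.float32)

    # 2) Back-project the frame-0 pixels to 3D using depth and intrinsics.
    z = depth0[ys, xs]
    x3 = (xs - K[0, 2]) * z / K[0, 0]
    y3 = (ys - K[1, 2]) * z / K[1, 1]
    pts3d = np.stack([x3, y3, z], axis=1).astype(np.float32)

    # 3) 3D-2D correspondences -> 6D pose via RANSAC PnP
    #    (a generic stand-in for the paper's modified PnP algorithm).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, distCoeffs=None,
        reprojectionError=2.0, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec  # rigid motion of the link from frame 0 to frame 1
```

For RGB-only input, the same flow correspondences could instead feed cv2.findEssentialMat and cv2.recoverPose, which is the epipolar-constraint route the abstract alludes to (with scale left ambiguous). Running this per link and comparing the relative poses of adjacent links is one plausible way to read off joint angles, but the exact recovery procedure is the paper's, not this sketch's.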
