Paper Title


Contact2Grasp: 3D Grasp Synthesis via Hand-Object Contact Constraint

Authors

Haoming Li, Xinzhuo Lin, Yang Zhou, Xiang Li, Yuchi Huo, Jiming Chen, Qi Ye

Abstract


3D grasp synthesis generates grasping poses given an input object. Existing works tackle the problem by learning a direct mapping from objects to the distributions of grasping poses. However, because physical contact is sensitive to small changes in pose, the highly non-linear mapping from 3D object representations to valid poses is considerably non-smooth, leading to poor generation efficiency and restricted generality. To tackle this challenge, we introduce an intermediate variable for grasp contact areas to constrain the grasp generation; in other words, we factorize the mapping into two sequential stages by assuming that grasping poses are fully constrained given contact maps: 1) we first learn contact map distributions to generate potential contact maps for grasps; 2) we then learn a mapping from the contact maps to the grasping poses. Further, we propose a penetration-aware optimization with the generated contacts as a consistency constraint for grasp refinement. Extensive validations on two public datasets show that our method outperforms state-of-the-art methods regarding grasp generation on various metrics.
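The factorized pipeline described above can be sketched as code. This is a minimal illustration, not the paper's implementation: the function names, the pose dimensionality, and the stand-in models (random projections in place of the learned contact-map generator and contact-to-pose network, and a simple fixed-point update in place of the penetration-aware optimizer) are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_contact_map(obj_points, n_samples=1):
    # Stage 1 (stand-in): sample a per-point contact probability map.
    # In the paper this would be a learned generative model over contact maps.
    return rng.random((n_samples, obj_points.shape[0]))

def contact_to_grasp(contact_map, obj_points, pose_dim=27):
    # Stage 2 (stand-in): map a contact map to a grasp pose vector.
    # A fixed random linear projection substitutes for the trained network;
    # pose_dim=27 is a hypothetical hand-parameter count.
    w = rng.standard_normal((obj_points.shape[0], pose_dim))
    return contact_map @ w / obj_points.shape[0]

def refine(pose, contact_map, steps=10, lr=0.1):
    # Stage 3 (stand-in): penetration-aware refinement. A real implementation
    # would minimize penetration depth plus a consistency loss against the
    # generated contact map; here we just nudge the pose toward a target
    # derived from the contact map to show the iterative structure.
    target = np.full_like(pose, contact_map.mean())
    for _ in range(steps):
        pose = pose + lr * (target - pose)
    return pose

obj = rng.random((512, 3))            # object point cloud (512 points)
cmap = sample_contact_map(obj)        # 1) generate a potential contact map
pose = contact_to_grasp(cmap, obj)    # 2) regress a grasp pose from it
init = pose.copy()
pose = refine(pose, cmap)             # 3) refine with the contacts as constraint
```

The key design point is that the contact map acts as an intermediate variable: the refinement step consumes the same generated contacts as a consistency constraint, rather than optimizing the pose against the object alone.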
