Paper Title
Keypoint-GraspNet: Keypoint-based 6-DoF Grasp Generation from the Monocular RGB-D input
Paper Authors
Paper Abstract
Great success has been achieved in 6-DoF grasp learning from point cloud input, yet the computational cost incurred by the orderlessness of point sets remains a concern. As an alternative, we explore grasp generation from RGB-D input in this paper. The proposed solution, Keypoint-GraspNet, detects the projections of gripper keypoints in image space and then recovers the SE(3) poses with a PnP algorithm. A synthetic dataset based on primitive shapes and grasp families is constructed to examine our idea. Metric-based evaluation reveals that our method outperforms the baselines in terms of grasp proposal accuracy, diversity, and time cost. Finally, robot experiments show a high success rate, demonstrating the potential of the idea in real-world applications.