Paper Title
Grasping the Inconspicuous
Paper Authors
Paper Abstract
Transparent objects are common in everyday life and therefore appear in many applications that require robotic grasping. Many grasping solutions exist for non-transparent objects; however, because of the unique visual properties of transparent objects, standard 3D sensors produce noisy or distorted measurements for them. Modern approaches tackle this problem either by refining the noisy depth measurements or by using an intermediate representation of depth. To address this, we study deep-learning-based 6D pose estimation from RGB images alone for transparent object grasping. To train and test the suitability of RGB-based object pose estimation, we construct a dataset of RGB-only images with 6D pose annotations. The experiments demonstrate the effectiveness of the RGB image space for grasping transparent objects.
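As a rough illustration of how an RGB-only 6D pose estimate could feed a grasping pipeline, the sketch below composes a predicted object pose (rotation and translation in the camera frame) with a known camera-to-robot extrinsic to obtain a grasp target in the robot base frame. The variable names, the fixed grasp offset, and the specific transform values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Assemble a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical output of an RGB-only 6D pose estimator: object pose in the camera frame.
R_cam_obj = np.eye(3)                      # predicted rotation (assumed identity here)
t_cam_obj = np.array([0.05, -0.02, 0.60])  # predicted translation in metres (assumed)

# Assumed extrinsic calibration: camera pose expressed in the robot base frame.
T_base_cam = pose_to_matrix(np.eye(3), np.array([0.40, 0.00, 0.30]))

# Compose transforms to express the object pose in the robot base frame.
T_base_obj = T_base_cam @ pose_to_matrix(R_cam_obj, t_cam_obj)

# Assumed grasp convention: approach the object with a fixed 10 cm offset along its z-axis.
grasp_offset = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.10]))
T_base_grasp = T_base_obj @ grasp_offset

print("Grasp target in robot base frame:\n", T_base_grasp)
```

The resulting 4x4 transform could then be handed to a motion planner as the end-effector goal; the actual grasp-pose convention used in the paper may differ.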