Paper Title

I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches

Paper Authors

Haitao Lin, Chilam Cheang, Yanwei Fu, Xiangyang Xue

Abstract

In this paper, we are interested in the problem of generating target grasps by understanding freehand sketches. Sketches are useful for people who cannot formulate language and for cases where a textual description is not available on the fly. However, very few works are aware of the usability of this novel interactive way between humans and robots. To this end, we propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects. Due to the inherent ambiguity of sketches with abstract details, we take advantage of a graph representation that incorporates the structure of the sketch to enhance its representational ability. This graph-represented sketch is further shown to improve the generalization of the network, which is capable of learning sketch-queried grasp detection from a small collection (around 100 samples) of hand-drawn sketches. Additionally, our model is trained and tested in an end-to-end manner, which makes it easy to deploy in real-world applications. Experiments on the multi-object VMRD and GraspNet-1Billion datasets demonstrate the good generalization of the proposed method. Physical robot experiments confirm the utility of our method in object-cluttered scenes.
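
The abstract does not give implementation details, but the core idea of treating a freehand sketch as a graph rather than a raster image can be illustrated with a minimal, self-contained sketch. Assumptions (not from the paper): strokes are given as 2D polylines, nodes are the sampled stroke points, and connectivity combines within-stroke order with nearest-neighbor edges across strokes; the authors' actual graph construction and network are not specified here.

```python
import numpy as np

def sketch_to_graph(strokes, num_neighbors=3):
    """Build a simple graph from freehand sketch strokes (illustrative only).

    strokes: list of (N_i, 2) arrays of 2D points, one array per stroke.
    Returns node coordinates and an undirected edge list. Edges connect
    consecutive points within a stroke plus each point's nearest neighbors
    across the whole sketch, so the graph keeps both local stroke order
    and the global structure of the drawing.
    """
    nodes = np.concatenate(strokes, axis=0)  # (N, 2) all sampled points
    edges = set()

    # Intra-stroke edges: consecutive points along each stroke.
    offset = 0
    for stroke in strokes:
        for i in range(len(stroke) - 1):
            edges.add((offset + i, offset + i + 1))
        offset += len(stroke)

    # Cross-stroke edges: connect each node to its nearest neighbors.
    dists = np.linalg.norm(nodes[:, None, :] - nodes[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    for i in range(len(nodes)):
        for j in np.argsort(dists[i])[:num_neighbors]:
            edges.add(tuple(sorted((i, int(j)))))

    return nodes, sorted(edges)

if __name__ == "__main__":
    # Two toy strokes: a rough circle and a short line (e.g., a mug and its handle).
    theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    handle = np.stack([np.linspace(1.0, 1.5, 5), np.zeros(5)], axis=1)
    nodes, edges = sketch_to_graph([circle, handle])
    print(nodes.shape, len(edges))  # node array shape and number of edges
```

In a full pipeline, such a graph would be encoded (e.g., by a graph neural network) into a query embedding that conditions the grasp detector on the sketched object; that stage is specific to the paper and is not reproduced here.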
