Paper Title
Grasp Pre-shape Selection by Synthetic Training: Eye-in-hand Shared Control on the Hannes Prosthesis
Paper Authors
Paper Abstract
We consider the task of object grasping with a prosthetic hand capable of multiple grasp types. In this setting, communicating the intended grasp type often requires a high user cognitive load, which can be reduced by adopting shared autonomy frameworks. Among these, so-called eye-in-hand systems automatically control the hand pre-shaping before the grasp, based on visual input coming from a camera mounted on the wrist. In this paper, we present an eye-in-hand, learning-based approach for hand pre-shape classification from RGB sequences. Differently from previous work, we design the system to support grasping each considered object part with a different grasp type. To overcome the lack of data of this kind and reduce the need for tedious data collection sessions for training the system, we devise a pipeline for rendering synthetic visual sequences of hand trajectories. We develop a sensorized setup to acquire real human grasping sequences for benchmarking and show that, when compared on practical use cases, models trained with our synthetic dataset achieve better generalization performance than models trained on real data. We finally integrate our model into the Hannes prosthetic hand and show its practical effectiveness. We make the code and dataset publicly available to reproduce the presented results.
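To make the described classification task concrete, below is a minimal, hypothetical sketch of a sequence classifier that maps an RGB sequence from a wrist-mounted (eye-in-hand) camera to a grasp pre-shape label. The abstract does not specify the paper's architecture; the `PreShapeClassifier` name, the ResNet-18 backbone, the LSTM temporal aggregator, and the choice of four pre-shape classes are all illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch of a grasp pre-shape classifier over RGB sequences.
# Architecture details (ResNet-18 + LSTM, 4 classes) are assumptions for
# illustration only and do not reflect the paper's actual design.
import torch
import torch.nn as nn
from torchvision import models


class PreShapeClassifier(nn.Module):
    def __init__(self, num_preshapes: int = 4, hidden_dim: int = 256):
        super().__init__()
        # Per-frame visual encoder; backbone choice is an assumption.
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()
        self.encoder = backbone
        # Temporal model aggregating the approach-to-grasp sequence.
        self.temporal = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_preshapes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 3, H, W) RGB frames from the wrist camera.
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (last_hidden, _) = self.temporal(feats)
        # Predict the grasp pre-shape from the final hidden state.
        return self.head(last_hidden[-1])


if __name__ == "__main__":
    model = PreShapeClassifier(num_preshapes=4)
    dummy_seq = torch.randn(2, 8, 3, 224, 224)  # 2 sequences of 8 frames
    logits = model(dummy_seq)
    print(logits.shape)  # torch.Size([2, 4])
```

Under the paper's training scheme as summarized above, such a model would be trained on rendered synthetic hand-trajectory sequences and evaluated against real sequences acquired with the sensorized benchmarking setup.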