Title
Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities
Authors
Abstract
We present a robot-to-human object handover algorithm and implement it on a 7-DOF arm equipped with a 3-finger mechanical hand. The system performs a fully autonomous and robust object handover to a human receiver in real time. Our algorithm relies on two complementary sensor modalities for feedback: joint torque sensors on the arm and an eye-in-hand RGB-D camera. Our approach is entirely implicit, i.e., there is no explicit communication between the robot and the human receiver. Information obtained via these sensor modalities is fed as input to two dedicated deep neural networks: the torque-sensor network detects the human receiver's intention (e.g., pull, hold, or bump), while the vision network detects whether the receiver's fingers have wrapped around the object. The networks' outputs are then fused, and based on the fused result the robot decides whether to release the object. Despite substantial challenges in sensor-feedback synchronization, object detection, and human-hand detection, our system achieves robust robot-to-human handover with 98\% accuracy in preliminary real-world experiments with human receivers.
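The release decision described above (fusing the torque network's intention estimate with the vision network's grasp detection) can be sketched as follows. This is a minimal illustrative assumption, not the paper's implementation: the function name, the probability-based interface, and the threshold are all hypothetical.

```python
# Hypothetical sketch of the late-fusion release decision: one network
# classifies the receiver's intention from joint-torque readings, another
# detects finger wrap from the RGB-D stream; the outputs are fused into a
# release/hold decision. Names and thresholds are illustrative assumptions.

def fuse_release_decision(intention_probs, wrap_prob, wrap_threshold=0.5):
    """Return True if the robot should release the object.

    intention_probs: dict of probabilities for 'pull', 'hold', 'bump'
                     (torque-sensor network output).
    wrap_prob: probability that the receiver's fingers have wrapped
               around the object (vision network output).
    """
    intention = max(intention_probs, key=intention_probs.get)
    fingers_wrapped = wrap_prob >= wrap_threshold
    # Release only when the receiver is actively pulling AND the vision
    # network confirms a secure grasp; 'hold' or 'bump' keeps the grip closed.
    return intention == "pull" and fingers_wrapped


# Example: a dominant 'pull' intention with a confident wrap detection
# triggers release; a 'bump' (accidental contact) does not.
print(fuse_release_decision({"pull": 0.8, "hold": 0.1, "bump": 0.1}, 0.9))
print(fuse_release_decision({"pull": 0.1, "hold": 0.2, "bump": 0.7}, 0.9))
```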