Paper Title

That Sounds Right: Auditory Self-Supervision for Dynamic Robot Manipulation

Paper Authors

Abitha Thankaraj, Lerrel Pinto

Paper Abstract

Learning to produce contact-rich, dynamic behaviors from raw sensory data has been a longstanding challenge in robotics. Prominent approaches primarily focus on using visual or tactile sensing, where unfortunately one fails to capture high-frequency interaction, while the other can be too delicate for large-scale data collection. In this work, we propose a data-centric approach to dynamic manipulation that uses an often ignored source of information: sound. We first collect a dataset of 25k interaction-sound pairs across five dynamic tasks using commodity contact microphones. Then, given this data, we leverage self-supervised learning to accelerate behavior prediction from sound. Our experiments indicate that this self-supervised 'pretraining' is crucial to achieving high performance, with a 34.5% lower MSE than plain supervised learning and a 54.3% lower MSE over visual training. Importantly, we find that when asked to generate desired sound profiles, online rollouts of our models on a UR10 robot can produce dynamic behavior that achieves an average of 11.5% improvement over supervised learning on audio similarity metrics.
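The abstract outlines a two-stage recipe: self-supervised pretraining of an audio encoder on contact-microphone recordings, followed by supervised prediction of robot behavior parameters from sound. The sketch below illustrates that general recipe only; the encoder architecture, the InfoNCE contrastive objective, the toy augmentations, and the assumed 4-dimensional action parameterization are all illustrative assumptions and not the authors' implementation.

```python
# Minimal sketch (assumptions, not the paper's code): contrastive pretraining of an
# audio encoder, then supervised regression from sound embeddings to action parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioEncoder(nn.Module):
    """Small CNN over (batch, 1, n_mels, time) spectrogram-like inputs."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        return self.net(x)


def info_nce(z1, z2, temperature: float = 0.1):
    """Contrastive loss pairing two augmented views of the same audio clip."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature       # (B, B) similarity matrix
    targets = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    encoder = AudioEncoder()
    head = nn.Linear(128, 4)                 # assumed 4-D action parameterization
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(head.parameters()), lr=1e-3
    )

    # Stage 1: self-supervised "pretraining" on unlabeled audio (random stand-in data).
    for _ in range(2):
        spec = torch.randn(16, 1, 64, 100)   # stand-in for mel spectrograms
        view1 = spec + 0.05 * torch.randn_like(spec)   # toy augmentation
        view2 = spec + 0.05 * torch.randn_like(spec)
        loss = info_nce(encoder(view1), encoder(view2))
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: supervised behavior prediction (sound -> action parameters) with MSE.
    spec, actions = torch.randn(16, 1, 64, 100), torch.randn(16, 4)
    pred = head(encoder(spec))
    print(f"supervised MSE on toy batch: {F.mse_loss(pred, actions).item():.4f}")
```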
