Paper Title

VideoDex: Learning Dexterity from Internet Videos

Paper Authors

Kenneth Shaw, Shikhar Bahl, Deepak Pathak

Paper Abstract


To build general robotic agents that can operate in many environments, it is often imperative for the robot to collect experience in the real world. However, this is often not feasible due to safety, time, and hardware restrictions. We thus propose leveraging the next best thing to real-world experience: internet videos of humans using their hands. Visual priors, such as visual features, are often learned from videos, but we believe that more information from videos can be utilized as a stronger prior. We build a learning algorithm, VideoDex, that leverages visual, action, and physical priors from human video datasets to guide robot behavior. These action and physical priors in the neural network dictate the typical human behavior for a particular robot task. We test our approach on a robot arm and dexterous hand-based system and show strong results on various manipulation tasks, outperforming various state-of-the-art methods. Videos are available at https://video-dex.github.io
