Paper Title

Prim-LAfD: A Framework to Learn and Adapt Primitive-Based Skills from Demonstrations for Insertion Tasks

Paper Authors

Zheng Wu, Wenzhao Lian, Changhao Wang, Mengxi Li, Stefan Schaal, Masayoshi Tomizuka

Paper Abstract

Learning generalizable insertion skills in a data-efficient manner has long been a challenge in the robot learning community. While current state-of-the-art reinforcement learning (RL) methods show promising performance in acquiring manipulation skills, these algorithms are data-hungry and hard to generalize. To overcome these issues, in this paper we present Prim-LAfD, a simple yet effective framework to learn and adapt primitive-based insertion skills from demonstrations. Prim-LAfD utilizes black-box function optimization to learn and adapt the primitive parameters, leveraging prior experiences. Human demonstrations are modeled as dense rewards that guide parameter learning. We validate the effectiveness of the proposed method on eight peg-hole and connector-socket insertion tasks. The experimental results show that our proposed framework takes less than one hour to acquire an insertion skill and as little as fifteen minutes to adapt to an unseen insertion task on a physical robot.
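
The core loop described in the abstract, black-box optimization of primitive parameters against a dense reward derived from human demonstrations, can be illustrated with a minimal sketch. This is not the paper's implementation: `execute_primitive` (a stand-in for rolling out the parameterized insertion primitive on a robot or simulator), the trajectory-distance reward, and the cross-entropy-method optimizer are all assumptions chosen for illustration, since the abstract does not specify these details.

```python
import numpy as np

def execute_primitive(params: np.ndarray) -> np.ndarray:
    """Placeholder for rolling out the parameterized primitive on a robot
    or simulator; here a toy 10-step trajectory scaled by the parameters."""
    t = np.linspace(0.0, 1.0, 10)[:, None]
    return t * params[None, :]

def demo_reward(traj: np.ndarray, demos: list) -> float:
    """Dense reward: negative distance to the closest (time-aligned) human
    demonstration, one plausible way to model demonstrations as rewards."""
    return -min(float(np.linalg.norm(traj - d)) for d in demos)

def learn_primitive_params(demos, dim, iters=20, pop=16, elite=4, seed=0):
    """Cross-entropy method as a stand-in black-box optimizer over the
    primitive parameters (the paper's exact optimizer may differ)."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        candidates = rng.normal(mu, sigma, size=(pop, dim))
        scores = np.array([demo_reward(execute_primitive(p), demos)
                           for p in candidates])
        elites = candidates[np.argsort(scores)[-elite:]]  # highest rewards
        mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3
    return mu

if __name__ == "__main__":
    target = np.array([0.5, -0.2, 0.8])      # hypothetical "expert" parameters
    demos = [execute_primitive(target)]      # synthetic demonstration
    print("learned parameters:", learn_primitive_params(demos, dim=3))
```

Under this reading, adapting to an unseen task would amount to warm-starting `mu` and `sigma` from parameters learned on similar prior tasks, which is one plausible interpretation of "leveraging prior experiences"; the sub-hour learning and fifteen-minute adaptation figures are the paper's reported results, not properties of this sketch.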
