Paper Title
Reinforcement Learning for Solving Robotic Reaching Tasks in the Neurorobotics Platform
Paper Authors
Paper Abstract
In recent years, reinforcement learning (RL) has shown great potential for solving tasks in well-defined environments like games or robotics. This paper aims to solve the robotic reaching task in a simulation run on the Neurorobotics Platform (NRP). The target position is initialized randomly and the robot has 6 degrees of freedom. We compare the performance of various state-of-the-art model-free algorithms. At first, the agent is trained on ground truth data from the simulation to reach the target position in only one continuous movement. Later, the complexity of the task is increased by using image data as input from the simulation environment. Experimental results show that training efficiency and results can be improved with an appropriate dynamic training schedule function for curriculum learning.
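To illustrate the kind of setup the abstract describes, below is a minimal sketch of a reaching environment with a dynamic curriculum schedule. It is not the paper's NRP implementation: the toy forward kinematics, the dense distance-based reward, the ground-truth observation layout, and the function `curriculum_schedule` (which widens the target sampling radius as training progresses) are all assumptions made for illustration.

```python
import numpy as np

def curriculum_schedule(step, total_steps, r_min=0.05, r_max=0.6, power=2.0):
    """Hypothetical dynamic training schedule: the radius within which
    target positions are sampled grows with training progress (easy -> hard)."""
    progress = min(step / total_steps, 1.0)
    return r_min + (r_max - r_min) * progress ** power

class ReachingEnv:
    """Toy 6-DoF reaching environment with heavily simplified kinematics;
    a stand-in for the NRP simulation described in the abstract."""

    def __init__(self, total_steps=100_000, max_episode_steps=100):
        self.total_steps = total_steps
        self.max_episode_steps = max_episode_steps
        self.global_step = 0
        self.joint_angles = np.zeros(6)
        self.target = np.zeros(3)
        self.t = 0

    def _end_effector(self):
        # Placeholder forward kinematics: map joint angles to a 3-D point.
        return 0.1 * np.array([
            np.cos(self.joint_angles[:3]).sum(),
            np.sin(self.joint_angles[:3]).sum(),
            np.cos(self.joint_angles[3:]).sum(),
        ])

    def _observe(self):
        # Ground-truth observation: joint angles plus target position.
        return np.concatenate([self.joint_angles, self.target])

    def reset(self):
        self.t = 0
        self.joint_angles = np.zeros(6)
        # Sample a random target inside the current curriculum radius.
        radius = curriculum_schedule(self.global_step, self.total_steps)
        direction = np.random.randn(3)
        direction /= np.linalg.norm(direction)
        self.target = self._end_effector() + direction * np.random.uniform(0.0, radius)
        return self._observe()

    def step(self, action):
        self.t += 1
        self.global_step += 1
        self.joint_angles = np.clip(
            self.joint_angles + 0.05 * np.asarray(action), -np.pi, np.pi
        )
        dist = np.linalg.norm(self._end_effector() - self.target)
        reward = -dist  # dense reward: negative distance to the target
        done = dist < 0.02 or self.t >= self.max_episode_steps
        return self._observe(), reward, done, {"distance": dist}

if __name__ == "__main__":
    # Random-action rollout to exercise the environment interface.
    env = ReachingEnv()
    obs = env.reset()
    for _ in range(200):
        obs, reward, done, info = env.step(np.random.uniform(-1, 1, size=6))
        if done:
            obs = env.reset()
```

An off-the-shelf model-free agent (e.g. from a library such as Stable-Baselines3) could be trained against an interface like this; swapping `_observe` for rendered camera frames would correspond to the image-based variant mentioned in the abstract.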