Title

Karolos: An Open-Source Reinforcement Learning Framework for Robot-Task Environments

Authors

Christian Bitter, Timo Thun, Tobias Meisen

Abstract

In reinforcement learning (RL) research, simulations enable benchmarks between algorithms, as well as prototyping and hyper-parameter tuning of agents. In order to promote RL both in research and real-world applications, frameworks are required which are, on the one hand, efficient in terms of running experiments as fast as possible. On the other hand, they must be flexible enough to allow the integration of newly developed optimization techniques, e.g. new RL algorithms, which are continuously put forward by an active research community. In this paper, we introduce Karolos, an RL framework developed for robotic applications, with a particular focus on transfer scenarios with varying robot-task combinations reflected in a modular environment architecture. In addition, we provide implementations of state-of-the-art RL algorithms along with common learning-facilitating enhancements, as well as an architecture to parallelize environments across multiple processes to significantly speed up experiments. The code is open source and published on GitHub with the aim of promoting research on RL applications in robotics.
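To make the two architectural ideas in the abstract concrete (composing a robot and a task into one environment, and running several such environments in parallel processes), below is a minimal illustrative sketch. All names here (RobotTaskEnv, env_worker, the robot/task identifiers, and the pipe-based command protocol) are hypothetical assumptions for illustration only and are not taken from the Karolos codebase.

# Hypothetical sketch: modular robot-task composition plus process-based
# environment parallelization, as described in the abstract. Not the Karolos API.
import multiprocessing as mp


class RobotTaskEnv:
    """Compose an arbitrary robot model with an arbitrary task definition."""

    def __init__(self, robot, task):
        self.robot = robot  # e.g. a manipulator identifier (hypothetical)
        self.task = task    # e.g. a reach or pick-and-place task (hypothetical)

    def reset(self):
        # A real environment would reset the simulation and return an observation.
        return {"robot": self.robot, "task": self.task, "step": 0}

    def step(self, action):
        # A real environment would apply the action and compute reward/done.
        observation, reward, done = {"action": action}, 0.0, False
        return observation, reward, done


def env_worker(conn, robot, task):
    """Run one environment in its own process, serving commands over a pipe."""
    env = RobotTaskEnv(robot, task)
    while True:
        cmd, data = conn.recv()
        if cmd == "reset":
            conn.send(env.reset())
        elif cmd == "step":
            conn.send(env.step(data))
        elif cmd == "close":
            conn.close()
            break


if __name__ == "__main__":
    # Spawn several robot-task combinations, each in its own process.
    combos = [("panda", "reach"), ("panda", "pick_place"), ("ur5", "reach")]
    parents, procs = [], []
    for robot, task in combos:
        parent, child = mp.Pipe()
        p = mp.Process(target=env_worker, args=(child, robot, task))
        p.start()
        parents.append(parent)
        procs.append(p)

    # Broadcast a reset and collect observations from all workers.
    for parent in parents:
        parent.send(("reset", None))
    observations = [parent.recv() for parent in parents]
    print(observations)

    for parent, p in zip(parents, procs):
        parent.send(("close", None))
        p.join()

The design choice sketched here, with one worker process per environment communicating over pipes, is one common way to speed up experience collection; the paper's actual parallelization architecture should be consulted in the repository for the real interface.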
