Paper Title
Semi-supervised reward learning for offline reinforcement learning
Paper Authors
Paper Abstract
In offline reinforcement learning (RL), agents are trained using a logged dataset. It appears to be the most natural route to tackle real-life applications because, in domains such as healthcare and robotics, interactions with the environment are either expensive or unethical. Training agents usually requires reward functions, but unfortunately, rewards are seldom available in practice and their engineering is challenging and laborious. To overcome this, we investigate reward learning under the constraint of minimizing human reward annotations. We consider two types of supervision: timestep annotations and demonstrations. We propose semi-supervised learning algorithms that learn from limited annotations and incorporate unlabelled data. In our experiments with a simulated robotic arm, we greatly improve upon behavioural cloning and closely approach the performance achieved with ground-truth rewards. We further investigate the relationship between the quality of the reward model and the final policies. We notice, for example, that the reward models do not need to be perfect to result in useful policies.
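The abstract describes a two-step pipeline: learn a reward model from a small amount of human supervision plus unlabelled logged data, then use the predicted rewards to train an offline RL agent. The sketch below shows one plausible instantiation of the timestep-annotation setting using simple pseudo-labelling; the RewardModel architecture, the train_reward_model and relabel_dataset helpers, and all hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
# A minimal, self-contained sketch of semi-supervised reward learning with
# pseudo-labelling, written with PyTorch. Everything here is an assumption
# for illustration, not the method reported in the paper.
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    """Predicts a scalar reward for an (observation, action) pair."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)


def train_reward_model(model, labelled, unlabelled, epochs=100, lr=1e-3,
                       unlabelled_weight=0.5):
    """Fit on a small annotated set, then add pseudo-labels for unlabelled data.

    `labelled` is a tuple (obs, act, reward); `unlabelled` is (obs, act).
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    obs_l, act_l, rew_l = labelled
    obs_u, act_u = unlabelled

    # Stage 1: supervised regression on the annotated timesteps only.
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(obs_l, act_l), rew_l)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: pseudo-label the unlabelled transitions with the stage-1 model
    # and keep training on the union, down-weighting the pseudo-labelled term.
    with torch.no_grad():
        pseudo_rew = model(obs_u, act_u)
    for _ in range(epochs):
        loss = (nn.functional.mse_loss(model(obs_l, act_l), rew_l)
                + unlabelled_weight
                * nn.functional.mse_loss(model(obs_u, act_u), pseudo_rew))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model


def relabel_dataset(model, obs, act):
    """Annotate the full offline dataset with predicted rewards so that any
    off-the-shelf offline RL algorithm can be trained on it."""
    with torch.no_grad():
        return model(obs, act)
```

Any standard offline RL algorithm could then be trained on the relabelled dataset; in this sketch, the unlabelled_weight term is the knob that trades off trust in the small annotated set against coverage provided by the unlabelled data.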