Paper Title

Transient Non-Stationarity and Generalisation in Deep Reinforcement Learning

Authors

Maximilian Igl, Gregory Farquhar, Jelena Luketina, Wendelin Boehmer, Shimon Whiteson

Abstract

Non-stationarity can arise in Reinforcement Learning (RL) even in stationary environments. For example, most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Due to the transience of this non-stationarity, it is often not explicitly addressed in deep RL and a single neural network is continually updated. However, we find evidence that neural networks exhibit a memory effect where these transient non-stationarities can permanently impact the latent representation and adversely affect generalisation performance. Consequently, to improve generalisation of deep RL agents, we propose Iterated Relearning (ITER). ITER augments standard RL training by repeated knowledge transfer of the current policy into a freshly initialised network, which thereby experiences less non-stationarity during training. Experimentally, we show that ITER improves performance on the challenging generalisation benchmarks ProcGen and Multiroom.
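The abstract describes ITER as repeatedly transferring the current policy into a freshly initialised network. The snippet below is a minimal sketch of one such transfer step via policy distillation, assuming a discrete-action PyTorch policy; the names (PolicyNet, distill_into_fresh_network), architecture, and hyperparameters are illustrative assumptions and are not taken from the paper's implementation.

```python
# Minimal sketch of one "iterated relearning" transfer step: distil the current
# policy into a freshly initialised network. Hypothetical names and settings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Small MLP policy producing action logits (hypothetical architecture)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(obs))


def distill_into_fresh_network(teacher: PolicyNet, observations: torch.Tensor,
                               obs_dim: int, n_actions: int,
                               steps: int = 1000, lr: float = 3e-4) -> PolicyNet:
    """Transfer the current (teacher) policy into a freshly initialised student
    by minimising the KL divergence between their action distributions on
    recently collected observations."""
    student = PolicyNet(obs_dim, n_actions)  # fresh initialisation
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randint(0, observations.shape[0], (256,))
        batch = observations[idx]
        with torch.no_grad():
            teacher_logp = F.log_softmax(teacher(batch), dim=-1)
        student_logp = F.log_softmax(student(batch), dim=-1)
        # KL(teacher || student), averaged over the batch.
        loss = F.kl_div(student_logp, teacher_logp,
                        log_target=True, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student


if __name__ == "__main__":
    obs_dim, n_actions = 8, 4
    teacher = PolicyNet(obs_dim, n_actions)
    # Stand-in for observations gathered by the behaviour policy.
    obs_buffer = torch.randn(10_000, obs_dim)
    # The student then replaces the teacher and standard RL training continues
    # from it; repeating this periodically is the iterated part (not shown).
    student = distill_into_fresh_network(teacher, obs_buffer, obs_dim, n_actions)
```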
