Paper Title

Detecting Rewards Deterioration in Episodic Reinforcement Learning

Paper Authors

Ido Greenberg, Shie Mannor

Paper Abstract

In many RL applications, once training ends, it is vital to detect any deterioration in the agent's performance as soon as possible. Furthermore, it often has to be done without modifying the policy and under minimal assumptions regarding the environment. In this paper, we address this problem by focusing directly on the rewards and testing for degradation. We consider an episodic framework, where the rewards within each episode are not independent, nor identically distributed, nor Markov. We present this problem as a multivariate mean-shift detection problem with possibly partial observations. We define the mean-shift in a way corresponding to deterioration of a temporal signal (such as the rewards), and derive a test for this problem with optimal statistical power. Empirically, on deteriorated rewards in control problems (generated using various environment modifications), the test is demonstrated to be more powerful than standard tests - often by orders of magnitude. We also suggest a novel Bootstrap mechanism for False Alarm Rate control (BFAR), applicable to episodic (non-i.i.d.) signals and allowing our test to run sequentially in an online manner. Our method does not rely on a learned model of the environment, is entirely external to the agent, and in fact can be applied to detect changes or drifts in any episodic signal.
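
The abstract does not give the exact form of the test statistic or of the BFAR procedure. The sketch below only illustrates the general idea described above: a covariance-weighted mean-shift test over per-timestep episodic rewards, with an alarm threshold calibrated by resampling whole episodes from a reference period. The function names, the specific weighting choice (w proportional to the inverse reference covariance applied to the all-ones vector), and all parameters are hypothetical placeholders, not the paper's actual derivation.

```python
import numpy as np

def degradation_statistic(episodes, mu_ref, cov_ref):
    """Weighted per-timestep mean-shift statistic for a batch of episodes.

    episodes : (n_episodes, T) array of per-timestep rewards.
    mu_ref   : (T,) reference mean reward per timestep (from a trusted period).
    cov_ref  : (T, T) reference covariance of rewards across timesteps.

    The weights w = cov_ref^{-1} @ 1 are an illustrative choice that
    down-weights noisy, strongly correlated timesteps; a more negative
    statistic indicates stronger evidence of reward deterioration.
    """
    w = np.linalg.solve(cov_ref, np.ones(cov_ref.shape[0]))
    shift = episodes.mean(axis=0) - mu_ref          # per-timestep mean shift
    return float(w @ shift)

def bootstrap_alarm_threshold(reference_episodes, mu_ref, cov_ref,
                              n_test_episodes, alpha=0.01, n_boot=2000, seed=0):
    """Calibrate an alarm threshold by resampling whole reference episodes,
    so within-episode dependence is preserved (episodes, not individual
    rewards, are treated as the i.i.d. units)."""
    rng = np.random.default_rng(seed)
    n_ref = reference_episodes.shape[0]
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_ref, size=n_test_episodes)
        stats[b] = degradation_statistic(reference_episodes[idx], mu_ref, cov_ref)
    return np.quantile(stats, alpha)   # alarm if the statistic falls below this

# Toy usage: a reference period vs. a batch with a simulated downward shift.
rng = np.random.default_rng(42)
T, n_ref = 20, 500
ref = rng.normal(1.0, 0.5, size=(n_ref, T))
mu_ref, cov_ref = ref.mean(axis=0), np.cov(ref, rowvar=False)

threshold = bootstrap_alarm_threshold(ref, mu_ref, cov_ref, n_test_episodes=30)
degraded = rng.normal(0.8, 0.5, size=(30, T))      # mean shifted down by 0.2
s = degradation_statistic(degraded, mu_ref, cov_ref)
print(f"statistic={s:.3f}  threshold={threshold:.3f}  alarm={s < threshold}")
```

The design point this sketch mirrors from the abstract is that resampling is done at the episode level rather than per reward, so the calibrated threshold respects the non-i.i.d. structure of rewards within an episode while keeping the false alarm rate under control.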
