Paper Title
Approximate discounting-free policy evaluation from transient and recurrent states
Paper Authors
Paper Abstract
In order to distinguish policies that prescribe good actions from those that prescribe bad actions in transient states, we need to evaluate the so-called bias of a policy from transient states. However, we observe that most (if not all) works in approximate discounting-free policy evaluation thus far have been developed for estimating the bias solely from recurrent states. We therefore propose a system of approximators for the bias (specifically, its relative value) from both transient and recurrent states. Its key ingredient is a seminorm LSTD (least-squares temporal difference), for which we derive the minimizer expression that enables approximation by sampling, as required in model-free reinforcement learning. This seminorm LSTD also facilitates the formulation of a general unifying procedure for LSTD-based policy-value approximators. Experimental results validate the effectiveness of our proposed method.
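To make the LSTD machinery in the abstract concrete, below is a minimal sketch of the standard average-reward (differential) LSTD estimator that such bias approximators build on: it solves the sampled linear system A w = b, with A = Σ_t φ(s_t)(φ(s_t) − φ(s_{t+1}))ᵀ and b = Σ_t φ(s_t)(r_t − r̄), where r̄ estimates the gain (average reward). The function name `differential_lstd`, the feature map `phi`, and the ridge term `eps` are illustrative assumptions; this is the plain baseline from recurrent-state data, not the paper's seminorm variant.

```python
import numpy as np

def differential_lstd(transitions, phi, n_features, eps=1e-8):
    """Sketch: estimate bias (differential value) weights via standard LSTD.

    transitions: list of (s, r, s_next) tuples sampled from one policy.
    phi: feature map, state -> np.ndarray of shape (n_features,).
    Solves A w = b with
        A = sum_t phi(s_t) (phi(s_t) - phi(s_{t+1}))^T,
        b = sum_t phi(s_t) (r_t - r_bar),
    where r_bar is a sample estimate of the gain (average reward).
    """
    r_bar = np.mean([r for (_, r, _) in transitions])  # gain estimate

    A = eps * np.eye(n_features)  # small ridge term for invertibility
    b = np.zeros(n_features)
    for s, r, s_next in transitions:
        f, f_next = phi(s), phi(s_next)
        A += np.outer(f, f - f_next)
        b += f * (r - r_bar)
    return np.linalg.solve(A, b)  # weights of the linear bias approximator
```

One standard motivation for working with relative values, as the abstract does, is that differential values are determined only up to an additive constant; the paper's seminorm formulation is presented as the ingredient that extends this kind of estimator to transient states.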