Paper Title

Online learning in MDPs with linear function approximation and bandit feedback

Paper Authors

Gergely Neu, Julia Olkhovskaya

Paper Abstract

We consider an online learning problem where the learner interacts with a Markov decision process in a sequence of episodes, where the reward function is allowed to change between episodes in an adversarial manner and the learner only gets to observe the rewards associated with its actions. We allow the state space to be arbitrarily large, but we assume that all action-value functions can be represented as linear functions in terms of a known low-dimensional feature map, and that the learner has access to a simulator of the environment that allows generating trajectories from the true MDP dynamics. Our main contribution is developing a computationally efficient algorithm that we call MDP-LinExp3, and proving that its regret is bounded by $\widetilde{\mathcal{O}}\big(H^2 T^{2/3} (dK)^{1/3}\big)$, where $T$ is the number of episodes, $H$ is the number of steps in each episode, $K$ is the number of actions, and $d$ is the dimension of the feature map. We also show that the regret can be improved to $\widetilde{\mathcal{O}}\big(H^2 \sqrt{TdK}\big)$ under much stronger assumptions on the MDP dynamics. To our knowledge, MDP-LinExp3 is the first provably efficient algorithm for this problem setting.
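
For concreteness, the setting described in the abstract can be summarized as follows; the symbols below ($\varphi$, $\theta^{\pi}_{t,h}$, $Q^{\pi}_{t,h}$, $V^{\pi}_{t}$, $R_T$) are standard shorthand chosen here for illustration rather than the paper's exact notation. The linear realizability assumption says that, in every episode $t$ and step $h$, the action-value function of any policy $\pi$ is linear in a known feature map $\varphi: \mathcal{X} \times \mathcal{A} \to \mathbb{R}^d$:

$$Q^{\pi}_{t,h}(x,a) = \langle \varphi(x,a),\, \theta^{\pi}_{t,h} \rangle \quad \text{for some } \theta^{\pi}_{t,h} \in \mathbb{R}^d,$$

and the learner's goal is to control the regret against the best fixed policy in hindsight,

$$R_T = \max_{\pi}\, \mathbb{E}\!\left[\sum_{t=1}^{T} \big( V^{\pi}_{t} - V^{\pi_t}_{t} \big)\right],$$

which, under the assumptions above, MDP-LinExp3 bounds by $\widetilde{\mathcal{O}}\big(H^2 T^{2/3} (dK)^{1/3}\big)$.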
