Paper Title

Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts

Paper Authors

Gilwoo Lee, Brian Hou, Sanjiban Choudhury, Siddhartha S. Srinivasa

Paper Abstract

Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people. We formulate this as Bayesian Reinforcement Learning over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces. Our proposal builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve. We first obtain an ensemble of experts, one for each latent MDP, and fuse their advice to compute a baseline policy. Next, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty. Our algorithm, Bayesian Residual Policy Optimization (BRPO), imports the scalability of policy gradient methods and task-specific expert skills. BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.
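To make the composition described in the abstract concrete, here is a minimal Python sketch of the residual structure: a belief-weighted fusion of per-latent-MDP expert actions, plus a learned residual that conditions on both state and belief. All names (`fuse_expert_advice`, `brpo_action`, `residual_policy`) and the convex-combination fusion rule are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def fuse_expert_advice(belief, expert_actions):
    """Belief-weighted fusion of per-latent-MDP expert actions.

    A simple convex combination, used here only for illustration;
    the paper's actual fusion rule may differ.
    """
    belief = np.asarray(belief)                   # posterior over latent MDPs
    expert_actions = np.asarray(expert_actions)   # one action per latent MDP
    return belief @ expert_actions                # ensemble (baseline) action

def brpo_action(state, belief, experts, residual_policy):
    """Compose the ensemble baseline with a learned Bayesian residual."""
    expert_actions = [expert(state) for expert in experts]
    baseline = fuse_expert_advice(belief, expert_actions)
    # The residual conditions on both state and belief, so it can correct
    # the ensemble's recommendation and take uncertainty-reducing actions.
    return baseline + residual_policy(state, belief)

# Toy usage: two latent MDPs whose experts disagree, and an untrained
# (zero) residual, so the output is just the belief-weighted baseline.
experts = [lambda s: np.array([1.0, 0.0]), lambda s: np.array([-1.0, 0.0])]
residual_policy = lambda s, b: np.zeros(2)
print(brpo_action(state=None, belief=[0.7, 0.3], experts=experts,
                  residual_policy=residual_policy))   # -> [0.4, 0.0]
```

With a zero residual, BRPO reduces to the expert ensemble; training the residual with a policy gradient method is what lets it improve on the ensemble and learn information-gathering behavior.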
