Paper Title
Q-Pensieve: Boosting Sample Efficiency of Multi-Objective RL Through Memory Sharing of Q-Snapshots
Paper Authors
Paper Abstract
Many real-world continuous control problems involve weighing the pros and cons of multiple objectives, and multi-objective reinforcement learning (MORL) serves as a generic framework for learning control policies under different preferences over the objectives. However, existing MORL methods either rely on multiple passes of explicit search to find the Pareto front, and are therefore not sample-efficient, or utilize a shared policy network for only coarse knowledge sharing among policies. To boost the sample efficiency of MORL, we propose Q-Pensieve, a policy improvement scheme that stores a collection of Q-snapshots to jointly determine the policy update direction, thereby enabling data sharing at the policy level. We show that Q-Pensieve can be naturally integrated with soft policy iteration with a convergence guarantee. To substantiate this concept, we propose the technique of a Q replay buffer, which stores the learned Q-networks from past iterations, and arrive at a practical actor-critic implementation. Through extensive experiments and an ablation study, we demonstrate that with far fewer samples, the proposed algorithm can outperform the benchmark MORL methods on a variety of MORL benchmark tasks.
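
Since the abstract's central mechanism is a Q replay buffer of Q-snapshots, a minimal sketch may help make the idea concrete. The PyTorch snippet below is a hypothetical illustration rather than the authors' implementation: the Q-network signature `q(obs, action, w)` (vector-valued, preference-conditioned), the fixed buffer capacity, and the max-over-snapshots scalarization rule are all assumptions beyond what the abstract states.

```python
# Hypothetical sketch of a "Q replay buffer" holding Q-snapshots from past iterations.
import copy
from collections import deque

import torch
import torch.nn as nn


class QSnapshotBuffer:
    """Keeps frozen copies ("snapshots") of past Q-networks.

    Illustrative only: the capacity, the freezing step, and the
    max-scalarization rule below are assumptions, not the paper's exact design.
    """

    def __init__(self, capacity: int = 5):
        self.snapshots = deque(maxlen=capacity)  # frozen nn.Module copies

    def push(self, q_network: nn.Module) -> None:
        # Store a detached copy so later training does not mutate the snapshot.
        snapshot = copy.deepcopy(q_network)
        for p in snapshot.parameters():
            p.requires_grad_(False)
        self.snapshots.append(snapshot)

    def boosted_value(self, current_q: nn.Module, obs: torch.Tensor,
                      action: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        # Each Q-network is assumed to map (obs, action, w) to a vector of
        # per-objective values; scalarize with the preference w and take the
        # maximum over the current network and all stored snapshots, so that
        # past iterations' knowledge can shape the current update direction.
        candidates = [current_q(obs, action, w)]
        candidates += [q(obs, action, w) for q in self.snapshots]
        scalarized = torch.stack([(w * q).sum(dim=-1) for q in candidates], dim=0)
        return scalarized.max(dim=0).values
```

In an actor-critic loop, one plausible usage is to call `boosted_value` in the policy loss where a single current critic would otherwise be used, and to call `push` every few iterations; both choices are assumptions for illustration, not details given in the abstract.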