Paper Title
Welfare and Fairness in Multi-objective Reinforcement Learning
Paper Authors
Paper Abstract
We study fair multi-objective reinforcement learning, in which an agent must learn a policy that simultaneously achieves high reward on multiple dimensions of a vector-valued reward. Motivated by the fair resource allocation literature, we model this as an expected welfare maximization problem for some nonlinear fair welfare function of the vector of long-term cumulative rewards. One canonical example of such a function is the Nash Social Welfare, or geometric mean, whose log transform is also known as the Proportional Fairness objective. We show that even approximate optimization of the expected Nash Social Welfare is computationally intractable, even in the tabular case. Nevertheless, we provide a novel adaptation of Q-learning that combines nonlinear scalarized learning updates and non-stationary action selection to learn effective policies for optimizing nonlinear welfare functions. We show that our algorithm is provably convergent, and we demonstrate experimentally that our approach outperforms techniques based on linear scalarization, mixtures of optimal linear scalarizations, or stationary action selection for the Nash Social Welfare objective.
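For reference, a minimal sketch of the objective under standard definitions (the symbols r and d below are our notation, not necessarily the paper's): for a cumulative reward vector r = (r_1, ..., r_d) over d objectives, the Nash Social Welfare is the geometric mean

\[ W_{\mathrm{NSW}}(r) \;=\; \Big(\prod_{i=1}^{d} r_i\Big)^{1/d}, \qquad \log W_{\mathrm{NSW}}(r) \;=\; \frac{1}{d}\sum_{i=1}^{d} \log r_i, \]

where the log transform (up to the constant factor 1/d) is the Proportional Fairness objective; since log is monotone increasing, maximizing either form yields the same optimal policy. Unlike a linear scalarization \(\sum_i w_i r_i\), the geometric mean collapses to zero whenever any single objective receives zero reward, which is what makes it a fairness-promoting welfare function.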