Dissertation Title
Processing Network Controls via Deep Reinforcement Learning
Dissertation Author
Dissertation Abstract
Novel advanced policy gradient (APG) algorithms, such as proximal policy optimization (PPO), trust region policy optimization, and their variations, have become the dominant reinforcement learning (RL) algorithms because of their ease of implementation and good practical performance. This dissertation is concerned with the theoretical justification and practical application of APG algorithms for solving processing network control optimization problems. Processing network control problems are typically formulated as Markov decision process (MDP) or semi-Markov decision process (SMDP) problems that have several features unconventional for RL: infinite state spaces, unbounded costs, and long-run average cost objectives. Policy improvement bounds play a crucial role in the theoretical justification of APG algorithms. In this dissertation we refine existing bounds for MDPs with finite state spaces and prove novel policy improvement bounds for the classes of MDPs and SMDPs used to model processing network operations. We consider two examples of processing network control problems and customize the PPO algorithm to solve them. First, we consider the control of parallel-server systems and multiclass queueing networks. Second, we consider the driver repositioning problem in a ride-hailing service system. For both examples, the PPO algorithm with auxiliary modifications consistently generates control policies that outperform state-of-the-art heuristics.
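For orientation, one widely used policy improvement bound for discounted MDPs with finite state spaces, of the kind that the refinements in this dissertation build on and extend to the average-cost and semi-Markov settings, comes from the trust region policy optimization literature. The notation below ($\eta$ for the expected discounted return, $A_{\pi}$ for the advantage function, $D_{TV}$ for total-variation distance) is an illustrative standard form rather than the exact statement proved in the thesis:
$$
\eta(\tilde{\pi}) \;\ge\; L_{\pi}(\tilde{\pi}) \;-\; \frac{4\varepsilon\gamma}{(1-\gamma)^{2}}\,\alpha^{2},
\qquad
\alpha = \max_{s} D_{TV}\!\bigl(\pi(\cdot\mid s)\,\|\,\tilde{\pi}(\cdot\mid s)\bigr),
\quad
\varepsilon = \max_{s,a}\,\bigl|A_{\pi}(s,a)\bigr|,
$$
where $L_{\pi}(\tilde{\pi}) = \eta(\pi) + \sum_{s}\rho_{\pi}(s)\sum_{a}\tilde{\pi}(a\mid s)\,A_{\pi}(s,a)$ is the surrogate objective computed with the state visitation frequencies $\rho_{\pi}$ of the current policy $\pi$. Bounds of this type guarantee that improving the surrogate while keeping the new policy close to the old one cannot degrade performance, which is the property that justifies PPO- and TRPO-style updates.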