Paper Title
Vehicular Cooperative Perception Through Action Branching and Federated Reinforcement Learning
Paper Authors
Paper Abstract
Cooperative perception plays a vital role in extending a vehicle's sensing range beyond its line-of-sight. However, exchanging raw sensory data under limited communication resources is infeasible. Toward enabling efficient cooperative perception, vehicles need to address the following fundamental questions: what sensory data needs to be shared, at which resolution, and with which vehicles? To answer these questions, in this paper, a novel framework is proposed to allow reinforcement learning (RL)-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs) by utilizing a quadtree-based point cloud compression mechanism. Furthermore, a federated RL approach is introduced to speed up the training process across vehicles. Simulation results show the ability of the RL agents to efficiently learn the vehicles' association, RB allocation, and message content selection while maximizing the vehicles' satisfaction in terms of the received sensory information. The results also show that federated RL improves the training process, where better policies can be achieved within the same amount of time compared to the non-federated approach.
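To make the quadtree idea referenced in the abstract concrete, below is a minimal, hypothetical sketch (not the paper's implementation): it encodes a square bird's-eye-view occupancy grid derived from a point cloud, where uniform blocks collapse into single leaves and the assumed `min_size` parameter acts as the resolution knob that a sender could tune when selecting CPM content. The grid layout, function name, and leaf format are illustrative assumptions.

```python
import numpy as np

def quadtree_encode(grid, x=0, y=0, size=None, min_size=1):
    """Recursively encode a square occupancy grid as quadtree leaves.

    Blocks that are entirely occupied or entirely empty become single
    leaves; mixed blocks are split into four quadrants until `min_size`
    is reached. Larger `min_size` -> coarser resolution, fewer leaves.
    """
    if size is None:
        size = grid.shape[0]  # assume a square, power-of-two grid
    block = grid[y:y + size, x:x + size]
    if size <= min_size or block.all() or not block.any():
        # Leaf node: (x, y, size, occupied?)
        return [(x, y, size, bool(block.any()))]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quadtree_encode(grid, x + dx, y + dy, half, min_size)
    return leaves

# Toy usage: an 8x8 occupancy grid with one occupied corner.
grid = np.zeros((8, 8), dtype=bool)
grid[0:2, 0:2] = True
coarse = quadtree_encode(grid, min_size=4)  # 4 leaves (low resolution)
fine = quadtree_encode(grid, min_size=1)    # 7 leaves (high resolution)
```

In this sketch, choosing `min_size` per receiver is one simple way to trade message size against resolution, which mirrors the "at which resolution?" question the framework's RL agents are described as answering.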