Paper Title
Multi-Agent Deep Reinforcement Learning Based Trajectory Planning for Multi-UAV Assisted Mobile Edge Computing
Paper Authors
Paper Abstract
An unmanned aerial vehicle (UAV)-aided mobile edge computing (MEC) framework is proposed, in which several UAVs with different trajectories fly over the target area and serve the user equipment (UEs) on the ground. We aim to jointly optimize the geographical fairness among all the UEs, the fairness of each UAV's UE-load, and the overall energy consumption of the UEs. The resulting optimization problem involves both integer and continuous variables and is challenging to solve. To address it, a multi-agent deep reinforcement learning based trajectory control algorithm is proposed to manage the trajectory of each UAV independently, where the popular Multi-Agent Deep Deterministic Policy Gradient (MADDPG) method is applied. Given the UAVs' trajectories, a low-complexity approach is introduced for optimizing the offloading decisions of the UEs. We show that our proposed solution considerably outperforms other traditional algorithms in terms of the fairness of serving UEs, the fairness of the UE-load at each UAV, and the energy consumption of all the UEs.
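The abstract does not give implementation details, but the MADDPG-based trajectory control it describes follows the standard centralized-training, decentralized-execution pattern: each UAV agent keeps its own actor that maps a local observation to a continuous trajectory action, while a centralized critic evaluates the joint observations and actions during training. Below is a minimal PyTorch sketch of that structure; the agent count, observation/action dimensions, and network sizes (N_UAVS, OBS_DIM, ACT_DIM) are illustrative assumptions rather than values from the paper.

# Minimal sketch (not the authors' code) of an MADDPG-style setup for
# multi-UAV trajectory control: decentralized actors, centralized critics.
import torch
import torch.nn as nn

N_UAVS = 3    # number of UAV agents (assumed)
OBS_DIM = 10  # per-UAV observation, e.g. own position + local UE statistics (assumed)
ACT_DIM = 2   # continuous trajectory action, e.g. a 2-D velocity command (assumed)

class Actor(nn.Module):
    """Decentralized policy: local observation -> bounded continuous action."""
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, act_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized Q-function: all observations + all actions -> scalar value."""
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int):
        super().__init__()
        joint_dim = n_agents * (obs_dim + act_dim)
        self.net = nn.Sequential(
            nn.Linear(joint_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, all_obs: torch.Tensor, all_acts: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([all_obs, all_acts], dim=-1))

# One actor per UAV; MADDPG typically also keeps one centralized critic per agent.
actors = [Actor(OBS_DIM, ACT_DIM) for _ in range(N_UAVS)]
critics = [CentralCritic(N_UAVS, OBS_DIM, ACT_DIM) for _ in range(N_UAVS)]

# Forward pass with dummy observations: each UAV selects its trajectory action
# from its own observation only, then a critic scores the joint behaviour.
obs = torch.randn(N_UAVS, OBS_DIM)
acts = torch.stack([actor(o) for actor, o in zip(actors, obs)])
q_value = critics[0](obs.flatten().unsqueeze(0), acts.flatten().unsqueeze(0))
print("joint Q estimate:", q_value.item())

Because each actor consumes only its own observation at execution time, every UAV can plan its trajectory independently once training is complete, which matches the abstract's claim of managing each UAV's trajectory independently.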