Paper Title


Incorporating Distributed DRL into Storage Resource Optimization of Space-Air-Ground Integrated Wireless Communication Network

Authors

Chao Wang, Lei Liu, Chunxiao Jiang, Shangguang Wang, Peiying Zhang, Shigen Shen

Abstract


The space-air-ground integrated network (SAGIN) is a new wireless network paradigm. Effective management of SAGIN resources is a prerequisite for high-reliability communication. However, the storage capacity of the space-air network segments is extremely limited, and the air servers do not have sufficient storage resources to centrally accommodate the information uploaded by each edge server. This raises the problem of how to coordinate the storage resources of SAGIN. This paper proposes a SAGIN storage resource management algorithm based on distributed deep reinforcement learning (DRL). The resource management process is modeled as a Markov decision process. In each edge physical domain, we extract the network attributes represented by storage resources so that the agent can build a training environment, thereby realizing distributed training. In addition, we propose a SAGIN resource management framework based on distributed DRL. Simulation results show that the agents achieve a desirable training effect. Compared with other algorithms, the resource allocation revenue and user request acceptance rate of the proposed algorithm are increased by about 18.15% and 8.35%, respectively. Moreover, the proposed algorithm is flexible in handling changes in resource conditions.
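To make the abstract's setup concrete, the following is a minimal, hypothetical sketch of the kind of Markov decision process an edge-domain agent might train against: the state is the remaining storage of each server, an action places the current user request on one server, and the reward reflects allocation revenue versus rejection. The environment shape, the tabular Q-learning stand-in for DRL, and the best-fit baseline are all illustrative assumptions, not the paper's actual model.

```python
import random

class StorageAllocationEnv:
    """Toy MDP for storage allocation in one edge domain (illustrative only).

    State:  tuple of remaining storage capacities per server.
    Action: index of the server chosen to host the current request.
    Reward: request size if it fits (revenue proxy), -1 if rejected.
    """

    def __init__(self, capacities, requests):
        self.init_capacities = list(capacities)
        self.requests = list(requests)
        self.reset()

    def reset(self):
        self.capacities = list(self.init_capacities)
        self.t = 0
        return tuple(self.capacities)

    def step(self, action):
        demand = self.requests[self.t]
        if self.capacities[action] >= demand:
            self.capacities[action] -= demand
            reward = float(demand)   # accepted: revenue proportional to storage sold
        else:
            reward = -1.0            # rejected: not enough space on chosen server
        self.t += 1
        done = self.t >= len(self.requests)
        return tuple(self.capacities), reward, done

def greedy_best_fit(env):
    """Baseline policy: place each request on the fullest server that still fits it."""
    state = env.reset()
    total, accepted, done = 0.0, 0, False
    while not done:
        demand = env.requests[env.t]
        feasible = [i for i, c in enumerate(state) if c >= demand]
        action = min(feasible, key=lambda i: state[i]) if feasible else 0
        state, reward, done = env.step(action)
        total += reward
        accepted += reward > 0
    return total, accepted

def train_tabular_q(env, episodes=200, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning as a tiny stand-in for the DRL agent in each domain."""
    rng = random.Random(seed)
    n = len(env.init_capacities)
    Q = {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            Q.setdefault(s, [0.0] * n)
            if rng.random() < eps:
                a = rng.randrange(n)               # explore
            else:
                a = max(range(n), key=lambda i: Q[s][i])  # exploit
            s2, r, done = env.step(a)
            Q.setdefault(s2, [0.0] * n)
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])  # TD update
            s = s2
    return Q

env = StorageAllocationEnv(capacities=[10, 10], requests=[4, 4, 4, 4, 4])
revenue, accepted = greedy_best_fit(env)
```

In the paper's distributed setting, one such agent would be trained per edge physical domain on its local resource state; the sketch above covers only a single domain.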
