Title

Deep Reinforcement Learning for Radio Resource Allocation in NOMA-based Remote State Estimation

Authors

Gaoyang Pang, Wanchun Liu, Yonghui Li, Branka Vucetic

Abstract

Remote state estimation, where many sensors send their measurements of distributed dynamic plants to a remote estimator over shared wireless resources, is essential for mission-critical applications of Industry 4.0. Most of the existing works on remote state estimation assumed orthogonal multiple access and the proposed dynamic radio resource allocation algorithms can only work for very small-scale settings. In this work, we consider a remote estimation system with non-orthogonal multiple access. We formulate a novel dynamic resource allocation problem for achieving the minimum overall long-term average estimation mean-square error. Both the estimation quality state and the channel quality state are taken into account for decision making at each time. The problem has a large hybrid discrete and continuous action space for joint channel assignment and power allocation. We propose a novel action-space compression method and develop an advanced deep reinforcement learning algorithm to solve the problem. Numerical results show that our algorithm solves the resource allocation problem effectively, presents much better scalability than the literature, and provides significant performance gain compared to some benchmarks.
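
The abstract does not detail the action-space compression method, so the following is only a minimal sketch of the underlying problem structure, under assumed toy parameters (N_SENSORS, N_CHANNELS, P_TOTAL) and an illustrative decoding rule of my own: it shows why joint channel assignment (discrete) and power allocation (continuous) form a large hybrid action space, and one simple way a policy's continuous output could be mapped to a feasible allocation. It is not the authors' algorithm.

```python
# Hypothetical illustration (not the paper's method): decode a continuous
# policy output into a (channel assignment, power allocation) pair for a
# NOMA-based remote estimation setting. All constants and the decoding
# rule are assumptions for illustration only.
import numpy as np

N_SENSORS = 6    # number of sensors sharing the wireless resources (assumed)
N_CHANNELS = 3   # number of NOMA channels (assumed)
P_TOTAL = 1.0    # transmit power budget per channel (assumed)

def decode_action(raw_action: np.ndarray):
    """Map a continuous vector from a DRL policy to a feasible allocation.

    raw_action has shape (N_SENSORS, N_CHANNELS + 1): the first N_CHANNELS
    entries per sensor score each channel; the last entry scores the
    sensor's power share on its chosen channel.
    """
    scores = raw_action[:, :N_CHANNELS]
    # Discrete part: each sensor is assigned the channel with the highest score.
    assignment = np.argmax(scores, axis=1)

    # Continuous part: positive power shares, normalised per channel so the
    # sensors multiplexed on one NOMA channel respect its power budget.
    raw_power = np.exp(raw_action[:, -1])
    power = np.zeros(N_SENSORS)
    for ch in range(N_CHANNELS):
        on_ch = assignment == ch
        if on_ch.any():
            power[on_ch] = P_TOTAL * raw_power[on_ch] / raw_power[on_ch].sum()
    return assignment, power

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.standard_normal((N_SENSORS, N_CHANNELS + 1))
    assignment, power = decode_action(raw)
    print("channel assignment:", assignment)
    print("power allocation:  ", np.round(power, 3))
    # The discrete part alone has N_CHANNELS ** N_SENSORS combinations,
    # which is why compressing the action space matters before learning.
    print("discrete combinations:", N_CHANNELS ** N_SENSORS)
```

Even in this toy setting the discrete part alone has 3^6 = 729 combinations, each paired with a continuous power vector, which illustrates the scalability issue the paper's action-space compression is meant to address.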
