Paper Title

Convergence of the Distributed SG Algorithm Under Cooperative Excitation Condition

Paper Authors

Die Gan, Zhixin Liu

Paper Abstract

In this paper, a distributed stochastic gradient (SG) algorithm is proposed in which the estimators aim to collectively estimate an unknown time-invariant parameter from a set of noisy measurements obtained by distributed sensors. The proposed distributed SG algorithm combines the consensus strategy on neighbors' estimates with the diffusion of regression vectors. A cooperative excitation condition is introduced, under which the convergence of the distributed SG algorithm can be established without relying on the independence and stationarity assumptions on the regression vectors that are commonly used in the existing literature. Furthermore, the convergence rate of the algorithm is established. Finally, we show by a simulation example that all sensors can cooperate to fulfill the estimation task even though no individual sensor can accomplish it alone.
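To make the combined consensus-and-local-update idea concrete, below is a minimal simulation sketch (not the paper's exact algorithm or excitation condition): each sensor performs a local SG step with a decreasing gain and then averages its estimate with its neighbors' through a doubly stochastic weight matrix. Each sensor excites only one coordinate of the unknown parameter, so no single sensor could identify the full parameter on its own, yet the network cooperatively does. The topology, weight matrix, gain sequence, and noise level here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                                  # parameter dimension
N = 3                                  # number of sensors
theta = np.array([1.0, -2.0, 0.5])     # unknown time-invariant parameter

# Doubly stochastic consensus weights over a fully connected 3-node graph
# (illustrative assumption, not taken from the paper).
A = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.zeros((N, n))                   # each row: one sensor's estimate
T = 20000
for k in range(1, T + 1):
    step = 1.0 / k                     # SG-type decreasing gain
    local = np.empty_like(x)
    for i in range(N):
        # Sensor i only excites coordinate i: individually NOT sufficiently
        # exciting, but the sensors are cooperatively exciting as a group.
        phi = np.zeros(n)
        phi[i] = 1.0
        y = phi @ theta + 0.1 * rng.standard_normal()   # noisy measurement
        local[i] = x[i] + step * phi * (y - phi @ x[i]) # local SG step
    x = A @ local                      # consensus step over neighbors

# All sensors' estimates approach theta despite rank-deficient local data.
print(np.max(np.abs(x - theta)))
```

Run alone, each sensor would only ever learn one coordinate of `theta`; the consensus step is what spreads the remaining information through the network, which is the qualitative behavior the abstract's simulation example illustrates.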
