Paper Title
A Decentralized Approach to Bayesian Learning
Paper Authors
Paper Abstract
Motivated by decentralized approaches to machine learning, we propose a collaborative Bayesian learning algorithm taking the form of decentralized Langevin dynamics in a non-convex setting. Our analysis shows that the initial KL divergence between the Markov chain and the target posterior distribution decays exponentially, while the error contribution to the overall KL divergence from the additive noise decreases in polynomial time. We further show that the polynomial term enjoys a speed-up with the number of agents, and we provide sufficient conditions on the time-varying step sizes to guarantee convergence to the desired distribution. The performance of the proposed algorithm is evaluated on a wide variety of machine learning tasks. The empirical results show that the performance of individual agents with locally available data is on par with the centralized setting, with a considerable improvement in the convergence rate.
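To make the update concrete, below is a minimal sketch of one commonly used form of decentralized Langevin dynamics, not the paper's exact construction: each agent mixes its neighbors' iterates through a doubly stochastic matrix `W`, takes a gradient step on its local negative log-posterior, and injects Gaussian noise scaled by the step size. The ring topology, the O(1/k) step-size schedule, and the Gaussian local potentials `grad_f` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 4, 2

# Doubly stochastic mixing matrix for a ring of 4 agents (assumed topology).
W = np.array([[0.5 , 0.25, 0.0 , 0.25],
              [0.25, 0.5 , 0.25, 0.0 ],
              [0.0 , 0.25, 0.5 , 0.25],
              [0.25, 0.0 , 0.25, 0.5 ]])

# Hypothetical local negative log-posterior gradients: each agent owns a
# quadratic potential centered at a different point (a stand-in for the
# locally available data mentioned in the abstract).
centers = rng.normal(size=(n_agents, dim))
def grad_f(i, theta_i):
    return theta_i - centers[i]

theta = rng.normal(size=(n_agents, dim))  # one Markov chain per agent
for k in range(1, 2001):
    alpha = 0.5 / k          # time-varying (decreasing) step size, assumed O(1/k)
    mixed = W @ theta        # consensus step: average with neighbors
    grads = np.stack([grad_f(i, theta[i]) for i in range(n_agents)])
    noise = rng.normal(size=theta.shape)
    # Langevin step: local gradient descent plus sqrt(2*alpha)-scaled noise.
    theta = mixed - alpha * grads + np.sqrt(2.0 * alpha) * noise

# For this toy model the agents' iterates should drift toward the mean of
# the local centers, the minimizer of the summed local potentials.
print(theta.mean(axis=0), centers.mean(axis=0))
```

In this sketch the consensus step spreads local information across the network, so each agent's chain can approximate the global posterior without ever seeing other agents' data, which is the collaborative behavior the abstract describes.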