Paper Title
RNN-based Online Learning: An Efficient First-Order Optimization Algorithm with a Convergence Guarantee
Paper Authors
Paper Abstract
We investigate online nonlinear regression with continually running recurrent neural networks (RNNs), i.e., RNN-based online learning. For this setting, we introduce an efficient first-order training algorithm that is theoretically guaranteed to converge to the optimal network parameters. Our algorithm is truly online in that it makes no assumptions about the learning environment in order to guarantee convergence. Through numerical simulations, we verify our theoretical results and demonstrate the significant performance improvements our algorithm achieves over state-of-the-art RNN training methods.
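To make the setting concrete, below is a minimal sketch of RNN-based online learning: a continually running RNN whose hidden state is never reset, trained one sample at a time with a plain first-order (SGD) update on the squared regression error. This is not the paper's algorithm; the network sizes, the synthetic target, and the one-step gradient truncation (treating the previous hidden state as a constant) are all illustrative assumptions, and the naive loop shown here carries no convergence guarantee, which is precisely what the paper's method adds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical network sizes, for illustration only.
n_in, n_hid = 3, 8
W_x = rng.normal(scale=0.3, size=(n_hid, n_in))   # input-to-hidden weights
W_h = rng.normal(scale=0.3, size=(n_hid, n_hid))  # hidden-to-hidden weights
b_h = np.zeros(n_hid)
w_o = rng.normal(scale=0.3, size=n_hid)           # hidden-to-output weights
b_o = 0.0

lr = 0.05            # first-order (SGD) step size
h = np.zeros(n_hid)  # continually running hidden state, never reset

running_err = 0.0
for t in range(1, 5001):
    x = rng.normal(size=n_in)
    # Synthetic nonlinear target, standing in for the unknown environment
    # (the paper's truly online setting needs no statistical assumptions
    # such as i.i.d. samples; this particular target is just an example).
    d = np.sin(x.sum()) + 0.5 * x[0] * x[1]

    # Forward pass of the continually running RNN.
    h_prev = h
    h = np.tanh(W_x @ x + W_h @ h_prev + b_h)
    y = w_o @ h + b_o

    # Squared-error gradient with a one-step truncation: h_prev is
    # treated as a constant, ignoring credit assignment through time
    # (a simplification made here purely to keep the sketch short).
    e = y - d
    grad_a = (e * w_o) * (1.0 - h**2)  # gradient w.r.t. pre-activation

    # First-order parameter update.
    w_o -= lr * e * h
    b_o -= lr * e
    W_x -= lr * np.outer(grad_a, x)
    W_h -= lr * np.outer(grad_a, h_prev)
    b_h -= lr * grad_a

    running_err += 0.5 * e**2
    if t % 1000 == 0:
        print(f"step {t:5d}  avg loss {running_err / t:.4f}")
```

Note the design choice that mirrors the abstract's setting: the hidden state persists across all time steps rather than being reset per sequence, and each parameter update uses only the current sample, so learning proceeds strictly online.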