Paper Title
Selective Memory Recursive Least Squares: Recast Forgetting into Memory in RBF Neural Network Based Real-Time Learning
Paper Authors
Paper Abstract
In radial basis function neural network (RBFNN) based real-time learning tasks, forgetting mechanisms are widely used so that the neural network can keep its sensitivity to new data. However, with forgetting mechanisms, some useful knowledge is lost simply because it was learned a long time ago, which we refer to as the passive knowledge forgetting phenomenon. To address this problem, this paper proposes a real-time training method named selective memory recursive least squares (SMRLS), in which the classical forgetting mechanisms are recast into a memory mechanism. Unlike forgetting mechanisms, which evaluate the importance of a sample mainly according to the time when it was collected, the memory mechanism evaluates the importance of samples through both their temporal and spatial distribution. With SMRLS, the input space of the RBFNN is evenly divided into a finite number of partitions, and a synthesized objective function is developed using synthesized samples from each partition. In addition to the current approximation error, the neural network also updates its weights according to the recorded data from the partition being visited. Compared with classical training methods, including forgetting factor recursive least squares (FFRLS) and stochastic gradient descent (SGD), SMRLS achieves improved learning speed and generalization capability, as demonstrated by the corresponding simulation results.
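To make the partition-memory idea concrete, below is a minimal, hypothetical sketch of how an input space can be evenly partitioned and how a "synthesized" sample per partition can be kept so that spatially distinct knowledge is not discarded merely for being old. It is not the authors' exact SMRLS update law: the partition rule (a uniform 1-D grid), the synthesis rule (keep the most recent sample per partition), and the batch ridge refit used in place of the recursive least squares update are all assumptions made for illustration.

```python
# Hypothetical sketch of the partition-memory idea behind SMRLS (not the
# paper's exact algorithm). The 1-D input space is divided into an even grid;
# each partition stores one "synthesized" sample (here: the most recent sample
# seen in that partition), and the RBFNN weights are refit by regularized
# least squares over all remembered samples, so a sample's influence depends
# on where it lies in the input space rather than on when it arrived.
import numpy as np

class PartitionMemoryRBF:
    def __init__(self, centers, width, x_min, x_max, n_partitions, reg=1e-3):
        self.centers = np.asarray(centers, dtype=float)  # RBF centers
        self.width = width                               # shared Gaussian width
        self.x_min, self.x_max = x_min, x_max
        self.n_partitions = n_partitions
        self.reg = reg                                   # ridge regularization
        self.memory = {}                                 # partition index -> (x, y)
        self.weights = np.zeros(len(self.centers))

    def _features(self, x):
        # Gaussian RBF feature vector phi(x)
        return np.exp(-((x - self.centers) ** 2) / (2.0 * self.width ** 2))

    def _partition(self, x):
        # Even grid over [x_min, x_max]; clip so boundary samples stay valid
        frac = (x - self.x_min) / (self.x_max - self.x_min)
        return int(np.clip(frac * self.n_partitions, 0, self.n_partitions - 1))

    def predict(self, x):
        return float(self._features(x) @ self.weights)

    def update(self, x, y):
        # Overwrite the synthesized sample of the visited partition, then
        # refit the weights on all remembered samples (spatial memory).
        self.memory[self._partition(x)] = (x, y)
        xs = np.array([m[0] for m in self.memory.values()])
        ys = np.array([m[1] for m in self.memory.values()])
        Phi = np.stack([self._features(xi) for xi in xs])
        A = Phi.T @ Phi + self.reg * np.eye(Phi.shape[1])
        self.weights = np.linalg.solve(A, Phi.T @ ys)

# Example: stream samples of sin(x); knowledge about early partitions is
# retained even after many later samples arrive elsewhere.
if __name__ == "__main__":
    net = PartitionMemoryRBF(centers=np.linspace(0, 2 * np.pi, 15),
                             width=0.5, x_min=0.0, x_max=2 * np.pi,
                             n_partitions=20)
    for x in np.concatenate([np.linspace(0, np.pi, 200),
                             np.linspace(np.pi, 2 * np.pi, 200)]):
        net.update(x, np.sin(x))
    print(round(net.predict(0.5), 3), round(np.sin(0.5), 3))
```

The batch refit keeps the sketch short; the paper's contribution is a recursive formulation of such a spatially weighted objective, which would replace the `np.linalg.solve` step with an incremental update.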