Paper Title

Continuous Learning in a Single-Incremental-Task Scenario with Spike Features

Authors

Ruthvik Vaila, John Chiasson, Vishal Saxena

Abstract

Deep Neural Networks (DNNs) have two key deficiencies: their dependence on high-precision computing and their inability to perform sequential learning, that is, when a DNN is trained on a first task and the same DNN is then trained on a second task, it forgets the first task. This phenomenon of forgetting previous tasks is also referred to as catastrophic forgetting. On the other hand, a mammalian brain outperforms DNNs in energy efficiency and in its ability to learn sequentially without catastrophic forgetting. Here, we use bio-inspired Spike Timing Dependent Plasticity (STDP) in the feature extraction layers of the network with instantaneous neurons to extract meaningful features. In the classification section of the network we use a modified synaptic intelligence, which we refer to as the cost-per-synapse metric, as a regularizer to immunize the network against catastrophic forgetting in the Single-Incremental-Task (SIT) scenario. In this study, we use the MNIST handwritten digits dataset, divided into five sub-tasks.
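The classification-stage regularizer described above can be made concrete with a short sketch. The following is a minimal sketch of a Synaptic-Intelligence-style quadratic penalty in the SIT setting, assuming the standard SI formulation rather than the paper's exact cost-per-synapse variant. Everything here is an illustrative assumption rather than the authors' implementation: PyTorch, a plain linear classifier standing in for the classification layers, random tensors standing in for STDP-extracted spike features, the disjoint digit-pair split of MNIST into five sub-tasks, and the hyper-parameters `C` and `XI`.

```python
# Sketch of a Synaptic-Intelligence-style regularizer ("cost per synapse")
# for a Single-Incremental-Task (SIT) run over five MNIST sub-tasks.
# Assumptions: linear head, synthetic stand-in features, SI hyper-parameters
# C (penalty strength) and XI (damping) chosen arbitrarily for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

N_FEATURES, N_CLASSES = 360, 10   # hypothetical spike-feature width
C, XI = 0.1, 1e-3                 # assumed SI strength / damping

model = nn.Linear(N_FEATURES, N_CLASSES)   # single head, as in SIT
opt = torch.optim.SGD(model.parameters(), lr=0.1)
ce = nn.CrossEntropyLoss()

# Per-parameter SI state: running path integral w, consolidated
# importance omega, and anchor theta_star (weights after the last task).
params = list(model.parameters())
w = [torch.zeros_like(p) for p in params]
omega = [torch.zeros_like(p) for p in params]
theta_star = [p.detach().clone() for p in params]

def train_task(x, y, epochs=20):
    """Train one sub-task, accumulating w while minimizing CE + SI penalty."""
    start = [p.detach().clone() for p in params]
    for _ in range(epochs):
        opt.zero_grad()
        # Quadratic penalty pulls important weights toward their anchors.
        penalty = sum((om * (p - ts) ** 2).sum()
                      for p, om, ts in zip(params, omega, theta_star))
        loss = ce(model(x), y) + C * penalty
        loss.backward()
        old = [p.detach().clone() for p in params]
        opt.step()
        # w_k accumulates -grad_k * delta(theta_k) along the training path.
        for wk, p, o in zip(w, params, old):
            wk -= p.grad * (p.detach() - o)
    # Consolidate: omega_k += w_k / (total displacement^2 + XI).
    for wk, om, p, s in zip(w, omega, params, start):
        om += wk / ((p.detach() - s) ** 2 + XI)
        wk.zero_()
    for ts, p in zip(theta_star, params):
        ts.copy_(p.detach())

# Five two-class sub-tasks (a common way to split MNIST's ten digits);
# random tensors stand in for real STDP spike features.
for classes in [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]:
    x = torch.randn(256, N_FEATURES)
    y = torch.tensor(classes).repeat(128)
    train_task(x, y)
```

The design intuition: the quadratic penalty anchors weights that accumulated high importance on earlier sub-tasks while leaving low-importance weights free to adapt to the new digit pair, which is what protects a single-head SIT network against catastrophic forgetting.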
