Paper Title


Desire Backpropagation: A Lightweight Training Algorithm for Multi-Layer Spiking Neural Networks based on Spike-Timing-Dependent Plasticity

Authors

Daniel Gerlinghoff, Tao Luo, Rick Siow Mong Goh, Weng-Fai Wong

Abstract


Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks when resource efficiency and computational complexity are of importance. A major advantage of SNNs is their binary information transfer through spike trains which eliminates multiplication operations. The training of SNNs has, however, been a challenge, since neuron models are non-differentiable and traditional gradient-based backpropagation algorithms cannot be applied directly. Furthermore, spike-timing-dependent plasticity (STDP), albeit being a spike-based learning rule, updates weights locally and does not optimize for the output error of the network. We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones, from the output error. By incorporating this desire value into the local STDP weight update, we can efficiently capture the neuron dynamics while minimizing the global error and attaining a high classification accuracy. That makes desire backpropagation a spike-based supervised learning rule. We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached an accuracy of 98.41% and 87.56%, respectively. In addition, by eliminating a multiplication during the backward pass, we reduce computational complexity and balance arithmetic resources between forward and backward pass, making desire backpropagation a candidate for training on low-resource devices.
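The abstract describes two coupled steps: a ternary "desire" signal (spike more, spike less, or no preference) is derived for every neuron from the output error, and that signal then gates a local STDP-style weight update. The paper's exact equations are not given here, so the following is only an illustrative sketch of that idea; the function names, the dead-zone threshold `theta`, and the ternary encoding are assumptions, not the authors' actual algorithm.

```python
import numpy as np

def backpropagate_desire(weights, desire_out, theta=0.1):
    """Propagate a ternary desire signal from an output layer back to
    the hidden layer through the weight matrix (shape: out x hidden).

    Sketch: each hidden neuron accumulates the desires of its
    downstream neurons, weighted by the connecting synapses, and the
    result is thresholded into {-1, 0, +1}. The dead zone `theta` is
    an assumed hyperparameter, not taken from the paper.
    """
    drive = weights.T @ desire_out  # accumulate downstream desires
    return np.where(drive > theta, 1, np.where(drive < -theta, -1, 0))

def stdp_update(pre_spikes, post_desire, lr=0.01):
    """Local desire-gated weight update for one layer.

    For each presynaptic spike (0/1), nudge the weight toward making
    the postsynaptic neuron match its desire: +1 potentiates, -1
    depresses, 0 leaves the weight unchanged. Because both factors
    are in {0, 1} and {-1, 0, +1}, the outer product conceptually
    reduces to sign-selected additions of the learning rate.
    """
    return lr * np.outer(post_desire, pre_spikes)  # shape: out x hidden
```

A small usage example: with two output neurons whose desires disagree, the hidden desires take the sign of the accumulated drive, and the resulting update potentiates synapses onto neurons that should spike more while depressing those onto neurons that should spike less.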
