Paper Title
Ensemble plasticity and network adaptability in SNNs
Paper Authors
Paper Abstract
Artificial Spiking Neural Networks (ASNNs) promise greater information-processing efficiency because of discrete, event-based (i.e., spike) computation. Several Machine Learning (ML) applications use biologically inspired plasticity mechanisms as unsupervised learning techniques to increase the robustness of ASNNs while preserving efficiency. Spike-Timing-Dependent Plasticity (STDP) and Intrinsic Plasticity (IP) (i.e., dynamic spiking-threshold adaptation) are two such mechanisms that have been combined to form an ensemble learning method. However, it is not clear how this ensemble learning should be regulated based on spiking activity. Moreover, previous studies have attempted threshold-based synaptic pruning after STDP to increase inference efficiency, at the cost of performance in ASNNs. This type of structural adaptation, which relies on individual synaptic weights, does not take spiking activity into account when pruning, even though spiking activity is a better representation of the input stimuli. We envisaged that plasticity-based spike regulation and spike-based pruning would yield ASNNs that perform better in low-resource situations. In this paper, a novel ensemble learning method based on entropy and network activation is introduced and amalgamated with a spike-rate neuron-pruning technique; both operate exclusively on spiking activity. Two electroencephalography (EEG) datasets are used as input for classification experiments with a three-layer feedforward ASNN trained using one-pass learning. During learning, we observed neurons assembling into a hierarchy of clusters based on spiking rate. We found that pruning the lower spike-rate neuron clusters resulted in either improved generalization or a predictable decline in performance.
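To make the two plasticity mechanisms named in the abstract concrete, the sketch below implements a generic pair-based STDP weight update together with a homeostatic threshold-adaptation rule standing in for IP. This is a minimal sketch under stated assumptions, not the paper's exact method: the trace formulation, the constants (A_PLUS, A_MINUS, TAU, ETA_IP, TARGET_RATE), and the function names are illustrative and do not come from the paper.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # STDP potentiation/depression magnitudes (assumed)
TAU = 20.0                      # spike-trace decay time constant in ms (assumed)
ETA_IP = 0.001                  # IP learning rate (assumed)
TARGET_RATE = 0.05              # target firing probability per step (assumed)

def stdp_step(w, pre_spikes, post_spikes, pre_trace, post_trace, dt=1.0):
    """Pair-based STDP: potentiate pre->post pairs where pre fired before
    post (read off pre_trace), depress pairs where post fired before pre."""
    pre_trace = pre_trace * np.exp(-dt / TAU) + pre_spikes
    post_trace = post_trace * np.exp(-dt / TAU) + post_spikes
    w = w + A_PLUS * np.outer(pre_trace, post_spikes)   # potentiation
    w = w - A_MINUS * np.outer(pre_spikes, post_trace)  # depression
    return np.clip(w, 0.0, 1.0), pre_trace, post_trace

def ip_step(thresholds, post_spikes):
    """Intrinsic plasticity as dynamic threshold adaptation: thresholds of
    over-active neurons rise, those of quiet neurons fall (homeostasis)."""
    return thresholds + ETA_IP * (post_spikes - TARGET_RATE)

# One-pass usage over a synthetic spike stream (8 pre, 4 post neurons):
rng = np.random.default_rng(0)
w = rng.uniform(0.0, 0.5, size=(8, 4))
pre_tr, post_tr = np.zeros(8), np.zeros(4)
thresholds = np.full(4, 1.0)
for _ in range(100):
    pre = (rng.random(8) < 0.10).astype(float)
    post = (rng.random(4) < 0.05).astype(float)
    w, pre_tr, post_tr = stdp_step(w, pre, post, pre_tr, post_tr)
    thresholds = ip_step(thresholds, post)
```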
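The abstract's spike-rate neuron pruning can likewise be sketched in a simplified form. The paper describes a hierarchy of spike-rate clusters; here a plain keep-fraction cutoff over sorted firing counts stands in for selecting which low-rate clusters to drop. The function name, the keep_frac parameter, and the weight layout are hypothetical assumptions for illustration only.

```python
import numpy as np

def prune_low_rate_neurons(spike_counts, w_in, w_out, keep_frac=0.8):
    """Spike-rate neuron pruning: rank hidden neurons by how often they fired
    during training and drop the lowest-rate fraction, removing the matching
    columns/rows of the incoming and outgoing weight matrices.

    spike_counts : (n_hidden,) spikes emitted by each hidden neuron
    w_in         : (n_in, n_hidden) input-to-hidden weights
    w_out        : (n_hidden, n_out) hidden-to-output weights
    """
    n_keep = max(1, int(round(keep_frac * spike_counts.size)))
    keep = np.sort(np.argsort(spike_counts)[-n_keep:])  # highest-rate neurons
    return w_in[:, keep], w_out[keep, :], keep

# Example: keep the 3 most active of 6 hidden neurons.
counts = np.array([120, 3, 87, 0, 45, 210])
w_in, w_out = np.ones((10, 6)), np.ones((6, 2))
w_in_p, w_out_p, kept = prune_low_rate_neurons(counts, w_in, w_out, keep_frac=0.5)
# kept -> array([0, 2, 5]); w_in_p has shape (10, 3), w_out_p has shape (3, 2)
```

Pruning whole neurons (rather than individual weights) shrinks both weight matrices at once, which is what makes this kind of structural adaptation attractive for the low-resource inference setting the abstract targets.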