Paper Title
Exploring Temporal Information Dynamics in Spiking Neural Networks
Paper Authors
Paper Abstract
Most existing work on Spiking Neural Networks (SNNs) states that SNNs may exploit the temporal information dynamics of spikes. However, an explicit analysis of these temporal information dynamics is still missing. In this paper, we ask several important questions to provide a fundamental understanding of SNNs: What are the temporal information dynamics inside SNNs? How can we measure them? How do they affect overall learning performance? To answer these questions, we empirically estimate the Fisher information of the weights to measure how temporal information is distributed across timesteps during training. Surprisingly, as training progresses, the Fisher information starts to concentrate in the early timesteps. After training, we observe that the information is highly concentrated in the first few timesteps, a phenomenon we refer to as temporal information concentration. Through extensive experiments across various configurations, including architecture, dataset, optimization strategy, time constant, and number of timesteps, we observe that temporal information concentration is a common learning feature of SNNs. Furthermore, to reveal how temporal information concentration affects the performance of SNNs, we design a loss function that changes the trend of the temporal information. We find that temporal information concentration is crucial to building a robust SNN but has little effect on classification accuracy. Finally, we propose an efficient iterative pruning method based on our observations of temporal information concentration. Code is available at https://github.com/Intelligent-Computing-Lab-Yale/Exploring-Temporal-Information-Dynamics-in-Spiking-Neural-Networks.
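
To make the measurement concrete, the sketch below shows one way the per-timestep Fisher information described in the abstract could be estimated empirically: accumulate the squared gradients of the log-likelihood with respect to the weights, separately for each timestep's output. This is a minimal illustration under stated assumptions, not the authors' released implementation (see the repository linked above). In particular, the `model(x)` interface returning per-timestep logits of shape [T, 1, num_classes] and the batch-size-1 loader are hypothetical.

```python
import torch
import torch.nn.functional as F

def timestep_fisher_trace(model, loader, num_timesteps, device="cpu"):
    """Trace of the diagonal Fisher information of the weights, per timestep.

    Assumed (hypothetical) interface: `model(x)` returns per-timestep logits
    of shape [T, 1, num_classes]; `loader` yields (input, label) pairs with
    batch size 1, so the per-sample Fisher is exact rather than a batch
    approximation.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    fisher = torch.zeros(num_timesteps)
    seen = 0
    model.eval()
    for x, _ in loader:
        logits = model(x.to(device))  # [T, 1, num_classes]
        for t in range(num_timesteps):
            log_p = F.log_softmax(logits[t], dim=-1)
            # Fisher information is the expected squared score under the
            # model's own predictive distribution, so sample the label from it.
            y = torch.multinomial(log_p.exp(), 1).squeeze(-1)
            nll = F.nll_loss(log_p, y)
            # Retain the graph until the last timestep so each timestep's
            # output can be differentiated from the same forward pass.
            grads = torch.autograd.grad(
                nll, params, retain_graph=(t < num_timesteps - 1)
            )
            fisher[t] += sum(g.pow(2).sum().item() for g in grads)
        seen += 1
    return fisher / seen
```

Per the abstract's observation, plotting the returned per-timestep values for a trained SNN should show the mass skewed toward the earliest timesteps, i.e., temporal information concentration.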