Paper Title

tinySNN: Towards Memory- and Energy-Efficient Spiking Neural Networks

Paper Authors

Rachmad Vidya Wicaksana Putra, Muhammad Shafique

Paper Abstract

Larger Spiking Neural Network (SNN) models are typically favorable as they can offer higher accuracy. However, employing such models on the resource- and energy-constrained embedded platforms is inefficient. Towards this, we present a tinySNN framework that optimizes the memory and energy requirements of SNN processing in both the training and inference phases, while keeping the accuracy high. It is achieved by reducing the SNN operations, improving the learning quality, quantizing the SNN parameters, and selecting the appropriate SNN model. Furthermore, our tinySNN quantizes different SNN parameters (i.e., weights and neuron parameters) to maximize the compression while exploring different combinations of quantization schemes, precision levels, and rounding schemes to find the model that provides acceptable accuracy. The experimental results demonstrate that our tinySNN significantly reduces the memory footprint and the energy consumption of SNNs without accuracy loss as compared to the baseline network. Therefore, our tinySNN effectively compresses the given SNN model to achieve high accuracy in a memory- and energy-efficient manner, hence enabling the employment of SNNs for the resource- and energy-constrained embedded applications.
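The abstract describes exploring combinations of precision levels and rounding schemes when quantizing SNN weights. A minimal fixed-point quantization sketch of that general idea is shown below, assuming nothing about tinySNN's actual procedure; the function name, the `frac_bits` parameter, and the three rounding schemes are illustrative choices, not the paper's API.

```python
import numpy as np

def quantize(weights, frac_bits=4, rounding="nearest"):
    """Map floating-point weights onto a fixed-point grid with step
    2**-frac_bits, using a selectable rounding scheme.
    Illustrative sketch only; not tinySNN's exact method."""
    scale = 2.0 ** frac_bits
    scaled = weights * scale
    if rounding == "nearest":
        q = np.round(scaled)              # round-to-nearest
    elif rounding == "truncate":
        q = np.trunc(scaled)              # round-toward-zero (truncation)
    elif rounding == "stochastic":
        # round up with probability equal to the fractional remainder
        rng = np.random.default_rng(0)
        floor = np.floor(scaled)
        q = floor + (rng.random(weights.shape) < (scaled - floor))
    else:
        raise ValueError(f"unknown rounding scheme: {rounding}")
    return q / scale

w = np.array([0.171, -0.271, 0.466])
print(quantize(w, frac_bits=4, rounding="nearest"))   # grid step = 1/16
print(quantize(w, frac_bits=4, rounding="truncate"))
```

Sweeping `frac_bits` and `rounding` over a validation set, and keeping the smallest configuration whose accuracy stays acceptable, mirrors the design-space exploration the abstract describes.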
