Paper title
On the computational power and complexity of Spiking Neural Networks
Paper authors
Abstract
The last decade has seen the rise of neuromorphic architectures based on artificial spiking neural networks, such as the SpiNNaker, TrueNorth, and Loihi systems. The massive parallelism and co-location of computation and memory in these architectures potentially allows for an energy usage that is orders of magnitude lower than that of traditional von Neumann architectures. To date, however, a comparison with more traditional computational architectures (particularly with respect to energy usage) has been hampered by the lack of a formal machine model and a computational complexity theory for neuromorphic computation. In this paper we take the first steps towards such a theory. We introduce spiking neural networks as a machine model in which---in contrast to the familiar Turing machine---information and the manipulation thereof are co-located in the machine. We introduce canonical problems, define hierarchies of complexity classes, and provide some first completeness results.