Title
Ultra-low power on-chip learning of speech commands with phase-change memories
Authors
Abstract
Embedding artificial intelligence at the edge (edge-AI) is an elegant solution to the power and latency issues of the rapidly expanding Internet of Things. As edge devices typically spend most of their time in sleep mode and wake up only infrequently to collect and process sensor data, non-volatile in-memory computing (NVIMC) is a promising approach for designing the next generation of edge-AI devices. Recently, we proposed an NVIMC-based neuromorphic accelerator using phase-change memories (PCMs), which we call Raven. In this work, we demonstrate ultra-low-power on-chip training and inference of speech commands using Raven. We show that Raven can be trained on-chip with a power consumption as low as 30 µW, which is suitable for edge applications. Furthermore, we show that at iso-accuracy, Raven requires 70.36x and 269.23x fewer computations than a deep neural network (DNN) during inference and training, respectively. Owing to such low power and computational requirements, Raven provides a promising pathway towards ultra-low-power training and inference at the edge.
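The abstract's efficiency claim rests on the core idea of non-volatile in-memory computing: a matrix-vector multiply is performed inside the memory array itself rather than by shuttling weights to a digital processor. The following is a minimal, hypothetical sketch of that principle (it is not the paper's actual Raven architecture): weights are stored as PCM device conductances, inputs are applied as voltages, and each column current is an analog dot product via Ohm's and Kirchhoff's laws.

```python
import numpy as np

# Illustrative sketch of in-memory computing with a PCM crossbar
# (hypothetical parameters; not the paper's actual Raven design).
# Weights live in the array as conductances G; applying input voltages V
# to the rows yields column currents I = V @ G in a single analog step,
# instead of n_in * n_out digital multiply-accumulate operations.

rng = np.random.default_rng(0)

n_in, n_out = 4, 3
# A differential pair of conductances (G+, G-) encodes signed weights,
# a common scheme in PCM-based in-memory computing.
g_pos = rng.uniform(0.0, 1.0, size=(n_in, n_out))  # arbitrary units
g_neg = rng.uniform(0.0, 1.0, size=(n_in, n_out))

v_in = rng.uniform(-0.2, 0.2, size=n_in)  # input voltages on the rows

# Column currents: the analog MAC performed by the array itself.
i_out = v_in @ g_pos - v_in @ g_neg

# Reference check: the equivalent digital matrix-vector product.
w_effective = g_pos - g_neg
assert np.allclose(i_out, v_in @ w_effective)
```

Because the weights never leave the non-volatile array, no energy is spent reloading them after a sleep period, which is why NVIMC suits edge devices that are asleep most of the time.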