Paper Title

Model-Size Reduction for Reservoir Computing by Concatenating Internal States Through Time

Authors

Yusuke Sakemi, Kai Morino, Timothée Leleu, Kazuyuki Aihara

Abstract


Reservoir computing (RC) is a machine learning algorithm that can learn complex time series from data very rapidly based on the use of high-dimensional dynamical systems, such as random networks of neurons, called "reservoirs." To implement RC in edge computing, it is highly important to reduce the amount of computational resources that RC requires. In this study, we propose methods that reduce the size of the reservoir by inputting the past or drifting states of the reservoir to the output layer at the current time step. These proposed methods are analyzed based on information processing capacity, which is a performance measure of RC proposed by Dambre et al. (2012). In addition, we evaluate the effectiveness of the proposed methods on time-series prediction tasks: the generalized Henon-map and NARMA. On these tasks, we found that the proposed methods were able to reduce the size of the reservoir up to one tenth without a substantial increase in regression error. Because the applications of the proposed methods are not limited to a specific network structure of the reservoir, the proposed methods could further improve the energy efficiency of RC-based systems, such as FPGAs and photonic systems.
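The core idea of the abstract, feeding past reservoir states into the readout so a smaller reservoir can carry the same memory, can be sketched with a toy echo state network. The reservoir size, spectral radius, number of concatenated delays, and the NARMA2-style target below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): a small reservoir whose readout sees
# the current state plus DELAYS-1 past states concatenated together.
N_IN, N_RES, DELAYS = 1, 20, 5

W_in = rng.uniform(-0.1, 0.1, (N_RES, N_IN))
W = rng.normal(0.0, 1.0, (N_RES, N_RES))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius to 0.9

def run_reservoir(u):
    """Drive the reservoir with input sequence u; return all states."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

def concat_states(states, delays):
    """Concatenate the current and delays-1 past states at each step."""
    return np.array([
        np.concatenate([states[max(t - d, 0)] for d in range(delays)])
        for t in range(len(states))
    ])

# A NARMA2-like toy target (an assumption for illustration only).
u = rng.uniform(0.0, 0.5, 500)
y = np.zeros_like(u)
for t in range(2, len(u)):
    y[t] = 0.4 * y[t-1] + 0.4 * y[t-1] * y[t-2] + 0.6 * u[t-1] ** 3 + 0.1

states = run_reservoir(u)
X = concat_states(states, DELAYS)  # feature dim: N_RES * DELAYS

# Linear readout trained by ridge regression, discarding a washout period.
washout = 50
A, b = X[washout:], y[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ b)
nmse = np.mean((A @ W_out - b) ** 2) / np.var(b)
print(f"Training NMSE with {N_RES} neurons x {DELAYS} delays: {nmse:.4f}")
```

Only the readout changes: the reservoir dynamics are untouched, which is why the abstract notes the method is agnostic to the reservoir's network structure (e.g. FPGA or photonic implementations).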
