Paper Title

COIN: Communication-Aware In-Memory Acceleration for Graph Convolutional Networks

Authors

Mandal, Sumit K., Krishnan, Gokul, Goksoy, A. Alper, Nair, Gopikrishnan Ravindran, Cao, Yu, Ogras, Umit Y.

Abstract

Graph convolutional networks (GCNs) have shown remarkable learning capabilities when processing graph-structured data found inherently in many application areas. GCNs distribute the outputs of neural networks embedded in each vertex over multiple iterations to take advantage of the relations captured by the underlying graphs. Consequently, they incur a significant amount of computation and irregular communication overheads, which call for GCN-specific hardware accelerators. To this end, this paper presents a communication-aware in-memory computing architecture (COIN) for GCN hardware acceleration. Besides accelerating the computation using custom compute elements (CEs) and in-memory computing, COIN aims at minimizing the intra- and inter-CE communication in GCN operations to optimize the performance and energy efficiency. Experimental evaluations with widely used datasets show up to 105x improvement in energy consumption compared to state-of-the-art GCN accelerators.
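To make the workload concrete: the abstract's "distributing neural-network outputs embedded in each vertex over the graph" refers to the standard GCN layer propagation rule, H' = sigma(D^(-1/2) (A + I) D^(-1/2) H W), whose sparse, irregular aggregation step is what motivates accelerators like COIN. The sketch below is not COIN's implementation, just a minimal dense NumPy rendering of that rule on a toy 3-vertex graph (all names and the toy data are illustrative).

```python
import numpy as np

def gcn_layer(adj, features, weights):
    """One GCN layer: normalized neighbor aggregation, then a linear transform and ReLU."""
    n = adj.shape[0]
    # Add self-loops so each vertex keeps its own features: A_hat = A + I
    a_hat = adj + np.eye(n)
    # Symmetric degree normalization: D^(-1/2) A_hat D^(-1/2)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt
    # Aggregation (the irregular, communication-heavy step on real graphs),
    # followed by the dense feature transform and ReLU activation.
    return np.maximum(0.0, norm_adj @ features @ weights)

# Toy path graph 0-1-2 with one-hot vertex features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.eye(3)                  # 3 vertices, 3 input features
w = np.ones((3, 2))            # toy weights: 3 -> 2 features
out = gcn_layer(adj, x, w)
print(out.shape)               # (3, 2)
```

On real datasets `adj` is large and sparse, so the `norm_adj @ features` aggregation dominates and produces the irregular memory traffic the paper targets; the `@ weights` transform is the regular, compute-bound part.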
