Paper Title

Not Half Bad: Exploring Half-Precision in Graph Convolutional Neural Networks

Authors

John Brennan, Stephen Bonner, Amir Atapour-Abarghouei, Philip T. Jackson, Boguslaw Obara, Andrew Stephen McGough

Abstract

With the growing significance of graphs as an effective representation of data in numerous applications, efficient graph analysis using modern machine learning is receiving a growing level of attention. Deep learning approaches often operate over the entire adjacency matrix -- as the input and intermediate network layers are all designed in proportion to the size of the adjacency matrix -- leading to intensive computation and large memory requirements as the graph size increases. It is therefore desirable to identify efficient measures to reduce both run-time and memory requirements, allowing for the analysis of the largest graphs possible. The use of reduced-precision operations within the forward and backward passes of a deep neural network, along with novel specialised hardware in modern GPUs, can offer promising avenues towards efficiency. In this paper, we provide an in-depth exploration of the use of reduced-precision operations, easily integrable into the highly popular PyTorch framework, and an analysis of the effects of Tensor Cores on graph convolutional neural networks. We perform an extensive experimental evaluation across three GPU architectures and two widely-used graph analysis tasks (vertex classification and link prediction) using well-known benchmark and synthetically generated datasets. This allows us to make important observations on the effects of reduced-precision operations and Tensor Cores on the computation and memory usage of graph convolutional neural networks -- effects often neglected in the literature.
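The memory/accuracy trade-off the abstract describes can be illustrated with a minimal sketch, not taken from the paper's code: a single graph-convolution propagation step H' = A_hat · H · W evaluated in float32 and float16. The graph sizes, random data, and NumPy (rather than PyTorch) are illustrative assumptions; the point is only that half-precision storage halves the memory of every activation while introducing a bounded numerical drift.

```python
import numpy as np

# Hypothetical toy example (not the authors' implementation): one
# GCN-style propagation H' = A_hat @ H @ W in full vs. half precision.
rng = np.random.default_rng(0)
n, f_in, f_out = 128, 64, 32  # 128 vertices, 64 -> 32 features

A_hat = rng.random((n, n)).astype(np.float32)
A_hat /= A_hat.sum(axis=1, keepdims=True)  # row-normalised "adjacency"
H = rng.random((n, f_in)).astype(np.float32)
W = rng.random((f_in, f_out)).astype(np.float32)

out_fp32 = A_hat @ H @ W
out_fp16 = (A_hat.astype(np.float16)
            @ H.astype(np.float16)
            @ W.astype(np.float16))

# Half-precision output occupies exactly half the bytes of float32.
print("fp32 bytes:", out_fp32.nbytes, "fp16 bytes:", out_fp16.nbytes)
# The drift from reduced precision stays small relative to the values.
drift = np.max(np.abs(out_fp32 - out_fp16.astype(np.float32)))
print("max abs drift:", drift)
```

On real GPU hardware this is where Tensor Cores come in: they execute half-precision matrix multiplies natively, so the memory saving shown here comes with a throughput gain rather than a slowdown.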
