Title
Neural Topological Ordering for Computation Graphs
Authors
Abstract
Recent works on machine learning for combinatorial optimization have shown that learning-based approaches can outperform heuristic methods in terms of both speed and solution quality. In this paper, we consider the problem of finding an optimal topological order of a directed acyclic graph, with a focus on the memory minimization problem that arises in compilers. We propose an end-to-end machine learning approach to topological ordering using an encoder-decoder framework. Our encoder is a novel attention-based graph neural network architecture called \emph{Topoformer}, which uses different topological transforms of a DAG for message passing. The node embeddings produced by the encoder are converted into node priorities, which the decoder uses to generate a probability distribution over topological orders. We train our model on a dataset of synthetically generated graphs called layered graphs. We show that our model outperforms, or is on par with, several topological ordering baselines while being significantly faster on synthetic graphs with up to 2k nodes. We also train and test our model on a set of real-world computation graphs, showing performance improvements.
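The abstract describes a decoder that turns per-node priorities into a distribution over topological orders. A minimal sketch of one such priority-based decoder is shown below; the function name, the greedy/softmax choice, and all details are assumptions for illustration, as the abstract does not specify the decoder's exact mechanism. At each step, the nodes whose predecessors have all been scheduled form the candidate set, and the next node is either the highest-priority candidate (greedy) or sampled from a softmax over candidate priorities.

```python
import math
import random

def decode_topological_order(num_nodes, edges, priorities, greedy=True, seed=0):
    """Decode a topological order of a DAG from per-node priorities.

    Hypothetical sketch: maintain the set of "ready" nodes (all
    predecessors already scheduled) and repeatedly pick the next node
    either greedily by priority or by sampling from a softmax over the
    ready nodes' priorities.
    """
    rng = random.Random(seed)
    indegree = [0] * num_nodes
    successors = [[] for _ in range(num_nodes)]
    for u, v in edges:
        indegree[v] += 1
        successors[u].append(v)

    # Nodes with no unscheduled predecessors are candidates.
    ready = [v for v in range(num_nodes) if indegree[v] == 0]
    order = []
    while ready:
        if greedy:
            nxt = max(ready, key=lambda v: priorities[v])
        else:
            weights = [math.exp(priorities[v]) for v in ready]  # softmax weights
            nxt = rng.choices(ready, weights=weights, k=1)[0]
        ready.remove(nxt)
        order.append(nxt)
        for w in successors[nxt]:
            indegree[w] -= 1
            if indegree[w] == 0:
                ready.append(w)
    return order

# Diamond DAG 0->1, 0->2, 1->3, 2->3: node 1 outranks node 2,
# so greedy decoding schedules it first.
print(decode_topological_order(4, [(0, 1), (0, 2), (1, 3), (2, 3)],
                               [0.0, 2.0, 1.0, 0.0]))  # [0, 1, 3] never precede their parents
```

Greedy decoding yields a single order, while softmax sampling gives the probability distribution over topological orders that the abstract mentions; different priority vectors then translate directly into different memory footprints for the schedule.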