Paper Title

Lossless Compression of Structured Convolutional Models via Lifting

Authors

Gustav Sourek, Filip Zelezny, Ondrej Kuzelka

Abstract

Lifting is an efficient technique to scale up graphical models generalized to relational domains by exploiting the underlying symmetries. Concurrently, neural models are continuously expanding from grid-like tensor data into structured representations, such as various attributed graphs and relational databases. To address the irregular structure of the data, the models typically extrapolate on the idea of convolution, effectively introducing parameter sharing in their dynamically unfolded computation graphs. The computation graphs themselves then reflect the symmetries of the underlying data, similarly to the lifted graphical models. Inspired by lifting, we introduce a simple and efficient technique to detect the symmetries and compress the neural models without loss of any information. We demonstrate through experiments that such compression can lead to significant speedups of structured convolutional models, such as various Graph Neural Networks, across various tasks, such as molecule classification and knowledge-base completion.
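
To make the idea concrete, here is a minimal sketch of the kind of lossless compression the abstract describes, under assumptions of my own: nodes of a dynamically unfolded computation graph that apply the same operation with the same shared parameters to equivalent inputs necessarily compute identical values, so they can be merged into a single representative without changing the network's output. The `Node` class, the `param_id` field, and the `compress` function are illustrative names, not the authors' actual implementation.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Node:
    """One node of an unfolded computation graph (illustrative)."""
    op: str                       # e.g. "input", "matmul+relu", "sum-pool"
    param_id: int                 # id of the shared weight tensor ("convolution")
    inputs: Tuple["Node", ...] = ()

def compress(roots):
    """Merge functionally identical nodes; return compressed roots and size.

    Two nodes are merged when they share the operation, the parameter
    tensor, and the multiset of (already compressed) inputs -- a bottom-up
    canonical hashing, akin to common-subexpression elimination.
    """
    canonical = {}   # signature -> unique representative node
    memo = {}        # id(original node) -> its representative

    def walk(node):
        if id(node) in memo:
            return memo[id(node)]
        # Compress the children first, then canonicalize this node.
        kids = tuple(sorted((walk(c) for c in node.inputs), key=id))
        sig = (node.op, node.param_id, tuple(id(k) for k in kids))
        if sig not in canonical:
            canonical[sig] = Node(node.op, node.param_id, kids)
        memo[id(node)] = canonical[sig]
        return canonical[sig]

    return [walk(r) for r in roots], len(canonical)

# Demo: the two identical leaves collapse into one, and so do the two
# identical hidden nodes built on top of them: 3 nodes instead of 5.
a, b = Node("input", 0), Node("input", 0)
h1, h2 = Node("matmul+relu", 1, (a,)), Node("matmul+relu", 1, (b,))
(root,), size = compress([Node("sum-pool", 2, (h1, h2))])
print(size)  # -> 3
```

Note that sorting children by representative identity assumes permutation-invariant aggregation, as in typical GNN message passing; for order-sensitive operations the original input order would be kept instead. The bottom-up hashing plays a role analogous to iteratively refining node equivalence classes (in the spirit of Weisfeiler-Leman color refinement) over the unfolded network.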
