Paper Title
Can Language Models Capture Graph Semantics? From Graphs to Language Model and Vice-Versa
Paper Authors
Paper Abstract
Knowledge Graphs are a great resource for capturing semantic knowledge in terms of entities and the relationships between them. However, current deep learning models take distributed representations, or vectors, as input. Thus, the graph is compressed into a vectorized representation. We conduct a study to examine whether a deep learning model can compress a graph and then output the same graph with most of its semantics intact. Our experiments show that Transformer models are not able to express the full semantics of the input knowledge graph. We find that this is due to the disparity between the directed, relational, and type-based information contained in a Knowledge Graph and the fully connected, undirected token-token graph interpretation of the Transformer attention matrix.
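The mismatch the abstract describes can be illustrated with a minimal sketch (using hypothetical toy triples, not the paper's data): a knowledge graph carries directed, typed edges, while reading an attention matrix as a graph yields an undirected, untyped token-token adjacency, so edge direction and relation types are no longer recoverable.

```python
# Directed, typed knowledge-graph edges as (head, relation, tail) triples.
# These example triples are illustrative only.
kg_edges = [
    ("Paris", "capital_of", "France"),
    ("France", "member_of", "EU"),
]

entities = sorted({e for h, _, t in kg_edges for e in (h, t)})
idx = {e: i for i, e in enumerate(entities)}
n = len(entities)

# Untyped adjacency derived from the KG, symmetrized the way an
# undirected token-token graph interpretation of attention would be.
undirected = [[0] * n for _ in range(n)]
for h, _, t in kg_edges:
    undirected[idx[h]][idx[t]] = 1
    undirected[idx[t]][idx[h]] = 1

# From the symmetric, untyped matrix alone we cannot tell whether the
# edge ran Paris -> France or France -> Paris, and the relation label
# ("capital_of") is gone entirely.
recovered = {(a, b) for a in entities for b in entities
             if undirected[idx[a]][idx[b]]}
print("direction lost:", ("France", "Paris") in recovered)        # True
print("relation types lost:", "capital_of" not in str(undirected)) # True
```

Both checks come out true: the symmetrized, label-free adjacency admits both orientations of every edge and carries no relation types, which is exactly the semantic information the study finds Transformers fail to preserve.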