Paper Title

FIMP: Foundation Model-Informed Message Passing for Graph Neural Networks

Paper Authors

Syed Asad Rizvi, Nazreen Pallikkavaliyaveetil, David Zhang, Zhuoyang Lyu, Nhi Nguyen, Haoran Lyu, Benjamin Christensen, Josue Ortega Caro, Antonio H. O. Fonseca, Emanuele Zappala, Maryam Bagherian, Christopher Averill, Chadi G. Abdallah, Amin Karbasi, Rex Ying, Maria Brbic, Rahul Madhav Dhodapkar, David van Dijk

Paper Abstract

Foundation models have achieved remarkable success across many domains, relying on pretraining over vast amounts of data. Graph-structured data often lacks the same scale as unstructured data, making the development of graph foundation models challenging. In this work, we propose Foundation-Informed Message Passing (FIMP), a Graph Neural Network (GNN) message-passing framework that leverages pretrained non-textual foundation models in graph-based tasks. We show that the self-attention layers of foundation models can effectively be repurposed on graphs to perform cross-node attention-based message-passing. Our model is evaluated on a real-world image network dataset and two biological applications (single-cell RNA sequencing data and fMRI brain activity recordings) in both finetuned and zero-shot settings. FIMP outperforms strong baselines, demonstrating that it can effectively leverage state-of-the-art foundation models in graph tasks.
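
The core idea stated in the abstract, repurposing a foundation model's self-attention so that a target node's feature tokens attend to a neighboring source node's tokens, can be illustrated with a short sketch. The snippet below is an illustrative approximation rather than the authors' implementation: the `CrossNodeAttentionMessage` module, its token shapes, and the toy dimensions are hypothetical, and a freshly initialized `nn.MultiheadAttention` stands in for attention weights that FIMP would load from a pretrained foundation model.

```python
# Minimal sketch (assumed names, not the authors' code) of cross-node
# attention-based message passing built from a transformer attention layer.
import torch
import torch.nn as nn

class CrossNodeAttentionMessage(nn.Module):
    """Computes a message along each edge by letting the target node's
    feature tokens attend to the source node's feature tokens."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # In FIMP this layer would be initialized from a pretrained foundation
        # model's self-attention weights; here it is randomly initialized.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, target_tokens: torch.Tensor, source_tokens: torch.Tensor) -> torch.Tensor:
        # target_tokens: (num_edges, seq_len, dim) tokens of each edge's target node
        # source_tokens: (num_edges, seq_len, dim) tokens of each edge's source node
        # Query = target, Key/Value = source -> cross-node attention message.
        msg, _ = self.attn(target_tokens, source_tokens, source_tokens)
        return msg  # (num_edges, seq_len, dim), to be aggregated per target node


# Toy usage: 3 edges, each node represented as 8 feature tokens of width 16.
layer = CrossNodeAttentionMessage(dim=16)
tgt = torch.randn(3, 8, 16)
src = torch.randn(3, 8, 16)
messages = layer(tgt, src)
print(messages.shape)  # torch.Size([3, 8, 16])
```

In this sketch the attention queries come from the message's target node while keys and values come from the source node, which is one plausible reading of the cross-node attention-based message passing described in the abstract; per-node aggregation of the resulting messages is left out for brevity.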
