Title

Decoding natural image stimuli from fMRI data with a surface-based convolutional network

Authors

Zijin Gu, Keith Jamison, Amy Kuceyeski, Mert Sabuncu

Abstract

Due to the low signal-to-noise ratio and limited resolution of functional MRI data, and the high complexity of natural images, reconstructing a visual stimulus from human brain fMRI measurements is a challenging task. In this work, we propose a novel approach for this task, which we call Cortex2Image, to decode visual stimuli with high semantic fidelity and rich fine-grained detail. In particular, we train a surface-based convolutional network model that maps from brain response to semantic image features first (Cortex2Semantic). We then combine this model with a high-quality image generator (Instance-Conditioned GAN) to train another mapping from brain response to fine-grained image features using a variational approach (Cortex2Detail). Image reconstructions obtained by our proposed method achieve state-of-the-art semantic fidelity, while yielding good fine-grained similarity with the ground-truth stimulus. Our code is available at: https://github.com/zijin-gu/meshconv-decoding.git.
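To make the two-stage pipeline in the abstract concrete, below is a minimal PyTorch sketch of how a Cortex2Semantic mapping, a variational Cortex2Detail mapping, and a pretrained conditional image generator could fit together. All module internals, dimensions, and function names are illustrative assumptions (the actual model uses surface/mesh convolutions and an IC-GAN generator); see the linked repository for the authors' implementation.

```python
# Minimal sketch of the two-stage decoding pipeline described in the abstract.
# All names and internals below are illustrative placeholders, not the paper's API.
import torch
import torch.nn as nn


class Cortex2Semantic(nn.Module):
    """Maps a (flattened) cortical-surface fMRI response to a semantic image-feature
    vector. The paper uses surface-based (mesh) convolutions; a plain linear encoder
    stands in for them in this sketch."""
    def __init__(self, n_vertices: int, sem_dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_vertices, 1024), nn.ReLU(),
            nn.Linear(1024, sem_dim),
        )

    def forward(self, brain_response: torch.Tensor) -> torch.Tensor:
        return self.encoder(brain_response)


class Cortex2Detail(nn.Module):
    """Variational mapping from the brain response to a fine-grained latent code:
    predicts a mean and log-variance, then samples via the reparameterization trick."""
    def __init__(self, n_vertices: int, z_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_vertices, 1024), nn.ReLU())
        self.mu = nn.Linear(1024, z_dim)
        self.logvar = nn.Linear(1024, z_dim)

    def forward(self, brain_response: torch.Tensor):
        h = self.backbone(brain_response)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar


def decode_image(brain_response, cortex2sem, cortex2det, generator):
    """Full decoding pass: the semantic vector conditions a pretrained, frozen image
    generator (e.g., an IC-GAN-style model), while the sampled detail latent supplies
    fine-grained content. The generator's exact signature is an assumption here."""
    sem = cortex2sem(brain_response)       # semantic conditioning features
    z, _, _ = cortex2det(brain_response)   # fine-grained latent code
    return generator(z, sem)               # reconstructed image
```

In this reading of the abstract, Cortex2Semantic is trained first against target image features, and Cortex2Detail is then trained with the generator fixed, so the decoder only has to learn brain-to-feature mappings rather than image synthesis itself.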
