Paper Title

MF2-MVQA: A Multi-stage Feature Fusion method for Medical Visual Question Answering

Authors

Shanshan Song, Jiangyun Li, Jing Wang, Yuanxiu Cai, Wenkai Dong

Abstract

A key problem in the medical visual question answering task is how to effectively fuse language and medical image features with limited datasets. To better utilize the multi-scale information of medical images, previous methods directly embed the visual feature maps from multiple stages as tokens of the same size and fuse them with the text representation. However, this confuses visual features from different stages. To this end, we propose a simple but powerful multi-stage feature fusion method, MF2-MVQA, which fuses multi-level visual features with textual semantics stage by stage. MF2-MVQA achieves state-of-the-art performance on the VQA-Med 2019 and VQA-RAD datasets. Visualization results also verify that our model outperforms previous work.
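The abstract only sketches the idea at a high level. As a rough illustration of stage-wise vision-language fusion (not the authors' actual implementation; the module names, the use of cross-attention, and all dimensions below are assumptions), a PyTorch sketch could look like this:

```python
import torch
import torch.nn as nn


class StageWiseFusion(nn.Module):
    """Illustrative sketch: fuse visual features from each backbone stage
    with the textual representation one stage at a time, instead of
    embedding all stages as same-size tokens and fusing them jointly."""

    def __init__(self, stage_dims=(256, 512, 1024), text_dim=768, hidden_dim=768):
        super().__init__()
        # Project each stage's visual tokens to a shared hidden size.
        self.visual_proj = nn.ModuleList([nn.Linear(d, hidden_dim) for d in stage_dims])
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        # One cross-attention block per stage; the text query is refined stage by stage.
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
             for _ in stage_dims]
        )

    def forward(self, stage_feats, text_emb):
        # stage_feats: list of (B, N_i, C_i) visual tokens, one tensor per backbone stage
        # text_emb:    (B, L, text_dim) token embeddings of the question
        query = self.text_proj(text_emb)
        for feats, proj, attn in zip(stage_feats, self.visual_proj, self.cross_attn):
            kv = proj(feats)
            fused, _ = attn(query, kv, kv)  # text attends to this stage's visual tokens
            query = query + fused           # residual update carried into the next stage
        return query  # multimodal representation fed to the answer head


if __name__ == "__main__":
    model = StageWiseFusion()
    feats = [torch.randn(2, 196, c) for c in (256, 512, 1024)]
    question = torch.randn(2, 12, 768)
    print(model(feats, question).shape)  # torch.Size([2, 12, 768])
```

The key design point the abstract emphasizes is that each stage's features are consumed separately, so coarse and fine visual information are not mixed into indistinguishable same-size tokens before fusion.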
