Paper Title
RadTex: Learning Efficient Radiograph Representations from Text Reports
Paper Authors
Abstract
Automated analysis of chest radiography using deep learning has tremendous potential to enhance the clinical diagnosis of diseases in patients. However, deep learning models typically require large amounts of annotated data to achieve high performance -- often an obstacle to medical domain adaptation. In this paper, we build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data (fewer than 1000 examples). Specifically, we examine image-captioning pretraining to learn high-quality medical image representations that can be trained on fewer examples. Following joint pretraining of a convolutional encoder and transformer decoder, we transfer the learned encoder to various classification tasks. Averaged over 9 pathologies, we find that our model achieves higher classification performance than ImageNet-supervised and in-domain supervised pretraining when labeled training data is limited.