Paper Title
MoCo-CXR: MoCo Pretraining Improves Representation and Transferability of Chest X-ray Models
Paper Authors
Paper Abstract
Contrastive learning is a form of self-supervision that can leverage unlabeled data to produce pretrained models. While contrastive learning has demonstrated promising results on natural image classification tasks, its application to medical imaging tasks like chest X-ray interpretation has been limited. In this work, we propose MoCo-CXR, an adaptation of the contrastive learning method Momentum Contrast (MoCo), to produce models with better representations and initializations for the detection of pathologies in chest X-rays. In detecting pleural effusion, we find that linear models trained on MoCo-CXR-pretrained representations outperform those without MoCo-CXR-pretrained representations, indicating that MoCo-CXR-pretrained representations are of higher quality. End-to-end fine-tuning experiments reveal that a model initialized via MoCo-CXR pretraining outperforms its non-MoCo-CXR-pretrained counterpart. We find that MoCo-CXR pretraining provides the most benefit with limited labeled training data. Finally, we demonstrate similar results on a target tuberculosis dataset unseen during pretraining, indicating that MoCo-CXR pretraining endows models with representations and transferability that can be applied across chest X-ray datasets and tasks.
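
Below is a minimal PyTorch sketch (not the authors' released code) of the linear-evaluation setup the abstract describes: a MoCo-pretrained backbone is frozen and only a new linear head is trained on labeled chest X-rays for pleural effusion detection. The checkpoint filename and the "encoder_q" state-dict prefix are assumptions based on standard MoCo checkpoints, not details taken from the paper.

# Minimal sketch: linear evaluation on top of a MoCo-pretrained backbone.
# Assumes PyTorch/torchvision; checkpoint path and key names are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 1  # binary label: pleural effusion present / absent

# Randomly initialized ResNet-18 backbone (one common choice of encoder).
backbone = models.resnet18(weights=None)

# Hypothetical: load MoCo-pretrained weights. Standard MoCo checkpoints keep
# the query encoder under a "module.encoder_q." prefix in "state_dict".
ckpt = torch.load("moco_cxr_checkpoint.pth", map_location="cpu")
encoder_weights = {
    k.replace("module.encoder_q.", ""): v
    for k, v in ckpt["state_dict"].items()
    if k.startswith("module.encoder_q.")
    and not k.startswith("module.encoder_q.fc")
}
backbone.load_state_dict(encoder_weights, strict=False)

# Linear evaluation: freeze the pretrained representation...
for p in backbone.parameters():
    p.requires_grad = False
# ...and train only a freshly initialized linear head on labeled data.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

# For the end-to-end fine-tuning experiments, the same pretrained weights
# would instead serve as the initialization, with all parameters unfrozen
# and optimized jointly (typically at a lower learning rate).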