Paper title
Comparing to Learn: Surpassing ImageNet Pretraining on Radiographs By Comparing Image Representations
Paper authors
Paper abstract
In the deep learning era, pretrained models play an important role in medical image analysis, where ImageNet pretraining has been widely adopted as the default choice. However, there is an undeniable domain gap between natural images and medical images. To bridge this gap, we propose a new pretraining method that learns from 700k radiographs without any manual annotations. We call our method Comparing to Learn (C2L) because it learns robust features by comparing different image representations. To verify the effectiveness of C2L, we conduct comprehensive ablation studies and evaluate it on different tasks and datasets. The experimental results on radiographs show that C2L significantly outperforms ImageNet pretraining and previous state-of-the-art approaches. Code and models are available.
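The core idea of "learning by comparing image representations" can be illustrated with a generic InfoNCE-style contrastive loss. This is a minimal sketch of that general technique, not the authors' actual C2L implementation: the function name, the temperature value, and the use of in-batch negatives are all assumptions for illustration.

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE-style comparison loss (illustrative, not the C2L code).

    z1, z2: (N, D) embedding batches where z1[i] and z2[i] are
    representations of two augmented views of the same radiograph
    (a positive pair); the remaining rows act as in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; minimizing the loss pulls
    # matching representations together and pushes the rest apart.
    return -np.mean(np.diag(log_prob))
```

Pretraining with such a loss requires no labels at all, which is what allows methods in this family to use 700k unannotated radiographs: the supervision signal comes entirely from which representations should agree.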