Paper Title
Scribble-based Domain Adaptation via Co-segmentation
Paper Authors
Paper Abstract
Although deep convolutional networks have reached state-of-the-art performance in many medical image segmentation tasks, they typically demonstrate poor generalisation capability. To be able to generalise from one domain (e.g. one imaging modality) to another, domain adaptation has to be performed. While supervised methods may lead to good performance, they require additional data to be fully annotated, which may not be an option in practice. In contrast, unsupervised methods do not need additional annotations but are usually unstable and hard to train. In this work, we propose a novel weakly-supervised method. Instead of requiring detailed but time-consuming annotations, scribbles on the target domain are used to perform domain adaptation. This paper introduces a new formulation of domain adaptation based on structured learning and co-segmentation. Our method is easy to train, thanks to the introduction of a regularised loss. The framework is validated on Vestibular Schwannoma segmentation (T1 to T2 scans). Our proposed method outperforms unsupervised approaches and achieves comparable performance to a fully-supervised approach.
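To make the idea of scribble supervision with a regularised loss more concrete, below is a minimal, hypothetical sketch in PyTorch. It combines a partial cross-entropy term, computed only on scribble-annotated pixels, with a simple smoothness (total-variation) regulariser on the softmax output. The abstract does not specify the exact loss, so the function name, ignore-label convention, and regularisation weight here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a scribble-supervised "regularised loss":
# partial cross-entropy on scribble pixels + a smoothness term.
# Names, IGNORE_INDEX convention, and reg_weight are assumptions.
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255  # assumed label value for pixels without a scribble


def scribble_regularised_loss(logits, scribbles, reg_weight=0.1):
    """logits: (B, C, H, W) network outputs; scribbles: (B, H, W) sparse labels."""
    # Partial cross-entropy: supervise only the scribbled pixels,
    # ignoring all unlabelled positions.
    pce = F.cross_entropy(logits, scribbles, ignore_index=IGNORE_INDEX)

    # Simple regulariser: encourage spatially smooth predictions by
    # penalising differences between neighbouring softmax probabilities.
    probs = torch.softmax(logits, dim=1)
    tv_h = (probs[:, :, 1:, :] - probs[:, :, :-1, :]).abs().mean()
    tv_w = (probs[:, :, :, 1:] - probs[:, :, :, :-1]).abs().mean()

    return pce + reg_weight * (tv_h + tv_w)
```

In practice, the regularisation term is what makes training from sparse scribbles stable: the cross-entropy term anchors the prediction at the annotated pixels, while the smoothness term propagates those labels to unannotated regions.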