Paper Title
Learning from partially labeled data for multi-organ and tumor segmentation
Authors
Abstract
Medical image benchmarks for organ and tumor segmentation suffer from the partial-labeling issue due to the intensive cost of labor and expertise. Current mainstream approaches follow the practice of one network solving one task. With this pipeline, not only is the performance limited by the typically small dataset of a single task, but the computation cost also increases linearly with the number of tasks. To address this, we propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple partially labeled datasets. Specifically, TransDoDNet has a hybrid backbone composed of a convolutional neural network and a Transformer. A dynamic head enables the network to accomplish multiple segmentation tasks flexibly. Unlike existing approaches that fix the kernels after training, the kernels in the dynamic head are generated adaptively by the Transformer, which employs the self-attention mechanism to model long-range organ-wise dependencies and decodes an organ embedding that represents each organ. We create a large-scale partially labeled Multi-Organ and Tumor Segmentation benchmark, termed MOTS, and demonstrate the superior performance of TransDoDNet over its competitors on seven organ and tumor segmentation tasks. This study also provides a general 3D medical image segmentation model, which has been pre-trained on the large-scale MOTS benchmark and demonstrates advanced performance over BYOL, a predominant self-supervised learning method. Code will be available at \url{https://git.io/DoDNet}.
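The key idea of the dynamic head is that the segmentation kernels are not fixed weights but are produced on demand from a per-task embedding, so one shared backbone can serve many partially labeled tasks. The sketch below is a minimal, dependency-free illustration of that idea only; the names `generate_kernel`, `dynamic_head`, and `controller` are assumptions for illustration, and a plain linear map stands in for the Transformer decoder used in the paper.

```python
# Illustrative sketch of a dynamic segmentation head (not the paper's
# exact design): a per-task embedding is mapped to 1x1-conv kernel
# weights, which are then applied to shared backbone features.

def generate_kernel(task_embedding, controller):
    """Map a task embedding to kernel weights (one weight per channel).

    In TransDoDNet this mapping is done by a Transformer decoder over
    organ embeddings; here a plain linear layer stands in for it.
    `controller` is a matrix of shape (num_channels, embedding_dim).
    """
    return [sum(w * e for w, e in zip(row, task_embedding))
            for row in controller]

def dynamic_head(feature_map, kernel):
    """Apply the generated kernel as a 1x1 convolution.

    `feature_map` is a list of per-pixel feature vectors; the result
    is one segmentation logit per pixel for the current task.
    """
    return [sum(w * c for w, c in zip(kernel, pixel))
            for pixel in feature_map]

# Two tasks (say, liver vs. kidney) share the same backbone features
# but receive different kernels, so a single network handles both.
features = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 3 pixels, 2 channels
controller = [[1.0, 0.0], [0.0, -1.0]]            # hypothetical weights
liver_emb, kidney_emb = [1.0, 0.0], [0.0, 1.0]

liver_logits = dynamic_head(features, generate_kernel(liver_emb, controller))
kidney_logits = dynamic_head(features, generate_kernel(kidney_emb, controller))
```

Because the kernels are a function of the task embedding, adding a new task costs only a new embedding rather than a new network, which is the reason the computation no longer scales linearly with the number of tasks.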