Paper Title
Factorizer: A Scalable Interpretable Approach to Context Modeling for Medical Image Segmentation
Paper Authors
Paper Abstract
Convolutional Neural Networks (CNNs) with U-shaped architectures have dominated medical image segmentation, which is crucial for various clinical purposes. However, the inherent locality of convolution prevents CNNs from fully exploiting global context, which is essential for better recognition of certain structures, e.g., brain lesions. Transformers have recently shown promising performance on vision tasks, including semantic segmentation, mainly due to their capability of modeling long-range dependencies. Nevertheless, the quadratic complexity of attention forces existing Transformer-based models to apply self-attention layers only after reducing the image resolution in some way, which limits their ability to capture the global context present at higher resolutions. This work therefore introduces a family of models, dubbed Factorizer, which leverages the power of low-rank matrix factorization to construct an end-to-end segmentation model. Specifically, we propose a linearly scalable approach to context modeling, formulating Nonnegative Matrix Factorization (NMF) as a differentiable layer integrated into a U-shaped architecture. The shifted-window technique is also used in combination with NMF to effectively aggregate local information. Factorizers compete favorably with CNNs and Transformers in terms of accuracy, scalability, and interpretability, achieving state-of-the-art results on the BraTS dataset for brain tumor segmentation and the ISLES'22 dataset for stroke lesion segmentation. Highly meaningful NMF components give Factorizers an additional interpretability advantage over CNNs and Transformers. Moreover, our ablation studies reveal a distinctive feature of Factorizers that enables a significant inference speed-up for a trained Factorizer without any extra steps and without sacrificing much accuracy. The code and models are publicly available at https://github.com/pashtari/factorizer.
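To make the abstract's core idea concrete, below is a minimal sketch of NMF formulated as a differentiable layer via unrolled multiplicative updates in PyTorch. The `NMFLayer` class, its `rank` and `num_iters` parameters, and the random initialization are illustrative assumptions, not the authors' actual implementation; refer to the linked repository for that.

```python
import torch
import torch.nn as nn


class NMFLayer(nn.Module):
    """Minimal sketch (hypothetical, not the paper's code): NMF as a
    differentiable layer. The classic multiplicative update rules for
    X ~ U V are themselves differentiable, so unrolling a fixed number
    of iterations lets gradients flow end to end through the layer."""

    def __init__(self, rank: int = 8, num_iters: int = 5, eps: float = 1e-6):
        super().__init__()
        self.rank = rank          # low rank r of the factorization
        self.num_iters = num_iters
        self.eps = eps            # avoids division by zero in the updates

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, tokens), assumed nonnegative (e.g., post-ReLU).
        b, c, n = x.shape
        # Random nonnegative initialization of the factors U (c x r), V (r x n).
        u = torch.rand(b, c, self.rank, device=x.device, dtype=x.dtype)
        v = torch.rand(b, self.rank, n, device=x.device, dtype=x.dtype)
        for _ in range(self.num_iters):
            # Multiplicative updates (Lee & Seung) minimizing ||X - U V||_F^2.
            v = v * (u.transpose(1, 2) @ x) / (u.transpose(1, 2) @ u @ v + self.eps)
            u = u * (x @ v.transpose(1, 2)) / (u @ (v @ v.transpose(1, 2)) + self.eps)
        # Low-rank reconstruction with the same shape as the input.
        return u @ v


# Usage: a nonnegative token matrix of 1024 tokens with 64 channels is
# projected onto a rank-8 approximation, a linear-cost form of context mixing.
layer = NMFLayer(rank=8, num_iters=5)
x = torch.relu(torch.randn(2, 64, 1024))
y = layer(x)  # shape (2, 64, 1024)
```

Each update costs O(rank * channels * tokens), hence linear in the number of tokens, in contrast to the quadratic cost of self-attention; the paper combines such a layer with shifted windows to aggregate local information.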