Paper Title
CT-LungNet: A Deep Learning Framework for Precise Lung Tissue Segmentation in 3D Thoracic CT Scans
Paper Authors
Paper Abstract
Segmentation of lung tissue in computed tomography (CT) images is a precursor to most pulmonary image analysis applications. Semantic segmentation methods using deep learning have exhibited top-tier performance in recent years; however, designing accurate and robust segmentation models for lung tissue is challenging due to variations in shape, size, and orientation. Additionally, medical image artifacts and noise can affect lung tissue segmentation and degrade the accuracy of downstream analysis. The practicality of current deep learning methods for lung tissue segmentation is limited, as they require significant computational resources and may not be easily deployable in clinical settings. This paper presents a fully automatic method that identifies the lungs in three-dimensional (3D) pulmonary CT images using deep networks and transfer learning. We introduce (1) a novel 2.5-dimensional image representation built from consecutive CT slices that succinctly encodes volumetric information and (2) a U-Net architecture equipped with pre-trained InceptionV3 blocks to segment 3D CT scans while keeping the number of learnable parameters as low as possible. Our method was quantitatively assessed using one public dataset, LUNA16, for training and testing, and two public datasets, namely VESSEL12 and CRPF, only for testing. Owing to its low number of learnable parameters, our method achieved high generalizability to the unseen VESSEL12 and CRPF datasets while obtaining performance superior to existing methods on LUNA16 (Dice coefficients of 99.7, 99.1, and 98.8 on the LUNA16, VESSEL12, and CRPF datasets, respectively). We made our method publicly accessible via a graphical user interface at medvispy.ee.kntu.ac.ir.
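The abstract's 2.5-dimensional representation can be illustrated with a minimal sketch: one common construction is to stack a CT slice with its two axial neighbours as the three channels of a single 2D image, which matches the 3-channel RGB input layout expected by a pre-trained InceptionV3 encoder. The function below is an illustrative assumption, not the paper's exact preprocessing; the helper name `to_2_5d` and the edge-clamping behaviour are hypothetical.

```python
import numpy as np

def to_2_5d(volume: np.ndarray, index: int) -> np.ndarray:
    """Stack slice `index` with its two neighbours into a 3-channel image.

    `volume` has shape (num_slices, H, W). Edge slices are clamped so the
    output always has exactly three channels. This is a sketch of one
    plausible 2.5D construction, not the paper's verified pipeline.
    """
    lo = max(index - 1, 0)
    hi = min(index + 1, volume.shape[0] - 1)
    # Channel order: previous slice, current slice, next slice.
    return np.stack([volume[lo], volume[index], volume[hi]], axis=-1)

# Toy 10-slice volume of 64x64 scans.
ct = np.random.rand(10, 64, 64).astype(np.float32)
sample = to_2_5d(ct, 5)
print(sample.shape)  # (64, 64, 3) -- a 3-channel 2D image per CT slice
```

Feeding such 3-channel images to a 2D U-Net lets the encoder reuse ImageNet-pretrained weights while still exposing some through-plane context, which is consistent with the abstract's goal of keeping learnable parameters low.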