Paper Title
Cardiac Adipose Tissue Segmentation via Image-Level Annotations
Paper Authors
Paper Abstract
Automatically identifying the structural substrates underlying cardiac abnormalities can potentially provide real-time guidance for interventional procedures. With knowledge of cardiac tissue substrates, the treatment of complex arrhythmias such as atrial fibrillation and ventricular tachycardia can be further optimized by detecting arrhythmia substrates to target (i.e., adipose tissue) and identifying critical structures to avoid. Optical coherence tomography (OCT) is a real-time imaging modality that aids in addressing this need. Existing approaches to cardiac image analysis rely mainly on fully supervised learning, which suffers from the labor-intensive process of pixel-wise annotation. To lessen the need for pixel-wise labeling, we develop a two-stage deep learning framework for cardiac adipose tissue segmentation using image-level annotations on OCT images of human cardiac substrates. In particular, we integrate class activation mapping with superpixel segmentation to address the sparse tissue seed challenge that arises in cardiac tissue segmentation. Our study bridges the gap between the demand for automatic tissue analysis and the lack of high-quality pixel-wise annotations. To the best of our knowledge, this is the first study to address cardiac tissue segmentation on OCT images via weakly supervised learning. On an in-vitro human cardiac OCT dataset, we demonstrate that our weakly supervised approach trained on image-level annotations achieves performance comparable to fully supervised methods trained on pixel-wise annotations.
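The combination of class activation mapping and superpixel segmentation described above can be sketched in a minimal form: a CAM is computed as the class-weighted sum of the final convolutional feature maps, then averaged within each superpixel so that whole superpixels, rather than scattered pixels, become tissue seeds. This is an illustrative sketch only, not the authors' implementation; the feature shapes, the toy superpixel map, and the `seed_thresh` value are assumptions, and a real pipeline would use a trained CNN and a superpixel algorithm such as SLIC.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Compute a CAM as the class-weighted sum of conv feature maps.
    features: (C, H, W) final conv activations; fc_weights: (num_classes, C).
    Returns an (H, W) map normalized to [0, 1]."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

def superpixel_seeds(cam, superpixels, seed_thresh=0.7):
    """Average the CAM within each superpixel and threshold the result,
    so seeds align with superpixel boundaries rather than single pixels.
    superpixels: (H, W) integer label map; seed_thresh is a hypothetical
    cutoff for marking a superpixel as an adipose seed."""
    refined = np.zeros_like(cam)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        refined[mask] = cam[mask].mean()
    return refined >= seed_thresh

# Toy usage: channel 0 activates only in the top-left quadrant, and the
# superpixel map partitions the 4x4 image into four 2x2 blocks.
feats = np.zeros((2, 4, 4))
feats[0, :2, :2] = 1.0
w = np.array([[1.0, 0.0], [0.0, 1.0]])  # 2 classes x 2 channels
cam = class_activation_map(feats, w, class_idx=0)
sp = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 3, 3],
               [2, 2, 3, 3]])
seeds = superpixel_seeds(cam, sp)
```

Averaging within superpixels is what makes sparse CAM responses usable as seeds: even if only a few pixels in a region activate strongly, the whole coherent region is promoted or rejected together.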