Title
Learning to segment from object sizes
Authors
Abstract
Deep learning has proved particularly useful for semantic segmentation, a fundamental image analysis task. However, standard deep learning methods require many training images with ground-truth pixel-wise annotations, which are usually laborious to obtain and, in some cases (e.g., medical images), require domain expertise. Therefore, instead of pixel-wise annotations, we focus on image annotations that are significantly easier to acquire but still informative, namely the size of foreground objects. We define the object size as the maximum Chebyshev distance between a foreground pixel and the nearest background pixel. We propose an algorithm for training a deep segmentation network from a dataset of a few pixel-wise annotated images and many images with known object sizes. The algorithm minimizes a discrete (non-differentiable) loss function defined over the object sizes by sampling the gradient and then using the standard back-propagation algorithm. Experiments show that the new approach improves the segmentation performance.
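The object-size definition in the abstract can be made concrete with a short sketch. The snippet below is a minimal, brute-force NumPy illustration (not the authors' implementation): for every foreground pixel it finds the Chebyshev distance to the nearest background pixel, and the object size is the maximum of those distances. The helper name `object_size` is hypothetical.

```python
import numpy as np

def object_size(mask):
    """Object size of a binary mask: the maximum, over foreground
    pixels, of the Chebyshev distance to the nearest background pixel.

    mask -- 2D boolean array, True = foreground.
    """
    fg = np.argwhere(mask)       # foreground pixel coordinates, shape (n_fg, 2)
    bg = np.argwhere(~mask)      # background pixel coordinates, shape (n_bg, 2)
    if len(fg) == 0 or len(bg) == 0:
        return 0  # no foreground, or no background to measure against
    # Chebyshev distance between every (fg, bg) pair: max over coordinate axes.
    d = np.abs(fg[:, None, :] - bg[None, :, :]).max(axis=-1)  # (n_fg, n_bg)
    # Nearest background pixel per foreground pixel, then the maximum over fg.
    return int(d.min(axis=1).max())

# A 5x5 foreground square inside a 7x7 image: the center pixel is
# Chebyshev distance 3 from the background border, so the size is 3.
mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True
print(object_size(mask))  # → 3
```

This quadratic-time version is only meant to pin down the definition; for real images one would compute the same quantity with a chessboard-metric distance transform of the foreground and take its maximum.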