Paper Title


CRNet: Cross-Reference Networks for Few-Shot Segmentation

Authors

Weide Liu, Chi Zhang, Guosheng Lin, Fayao Liu

Abstract

Over the past few years, state-of-the-art image segmentation algorithms have been based on deep convolutional neural networks. To endow a deep network with the ability to understand a concept, humans need to collect a large amount of pixel-level annotated data to train the model, which is time-consuming and tedious. Recently, few-shot segmentation has been proposed to address this problem. Few-shot segmentation aims to learn a segmentation model that can generalize to novel classes with only a few training images. In this paper, we propose a cross-reference network (CRNet) for few-shot segmentation. Unlike previous works that only predict the mask for the query image, our proposed model concurrently makes predictions for both the support image and the query image. With a cross-reference mechanism, our network can better find the co-occurrent objects in the two images, which helps the few-shot segmentation task. We also develop a mask refinement module to recurrently refine the prediction of the foreground regions. For $k$-shot learning, we propose to finetune parts of the network to take advantage of multiple labeled support images. Experiments on the PASCAL VOC 2012 dataset show that our network achieves state-of-the-art performance.
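The cross-reference idea in the abstract can be read as mutual reweighting of the support and query feature maps, so that channels active in both images are emphasized. The snippet below is a minimal NumPy sketch of that intuition, not the authors' implementation; the function names `channel_importance` and `cross_reference` are hypothetical.

```python
import numpy as np

def channel_importance(feat):
    """Squeeze a (C, H, W) feature map to a per-channel importance
    vector: global average pooling followed by a sigmoid."""
    v = feat.mean(axis=(1, 2))
    return 1.0 / (1.0 + np.exp(-v))

def cross_reference(feat_support, feat_query):
    """Emphasize channels that are active in BOTH images: multiply
    the two importance vectors and reweight each feature map with
    the shared vector, so co-occurrent object channels dominate."""
    w = channel_importance(feat_support) * channel_importance(feat_query)
    return feat_support * w[:, None, None], feat_query * w[:, None, None]

# Toy example: two 8-channel feature maps of spatial size 4x4.
rng = np.random.default_rng(0)
fs = rng.standard_normal((8, 4, 4))
fq = rng.standard_normal((8, 4, 4))
fs_out, fq_out = cross_reference(fs, fq)
print(fs_out.shape, fq_out.shape)  # spatial shapes are preserved
```

Because the weight vector is shared between the two branches, a channel is only amplified when it fires in both the support and the query image, which is the co-occurrence behaviour the abstract describes.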
