Paper Title

Semantic Segmentation of Medium-Resolution Satellite Imagery using Conditional Generative Adversarial Networks

Paper Authors

Aditya Kulkarni, Tharun Mohandoss, Daniel Northrup, Ernest Mwebaze, Hamed Alemohammad

Paper Abstract

Semantic segmentation of satellite imagery is a common approach to identify patterns and detect changes around the planet. Most state-of-the-art semantic segmentation models are trained in a fully supervised way using Convolutional Neural Networks (CNNs). CNNs generalize poorly on satellite imagery because the data can be very diverse in terms of landscape types and image resolutions, and labels are scarce across geographies and seasons. Hence, CNN performance does not translate well to images from unseen regions or seasons. Inspired by the Conditional Generative Adversarial Network (CGAN) based approach of image-to-image translation for high-resolution satellite imagery, we propose a CGAN framework for land cover classification using medium-resolution Sentinel-2 imagery. We find that the CGAN model outperforms a CNN model of similar complexity by a significant margin on an unseen, imbalanced test dataset.
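The abstract does not spell out the CGAN formulation; as a hedged sketch, the standard conditional adversarial objective from pix2pix-style image-to-image translation (which the abstract cites as inspiration) conditions the discriminator $D$ on the input image $x$ alongside the real label map $y$ or the generated one $G(x)$:

```latex
\mathcal{L}_{\mathrm{cGAN}}(G, D)
  = \mathbb{E}_{x,y}\left[\log D(x, y)\right]
  + \mathbb{E}_{x}\left[\log\left(1 - D(x, G(x))\right)\right]
```

In the pix2pix formulation this adversarial term is typically combined with a per-pixel reconstruction loss, giving $G^{*} = \arg\min_{G}\max_{D}\,\mathcal{L}_{\mathrm{cGAN}}(G, D) + \lambda\,\mathcal{L}_{L1}(G)$; whether this paper uses an $L1$ term or a segmentation-specific loss (e.g. cross-entropy on the label map) is not stated in the abstract.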
