Paper Title

Learning CNN filters from user-drawn image markers for coconut-tree image classification

Paper Authors

de Souza, Italos Estilon; Falcão, Alexandre Xavier

Paper Abstract

Identifying species of trees in aerial images is essential for land-use classification, plantation monitoring, and impact assessment of natural disasters. The manual identification of trees in aerial images is tedious, costly, and error-prone, so automatic classification methods are necessary. Convolutional Neural Network (CNN) models have been very successful in image classification applications from different domains. However, CNN models usually require intensive manual annotation to create large training sets. One may conceptually divide a CNN into convolutional layers for feature extraction and fully connected layers for feature-space reduction and classification. We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor, reducing the number of images required to train the fully connected layers. The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate the classes, allowing better user control and understanding of the training process. It does not rely on backpropagation-based optimization, and we demonstrate its advantages on the binary classification of coconut-tree aerial images in comparison with one of the most popular CNN models.
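
The abstract does not spell out how the filters are derived from the markers, so the sketch below is only one plausible reading: convolutional kernels are estimated, without backpropagation, by clustering normalized patches centered on user-drawn marker pixels. The patch size, the use of k-means, and all function and parameter names (`patches_around_markers`, `learn_filters`, `n_filters`) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: estimate convolutional filters from user-drawn marker
# pixels by clustering marker-centered patches (no backpropagation involved).
# The k-means step and all names below are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def patches_around_markers(image, marker_coords, k=3):
    """Extract k x k patches centered on each marker pixel of an H x W x C image."""
    r = k // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="reflect")
    patches = [padded[y:y + k, x:x + k, :].reshape(-1) for (y, x) in marker_coords]
    return np.stack(patches)

def learn_filters(patches, n_filters=8):
    """Normalize patches and use k-means cluster centers as filter weights."""
    p = patches - patches.mean(axis=1, keepdims=True)            # zero mean per patch
    p = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-8)    # unit norm per patch
    km = KMeans(n_clusters=n_filters, n_init=10).fit(p)
    return km.cluster_centers_  # (n_filters, k*k*C); reshape to apply as kernels

# Usage example on a placeholder RGB aerial image with a few marker pixels.
image = np.random.rand(64, 64, 3)
markers = [(10, 12), (10, 13), (11, 12), (40, 41), (41, 41), (41, 42)]
filters = learn_filters(patches_around_markers(image, markers, k=3), n_filters=3)
print(filters.shape)  # (3, 27)
```

In this reading, the cluster centers play the role of a layer's filter bank, which is why no gradient-based training is needed for the convolutional layers; only the fully connected classifier would still require labeled training images.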
