Paper Title
Dual-attention Guided Dropblock Module for Weakly Supervised Object Localization
Paper Authors
Paper Abstract
Attention mechanisms are frequently used to learn discriminative features for better feature representations. In this paper, we extend the attention mechanism to the task of weakly supervised object localization (WSOL) and propose the dual-attention guided dropblock module (DGDM), which aims to learn informative and complementary visual patterns for WSOL. This module contains two key components: the channel attention guided dropout (CAGD) and the spatial attention guided dropblock (SAGD). To model channel interdependencies, the CAGD ranks the channel attentions and treats the top-k attentions with the largest magnitudes as the important ones. It also keeps some low-valued elements so that their values can increase if they become important during training. The SAGD can efficiently remove the most discriminative information by erasing contiguous regions of the feature maps rather than individual pixels. This guides the model to capture the less discriminative parts for classification. Furthermore, it can also distinguish the foreground objects from the background regions to alleviate attention misdirection. Experimental results demonstrate that the proposed method achieves new state-of-the-art localization performance.
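The two components described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names, the use of global average pooling as the channel/spatial attention, and the drop probabilities and threshold are all assumptions made for illustration. CAGD here ranks channels by attention magnitude and stochastically drops the top-k most discriminative ones; SAGD erases the contiguous high-attention spatial region of the feature map instead of individual pixels.

```python
import numpy as np

def cagd(feat, k=1, keep_prob=0.5, rng=None):
    """Channel attention guided dropout (sketch).

    feat: (C, H, W) feature map. Channel attention is approximated by
    global average pooling (an assumption); the top-k channels with the
    largest attention are randomly dropped so the network is forced to
    rely on the remaining, less discriminative channels.
    """
    rng = rng or np.random.default_rng(0)
    attn = feat.mean(axis=(1, 2))            # per-channel attention (C,)
    top = np.argsort(attn)[-k:]              # indices of top-k channels
    out = feat.copy()
    for c in top:
        if rng.random() > keep_prob:         # stochastically erase channel
            out[c] = 0.0
    return out

def sagd(feat, thresh=0.7):
    """Spatial attention guided dropblock (sketch).

    Builds a spatial attention map by averaging over channels (an
    assumption), then zeroes the contiguous region whose normalized
    attention exceeds `thresh` -- i.e., the most discriminative part.
    """
    attn = feat.mean(axis=0)                                   # (H, W)
    norm = (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)
    mask = (norm < thresh).astype(feat.dtype)  # 0 where most discriminative
    return feat * mask[None, :, :]
```

In a training loop one would typically apply these modules to intermediate CNN feature maps; erasing the peak-attention region drives the classifier to discover complementary object parts, which is the mechanism the abstract attributes to DGDM.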