Paper Title

Deep Unsupervised Image Anomaly Detection: An Information Theoretic Framework

Paper Authors

Fei Ye, Huangjie Zheng, Chaoqin Huang, Ya Zhang

Paper Abstract

Surrogate-task-based methods have recently shown great promise for unsupervised image anomaly detection. However, there is no guarantee that a surrogate task shares a consistent optimization direction with anomaly detection. In this paper, we return to a direct, information-theoretic objective function for anomaly detection, which maximizes the distance between normal and anomalous data in terms of the joint distribution of images and their representations. Unfortunately, this objective function is not directly optimizable in the unsupervised setting, where no anomalous data are provided during training. Through mathematical analysis of the above objective function, we decompose it into four components. To enable optimization in an unsupervised fashion, we show that, under the assumption that the distributions of normal and anomalous data are separable in the latent space, its lower bound can be viewed as a function that weights the trade-off between mutual information and entropy. This objective function explains why surrogate-task-based methods are effective for anomaly detection and further points out potential directions for improvement. Based on this objective function, we introduce a novel information-theoretic framework for unsupervised image anomaly detection. Extensive experiments demonstrate that the proposed framework significantly outperforms several state-of-the-art methods on multiple benchmark datasets.
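The abstract states the objective only in words; as a rough illustrative sketch (not the paper's exact formulation), the idea can be written as follows, where X denotes images, Z their latent representations, and p_n, p_a the joint distributions of normal and anomalous data. The choice of divergence D, the weighting coefficient lambda, and the specific lower-bound form below are assumptions for illustration only:

\[
\max_{\theta}\; D\big(\, p_n(X, Z) \,\|\, p_a(X, Z) \,\big)
\quad \text{(direct objective: separate normal and anomalous joints)}
\]
\[
\max_{\theta}\; I(X; Z) \;-\; \lambda\, H(Z)
\quad \text{(assumed surrogate lower bound: mutual information vs. entropy trade-off)}
\]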
