Paper Title
Self-Supervised Variational Auto-Encoders
Paper Authors
Paper Abstract
Density estimation, compression, and data generation are crucial tasks in artificial intelligence. Variational Auto-Encoders (VAEs) constitute a single framework to achieve these goals. Here, we present a novel class of generative models, called self-supervised Variational Auto-Encoders (selfVAEs), that utilize deterministic and discrete variational posteriors. This class of models allows us to perform both conditional and unconditional sampling while simplifying the objective function. First, we use a single self-supervised transformation as a latent variable, where the transformation is either downscaling or edge detection. Next, we consider a hierarchical architecture with multiple transformations, and we show its benefits over the standard VAE. The flexibility of selfVAE in data reconstruction finds a particularly interesting use case in data compression, where we can trade off memory for better data quality and vice versa. We present the performance of our approach on three benchmark image datasets (CIFAR-10, Imagenette64, and CelebA).
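The abstract's key idea is that a deterministic, self-supervised transformation of the input (e.g., downscaling) can itself serve as a latent variable, so the "posterior" over it is a delta function rather than a learned distribution. A minimal sketch, assuming average pooling as the downscaling transform (the function name `downscale` and the pooling choice are illustrative, not the paper's exact operator):

```python
import numpy as np

def downscale(x, factor=2):
    """Deterministic self-supervised transform y = d(x): average pooling.

    In a selfVAE-style model, y = d(x) acts as a latent variable whose
    variational posterior q(y|x) is a delta at d(x), so no encoder
    network is needed for this level; the generative side only has to
    model p(x|y) and p(y)."""
    h, w = x.shape
    return x.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Toy 4x4 "image": the 2x2 downscaled version is the conditioning latent.
x = np.arange(16, dtype=float).reshape(4, 4)
y = downscale(x)
print(y.shape)  # (2, 2)
print(y)        # block means of x
```

Because `y` is computed, not sampled, conditional generation (reconstruct `x` given a coarse `y`) and unconditional generation (first sample `y` from `p(y)`, then `x` from `p(x|y)`) both follow naturally, which is the sampling flexibility the abstract refers to.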