Paper Title

Autoencoding Slow Representations for Semi-supervised Data Efficient Regression

Paper Authors

Struckmeier, Oliver; Tiwari, Kshitij; Kyrki, Ville

Paper Abstract

The slowness principle is a concept inspired by the visual cortex of the brain. It postulates that the underlying generative factors of a quickly varying sensory signal change on a slower time scale. Unsupervised learning of intermediate representations from abundant unlabeled sensory data can be leveraged to perform data-efficient supervised downstream regression. In this paper, we propose a general formulation of slowness for unsupervised representation learning that adds a slowness regularization term to the evidence lower bound of the beta-VAE to encourage temporal similarity in observation and latent space. Within this framework, we compare existing slowness regularization terms, such as the L1 and L2 losses used in existing end-to-end methods and the SlowVAE, and propose a new term based on Brownian motion. We empirically evaluate these slowness regularization terms with respect to their downstream task performance and data efficiency. We find that slow representations lead to equal or better downstream task performance and data efficiency across different experiment domains when compared to representations without slowness regularization. Finally, we discuss how the Fréchet Inception Distance (FID), traditionally used to determine the generative capabilities of GANs, can serve as a measure to predict the performance of a pre-trained autoencoder model on a supervised downstream task and to accelerate hyperparameter search.
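To make the formulation concrete, below is a minimal PyTorch sketch of a beta-VAE objective augmented with a slowness penalty on the latent codes of two consecutive observations. The function name, the L1/L2 penalty on the latent means, and the weights `beta` and `gamma` are illustrative assumptions rather than the paper's exact loss; the SlowVAE and Brownian-motion terms discussed in the abstract would take the place of the slowness term in the same way.

```python
import torch
import torch.nn.functional as F

def slow_beta_vae_loss(x_t, x_t1, recon_t, recon_t1,
                       mu_t, logvar_t, mu_t1, logvar_t1,
                       beta=4.0, gamma=1.0, slowness="l2"):
    """Sketch (not the paper's exact loss): beta-VAE objective for two
    consecutive observations plus a slowness penalty on their latent means."""
    # Reconstruction error for both frames.
    recon = (F.mse_loss(recon_t, x_t, reduction="sum")
             + F.mse_loss(recon_t1, x_t1, reduction="sum"))
    # Beta-weighted KL divergence of each latent posterior against a unit Gaussian.
    kl_t = -0.5 * torch.sum(1 + logvar_t - mu_t.pow(2) - logvar_t.exp())
    kl_t1 = -0.5 * torch.sum(1 + logvar_t1 - mu_t1.pow(2) - logvar_t1.exp())
    # Slowness regularization: penalize change between consecutive latent means.
    diff = mu_t1 - mu_t
    slow = diff.abs().sum() if slowness == "l1" else diff.pow(2).sum()
    return recon + beta * (kl_t + kl_t1) + gamma * slow
```

In such a setup, a training step would encode two consecutive frames, decode both, and minimize this loss; `gamma` alone controls how strongly temporal similarity in latent space is enforced.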
