Paper Title
Continual Variational Autoencoder Learning via Online Cooperative Memorization
Paper Authors
Paper Abstract
Due to their inference, data representation and reconstruction properties, Variational Autoencoders (VAEs) have been successfully used in continual learning classification tasks. However, their ability to generate images with specifications corresponding to the classes and databases learned during Continual Learning (CL) is not well understood, and catastrophic forgetting remains a significant challenge. In this paper, we first analyze the forgetting behaviour of VAEs by developing a new theoretical framework that formulates CL as a dynamic optimal transport problem. This framework establishes approximate bounds on the data likelihood without requiring task information and explains how prior knowledge is lost during the training process. We then propose a novel memory buffering approach, namely the Online Cooperative Memorization (OCM) framework, which consists of a Short-Term Memory (STM) that continually stores recent samples to provide future information for the model, and a Long-Term Memory (LTM) aiming to preserve a wide diversity of samples. The proposed OCM transfers certain samples from the STM to the LTM according to an information diversity selection criterion, without requiring any supervision signals. The OCM framework is then combined with a dynamic VAE expansion mixture network to further enhance its performance.
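As background for the likelihood bounds mentioned in the abstract (standard VAE material, not a result reproduced from this paper): a VAE bounds the data log-likelihood from below with the evidence lower bound,

    \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right),

where q_\phi(z|x) is the encoder and p_\theta(x|z) the decoder. The paper's contribution is to extend this kind of bound to the task-free CL setting through its optimal transport formulation; that CL-specific bound is not reproduced here.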
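To make the STM/LTM mechanism concrete, below is a minimal, illustrative Python sketch of a two-tier memory buffer. It is not the paper's implementation: the class name, buffer sizes, and the Euclidean-distance novelty test standing in for the paper's information diversity selection criterion are all assumptions made for illustration. Only the overall structure follows the abstract: recent samples enter an STM, and sufficiently diverse ones migrate to a bounded LTM without any labels or task identifiers.

import numpy as np

class OnlineCooperativeMemory:
    # Toy two-tier buffer: recent samples enter the STM; when the STM fills,
    # samples that look sufficiently novel migrate to a bounded LTM.
    # The distance-based novelty test is a hypothetical stand-in for the
    # paper's information diversity selection criterion.
    def __init__(self, stm_size=64, ltm_size=512, novelty_threshold=1.0):
        self.stm_size = stm_size
        self.ltm_size = ltm_size
        self.novelty_threshold = novelty_threshold
        self.stm = []  # short-term memory: most recent raw samples
        self.ltm = []  # long-term memory: diverse retained samples

    def _novelty(self, x):
        # Novelty proxy: distance to the nearest sample already in the LTM.
        if not self.ltm:
            return np.inf
        return min(np.linalg.norm(x - m) for m in self.ltm)

    def _evict_most_redundant(self):
        # Drop the LTM sample closest to its own nearest neighbour,
        # i.e. the one contributing least to diversity (assumed policy).
        arr = np.stack(self.ltm)
        dist = np.linalg.norm(arr[:, None, :] - arr[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)
        self.ltm.pop(int(dist.min(axis=1).argmin()))

    def add(self, x):
        # Stream interface: no labels or task boundaries are ever used.
        self.stm.append(np.asarray(x, dtype=float))
        if len(self.stm) >= self.stm_size:
            self._transfer()

    def _transfer(self):
        # Move only samples sufficiently different from what the LTM holds.
        for x in self.stm:
            if self._novelty(x) > self.novelty_threshold:
                if len(self.ltm) >= self.ltm_size:
                    self._evict_most_redundant()
                self.ltm.append(x)
        self.stm.clear()

# Usage: stream samples from two unlabeled "tasks"; the LTM ends up holding
# a bounded, diverse subset spanning both.
rng = np.random.default_rng(0)
mem = OnlineCooperativeMemory(stm_size=32, ltm_size=128, novelty_threshold=2.0)
for task in range(2):
    for _ in range(500):
        mem.add(rng.normal(loc=3.0 * task, scale=1.0, size=16))
print(len(mem.ltm))

In the actual OCM, the selection step would be driven by the information-theoretic diversity measure defined in the paper, and the retained LTM samples would be replayed when training the dynamic VAE expansion mixture network.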