Paper Title
Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data
Paper Authors
Paper Abstract
Deep neural networks (DNNs) suffer from catastrophic forgetting when learning incrementally, which greatly limits their applications. Although maintaining a handful of samples (called `exemplars`) of each task can alleviate forgetting to some extent, existing methods are still limited by the small number of exemplars: these exemplars are too few to carry enough task-specific knowledge, so forgetting persists. To overcome this problem, we propose to `imagine` diverse counterparts of given exemplars by referring to the abundant semantically-irrelevant information in unlabeled data. Specifically, we develop a learnable feature generator that diversifies exemplars by adaptively generating diverse counterparts of them, based on semantic information from the exemplars and semantically-irrelevant information from unlabeled data. We introduce semantic contrastive learning to enforce that the generated samples are semantically consistent with the exemplars, and perform semantic-decoupling contrastive learning to encourage diversity among the generated samples. The diverse generated samples effectively prevent the DNN from forgetting when learning new tasks. Our method does not incur any extra inference cost and outperforms state-of-the-art methods on two benchmarks, CIFAR-100 and ImageNet-Subset, by a clear margin.
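The abstract describes three components: a learnable feature generator that fuses exemplar semantics with semantically-irrelevant information from unlabeled data, a semantic contrastive loss for consistency, and a semantic-decoupling contrastive loss for diversity. The PyTorch sketch below is a rough, hypothetical instantiation of these ideas for illustration only; the module shapes, loss forms, and names (`FeatureGenerator`, `semantic_contrastive_loss`, `semantic_decoupling_contrastive_loss`) are assumptions and not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureGenerator(nn.Module):
    """Hypothetical sketch of the learnable feature generator: it fuses
    semantic features of an exemplar with semantically-irrelevant features
    from an unlabeled sample to 'imagine' a diverse counterpart of the
    exemplar. Layer sizes are illustrative assumptions."""

    def __init__(self, dim=512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
        )

    def forward(self, exemplar_feat, unlabeled_feat):
        # Concatenate semantic (exemplar) and semantically-irrelevant
        # (unlabeled) information, then map back to the feature space.
        return self.fuse(torch.cat([exemplar_feat, unlabeled_feat], dim=-1))


def semantic_contrastive_loss(gen_feat, exemplar_feat, temperature=0.1):
    """Assumed InfoNCE-style form: pull each generated sample toward its
    source exemplar (semantic consistency) and push it away from the other
    exemplars in the batch."""
    gen = F.normalize(gen_feat, dim=-1)
    ex = F.normalize(exemplar_feat, dim=-1)
    logits = gen @ ex.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(gen.size(0), device=gen.device)
    return F.cross_entropy(logits, targets)


def semantic_decoupling_contrastive_loss(gen_feat, temperature=0.1):
    """Assumed diversity proxy: penalize high pairwise similarity between
    distinct generated samples so they spread out in feature space."""
    gen = F.normalize(gen_feat, dim=-1)
    sim = gen @ gen.t() / temperature
    # Mask self-similarity; average the off-diagonal similarities.
    mask = ~torch.eye(gen.size(0), dtype=torch.bool, device=gen.device)
    return sim[mask].mean()


# Example usage with random features (batch of 8, dim 512; purely illustrative):
gen_net = FeatureGenerator(dim=512)
ex = torch.randn(8, 512)     # features of exemplars from old tasks
un = torch.randn(8, 512)     # features of randomly drawn unlabeled samples
imagined = gen_net(ex, un)   # 'imagined' diverse counterparts of the exemplars
loss = semantic_contrastive_loss(imagined, ex) \
     + semantic_decoupling_contrastive_loss(imagined)
```

Under this reading, the generated features would be replayed alongside the stored exemplars when learning a new task; since the generator operates only at training time, inference cost is unchanged, consistent with the abstract's claim.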