Paper Title
Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning
Paper Authors
Paper Abstract
Most standard learning approaches lead to fragile models which are prone to drift when sequentially trained on samples of a different nature - the well-known "catastrophic forgetting" issue. In particular, when a model consecutively learns from different visual domains, it tends to forget the past domains in favor of the most recent ones. In this context, we show that one way to learn models that are inherently more robust against forgetting is domain randomization - for vision tasks, randomizing the current domain's distribution with heavy image manipulations. Building on this result, we devise a meta-learning strategy where a regularizer explicitly penalizes any loss associated with transferring the model from the current domain to different "auxiliary" meta-domains, while also easing adaptation to them. Such meta-domains are also generated through randomized image manipulations. We empirically demonstrate in a variety of experiments - spanning from classification to semantic segmentation - that our approach results in models that are less prone to catastrophic forgetting when transferred to new domains.
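As a rough illustration of the idea described in the abstract (not the authors' implementation), the meta-learning regularizer can be sketched with NumPy on a toy linear-regression "domain". Here `randomize` is a hypothetical stand-in for the paper's heavy image manipulations, and the outer update uses a first-order approximation of the meta-gradient: the model takes an inner adaptation step on a randomized auxiliary meta-domain, and the regularizer penalizes the current-domain loss evaluated at those adapted weights, discouraging forgetting induced by the transfer.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize(x):
    # Toy proxy for "heavy image manipulations": random gain/offset
    # jitter applied to the inputs (an assumption for this sketch,
    # not the transformations actually used in the paper).
    gain = rng.uniform(0.5, 1.5)
    bias = rng.uniform(-0.3, 0.3)
    return gain * x + bias

def loss(w, x, y):
    # Mean squared error of a linear model.
    return float(np.mean((x @ w - y) ** 2))

def grad(w, x, y):
    # Gradient of the MSE loss with respect to the weights.
    return 2.0 * x.T @ (x @ w - y) / len(x)

# Toy "current domain": y = x @ w_true + noise.
w_true = np.array([1.0, -2.0])
x = rng.normal(size=(64, 2))
y = x @ w_true + 0.1 * rng.normal(size=64)

w = np.zeros(2)
lr, inner_lr, lam = 0.1, 0.05, 0.5  # hypothetical hyperparameters

for _ in range(300):
    # Auxiliary meta-domain: a randomized view of the current batch.
    x_meta = randomize(x)
    # Inner step: adapt the model to the meta-domain.
    w_adapted = w - inner_lr * grad(w, x_meta, y)
    # Outer update: current-domain loss plus a first-order approximation
    # of the meta-regularizer (current-domain loss after the transfer step).
    g = grad(w, x, y) + lam * grad(w_adapted, x, y)
    w -= lr * g
```

After training, `w` should sit close to `w_true`, with the regularizer keeping the current-domain loss low even though every update also adapts toward a randomized meta-domain.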