Paper Title
New Insights on Reducing Abrupt Representation Change in Online Continual Learning
Paper Authors
Paper Abstract
In the online continual learning paradigm, agents must learn from a changing distribution while respecting memory and compute constraints. Experience Replay (ER), where a small subset of past data is stored and replayed alongside new data, has emerged as a simple and effective learning strategy. In this work, we focus on the change in representations of observed data that arises when previously unobserved classes appear in the incoming data stream and must be distinguished from previous ones. We shed new light on this question by showing that applying ER causes the newly added classes' representations to overlap significantly with those of the previous classes, leading to highly disruptive parameter updates. Based on this empirical analysis, we propose a new method which mitigates this issue by shielding the learned representations from drastic adaptation to accommodate new classes. We show that using an asymmetric update rule pushes new classes to adapt to the older ones (rather than the reverse), which is more effective, especially at task boundaries, where much of the forgetting typically occurs. Empirical results show significant gains over strong baselines on standard continual learning benchmarks.
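To make the asymmetric update rule concrete, below is a minimal PyTorch sketch of one way to implement it, assuming a classifier `model`, an incoming batch (`x_new`, `y_new`), and a pre-sampled replay batch (`x_buf`, `y_buf`). The function name `asymmetric_replay_step` and the batch handling are illustrative assumptions, not the paper's reference implementation. Incoming samples compute cross-entropy only over the classes present in the incoming batch, while replayed samples use the full cross-entropy over all seen classes.

```python
# Minimal sketch of an ER step with an asymmetric cross-entropy update.
# Hypothetical helper for illustration; not the paper's reference code.
import torch
import torch.nn.functional as F

def asymmetric_replay_step(model, optimizer, x_new, y_new, x_buf, y_buf, n_classes):
    """One update: incoming samples compete only among the classes present
    in the incoming batch; replayed samples use all seen classes."""
    optimizer.zero_grad()

    # Incoming data: mask out logits of classes absent from the incoming
    # batch, so the update adapts new classes to the old ones, not the reverse.
    logits_new = model(x_new)                      # shape (B, n_classes)
    mask = torch.full((n_classes,), float("-inf"), device=logits_new.device)
    mask[y_new.unique()] = 0.0                     # keep only present classes
    loss_new = F.cross_entropy(logits_new + mask, y_new)

    # Replayed data: standard cross-entropy over all seen classes.
    loss_buf = F.cross_entropy(model(x_buf), y_buf)

    (loss_new + loss_buf).backward()
    optimizer.step()
    return loss_new.item(), loss_buf.item()
```

In this sketch, the masked logits receive zero probability mass, so incoming samples contribute no gradient to the old classes' output weights; new classes are thereby pulled toward a configuration compatible with the old ones rather than displacing them.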