Paper Title
Dark Experience for General Continual Learning: a Strong, Simple Baseline
Paper Authors
Paper Abstract
Continual Learning has inspired a plethora of approaches and evaluation settings; however, the majority of them overlooks the properties of a practical scenario, where the data stream cannot be shaped as a sequence of tasks and offline training is not viable. We work towards General Continual Learning (GCL), where task boundaries blur and the domain and class distributions shift either gradually or suddenly. We address it through mixing rehearsal with knowledge distillation and regularization; our simple baseline, Dark Experience Replay, matches the network's logits sampled throughout the optimization trajectory, thus promoting consistency with its past. By conducting an extensive analysis on both standard benchmarks and a novel GCL evaluation setting (MNIST-360), we show that such a seemingly simple baseline outperforms consolidated approaches and leverages limited resources. We further explore the generalization capabilities of our objective, showing its regularization being beneficial beyond mere performance.
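The abstract describes the Dark Experience Replay objective as matching the network's current logits on buffered samples against the logits recorded when those samples were seen, on top of the usual loss on the incoming stream. A minimal NumPy sketch of that combined loss is below; the function and argument names are hypothetical illustrations, not the paper's implementation, and the trade-off weight `alpha` is an assumed hyperparameter.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def der_loss(stream_logits, stream_labels, buffer_logits_now,
             buffer_logits_past, alpha=0.5):
    """Sketch of a Dark Experience Replay style objective:
    cross-entropy on the current stream batch, plus an alpha-weighted
    MSE matching the network's current logits on buffer samples to the
    logits stored when those samples were sampled from the stream."""
    probs = softmax(stream_logits)
    n = stream_logits.shape[0]
    ce = -np.log(probs[np.arange(n), stream_labels] + 1e-12).mean()
    mse = ((buffer_logits_now - buffer_logits_past) ** 2).mean()
    return ce + alpha * mse
```

In this reading, the replay buffer stores (input, logits) pairs sampled throughout training, so the distillation targets come from many points along the optimization trajectory rather than from a single frozen teacher.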