Title
Domain-incremental Cardiac Image Segmentation with Style-oriented Replay and Domain-sensitive Feature Whitening
Authors
Abstract
Contemporary methods have shown promising results on cardiac image segmentation, but merely in static learning, i.e., optimizing the network once and for all, ignoring potential needs for model updating. In real-world scenarios, new data continues to be gathered from multiple institutions over time, and new demands keep growing to pursue more satisfying performance. The desired model should incrementally learn from each incoming dataset and progressively update its functionality as time goes by. As the datasets sequentially delivered from multiple sites are normally heterogeneous with domain discrepancy, each updated model should not catastrophically forget previously learned domains while generalizing well to newly arrived domains and even unseen domains. This is particularly challenging in medical scenarios, where accessing or storing past data is commonly not allowed due to data privacy. To this end, we propose a novel domain-incremental learning framework that first recovers past domain inputs and then regularly replays them during model optimization. In particular, we present a style-oriented replay module that enables structure-realistic and memory-efficient reproduction of past data, and then incorporate the replayed past data to jointly optimize the model with current data to alleviate catastrophic forgetting. During optimization, we additionally perform domain-sensitive feature whitening to suppress the model's dependence on features that are sensitive to domain changes (e.g., domain-distinctive style features), which assists domain-invariant feature exploration and gradually improves the generalization performance of the network. We have extensively evaluated our approach on the M&Ms Dataset under single-domain and compound-domain incremental learning settings, achieving improved performance over comparison approaches.
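The core intuition behind style-oriented replay can be illustrated with a minimal sketch: instead of storing past images, one stores only compact per-channel style statistics (mean and standard deviation) of each past domain and re-stylizes current images toward them, AdaIN-style, to produce structure-preserving pseudo-past samples. This is a simplified NumPy illustration of the general idea, not the authors' actual module; the function names and the choice of channel-wise mean/std as the "style" summary are assumptions for exposition.

```python
import numpy as np

def style_stats(x):
    """Per-channel mean/std over spatial dims of a (C, H, W) image:
    a compact, memory-efficient summary of a domain's 'style'."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    sigma = x.std(axis=(1, 2), keepdims=True) + 1e-6  # eps avoids divide-by-zero
    return mu, sigma

def replay_style(current, past_mu, past_sigma):
    """Re-normalize a current image toward stored past-domain statistics
    (AdaIN-style), yielding a pseudo-past sample that keeps the anatomical
    structure of `current` but carries the past domain's intensity style."""
    mu, sigma = style_stats(current)
    return (current - mu) / sigma * past_sigma + past_mu
```

The replayed sample can then be mixed into each training batch alongside current-domain data, so the segmentation network is jointly optimized on both and forgetting of the past domain is mitigated.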
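The domain-sensitive feature whitening idea can likewise be sketched: given a feature map, penalize the covariance of channels flagged as domain-sensitive so that their responses are pushed toward a whitened (decorrelated, unit-variance) form, discouraging the network from relying on them. This is an illustrative NumPy sketch under assumed names; how sensitive channels are identified (the `sensitive_mask` below) is left as an assumption and is not the paper's specific selection rule.

```python
import numpy as np

def channel_covariance(feat):
    """Channel-by-channel covariance of a (C, H, W) feature map."""
    c = feat.shape[0]
    f = feat.reshape(c, -1)
    f = f - f.mean(axis=1, keepdims=True)
    return f @ f.T / f.shape[1]

def whitening_loss(feat, sensitive_mask):
    """Penalize deviation of the channel covariance from identity, but only
    on entries selected by `sensitive_mask` (a (C, C) 0/1 matrix marking
    domain-sensitive channel pairs). Minimizing this loss whitens the
    selected channels, suppressing domain-distinctive style responses."""
    cov = channel_covariance(feat)
    target = np.eye(cov.shape[0])
    diff = (cov - target) * sensitive_mask
    return float(np.abs(diff).sum())
```

In training, such a term would be added to the segmentation loss so that only the masked, domain-sensitive channels are whitened, while domain-invariant channels remain free to encode anatomy.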