Paper Title

Refign: Align and Refine for Adaptation of Semantic Segmentation to Adverse Conditions

Paper Authors

David Bruggemann, Christos Sakaridis, Prune Truong, Luc Van Gool

Paper Abstract

Due to the scarcity of dense pixel-level semantic annotations for images recorded in adverse visual conditions, there has been a keen interest in unsupervised domain adaptation (UDA) for the semantic segmentation of such images. UDA adapts models trained on normal conditions to the target adverse-condition domains. Meanwhile, multiple datasets with driving scenes provide corresponding images of the same scenes across multiple conditions, which can serve as a form of weak supervision for domain adaptation. We propose Refign, a generic extension to self-training-based UDA methods which leverages these cross-domain correspondences. Refign consists of two steps: (1) aligning the normal-condition image to the corresponding adverse-condition image using an uncertainty-aware dense matching network, and (2) refining the adverse prediction with the normal prediction using an adaptive label correction mechanism. We design custom modules to streamline both steps and set the new state of the art for domain-adaptive semantic segmentation on several adverse-condition benchmarks, including ACDC and Dark Zurich. The approach introduces no extra training parameters, minimal computational overhead -- during training only -- and can be used as a drop-in extension to improve any given self-training-based UDA method. Code is available at https://github.com/brdav/refign.
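To make the two-step structure concrete, below is a minimal sketch (not the authors' implementation) of how Refign's align-and-refine steps could slot into a self-training UDA loop. The `seg_model` and `matcher` callables, the `warp_with_flow` helper, and the fixed `trust` blend are hypothetical placeholders: the paper's uncertainty-aware dense matching network and adaptive label correction mechanism are more involved, and the released code at https://github.com/brdav/refign is the reference.

```python
# Hedged sketch of Refign's two steps, assuming:
#   seg_model(img) -> segmentation logits (B, K, H, W)
#   matcher(img_a, img_b) -> dense flow (B, 2, H, W) in pixels and a
#                            per-pixel matching confidence (B, H, W) in [0, 1]
# The fixed-weight blend below stands in for the paper's adaptive label correction.
import torch
import torch.nn.functional as F


def warp_with_flow(src, flow):
    """Warp `src` (B, C, H, W) toward the adverse image using a dense flow (B, 2, H, W)."""
    b, _, h, w = src.shape
    # Base sampling grid in normalized [-1, 1] coordinates.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=src.device),
        torch.linspace(-1, 1, w, device=src.device),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).expand(b, -1, -1, -1)
    # Convert pixel-space flow to normalized offsets and resample.
    offset = torch.stack((flow[:, 0] / (w - 1) * 2, flow[:, 1] / (h - 1) * 2), dim=-1)
    return F.grid_sample(src, grid + offset, mode="bilinear", align_corners=True)


def refign_pseudo_label(seg_model, matcher, img_normal, img_adverse, trust=0.5):
    """Step 1: align the normal-condition prediction to the adverse-condition image.
    Step 2: refine the adverse prediction with the aligned normal prediction."""
    with torch.no_grad():
        pred_adverse = seg_model(img_adverse).softmax(dim=1)
        pred_normal = seg_model(img_normal).softmax(dim=1)
        # (1) Align: dense matching gives a flow plus a matching confidence.
        flow, confidence = matcher(img_normal, img_adverse)
        aligned_normal = warp_with_flow(pred_normal, flow)
        # (2) Refine: blend predictions where the match is confident
        # (a simplification of the paper's adaptive label correction).
        w = trust * confidence.unsqueeze(1)
        refined = (1 - w) * pred_adverse + w * aligned_normal
        return refined.argmax(dim=1)  # pseudo-labels for the self-training loss
```

The returned pseudo-labels would then supervise the adverse-condition images in whatever self-training UDA method Refign extends, which is why the overhead is confined to training.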
