Paper Title
Increasing Model Generalizability for Unsupervised Domain Adaptation
Paper Authors
Paper Abstract
A dominant approach for addressing unsupervised domain adaptation (UDA) is to map data points from the source and the target domains into an embedding space, modeled as the output space of a shared deep encoder. The encoder is trained to make the embedding space domain-agnostic so that a source-trained classifier generalizes to the target domain. A secondary mechanism for further improving UDA performance is to make the source domain distribution more compact, which improves model generalizability. We demonstrate that increasing the interclass margins in the embedding space can help to develop a UDA algorithm with improved performance. We estimate the internally learned multi-modal distribution for the source domain, learned as a result of pretraining, and use it to increase the interclass separation in the source domain to reduce the effect of domain shift. We demonstrate that our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets and compares favorably against existing methods.
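To make the two core steps concrete, the following is a minimal numpy sketch, not the authors' implementation: it fits one Gaussian mode per class to the source embeddings (a simple stand-in for the internally learned multi-modal distribution) and defines a hinge-style penalty that is minimized by pushing class means at least a chosen margin apart. The function names, the margin value, and the per-class Gaussian assumption are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def estimate_class_gaussians(embeddings, labels):
    """Fit one Gaussian mode per class: (mean, covariance) of the
    embeddings belonging to that class. A small ridge term keeps the
    covariance well-conditioned."""
    modes = {}
    for c in np.unique(labels):
        pts = embeddings[labels == c]
        cov = np.cov(pts, rowvar=False) + 1e-6 * np.eye(pts.shape[1])
        modes[c] = (pts.mean(axis=0), cov)
    return modes

def interclass_margin_penalty(modes, margin=5.0):
    """Hinge penalty over all pairs of class means: zero once every
    pair is at least `margin` apart, positive otherwise. Minimizing
    this term (e.g. as an auxiliary loss) increases interclass
    separation in the embedding space."""
    means = np.stack([mean for mean, _ in modes.values()])
    penalty = 0.0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            dist = np.linalg.norm(means[i] - means[j])
            penalty += max(0.0, margin - dist)
    return penalty

# Illustrative usage: two well-separated synthetic classes incur no
# penalty; two nearby classes do.
rng = np.random.default_rng(0)
far = np.concatenate([rng.normal(0, 0.1, (20, 2)),
                      rng.normal(10, 0.1, (20, 2))])
near = np.concatenate([rng.normal(0, 0.1, (20, 2)),
                       rng.normal(1, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
p_far = interclass_margin_penalty(estimate_class_gaussians(far, labels))
p_near = interclass_margin_penalty(estimate_class_gaussians(near, labels))
```

In a full UDA pipeline this penalty would be added to the classification loss while training the shared encoder; the sketch only shows the geometry of the margin term itself.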