Paper Title
Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer
Paper Authors
Paper Abstract
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain. Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and cannot be shared due to privacy concerns. This paper aims to tackle a realistic setting in which only a classification model trained on the source data is available, rather than the source data itself. To effectively utilize the source model for adaptation, we propose a novel approach called Source HypOthesis Transfer (SHOT), which learns the feature extraction module for the target domain by fitting the target data features to the frozen source classification module (representing the classification hypothesis). Specifically, SHOT exploits both information maximization and self-supervised learning to learn the feature extraction module, ensuring that the target features are implicitly aligned with the features of the unseen source data via the same hypothesis. Furthermore, we propose a new labeling transfer strategy, which separates the target data into two splits based on the confidence of predictions (labeling information), and then employs semi-supervised learning to improve the accuracy of less-confident predictions in the target domain. We denote labeling transfer as SHOT++ when the predictions are obtained by SHOT. Extensive experiments on both digit classification and object recognition tasks show that SHOT and SHOT++ achieve results surpassing or comparable to the state of the art, demonstrating the effectiveness of our approaches for various visual domain adaptation problems. Code is available at \url{https://github.com/tim-learn/SHOT-plus}.
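The information-maximization idea mentioned in the abstract can be illustrated with a minimal NumPy sketch: the objective rewards confident per-sample predictions (low conditional entropy) while keeping predictions diverse across classes (high marginal entropy). This is an illustrative reconstruction, not the released implementation; the function names and the `probs` input (softmax outputs of the frozen source classifier on target data) are assumptions.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a probability distribution along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def information_maximization_loss(probs):
    """probs: (N, C) softmax outputs of the frozen source classifier.

    Minimizing this loss pushes each prediction to be confident
    (low per-sample entropy) while the batch-averaged prediction
    stays spread over all classes (high marginal entropy).
    """
    cond_ent = entropy(probs).mean()        # confidence term
    marginal = probs.mean(axis=0)           # average prediction over the batch
    diversity = entropy(marginal)           # diversity term
    return cond_ent - diversity

# Toy check: confident yet diverse predictions score lower (better)
# than uniform, uninformative ones.
confident = np.array([[0.98, 0.01, 0.01],
                      [0.01, 0.98, 0.01],
                      [0.01, 0.01, 0.98]])
uniform = np.full((3, 3), 1.0 / 3.0)
print(information_maximization_loss(confident) < information_maximization_loss(uniform))
```

In the paper's setting this loss would be back-propagated only into the target feature extractor, with the source classification module kept frozen, so that target features are pulled toward regions where the source hypothesis is already confident.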