Paper Title
Consensus Learning from Heterogeneous Objectives for One-Class Collaborative Filtering
Paper Authors
Paper Abstract
Over the past decades, many learning objectives for One-Class Collaborative Filtering (OCCF) have been studied based on a variety of underlying probabilistic models. From our analysis, we observe that models trained with different OCCF objectives capture distinct aspects of user-item relationships, which in turn produces complementary recommendations. This paper proposes a novel OCCF framework, named ConCF, that exploits the complementarity of heterogeneous objectives throughout the training process, yielding a more generalizable model. ConCF constructs a multi-branch variant of a given target model by adding auxiliary heads, each of which is trained with a different objective. It then generates a consensus by consolidating the heads' various views and guides the heads based on that consensus. The heads evolve collaboratively based on their complementarity throughout training, which in turn produces an increasingly accurate consensus over the iterations. After training, we convert the multi-branch architecture back to the original target model by removing the auxiliary heads, so deployment incurs no extra inference cost. Our extensive experiments on real-world datasets demonstrate that ConCF significantly improves model generalization by exploiting the complementarity of heterogeneous objectives.
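To make the described architecture concrete, below is a minimal sketch (not the authors' code) of a multi-branch OCCF model: a shared user/item embedding backbone with several scoring heads, each trained with a different objective (BCE and BPR are used here only as illustrative stand-ins for the paper's heterogeneous objectives), plus a simple consensus signal distilled back into every head. All class names, the choice of objectives, and the averaging-based consensus rule are assumptions for illustration, not the paper's exact method.

```python
# Sketch of a multi-branch OCCF model with consensus-guided training.
# Assumptions: PyTorch, a shared embedding backbone, per-objective heads,
# and mean-of-heads pairwise preferences as the (simplified) consensus.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiBranchOCCF(nn.Module):
    def __init__(self, n_users, n_items, dim=64, n_heads=3):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)   # shared backbone
        self.item_emb = nn.Embedding(n_items, dim)
        # One small scoring head per objective; heads[0] plays the "target" role.
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))
             for _ in range(n_heads)]
        )

    def scores(self, users, items):
        u = self.user_emb(users)
        i = self.item_emb(items)
        x = torch.cat([u, i], dim=-1)
        # Returns a (n_heads, batch) tensor: one score per head per interaction.
        return torch.stack([h(x).squeeze(-1) for h in self.heads])


def training_step(model, users, pos_items, neg_items, alpha=0.5):
    """One step: heterogeneous per-head losses plus consensus distillation."""
    pos = model.scores(users, pos_items)   # (n_heads, batch)
    neg = model.scores(users, neg_items)

    # Heterogeneous objectives: head 0 uses pointwise BCE, the rest use pairwise BPR.
    losses = [F.binary_cross_entropy_with_logits(pos[0], torch.ones_like(pos[0]))
              + F.binary_cross_entropy_with_logits(neg[0], torch.zeros_like(neg[0]))]
    for k in range(1, pos.size(0)):
        losses.append(-F.logsigmoid(pos[k] - neg[k]).mean())

    # Consensus: average of the heads' (detached) pairwise preferences,
    # distilled back into every head so the heads evolve collaboratively.
    consensus = torch.sigmoid(pos - neg).mean(dim=0).detach()
    distill = sum(F.binary_cross_entropy_with_logits(pos[k] - neg[k], consensus)
                  for k in range(pos.size(0)))

    return sum(losses) + alpha * distill


# After training, only heads[0] (the original target objective) is kept for
# inference, so the auxiliary heads add no deployment cost.
```

In this sketch, the consensus is a soft target over pairwise preferences; the paper's actual consolidation of the heads' ranking views is more elaborate, but the structure (shared backbone, auxiliary heads removed after training) matches the abstract's description.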