Paper Title


Personalized Showcases: Generating Multi-Modal Explanations for Recommendations

Paper Authors

An Yan, Zhankui He, Jiacheng Li, Tianyang Zhang, Julian McAuley

Paper Abstract


Existing explanation models generate only text for recommendations but still struggle to produce diverse content. In this paper, to further enrich explanations, we propose a new task named personalized showcases, in which we provide both textual and visual information to explain our recommendations. Specifically, we first select a personalized image set that is the most relevant to a user's interest toward a recommended item. Then, natural language explanations are generated accordingly given our selected images. For this new task, we collect a large-scale dataset from Google Local (i.e., maps) and construct a high-quality subset for generating multi-modal explanations. We propose a personalized multi-modal framework which can generate diverse and visually-aligned explanations via contrastive learning. Experiments show that our framework benefits from different modalities as inputs, and is able to produce more diverse and expressive explanations compared to previous methods on a variety of evaluation metrics.
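The abstract describes a two-stage pipeline: first select the images most relevant to a user's interests, then generate a textual explanation conditioned on those images. The sketch below illustrates that flow only at a schematic level, assuming user and image embeddings are already available. The functions `select_personalized_images` and `generate_explanation` are hypothetical placeholders (cosine-similarity ranking and a stub decoder) and are not the components proposed in the paper.

```python
# Schematic sketch of a personalized-showcase pipeline:
# (1) pick the images most relevant to a user's interests,
# (2) generate an explanation grounded in the chosen images.
# All components here are illustrative stand-ins, not the paper's models.
import numpy as np


def cosine_sim(user_vec: np.ndarray, image_vecs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one user vector and a matrix of image vectors."""
    u = user_vec / (np.linalg.norm(user_vec) + 1e-8)
    imgs = image_vecs / (np.linalg.norm(image_vecs, axis=1, keepdims=True) + 1e-8)
    return imgs @ u


def select_personalized_images(user_vec: np.ndarray,
                               image_vecs: np.ndarray,
                               k: int = 3) -> np.ndarray:
    """Return indices of the k candidate images most relevant to the user."""
    scores = cosine_sim(user_vec, image_vecs)
    return np.argsort(-scores)[:k]


def generate_explanation(image_indices, item_name: str) -> str:
    """Placeholder for a multimodal decoder conditioned on the selected images."""
    return (f"Recommended {item_name}: explanation grounded in images "
            f"{list(map(int, image_indices))}.")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    user_vec = rng.normal(size=64)          # e.g., pooled embedding of the user's past reviews/photos
    image_vecs = rng.normal(size=(10, 64))  # embeddings of the item's candidate images
    top = select_personalized_images(user_vec, image_vecs, k=3)
    print(generate_explanation(top, "a recommended restaurant"))
```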
