Paper Title


Scalable Learning and MAP Inference for Nonsymmetric Determinantal Point Processes

Authors

Mike Gartrell, Insu Han, Elvis Dohmatob, Jennifer Gillenwater, Victor-Emmanuel Brunel

Abstract


Determinantal point processes (DPPs) have attracted significant attention in machine learning for their ability to model subsets drawn from a large item collection. Recent work shows that nonsymmetric DPP (NDPP) kernels have significant advantages over symmetric kernels in terms of modeling power and predictive performance. However, for an item collection of size $M$, existing NDPP learning and inference algorithms require memory quadratic in $M$ and runtime cubic (for learning) or quadratic (for inference) in $M$, making them impractical for many typical subset selection tasks. In this work, we develop a learning algorithm with space and time requirements linear in $M$ by introducing a new NDPP kernel decomposition. We also derive a linear-complexity NDPP maximum a posteriori (MAP) inference algorithm that applies not only to our new kernel but also to that of prior work. Through evaluation on real-world datasets, we show that our algorithms scale significantly better, and can match the predictive performance of prior work.
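To make the abstract's setup concrete: a DPP assigns each subset $S$ of the $M$ items an unnormalized probability $\det(L_S)$, where $L_S$ is the principal submatrix of an $M \times M$ kernel indexed by $S$. The sketch below scores subsets under a low-rank nonsymmetric kernel built from a PSD symmetric part plus a skew-symmetric part; this particular decomposition ($L = VV^\top + BC^\top - CB^\top$) is illustrative and not necessarily the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K = 8, 3  # toy catalog size and low rank; the paper targets large M

# Illustrative low-rank nonsymmetric kernel (an assumption, not the
# paper's exact decomposition): a PSD symmetric part V V^T plus a
# skew-symmetric part B C^T - C B^T, so L itself is nonsymmetric.
V = rng.normal(size=(M, K))
B = rng.normal(size=(M, K))
C = rng.normal(size=(M, K))
L = V @ V.T + B @ C.T - C @ B.T

def subset_score(L, S):
    """Unnormalized DPP probability of subset S: det(L_S)."""
    idx = np.asarray(S)
    return np.linalg.det(L[np.ix_(idx, idx)])

# Because the symmetric part of L is PSD, every principal minor is
# nonnegative, so these scores are valid unnormalized probabilities.
score = subset_score(L, [0, 2, 5])
```

The skew-symmetric part lets $\det(L_S)$ capture positive correlations between items (e.g. complements bought together), which a symmetric kernel cannot; that is the modeling-power advantage the abstract refers to.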
