Paper Title

Set2Box: Similarity Preserving Representation Learning of Sets

Authors

Geon Lee, Chanyoung Park, Kijung Shin

Abstract

Sets have been used for modeling various types of objects (e.g., a document as the set of keywords in it and a customer as the set of the items that she has purchased). Measuring similarity (e.g., Jaccard Index) between sets has been a key building block of a wide range of applications, including plagiarism detection, recommendation, and graph compression. However, as sets have grown in numbers and sizes, the computational cost and storage required for set similarity computation have become substantial, and this has led to the development of hashing- and sketching-based solutions. In this work, we propose Set2Box, a learning-based approach for compressed representations of sets from which various similarity measures can be estimated accurately in constant time. The key idea is to represent sets as boxes to precisely capture overlaps of sets. Additionally, based on the proposed box quantization scheme, we design Set2Box+, which yields more concise yet more accurate box representations of sets. Through extensive experiments on 8 real-world datasets, we show that, compared to baseline approaches, Set2Box+ is (a) Accurate: achieving up to 40.8X smaller estimation error while requiring 60% fewer bits to encode sets, (b) Concise: yielding up to 96.8X more concise representations with similar estimation error, and (c) Versatile: enabling the estimation of four set-similarity measures from a single representation of each set.
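To make the key idea concrete, the sketch below is a minimal illustration (not the paper's actual implementation; the function names and the toy 3-dimensional boxes are assumptions made here) of how a Jaccard estimate can be read off two box embeddings in constant time: each set is embedded as an axis-aligned box given by lower and upper corner vectors, the volume of the box intersection stands in for the overlap size, and the union volume follows by inclusion-exclusion.

```python
import numpy as np

def box_volume(lo, hi):
    """Volume of an axis-aligned box given its lower/upper corner vectors."""
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def box_intersection(lo_a, hi_a, lo_b, hi_b):
    """Lower/upper corners of the intersection of two axis-aligned boxes."""
    return np.maximum(lo_a, lo_b), np.minimum(hi_a, hi_b)

def estimate_jaccard(box_a, box_b):
    """Estimate |A ∩ B| / |A ∪ B| from the boxes embedding sets A and B.

    Box volumes stand in for set sizes, so the cost is constant in the
    sizes of the underlying sets.  (Hypothetical sketch, not Set2Box code.)
    """
    (lo_a, hi_a), (lo_b, hi_b) = box_a, box_b
    inter_lo, inter_hi = box_intersection(lo_a, hi_a, lo_b, hi_b)
    vol_inter = box_volume(inter_lo, inter_hi)
    vol_union = box_volume(lo_a, hi_a) + box_volume(lo_b, hi_b) - vol_inter
    return vol_inter / vol_union if vol_union > 0 else 0.0

# Toy example with made-up 3-dimensional boxes (not learned embeddings).
box_a = (np.array([0.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0]))
box_b = (np.array([1.0, 1.0, 1.0]), np.array([3.0, 3.0, 3.0]))
print(estimate_jaccard(box_a, box_b))  # 1 / 15 ≈ 0.067
```

Other overlap-based measures (e.g., the Dice coefficient) can be computed from the same three volumes, which is what allows a single box representation per set to serve several similarity measures.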
