Paper Title

Improved Differentially Private Euclidean Distance Approximation

Authors

Stausholm, Nina Mesing

Abstract

This work shows how to privately and more accurately estimate the Euclidean distance between pairs of vectors. Input vectors $x$ and $y$ are mapped to differentially private sketches $x'$ and $y'$, from which one can estimate the distance between $x$ and $y$. Our estimator relies on the Sparser Johnson-Lindenstrauss constructions by Kane \& Nelson (Journal of the ACM 2014), which for any $0<α,β<1/2$ have optimal output dimension $k=Θ(α^{-2}\log(1/β))$ and sparsity $s=O(α^{-1}\log(1/β))$. We combine the constructions of Kane \& Nelson with either the Laplace or the Gaussian mechanism from the differential privacy literature, depending on the privacy parameters $\varepsilon$ and $δ$. We also propose a differentially private version of the Fast Johnson-Lindenstrauss Transform (FJLT) by Ailon \& Chazelle (SIAM Journal on Computing 2009), which trades variance for speed in certain parameter regimes. We answer an open question by Kenthapadi et al.~(Journal of Privacy and Confidentiality 2013) by analyzing the privacy and utility guarantees of an estimator for Euclidean distance that relies on Laplace rather than Gaussian noise. We prove that the Laplace mechanism yields lower variance than the Gaussian mechanism whenever $δ<β^{O(1/α)}$. Thus, our work improves on that of Kenthapadi et al.~by giving a more efficient estimator with lower variance for sufficiently small $δ$. Our sketch also achieves \emph{pure} differential privacy as a neat side effect of the Laplace mechanism, rather than the \emph{approximate} differential privacy guarantee of the Gaussian mechanism, which may not be sufficiently strong for some settings.
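To make the sketch-then-estimate pipeline described above concrete, here is a minimal numerical sketch. It is an illustration, not the paper's construction: it uses a dense Gaussian JL matrix in place of the Sparser JL construction, and the sensitivity bound `Delta` is a hypothetical placeholder (the paper derives the actual bound from the sparsity $s$ and the input domain). Each vector is projected to $k$ dimensions and perturbed with i.i.d. Laplace noise, giving pure $\varepsilon$-DP under the assumed sensitivity; the squared-distance estimate is then debiased by subtracting the expected noise contribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def private_sketch(v, proj, epsilon, sensitivity):
    # Project v and add i.i.d. Laplace noise; this is pure epsilon-DP
    # provided `sensitivity` bounds the sensitivity of proj @ v.
    k = proj.shape[0]
    return proj @ v + rng.laplace(scale=sensitivity / epsilon, size=k)

d, k = 1000, 64
epsilon = 1.0
Delta = 1.0  # hypothetical sensitivity bound (assumption, not from the paper)

# Dense Gaussian JL matrix, scaled so E[||P x||^2] = ||x||^2.
# (The paper uses the Sparser JL construction; dense is used here for brevity.)
P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

x = rng.normal(size=d)
y = x + 0.1 * rng.normal(size=d)

sx = private_sketch(x, P, epsilon, Delta)
sy = private_sketch(y, P, epsilon, Delta)

# Debiased estimate of ||x - y||^2: each sketch adds k Laplace variables
# of variance 2 * (Delta/epsilon)^2, so subtract that expected contribution.
noise_var = 2.0 * (Delta / epsilon) ** 2
est_sq = np.sum((sx - sy) ** 2) - 2 * k * noise_var
true_dist = np.linalg.norm(x - y)
```

Swapping the Laplace noise for appropriately calibrated Gaussian noise yields the $(\varepsilon,δ)$-DP variant; the abstract's claim is that for sufficiently small $δ$ the Laplace version has lower estimator variance.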
