Paper Title

Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions

Authors

Vadym Doroshenko, Badih Ghazi, Pritish Kamath, Ravi Kumar, Pasin Manurangsi

Abstract

The privacy loss distribution (PLD) provides a tight characterization of the privacy loss of a mechanism in the context of differential privacy (DP). Recent work has shown that PLD-based accounting allows for tighter $(\varepsilon, \delta)$-DP guarantees for many popular mechanisms compared to other known methods. A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support. We present a novel approach to this problem. Our approach supports both pessimistic estimation, which overestimates the hockey-stick divergence (i.e., $\delta$) for any value of $\varepsilon$, and optimistic estimation, which underestimates the hockey-stick divergence. Moreover, we show that our pessimistic estimate is the best possible among all pessimistic estimates. Experimental evaluation shows that our approach can work with much larger discretization intervals while keeping a similar error bound compared to previous approaches and yet give a better approximation than existing methods.
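To make the quantities in the abstract concrete, here is a minimal Python sketch (an illustration of the definitions, not the paper's algorithm): a discrete PLD is represented as a map from privacy-loss values to probabilities, $\delta(\varepsilon)$ is its hockey-stick divergence $\mathbb{E}_{y \sim \mathrm{PLD}}[(1 - e^{\varepsilon - y})_+]$, and rounding every loss up to a grid is the simple pessimistic-discretization baseline that the paper's "connect the dots" construction improves on. The randomized-response example and function names are illustrative choices, not from the paper.

```python
import math

def hockey_stick_divergence(pld, epsilon):
    """delta(eps) = E_{y ~ PLD}[(1 - e^{eps - y})_+] for a discrete PLD
    given as a dict {privacy_loss_value: probability}."""
    return sum(p * max(0.0, 1.0 - math.exp(epsilon - y))
               for y, p in pld.items())

def round_up_discretization(pld, grid_step):
    """Baseline pessimistic discretization (NOT the paper's method):
    rounding each loss up to the next grid multiple can only increase
    delta(eps), because (1 - e^{eps - y})_+ is nondecreasing in y."""
    out = {}
    for y, p in pld.items():
        g = math.ceil(y / grid_step) * grid_step
        out[g] = out.get(g, 0.0) + p
    return out

# Example: randomized response that reports the true bit w.p. 0.75.
# Its privacy loss is +ln(3) w.p. 0.75 and -ln(3) w.p. 0.25.
eps0 = math.log(3)
pld = {eps0: 0.75, -eps0: 0.25}

print(hockey_stick_divergence(pld, 0.0))    # ~0.5: the total variation distance
print(hockey_stick_divergence(pld, eps0))   # ~0.0: the mechanism is ln(3)-DP

# Coarse pessimistic discretization only drives delta upward:
coarse = round_up_discretization(pld, 0.5)
print(hockey_stick_divergence(coarse, 0.0))  # >= the exact value above
```

Both a pessimistic estimate (never underreporting $\delta$) and an optimistic one (never overreporting it) are useful: together they bracket the true privacy guarantee, and the gap between them measures the discretization error.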
