Title
Local Differential Privacy for Bayesian Optimization
Authors
Abstract
Motivated by the increasing concern about privacy in today's data-intensive online learning systems, we consider black-box optimization in the nonparametric Gaussian process setting with a local differential privacy (LDP) guarantee. Specifically, the rewards from each user are further corrupted to protect privacy, and the learner only has access to the corrupted rewards when minimizing regret. We first derive regret lower bounds that hold for any LDP mechanism and any learning algorithm. Then, we present three almost optimal algorithms based on the GP-UCB framework and the Laplace DP mechanism. Along the way, we also propose a new Bayesian optimization (BO) method (called MoMA-GP-UCB) based on median-of-means techniques and kernel approximations, which complements previous BO algorithms for heavy-tailed payoffs with reduced complexity. Further, empirical comparisons of different algorithms on both synthetic and real-world datasets highlight the superior performance of MoMA-GP-UCB in both private and non-private scenarios.
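To make the two mechanisms named in the abstract concrete, the sketch below illustrates (in Python) how a Laplace LDP mechanism can corrupt each user's reward before release, and how a median-of-means estimator aggregates such noisy observations robustly. This is a minimal illustration under assumed settings, not the paper's implementation: the function names (`privatize_reward`, `median_of_means`) and parameters (`epsilon`, `reward_bound`, `n_blocks`) are hypothetical, and rewards are assumed bounded in `[-reward_bound, reward_bound]`.

```python
import numpy as np

def privatize_reward(reward: float, epsilon: float, reward_bound: float) -> float:
    """Laplace mechanism applied locally: each user perturbs their own reward
    before sending it to the learner.

    For rewards bounded in [-reward_bound, reward_bound], the sensitivity is
    2 * reward_bound, so Laplace noise with scale 2 * reward_bound / epsilon
    yields an epsilon-LDP guarantee for the released value.
    """
    scale = 2.0 * reward_bound / epsilon
    return reward + np.random.laplace(loc=0.0, scale=scale)

def median_of_means(samples: np.ndarray, n_blocks: int) -> float:
    """Median-of-means estimator: split the samples into blocks, average each
    block, and return the median of the block means. This is robust to the
    heavy-tailed noise introduced by the Laplace corruption above."""
    blocks = np.array_split(samples, n_blocks)
    return float(np.median([block.mean() for block in blocks]))

# Example: 200 users each report the same underlying reward under 1-LDP,
# and the learner recovers a robust estimate from the corrupted reports.
noisy_rewards = np.array(
    [privatize_reward(0.7, epsilon=1.0, reward_bound=1.0) for _ in range(200)]
)
print(median_of_means(noisy_rewards, n_blocks=10))
```

In a bandit or BO loop, an estimate like this would feed the upper confidence bounds of a GP-UCB-style rule; the abstract's MoMA-GP-UCB additionally uses kernel approximations, which this sketch does not attempt to reproduce.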