Paper Title
Differentially Private ADMM Algorithms for Machine Learning
Paper Authors
Paper Abstract
In this paper, we study efficient differentially private alternating direction method of multipliers (ADMM) algorithms via gradient perturbation for many machine learning problems. For smooth convex loss functions with (non-)smooth regularization, we propose the first differentially private ADMM (DP-ADMM) algorithm with a performance guarantee of $(\epsilon,\delta)$-differential privacy ($(\epsilon,\delta)$-DP). From the viewpoint of theoretical analysis, we use the Gaussian mechanism and the conversion relationship between Rényi Differential Privacy (RDP) and DP to perform a comprehensive privacy analysis of our algorithm. We then establish a new criterion to prove the convergence of the proposed algorithms, including DP-ADMM, and we also provide a utility analysis of DP-ADMM. Moreover, we propose an accelerated DP-ADMM (DP-AccADMM) that uses Nesterov's acceleration technique. Finally, we conduct numerical experiments on many real-world datasets to show the privacy-utility tradeoff of the two proposed algorithms; the comparative analysis shows that DP-AccADMM converges faster and achieves better utility than DP-ADMM when the privacy budget $\epsilon$ is larger than a threshold.
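For reference, the standard RDP-to-DP conversion underpinning this kind of analysis (Mironov, 2017) states that if a mechanism satisfies $(\alpha,\rho)$-RDP, then for any $\delta \in (0,1)$ it also satisfies $(\epsilon,\delta)$-DP with
$$\epsilon \;=\; \rho + \frac{\log(1/\delta)}{\alpha - 1}.$$
In the paper's setting, the Gaussian noise scale is calibrated so that the accumulated RDP of all noisy gradient steps, converted through this formula, stays within the target $(\epsilon,\delta)$ budget.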
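To make the gradient-perturbation idea concrete, the following is a minimal sketch, not the paper's implementation, of one gradient-perturbed ADMM iteration for the consensus problem $\min f(x) + g(z)$ s.t. $x = z$ with a smooth loss $f$ and an L1 regularizer $g$. All names and parameters (dp_admm_step, eta, sigma, etc.) are illustrative assumptions; sigma would be set from the privacy budget via the Gaussian mechanism as above.

```python
import numpy as np

def dp_admm_step(x, z, u, grad_f, rho, eta, lam, sigma, rng):
    """One (x, z, u) update with Gaussian gradient perturbation (sketch).

    grad_f : callable returning the gradient of the smooth loss at x
    rho    : ADMM penalty parameter
    eta    : step size for the linearized x-update
    lam    : L1 regularization weight for the z-update
    sigma  : noise standard deviation set by the privacy budget
    rng    : numpy random generator
    """
    # Gradient perturbation: add calibrated Gaussian noise to the gradient.
    noisy_grad = grad_f(x) + rng.normal(0.0, sigma, size=x.shape)
    # Linearized x-update (inexact minimization of the augmented Lagrangian).
    x = x - eta * (noisy_grad + rho * (x - z + u))
    # Exact z-update: soft-thresholding, the proximal operator of the L1 norm.
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
    # Scaled dual update.
    u = u + x - z
    return x, z, u
```

The accelerated variant would additionally maintain a Nesterov momentum sequence and evaluate the noisy gradient at the extrapolated point rather than at the current iterate.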