Paper Title

Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization

Paper Authors

Xiaodong Yang, Huishuai Zhang, Wei Chen, Tie-Yan Liu

Paper Abstract

By ensuring differential privacy in the learning algorithms, one can rigorously mitigate the risk of large models memorizing sensitive training data. In this paper, we study two algorithms for this purpose, i.e., DP-SGD and DP-NSGD, which first clip or normalize \textit{per-sample} gradients to bound the sensitivity and then add noise to obfuscate the exact information. We analyze the convergence behavior of these two algorithms in the non-convex optimization setting with two common assumptions and achieve a rate $\mathcal{O}\left(\sqrt[4]{\frac{d\log(1/\delta)}{N^2\epsilon^2}}\right)$ of the gradient norm for a $d$-dimensional model, $N$ samples and $(\epsilon,\delta)$-DP, which improves over previous bounds under much weaker assumptions. Specifically, we introduce a regularizing factor in DP-NSGD and show that it is crucial in the convergence proof and subtly controls the bias and noise trade-off. Our proof deliberately handles the per-sample gradient clipping and normalization that are specified for the private setting. Empirically, we demonstrate that these two algorithms achieve similar best accuracy, while DP-NSGD is comparatively easier to tune than DP-SGD and hence may help further save the privacy budget when accounting for the tuning effort.
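To make the clip-then-noise vs. normalize-then-noise distinction concrete, below is a minimal NumPy sketch of one update step of each algorithm, assuming per-sample gradients are already computed. The function names (`dp_sgd_step`, `dp_nsgd_step`), the toy problem, and the exact noise calibration are illustrative assumptions, not the paper's reference implementation; `r` plays the role of the regularizing factor discussed in the abstract.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD step (sketch): clip each per-sample gradient to norm
    <= clip_norm (bounding the sensitivity), sum, add Gaussian noise
    scaled to that sensitivity, then average and descend."""
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noise = noise_multiplier * clip_norm * rng.standard_normal(params.shape)
    noisy_grad = (np.sum(clipped, axis=0) + noise) / len(per_sample_grads)
    return params - lr * noisy_grad

def dp_nsgd_step(params, per_sample_grads, lr, r, noise_multiplier, rng):
    """One DP-NSGD step (sketch): normalize each per-sample gradient by
    (||g|| + r), so each summand has norm < 1 and the sensitivity is
    bounded without a clipping threshold; r trades off bias vs. noise."""
    normalized = [g / (np.linalg.norm(g) + r) for g in per_sample_grads]
    noise = noise_multiplier * rng.standard_normal(params.shape)
    noisy_grad = (np.sum(normalized, axis=0) + noise) / len(per_sample_grads)
    return params - lr * noisy_grad

# Toy usage on a synthetic least-squares problem (hypothetical data).
rng = np.random.default_rng(0)
params = np.zeros(5)
X, y = rng.standard_normal((32, 5)), rng.standard_normal(32)
grads = [2 * (x @ params - yi) * x for x, yi in zip(X, y)]  # per-sample gradients
params = dp_nsgd_step(params, grads, lr=0.1, r=0.01, noise_multiplier=1.0, rng=rng)
```

The sketch highlights the practical difference the abstract points to: DP-SGD requires tuning the clipping threshold `clip_norm` jointly with the learning rate, whereas DP-NSGD replaces it with the regularizer `r`, which the paper argues is easier to tune.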
