Paper Title

Differentially Private AUC Computation in Vertical Federated Learning

Authors

Jiankai Sun, Xin Yang, Yuanshun Yao, Junyuan Xie, Di Wu, Chong Wang

Abstract

Federated learning has gained great attention recently as a privacy-enhancing tool to jointly train a machine learning model by multiple parties. As a sub-category, vertical federated learning (vFL) focuses on the scenario where features and labels are split into different parties. The prior work on vFL has mostly studied how to protect label privacy during model training. However, model evaluation in vFL might also lead to potential leakage of private label information. One mitigation strategy is to apply label differential privacy (DP) but it gives bad estimations of the true (non-private) metrics. In this work, we propose two evaluation algorithms that can more accurately compute the widely used AUC (area under curve) metric when using label DP in vFL. Through extensive experiments, we show our algorithms can achieve more accurate AUCs compared to the baselines.
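The sketch below is not the paper's two algorithms; it is a minimal illustration, under stated assumptions, of the problem the abstract describes: binary labels are privatized with randomized response (flip probability rho = 1 / (1 + e^epsilon), which satisfies epsilon-label DP), the AUC computed naively on the noisy labels is biased toward 0.5, and a standard closed-form correction can recover an estimate of the true AUC when the flip probability and the class prior are known (or estimated from the noisy label rate). The helper names `randomized_response` and `debiased_auc` and the toy data are hypothetical.

```python
# Minimal sketch (NOT the paper's algorithm): randomized-response label DP
# biases the naive AUC; a standard noisy-label correction debiases it,
# assuming the flip probability rho and class prior pi are known.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def randomized_response(labels, epsilon):
    """epsilon-label-DP via randomized response: flip each binary label
    independently with probability rho = 1 / (1 + e^epsilon)."""
    rho = 1.0 / (1.0 + np.exp(epsilon))
    flips = rng.random(labels.shape) < rho
    return np.where(flips, 1 - labels, labels), rho

def debiased_auc(scores, noisy_labels, rho, pi):
    """Invert the linear relation between the AUC measured on flipped labels
    and the true AUC (symmetric flip rate rho, class prior pi assumed known)."""
    # Mixture weights of true positives/negatives inside the noisy groups.
    a = pi * (1 - rho) / (pi * (1 - rho) + (1 - pi) * rho)        # true pos among noisy pos
    c = 1 - a                                                      # true neg among noisy pos
    d = (1 - pi) * (1 - rho) / ((1 - pi) * (1 - rho) + pi * rho)  # true neg among noisy neg
    b = 1 - d                                                      # true pos among noisy neg
    auc_noisy = roc_auc_score(noisy_labels, scores)
    # auc_noisy = (a*b + c*d)/2 + c*b + (a*d - c*b) * auc_true  =>  solve for auc_true
    return (auc_noisy - (a * b + c * d) / 2 - c * b) / (a * d - c * b)

# Toy data: continuous scores that are informative about the (private) labels.
n, pi = 200_000, 0.3
y = (rng.random(n) < pi).astype(int)
scores = rng.normal(loc=y.astype(float), scale=1.5)

noisy_y, rho = randomized_response(y, epsilon=1.0)
print("true AUC    :", roc_auc_score(y, scores))
print("naive AUC   :", roc_auc_score(noisy_y, scores))          # biased toward 0.5
print("debiased AUC:", debiased_auc(scores, noisy_y, rho, pi))  # close to the true AUC
```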
