Paper Title
DarKnight: A Data Privacy Scheme for Training and Inference of Deep Neural Networks
Paper Authors
Paper Abstract
Protecting the privacy of input data is of growing importance as machine learning methods reach new application domains. In this paper, we provide a unified training and inference framework for large DNNs that protects both input privacy and computation integrity. Our approach, called DarKnight, relies on a novel data blinding strategy based on matrix masking to obfuscate inputs within a trusted execution environment (TEE). A rigorous mathematical proof demonstrates that this blinding process provides an information-theoretic privacy guarantee by bounding the information leakage. The blinded data can then be offloaded to any GPU to accelerate the network's linear operations. The results of these linear operations are decoded inside the TEE before the non-linear operations are applied. This cooperative execution allows DarKnight to exploit the computational power of GPUs for linear operations while relying on the TEE to protect input privacy. We implement DarKnight on an Intel SGX TEE augmented with a GPU and evaluate its performance.
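To make the blind-offload-decode flow concrete, below is a minimal Python/NumPy sketch. It assumes a simplified construction in which K inputs are mixed with one noise vector through a random invertible matrix A; the helper names (`blind`, `gpu_linear`, `decode`) and the plain Gaussian mixing are illustrative assumptions, not DarKnight's actual masking scheme, whose coefficients are chosen in the paper to give formal leakage bounds.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def blind(inputs, noise):
    # Inside the TEE: stack K plaintext input vectors together with one
    # random noise vector, then mix them with a random matrix A so the
    # GPU only ever sees blinded linear combinations of the inputs.
    K = len(inputs)
    X = np.stack(inputs + [noise])            # shape (K+1, d)
    A = rng.standard_normal((K + 1, K + 1))   # mixing matrix (invertible w.p. 1)
    return A @ X, A                           # blinded rows, plus the key A

def gpu_linear(W, blinded):
    # Untrusted GPU: applies the linear layer to the blinded rows only.
    # Because the layer is linear, it commutes with the mixing by A.
    return blinded @ W.T

def decode(results, A, K):
    # Back inside the TEE: undo the mixing and drop the noise row,
    # recovering W @ x_i for each of the K true inputs.
    return (np.linalg.inv(A) @ results)[:K]

# Usage: two inputs of dimension 4 through a 3x4 linear layer.
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
W = rng.standard_normal((3, 4))
blinded, A = blind([x1, x2], noise=rng.standard_normal(4))
out = decode(gpu_linear(W, blinded), A, K=2)
assert np.allclose(out[0], W @ x1) and np.allclose(out[1], W @ x2)
```

The sketch only illustrates the data flow: the GPU computes on mixtures it cannot unmix without A, while the TEE holds A and the noise, decodes the linear results, and applies the non-linear operations itself.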