Paper Title
Memorization of Named Entities in Fine-tuned BERT Models
Paper Authors
Paper Abstract
Privacy-preserving deep learning is an emerging field in machine learning that aims to mitigate the privacy risks arising from the use of deep neural networks. One such risk is training data extraction from language models that have been trained on datasets containing personal and privacy-sensitive information. In our study, we investigate the extent of named entity memorization in fine-tuned BERT models. We use single-label text classification as a representative downstream task and employ three different fine-tuning setups in our experiments, including one with Differential Privacy (DP). We generate a large number of text samples from the fine-tuned BERT models using a custom sequential sampling strategy combined with two prompting strategies. We search these samples for named entities and check whether they are also present in the fine-tuning datasets. We experiment with two benchmark datasets in the domains of emails and blogs. We show that the application of DP has a detrimental effect on the text generation capabilities of BERT. Furthermore, we show that a fine-tuned BERT does not generate more named entities specific to the fine-tuning dataset than a BERT model that has only been pre-trained. This suggests that BERT is unlikely to emit personal or privacy-sensitive named entities. Overall, our results are important for understanding the extent to which BERT-based services are prone to training data extraction attacks.
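The abstract compresses a three-step pipeline: sample text from the model, run named entity recognition (NER) over the samples, and intersect the entities found with those in the fine-tuning data. The sketch below illustrates that flow; it is a minimal illustration, not the authors' exact sequential sampling or prompting strategies. The model name, prompt, sample count, the toy fine-tuning texts, and the use of all spaCy entity types are illustrative assumptions, and it assumes the `torch`, `transformers`, and `spacy` packages with the `en_core_web_sm` model installed.

```python
import torch
import spacy
from transformers import BertForMaskedLM, BertTokenizer

# Placeholder checkpoint; in the paper's setting this would be a fine-tuned
# BERT model whose masked-language-modeling head is still usable for sampling.
MODEL_NAME = "bert-base-uncased"

tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()


def sample_sequence(prompt: str, length: int = 30, top_k: int = 50) -> str:
    """Generate text left to right by repeatedly appending [MASK] and filling it."""
    ids = tokenizer.encode(prompt, add_special_tokens=False)
    for _ in range(length):
        # Build [CLS] prompt+generated [MASK] [SEP]
        with_mask = tokenizer.build_inputs_with_special_tokens(
            ids + [tokenizer.mask_token_id]
        )
        input_ids = torch.tensor([with_mask])
        with torch.no_grad():
            logits = model(input_ids).logits[0, -2]  # logits at the [MASK] position
        # Sample the next token from the top-k of the model's distribution.
        top = torch.topk(torch.softmax(logits, dim=-1), top_k)
        next_id = top.indices[torch.multinomial(top.values, num_samples=1)]
        ids.append(int(next_id))
    return tokenizer.decode(ids)


def named_entities(texts, nlp):
    """Collect lower-cased entity strings found by spaCy's NER."""
    ents = set()
    for doc in nlp.pipe(texts):
        ents.update(ent.text.lower() for ent in doc.ents)
    return ents


nlp = spacy.load("en_core_web_sm")  # assumes the small English spaCy model is installed

# Hypothetical stand-ins: a single prompt and a tiny fine-tuning corpus.
samples = [sample_sequence("the email said that") for _ in range(100)]
fine_tuning_texts = [
    "Hi Alice, the meeting with Bob Smith is moved to Friday.",
]

overlap = named_entities(samples, nlp) & named_entities(fine_tuning_texts, nlp)
print(f"{len(overlap)} generated entities also occur in the fine-tuning data")
```

The left-to-right mask-filling loop is one simple way to coerce an encoder-only model like BERT into generation, since it lacks an autoregressive decoder; the paper's custom sequential sampling strategy addresses the same problem. For the DP fine-tuning setup mentioned in the abstract, DP-SGD as implemented in a library such as Opacus would be a typical choice, though the abstract does not name the implementation.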