Title

Unlearning Graph Classifiers with Limited Data Resources

Authors

Chao Pan, Eli Chien, Olgica Milenkovic

Abstract

As the demand for user privacy grows, controlled data removal (machine unlearning) is becoming an important feature of machine learning models for data-sensitive Web applications such as social networks and recommender systems. Nevertheless, at this point it is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs); this is especially the case when the number of training samples is small, in which case unlearning can seriously compromise the performance of the model. To address this issue, we initiate the study of unlearning the Graph Scattering Transform (GST), a mathematical framework that is efficient, provably stable under feature or graph topology perturbations, and offers graph classification performance comparable to that of GNNs. Our main contribution is the first known nonlinear approximate graph unlearning method based on GSTs. Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism, which is hard to replicate for deep neural networks. Our third contribution is a set of extensive simulation results showing that, compared to complete retraining of GNNs after each removal request, the new GST-based approach offers, on average, a 10.38x speed-up and leads to a 2.6% increase in test accuracy during unlearning of 90 out of 100 training graphs from the IMDB dataset (10% training ratio). Our implementation is available online at https://doi.org/10.5281/zenodo.7613150.
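To make the abstract's central object concrete, the sketch below shows one common construction of a graph scattering transform: a cascade of diffusion-wavelet filters interleaved with absolute-value nonlinearities, aggregated into a fixed-length graph representation. This is a minimal illustrative variant (lazy random-walk wavelets, mean readout, depth 2), not necessarily the exact GST configuration used in the paper; the function names and parameter choices here are assumptions for illustration only.

```python
import numpy as np

def lazy_diffusion(A):
    # Lazy random-walk matrix P = (I + A D^{-1}) / 2, a common GST choice.
    d = A.sum(axis=0).astype(float)
    d[d == 0] = 1.0  # guard against isolated nodes
    return 0.5 * (np.eye(A.shape[0]) + A / d)

def diffusion_wavelets(P, J):
    # Dyadic diffusion wavelets Psi_j = P^(2^(j-1)) - P^(2^j), j = 1..J.
    powers = [np.linalg.matrix_power(P, 2 ** j) for j in range(J + 1)]
    return [powers[j - 1] - powers[j] for j in range(1, J + 1)]

def scatter(A, x, J=3, depth=2):
    # Depth-limited scattering cascade: repeatedly filter with each wavelet,
    # apply the |.| nonlinearity, and aggregate every path by a mean readout.
    # Output length is 1 + J + J^2 + ... + J^depth, independent of graph size.
    P = lazy_diffusion(A)
    Psi = diffusion_wavelets(P, J)
    feats, layer = [x.mean()], [x]
    for _ in range(depth):
        layer = [np.abs(W @ h) for h in layer for W in Psi]
        feats.extend(h.mean() for h in layer)
    return np.array(feats)

# Toy usage: a 4-node path graph with node degrees as input features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
x = A.sum(axis=1)
phi = scatter(A, x)
print(phi.shape)  # (13,) for J=3, depth=2: 1 + 3 + 9 coefficients
```

Because the representation is a fixed nonlinear map with no trained graph-dependent weights inside the cascade, removing a training graph only requires updating the downstream classifier fitted on these features, which is the property the paper's unlearning analysis exploits.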
