Paper Title

Privacy-preserving medical image analysis

Paper Authors

Ziller, Alexander, Passerat-Palmbach, Jonathan, Ryffel, Théo, Usynin, Dmitrii, Trask, Andrew, Junior, Ionésio Da Lima Costa, Mancuso, Jason, Makowski, Marcus, Rueckert, Daniel, Braren, Rickmer, Kaissis, Georgios

Paper Abstract

The utilisation of artificial intelligence in medicine and healthcare has led to successful clinical applications in several domains. The conflict between data usage and privacy protection requirements in such systems must be resolved for optimal results as well as ethical and legal compliance. This calls for innovative solutions such as privacy-preserving machine learning (PPML). We present PriMIA (Privacy-preserving Medical Image Analysis), a software framework designed for PPML in medical imaging. In a real-life case study we demonstrate significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets. Furthermore, we show an inference-as-a-service scenario for end-to-end encrypted diagnosis, where neither the data nor the model are revealed. Lastly, we empirically evaluate the framework's security against a gradient-based model inversion attack and demonstrate that no usable information can be recovered from the model.
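
The securely aggregated federated learning mentioned in the abstract rests on the idea that no single party ever sees another party's raw model update. The sketch below illustrates the general technique with additive secret sharing over a finite field; it is a minimal conceptual example, not PriMIA's actual implementation or API, and all helper names (encode, split_into_shares, etc.) are assumptions made for this illustration.

```python
# Conceptual sketch of secure aggregation via additive secret sharing.
# Illustrative only -- not PriMIA's API; helper names are hypothetical.
import numpy as np

PRIME = 2**31 - 1          # arithmetic is done modulo a large prime
SCALE = 10**6              # fixed-point scaling for float weights

def encode(weights):
    """Map float weights to fixed-point field elements."""
    return np.round(weights * SCALE).astype(np.int64) % PRIME

def decode(field_elems, n_parties):
    """Map aggregated field elements back to the float average."""
    centered = np.where(field_elems > PRIME // 2, field_elems - PRIME, field_elems)
    return centered / SCALE / n_parties

def split_into_shares(secret, n_shares, rng):
    """Additively secret-share a vector: the shares sum to the secret mod PRIME."""
    shares = [rng.integers(0, PRIME, size=secret.shape, dtype=np.int64)
              for _ in range(n_shares - 1)]
    last = (secret - sum(shares)) % PRIME
    return shares + [last]

rng = np.random.default_rng(0)
# Three hospitals each hold a locally trained weight vector (toy example).
local_weights = [rng.normal(size=4) for _ in range(3)]

# Each hospital splits its encoded weights into one share per aggregator.
n_aggregators = 2
shares_per_aggregator = [[] for _ in range(n_aggregators)]
for w in local_weights:
    for i, s in enumerate(split_into_shares(encode(w), n_aggregators, rng)):
        shares_per_aggregator[i].append(s)

# Each aggregator sums only the shares it received; a single share reveals nothing.
partial_sums = [sum(shares) % PRIME for shares in shares_per_aggregator]

# Combining the partial sums reveals only the averaged model, never any
# individual hospital's update.
aggregate = sum(partial_sums) % PRIME
print("securely averaged weights:", decode(aggregate, len(local_weights)))
print("plain average            :", np.mean(local_weights, axis=0))
```

Combining the aggregators' partial sums yields only the averaged model; recovering any single hospital's weights would require all aggregators to collude.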
