Paper Title

Adversarial Attacks and Defences for Skin Cancer Classification

Authors

Vinay Jogani, Joy Purohit, Ishaan Shivhare, Samina Attari, Shraddha Surtkar

Abstract

In recent years, there has been concurrent, significant improvement in both the medical images used to facilitate diagnosis and the performance of machine learning techniques on tasks such as classification, detection, and segmentation. As a result, a rapid increase in the use of such systems can be observed in the healthcare industry, for instance in the form of medical image classification systems, where these models have achieved diagnostic parity with human physicians. One such application is in computer vision tasks such as the classification of skin lesions in dermatoscopic images. However, as stakeholders in the healthcare industry, such as insurance companies, continue to invest extensively in machine learning infrastructure, it becomes increasingly important to understand the vulnerabilities of these systems. Given the highly critical nature of the tasks these machine learning models carry out, it is necessary to analyze both the techniques that could be used to exploit such vulnerabilities and the methods available to defend against them. This paper explores common adversarial attack techniques: the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) are applied against a Convolutional Neural Network trained to classify dermatoscopic images of skin lesions. It then discusses one of the most popular adversarial defense techniques, adversarial training. The performance of a model trained on adversarial examples is tested against the aforementioned attacks, and recommendations for improving neural network robustness are provided based on the experimental results.
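To make the two attacks mentioned above concrete, the following is a minimal sketch of FGSM and PGD against a toy logistic-regression classifier standing in for the paper's CNN. All function names, parameters, and values here are illustrative assumptions, not details from the paper; the key ideas are the single signed-gradient step (FGSM) and the iterated signed steps with projection back into an L-infinity ball (PGD).

```python
import numpy as np

def loss_grad(x, w, b, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x,
    for a logistic-regression classifier p = sigmoid(w.x + b)."""
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return (p - y) * w

def fgsm(x, w, b, y, eps):
    """FGSM: one step of size eps in the sign of the input gradient."""
    return x + eps * np.sign(loss_grad(x, w, b, y))

def pgd(x, w, b, y, eps, alpha, steps):
    """PGD: iterated signed steps of size alpha, projected back into
    the L-infinity ball of radius eps around the clean input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv, w, b, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

# Toy example: a clean input with true label 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0

x_fgsm = fgsm(x, w, b, y, eps=0.1)
x_pgd = pgd(x, w, b, y, eps=0.1, alpha=0.05, steps=5)
```

Both attacks push the classifier's logit for the true class downward while keeping the perturbation bounded by `eps`; adversarial training, the defense the paper evaluates, would generate such examples during training and include them in the loss.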
