Paper Title

On the uncertainty principle of neural networks

Authors

Jun-Jie Zhang, Dong-Xiao Zhang, Jian-Nan Chen, Long-Gang Pang, Deyu Meng

Abstract


In this study, we explore the inherent trade-off between accuracy and robustness in neural networks, drawing an analogy to the uncertainty principle in quantum mechanics. We propose that neural networks are subject to an uncertainty relation, which manifests as a fundamental limitation in their ability to simultaneously achieve high accuracy and robustness against adversarial attacks. Through mathematical proofs and empirical evidence, we demonstrate that this trade-off is a natural consequence of the sharp boundaries formed between different class concepts during training. Our findings reveal that the complementarity principle, a cornerstone of quantum physics, applies to neural networks, imposing fundamental limits on their ability to simultaneously learn conjugate features. Meanwhile, our work suggests that achieving human-level intelligence through a single network architecture or massive datasets alone may be inherently limited. Our work provides new insights into the theoretical foundations of neural network vulnerability and opens up avenues for designing more robust neural network architectures.
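The accuracy-robustness trade-off described above can be observed even in the simplest settings. Below is a minimal NumPy sketch (not from the paper; the toy dataset, logistic-regression model, and epsilon values are illustrative assumptions): a sharp linear decision boundary that classifies clean data well loses accuracy rapidly under an FGSM-style perturbation that shifts inputs along the sign of the input gradient of the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two Gaussian blobs with a narrow margin (illustrative).
n = 500
X0 = rng.normal(loc=[-1.0, 0.0], scale=0.6, size=(n, 2))
X1 = rng.normal(loc=[+1.0, 0.0], scale=0.6, size=(n, 2))
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Logistic regression fit by gradient descent: it learns a sharp
# linear boundary between the two class concepts.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= lr * (X.T @ g) / len(y)
    b -= lr * g.mean()

def accuracy(Xe):
    """Classification accuracy of the trained model on inputs Xe."""
    p = 1.0 / (1.0 + np.exp(-(Xe @ w + b)))
    return ((p > 0.5).astype(int) == y).mean()

# FGSM-style attack: for logistic regression the input gradient of the
# cross-entropy loss for sample i is (p_i - y_i) * w, so each point is
# nudged perpendicular to the decision boundary, toward the wrong side.
for eps in [0.0, 0.25, 0.5, 1.0]:
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_x = np.outer(p - y, w)
    X_adv = X + eps * np.sign(grad_x)
    print(f"eps={eps:.2f}  accuracy={accuracy(X_adv):.3f}")
```

The sharper the boundary separating the classes, the less perturbation is needed to push a point across it, which is exactly the tension between clean accuracy and adversarial robustness that the paper formalizes.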
