Title
Transfer learning for chemically accurate interatomic neural network potentials
Authors
Abstract
Developing machine learning-based interatomic potentials from ab initio electronic structure methods remains a challenging task for computational chemistry and materials science. This work studies the capability of transfer learning, in particular discriminative fine-tuning, for efficiently generating chemically accurate interatomic neural network potentials on organic molecules from the MD17 and ANI data sets. We show that pre-training the network parameters on data obtained from density functional calculations considerably improves the sample efficiency of models trained on more accurate ab initio data. Additionally, we show that fine-tuning with energy labels alone can suffice to obtain accurate atomic forces and run large-scale atomistic simulations, provided a well-designed fine-tuning data set. We also investigate possible limitations of transfer learning, especially regarding the design and size of the pre-training and fine-tuning data sets. Finally, we provide GM-NN potentials pre-trained and fine-tuned on the ANI-1x and ANI-1ccx data sets, which can easily be fine-tuned on and applied to organic molecules.