Journal
JOURNAL OF CHEMICAL PHYSICS
Volume 155, Issue 20
Publisher
AIP Publishing
DOI: 10.1063/5.0070931
Funding
- Swedish Research Council (VR) [2019-05012]
- Swedish National Strategic e-Science program eSSENCE
- Uppsala University AI4Research project
- Vinnova [2019-05012]
This article compares the performance of two popular training algorithms for neural network potentials (NNPs) on liquid water datasets, finding that NNPs trained with the extended Kalman filter (EKF) algorithm are more transferable and less sensitive to the learning rate. The study also shows that error metrics of the validation set may not accurately reflect the actual performance of NNPs; a Fisher-information-based similarity measure serves as a better indicator instead.
One hidden yet important issue for developing neural network potentials (NNPs) is the choice of training algorithm. In this article, we compare the performance of two popular training algorithms, the adaptive moment estimation algorithm (Adam) and the extended Kalman filter algorithm (EKF), using the Behler-Parrinello neural network and two publicly accessible datasets of liquid water [Morawietz et al., Proc. Natl. Acad. Sci. U. S. A. 113, 8368-8373 (2016) and Cheng et al., Proc. Natl. Acad. Sci. U. S. A. 116, 1110-1115 (2019)]. This is achieved by implementing EKF in TensorFlow. It is found that NNPs trained with EKF are more transferable and less sensitive to the value of the learning rate, as compared to Adam. In both cases, error metrics of the validation set do not always serve as a good indicator for the actual performance of NNPs. Instead, we show that their performance correlates well with a Fisher-information-based similarity measure. (C) 2021 Author(s).
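The idea behind EKF training is to treat the network weights as the state of a Kalman filter and update them sample by sample from the linearized model output. The following is a minimal sketch of that update for a toy two-parameter model; the model, names, and noise parameters are illustrative assumptions, not the paper's Behler-Parrinello/TensorFlow implementation.

```python
import numpy as np

# Toy model y = w0 * tanh(w1 * x); the weights play the role of the
# filter state. All hyperparameters (R, Q, initial covariance) are
# illustrative choices.

def model(w, x):
    return w[0] * np.tanh(w[1] * x)

def jacobian(w, x):
    # Analytic derivatives of the model output w.r.t. the weights.
    t = np.tanh(w[1] * x)
    return np.array([t, w[0] * x * (1.0 - t ** 2)])

def ekf_step(w, P, x, y, R=0.01, Q=1e-6):
    """One EKF update of the weight vector w and its covariance P."""
    H = jacobian(w, x).reshape(1, -1)            # 1 x n_weights
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T / S                              # Kalman gain
    w = w + (K * (y - model(w, x))).ravel()      # weight update
    P = P - K @ H @ P + Q * np.eye(len(w))       # covariance update
    return w, P

# Fit the toy model to data generated from known weights.
rng = np.random.default_rng(0)
w_true = np.array([2.0, 0.5])
w, P = np.array([1.0, 1.0]), np.eye(2)
for _ in range(500):
    x = rng.uniform(-2.0, 2.0)
    w, P = ekf_step(w, P, x, model(w_true, x))
print(np.round(w, 2))
```

Unlike Adam, which rescales a gradient by running moment estimates, this update weights each residual by the evolving covariance P, which is one intuition for the lower sensitivity to the learning-rate-like parameters reported in the abstract.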