Machine-learned representations of potential energy surfaces generated in the output layer of a feedforward neural network are becoming increasingly popular. One difficulty with neural-network output is that it is often unreliable in regions where training data is missing or sparse. Human-designed potentials often build in proper extrapolation behavior by choice of functional form. Because machine learning is very efficient, it is desirable to learn how to add human intelligence to machine-learned potentials in a convenient way. One example is the well understood feature of interaction potentials that they vanish when subsystems are too far separated to interact. In this article, we present a way to add a new kind of activation function to a neural network to enforce low-dimensional constraints. In particular, the activation function depends parametrically on all the input variables. We illustrate the use of this step by showing how it can force an interaction potential to go to zero at large subsystem separations without either inputting a specific functional form for the potential or adding data to the training set in the asymptotic region of geometries where the subsystems are separated. In the process of illustrating this, we present an improved set of potential energy surfaces for the 14 lowest 3A′ states of O3. The method is more general than this example, and it may be used to add other low-dimensional knowledge or lower-level knowledge to machine-learned potentials. In addition to the O3 example, we present a more general method called parametrically managed diabatization by deep neural network (PM-DDNN) that is an improvement on our previously presented permutationally restrained diabatization by deep neural network (PR-DDNN).
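The core idea of the abstract can be illustrated with a minimal sketch: multiply the ordinary network output by a damping factor that depends parametrically on one input coordinate (the subsystem separation), so the fitted potential is forced to zero at large separation. The network architecture, the switching function `switch`, and its parameters `R0` and `alpha` below are illustrative assumptions, not the specific forms used in the paper.

```python
import numpy as np

def switch(R, R0=6.0, alpha=2.0):
    """Illustrative damping factor: ~1 at short range, -> 0 as R -> infinity.
    R0 and alpha are assumed values, not taken from the paper."""
    return 1.0 / (1.0 + np.exp(alpha * (R - R0)))

def potential(x, params, sep_index=0):
    """Feedforward net whose output is scaled by a function of one input
    coordinate (the subsystem separation), enforcing V -> 0 at large R."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(W @ h + b)       # ordinary hidden-layer activations
    W_out, b_out = params[-1]
    raw = W_out @ h + b_out          # unconstrained network output
    return switch(x[sep_index]) * raw  # parametrically managed output

# Tiny example with random weights: 3 inputs, one hidden layer of 4 units.
rng = np.random.default_rng(0)
params = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((1, 4)), rng.standard_normal(1))]
near = potential(np.array([3.0, 1.0, 1.0]), params)   # short separation
far = potential(np.array([30.0, 1.0, 1.0]), params)   # large separation
```

Because the damping enters through the output rather than the training data, no asymptotic geometries need to be added to the training set to obtain the correct large-separation limit.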
Farideh Badichi Akher, Yinan Shu, Zoltán M. Varga, Suman Bhaumik, Donald G. Truhlar (2023). Parametrically Managed Activation Function for Fitting a Neural Network Potential with Physical Behavior Enforced by a Low-Dimensional Potential. ChemRxiv preprint, DOI: https://doi.org/10.26434/chemrxiv-2023-0cwdz-v2.
Type: Preprint
Year: 2023
Authors: 5
Datasets: 0
Total Files: 0
Language: en
DOI: https://doi.org/10.26434/chemrxiv-2023-0cwdz-v2