Advanced features

Introduction

This section demonstrates advanced features of SIMPLE-NN with tutorials. Example files are in SIMPLE-NN/tutorials/. In this example, snapshots from a 500 K MD trajectory of amorphous SiO2 (72 atoms) are used as the training set.

GDF weighting

Tuning the weights of atomic forces in the loss function can reduce the force errors of sparsely sampled atoms. Gaussian density function (GDF) weighting [1] is one such method, which suggests a Gaussian-type weighting scheme. To use GDF, you need to calculate \(\rho(\mathbf{G})\) by adding the following lines to the preprocessing section in input.yaml. SIMPLE-NN supports an automatic parameter generation scheme for \(\sigma\) and \(c\). Use the setting params: Auto to get robust \(\sigma\) and \(c\) (the values are stored in the LOG file). Input files introduced in this section can be found in SIMPLE-NN/tutorials/GDF_weighting.

# input.yaml:

preprocessing:
    valid_rate: 0.1
    calc_scale: True
    calc_pca: True
    calc_atomic_weights:
        type: gdf
        params: Auto

\(\rho(\mathbf{G})\) indicates the density of each training point. After calculating \(\rho(\mathbf{G})\), histograms of \(\rho(\mathbf{G})^{-1}\) are also saved in the files GDFinv_hist_XX.pdf.
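
For intuition, \(\rho(\mathbf{G}_i)\) can be pictured as a Gaussian kernel density of training point \(i\) in symmetry-function space: atoms whose feature vectors have few close neighbors get a small \(\rho(\mathbf{G})\) and hence a large \(\rho(\mathbf{G})^{-1}\). The sketch below only illustrates this idea; it is not the SIMPLE-NN implementation, and the function name, normalization, and default \(\sigma\) are illustrative assumptions.

import numpy as np

def gdf_density(G, sigma=0.02):
    # G: (n_points, n_features) array of symmetry-function vectors.
    # Returns an illustrative Gaussian kernel density rho(G) for every
    # training point; sparsely sampled points get small rho, i.e. large
    # rho^-1 and thus a larger force weight.
    d2 = np.sum((G[:, None, :] - G[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)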

Note

If there is a peak in the high \(\rho(\mathbf{G})^{-1}\) region of the histogram, increasing \(\sigma\) is recommended until the peak disappears. Conversely, if multiple peaks appear in the low \(\rho(\mathbf{G})^{-1}\) region, reducing \(\sigma\) is recommended until the peaks merge.

In the default setting, the \(\rho(\mathbf{G})^{-1}\) values are scaled to have an average of 1. The interval-averaged force error with respect to \(\rho(\mathbf{G})^{-1}\) can be visualized with the following script.

from simple_nn.utils import graph as grp

# plot the interval-averaged force error vs. rho(G)^-1 for each element
grp.plot_error_vs_gdfinv(['Si','O'], 'test_result')

The graph of interval-averaged force errors with respect to \(\rho(\mathbf{G})^{-1}\) is generated as ferror_vs_GDFinv_XX.pdf.

If the default GDF is not sufficient to reduce the force error of sparsely sampled training points, one can use the scale function to increase the effect of GDF. In the scale function, \(b\) controls the decay rate for low \(\rho(\mathbf{G})^{-1}\) and \(c\) separates highly concentrated and sparsely sampled training points. To use the scale function, add the following lines to the neural_network section in input.yaml.

# input.yaml:

neural_network:
    weight_modifier:
        type: modified sigmoid
        params:
            Si:
                b: 1
                c: 35.
            O:
                b: 1
                c: 74.
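
As a reading aid for the two parameters, a sigmoid-type weight modifier can be sketched as below. This is not SIMPLE-NN's exact formula; it only shows how \(b\) sets the decay rate and \(c\) the crossover between densely and sparsely sampled points.

import numpy as np

def sigmoid_weight(gdf_inv, b=1.0, c=35.0):
    # Illustrative sigmoid-type modifier (not SIMPLE-NN's exact formula):
    # the weight decays, at a rate set by b, for points with rho^-1 below
    # the crossover c and saturates for points above it.
    return 1.0 / (1.0 + np.exp(-b * (gdf_inv - c)))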

In our experience, \(b=1.0\) and an automatically selected \(c\) give reasonable results. To check the effect of the scale function, use the following script to visualize the force error distribution with respect to \(\rho(\mathbf{G})^{-1}\).

In the script below, test_result_woscale is the test result file from training without the scale function and test_result_wscale is the test result file from training with the scale function. These test result files are generated as described in the evaluation section. We do not provide test_result_wscale.

from simple_nn.utils import graph as grp

# compare force error distributions with and without the scale function
grp.plot_error_vs_gdfinv(['Si','O'], 'test_result_woscale', 'test_result_wscale')

Uncertainty estimation

The local configurations visited during a simulation driven by an NNP should be included in the training set, because the NNP only guarantees reliability within the trained domain. Therefore, we suggest checking whether a local environment is trained or not through the standard deviation of atomic energies from a replica ensemble [2]. Estimating the uncertainty of an atomic configuration takes the following three steps.
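
The uncertainty measure itself is simple: for each atom, it is the standard deviation of the atomic energies predicted by the replica ensemble. A minimal sketch of the quantity (the numbers are dummy values for illustration):

import numpy as np

def atomic_uncertainty(energies):
    # energies: (n_replicas, n_atoms) atomic energies from the replica NNPs;
    # the per-atom standard deviation over replicas is the uncertainty.
    return np.asarray(energies).std(axis=0)

# e.g. three replicas, four atoms (dummy values)
E = [[-4.31, -4.10, -7.92, -7.88],
     [-4.28, -4.25, -7.95, -7.81],
     [-4.35, -4.02, -7.90, -7.93]]
print(atomic_uncertainty(E))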

1. Atomic energy extraction

To estimate the uncertainty of an atomic configuration, the atomic energies extracted from the reference NNP must be added to the reference dataset (.pt files).

# input.yaml

generate_features: False
preprocess: False
train_model: True

params:
    Si: params_Si
    O:  params_O

neural_network:
    train: False
    test: False
    add_NNP_ref: True
    ref_list: 'ref_list'
    train_atomic_E: False
    use_scale: True
    use_pca: True
    continue: checkpoint_bestmodel.pth.tar

ref_list contains the list of datasets whose atomic energies are to be evaluated. The reference NNP is specified via continue. After this step, the reference dataset files (.pt) are overwritten with the atomic energies.
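
For reference, ref_list is a plain text file with one dataset file per line; a hypothetical example (the paths are illustrative, not provided files):

./data/data1.pt
./data/data2.pt
./data/data3.pt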

2. Training with atomic energy

Next, train the replica NNPs only with atomic energies. To prevent convergence among the replicas, diversify the network structure by increasing the standard deviation of the initial weight distribution (gain, default: 1.0) and by using more hidden nodes than the reference NNP.

# input.yaml

generate_features: False
preprocess: False
train_model: True
random_seed: 123

params:
    Si: params_Si
    O:  params_O

neural_network:
    train: False
    test: False
    add_NNP_ref: False
    train_atomic_E: True
    nodes: 30-30
    weight_initializer:
        params:
            gain: 2.0
    optimizer:
        method: Adam
    total_epoch: 100
    learning_rate: 0.001
    scale: True
    pca: True
    continue: null

Because the atomic energies are needed in this training, the data directory generated in the atomic energy extraction step is required.
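
For context on gain: 2.0 above: the gain factor scales the width of the initial weight distribution. In PyTorch terms the effect is like the following sketch (assuming a Xavier-type initializer; this is not SIMPLE-NN's actual initializer code):

import torch

layer = torch.nn.Linear(30, 30)  # one 30-node hidden layer, as in nodes: 30-30
# gain > 1 widens the initial weight distribution, decorrelating the replicas
torch.nn.init.xavier_normal_(layer.weight, gain=2.0)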

3. Uncertainty estimation in molecular dynamics

Note

You have to compile your LAMMPS with pair_nn_replica.cpp, pair_nn_replica.h, and symmetry_function.h to evaluate the uncertainty in molecular dynamics simulations.

LAMMPS can calculate the atomic uncertainty as the standard deviation of atomic energies across the replicas. Because the atomic uncertainty is written as the atomic charge, prepare the LAMMPS data file in charge format and modify your LAMMPS input as in the example below.

# lammps.in

units       metal
atom_style  charge

pair_style  nn/r 3
pair_coeff  * * potential_saved Si O &
            potential_saved_30 &
            potential_saved_60 &
            potential_saved_90

compute     std all property/atom q

dump        mydump all custom 1 dump.lammps id type x y z c_std
dump_modify sort id

run 1

We provide the LAMMPS potentials whose network sizes are 60-60 and 90-90 (potential_saved_60 and potential_saved_90, respectively). Atomic uncertainties are written in the dump file for each atom. Output files can be found in SIMPLE-NN/tutorials/Uncertainty_estimation_answer/3.Uncertainty_estimation_in_molecular_dynamics.
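
To post-process the uncertainties, the c_std column can be read back from the dump file. Below is a minimal parser for the dump format produced above (a sketch; it assumes the column labels in the ITEM: ATOMS header match the dump command):

def read_uncertainty(dump_file):
    # Returns {atom_id: c_std} for the last snapshot in a LAMMPS dump
    # written with "dump ... custom ... id type x y z c_std".
    std = {}
    with open(dump_file) as f:
        it = iter(f)
        for line in it:
            if line.startswith('ITEM: ATOMS'):
                cols = line.split()[2:]
                i_id, i_std = cols.index('id'), cols.index('c_std')
                std = {}
                for atom_line in it:
                    if atom_line.startswith('ITEM:'):
                        break
                    vals = atom_line.split()
                    std[int(vals[i_id])] = float(vals[i_std])
    return std

# atoms with large uncertainty visit environments outside the trained domain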