TensorFlow Integrated MLTSA module

These functions integrate TensorFlow into MLTSA: they cover the creation and training of the ML models and their use within the analysis code.

MLTSA_tf.MLTSA(data, ans, model, encoder, drop_mode='Average', data_mode='Normal')

Function to apply the Machine Learning Transition State Analysis (MLTSA) to a given training dataset, its answers, and a trained model. It calculates the global mean of each feature and re-calculates the prediction accuracy for each outcome.

Parameters
  • data (list) – Training data used for training the ML model. Must have shape (samples, features).

  • ans (list) – Outcome for each sample in “data”. Shape must be (samples,).

  • model – Trained TensorFlow/Keras model whose accuracy is re-evaluated.

  • encoder – Encoder used to transform the outcomes in “ans” for the model.

  • drop_mode – Mode for dropping features; defaults to “Average”, in which each feature is replaced by its global mean.

  • data_mode – Format of the input data; defaults to “Normal”.

Returns

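The accuracy-drop idea behind this function can be sketched with plain NumPy and a toy model (the function and variable names below are illustrative, not the module’s API):

```python
import numpy as np

def accuracy_drop(data, ans, predict):
    """For each feature, replace it with its global mean and
    measure how much the model's accuracy drops."""
    base_acc = np.mean(predict(data) == ans)
    means = data.mean(axis=0)          # global mean of each feature
    drops = []
    for j in range(data.shape[1]):
        swapped = data.copy()
        swapped[:, j] = means[j]       # "drop" feature j to its mean
        drops.append(base_acc - np.mean(predict(swapped) == ans))
    return np.array(drops)

# toy model: the outcome depends only on the sign of feature 0
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda d: (d[:, 0] > 0).astype(int)

drops = accuracy_drop(X, y, predict)
# feature 0 shows a large drop; features 1 and 2 show none
```

Only the feature the model actually relies on loses accuracy when flattened to its mean, which is what makes the drop a relevance measure.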
MLTSA_tf.MLTSA_Plot(FR, dataset_og, pots, errorbar=True)

Wrapper for plotting the results from the accuracy-drop procedure.

Parameters
  • FR – Values from the feature reduction

  • dataset_og – Original dataset object class used for generating the data

  • pots – Original potentials object class used for generating the data

  • errorbar – Flag for including the errorbars when replicas are used.

Returns

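A rough idea of what such a plotting wrapper produces can be sketched with Matplotlib; here the function name and the assumed shape of FR (replicas × features) are illustrative, not the module’s actual API:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

def plot_accuracy_drop(FR, errorbar=True):
    """Hypothetical stand-in for MLTSA_Plot: FR is assumed to hold
    accuracy-drop values, one row per replica, one column per feature."""
    FR = np.asarray(FR)
    mean = FR.mean(axis=0)
    fig, ax = plt.subplots()
    if errorbar and FR.shape[0] > 1:
        # std across replicas as the errorbar
        ax.errorbar(np.arange(FR.shape[1]), mean, yerr=FR.std(axis=0), fmt="o")
    else:
        ax.plot(mean, "o")
    ax.set_xlabel("feature index")
    ax.set_ylabel("accuracy drop")
    return fig

fig = plot_accuracy_drop(np.random.rand(3, 8))
```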
This snippet contains the set of functions we will use for the basic Multi-Layer Perceptron (MLP) architecture to try on the different data.

Note that this is built on TensorFlow 2 and may not work on earlier versions.

In this snippet the architecture is built with TF’s Sequential API.
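As a rough illustration, an MLP of this kind could be put together with the Sequential API as follows (the builder name and layer sizes are illustrative, not the module’s actual code):

```python
import tensorflow as tf

def build_mlp(n_classes, hidden=(64, 32)):
    """Minimal MLP sketch using the Keras Sequential API."""
    layers = [tf.keras.layers.Dense(u, activation="relu") for u in hidden]
    layers.append(tf.keras.layers.Dense(n_classes, activation="softmax"))
    model = tf.keras.Sequential(layers)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# the input shape (samples, features) is inferred on the first call
model = build_mlp(n_classes=2)
```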

This snippet contains the set of functions we will use for the LSTM architecture to try on the different data.

Note that this is built on TensorFlow 2 and may not work on earlier versions.

In this snippet the architecture is built with TF’s Sequential API.
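A minimal Sequential LSTM of this kind could look like the sketch below; it expects time-series input shaped (samples, timesteps, features), and the builder name and sizes are illustrative:

```python
import tensorflow as tf

def build_lstm(n_classes, units=32):
    """Minimal LSTM sketch using the Keras Sequential API."""
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(units),  # consumes (timesteps, features)
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_lstm(n_classes=2)
```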

This snippet contains the set of functions we will use for the different Recurrent Neural Network (RNN) architectures to try on the different data.

Note that this is built on TensorFlow 2 and may not work on earlier versions.

In this snippet the architecture is built with TF’s Sequential API.
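One way to sketch a builder that switches between the recurrent variants, again with the Sequential API (the cell names map to real Keras layers, but the builder itself and its sizes are illustrative):

```python
import tensorflow as tf

# the recurrent layer families available in tf.keras
RNN_CELLS = {
    "SimpleRNN": tf.keras.layers.SimpleRNN,
    "GRU": tf.keras.layers.GRU,
    "LSTM": tf.keras.layers.LSTM,
}

def build_rnn(n_classes, cell="GRU", units=32):
    """Sketch of a builder for the different recurrent architectures."""
    model = tf.keras.Sequential([
        RNN_CELLS[cell](units),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_rnn(n_classes=2, cell="SimpleRNN")
```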