Introduction

The rapid advances in computing power and simulation technologies for molecular dynamics (MD) of biomolecules and fluids1,2,3,4, and ab initio MD of small molecules and materials5,6, allow the generation of extensive simulation data of complex molecular systems. Thus, it is of high interest to automatically extract statistically relevant information, including stationary, kinetic, and mechanistic properties.

The Markov modeling approach7,8,9,10,11,12 has been a driving force in the development of kinetic modeling techniques from large amounts of MD data, chiefly because it facilitates a divide-and-conquer approach to integrate short, distributed MD simulations into a model of the long-timescale behavior. State-of-the-art analysis approaches and software packages4,13,14 operate by a sequence, or pipeline, of multiple processing steps that has been engineered by practitioners over the last decade. The first step of a typical processing pipeline is featurization, where the MD coordinates are either aligned (removing translation and rotation of the molecule of interest) or transformed into internal coordinates such as residue distances, contact maps, or torsion angles4,13,15,16. This is followed by a dimension reduction, in which the dimension is reduced to a much smaller number (typically 2–100) of slow collective variables, often based on the variational approach for conformation dynamics17,18, time-lagged independent component analysis (TICA)19,20, blind source separation21,22,23, or dynamic mode decomposition24,25,26,27,28 (see refs 29,30 for an overview). The resulting coordinates may be scaled, in order to embed them in a metric space whose distances correspond to some form of dynamical distance31,32. The resulting metric space is discretized by clustering the projected data using hard or fuzzy data-based clustering methods11,13,33,34,35,36,37, typically resulting in 100–1000 discrete states. A transition matrix or rate matrix describing the transition probabilities or rates between the discrete states at some lag time τ is then estimated8,12,38,39 (alternatively, a Koopman model can be built after the dimension reduction27,28). The final step toward an easily interpretable kinetic model is coarse-graining of the estimated Markov state model (MSM) down to a few states40,41,42,43,44,45,46.
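For orientation, the sketch below illustrates such a pipeline using PyEMMA13; the file names, feature choice, and parameter values are placeholders rather than a prescription.

```python
import pyemma

# Sketch of a conventional MSM pipeline; file names, feature choice, and
# parameter values are placeholders, not recommendations.
feat = pyemma.coordinates.featurizer('topology.pdb')
feat.add_backbone_torsions()                                 # 1. featurization
data = pyemma.coordinates.load('trajectory.xtc', features=feat)

tica = pyemma.coordinates.tica(data, lag=10, dim=5)          # 2. dimension reduction
proj = tica.get_output()

clustering = pyemma.coordinates.cluster_kmeans(proj, k=200)  # 3. discretization
dtrajs = clustering.dtrajs

msm = pyemma.msm.estimate_markov_model(dtrajs, lag=10)       # 4. transition matrix
msm.pcca(4)                                                  # 5. coarse-graining (PCCA+)
```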

This sequence of analysis steps has been developed by combining physico-chemical intuition and technical experience gathered in the last ~10 years. Although each of the steps in the above pipeline appears meaningful, there is no fundamental reason why this or any other given analysis pipeline should be optimal. More dramatically, the success of kinetic modeling currently relies on substantial technical expertise of the modeler, as suboptimal decisions in each step may deteriorate the result. As an example, failure to select suitable features in step 1 will almost certainly lead to large modeling errors.

An important step toward selecting optimal models (parameters) and modeling procedures (hyper-parameters) has been the development of the variational approach for conformation dynamics (VAC)17,18, which offers a way to define scores that measure the optimality of a given kinetic model compared to the (unknown) MD operator that governs the true kinetics underlying the data. The VAC has recently been generalized to the variational approach for Markov processes (VAMP), which allows models of arbitrary Markov processes, including nonreversible and non-stationary dynamics, to be optimized47. The VAC has been employed with cross-validation in order to make optimal hyper-parameter choices within the analysis pipeline described above while avoiding overfitting34,48. However, a variational score is not only useful for optimizing the steps of a given analysis pipeline; it in fact allows us to replace the entire pipeline with a more general learning structure.

Here we develop a deep learning structure that is in principle able to replace the entire analysis pipeline above. Deep learning has been very successful in a broad range of data analysis and learning problems49,50,51. A feedforward deep neural network is a structure that can learn a complex, nonlinear function y = F(x). In order to train the network, a scoring or loss function is needed that is maximized or minimized, respectively. Here we develop VAMPnets, a neural network architecture that can be trained by maximizing a VAMP variational score. VAMPnets contain two network lobes that transform the molecular configurations found at a time delay τ along the simulation trajectories. Compared to previous attempts to include “depth” or “hierarchy” into the analysis method52,53, VAMPnets combine the tasks of featurization, dimension reduction, discretization, and coarse-grained kinetic modeling into a single end-to-end learning framework. We demonstrate the performance of our networks using a variety of stochastic models and data sets, including a protein-folding data set. The results are competitive with and sometimes surpass the state-of-the-art handcrafted analysis pipeline. Given the rapid improvements of training efficiency and accuracy of deep neural networks seen in a broad range of disciplines, it is likely that follow-up works can lead to superior kinetic models.

Results

Variational principle for Markov processes

Molecular dynamics can be theoretically described as a Markov process {xt} in the full state space Ω. For a given potential energy function, simulation setup (e.g., periodic boundaries), and time-step integrator, the dynamics are fully characterized by a transition density pτ(x, y), i.e., the probability density that an MD trajectory will be found at configuration y given that it was at configuration x a time lag τ before. Markovianity implies that y can be sampled knowing only x, without knowledge of previous time steps. While the dynamics might be highly nonlinear in the variables xt, Koopman theory24,54 tells us that there is a transformation of the original variables into some features or latent variables that, on average, evolve according to a linear transformation. In mathematical terms, there exist transformations to features or latent variables, χ0(x) = (χ01(x), ..., χ0m(x)) and χ1(x) = (χ11(x), ..., χ1m(x)), such that the dynamics in these variables are approximately governed by the matrix K:

$${\Bbb E}\left[ {{\boldsymbol{\chi }}_1\left( {{\bf{x}}_{t + \tau }} \right)} \right] \approx {\bf{K}}^ \top {\Bbb E}\left[ {{\boldsymbol{\chi }}_0\left( {{\bf{x}}_t} \right)} \right].$$
(1)

This approximation becomes exact in the limit of an infinitely large set of features (m → ∞) χ0 and χ1, but for a sufficiently large lag time τ the approximation can be excellent with low-dimensional feature transformations, as we will demonstrate below. The expectation values \({\Bbb E}\) account for stochasticity in the dynamics, such as in MD, but they can be omitted for deterministic dynamical systems24,26,27.

To illustrate the meaning of Eq. (1), consider the example of {xt} being a discrete-state Markov chain. If we choose the feature transformation to be indicator functions (χ0i = 1 when xt = i and 0 otherwise, and correspondingly with χ1i and xt + τ), their expectation values are equal to the probabilities of the chain to be in any given state, pt and pt + τ, and K = P(τ) is equal to the matrix of transition probabilities, i.e., pt + τ = P(τ)pt. Previous papers on MD kinetics have usually employed a propagator or transfer operator formulation instead of (1)7,8. However, the above formulation is more powerful as it also applies to nonreversible and non-stationary dynamics, as found for MD of molecules subject to external force, such as voltage, flow, or radiation55,56.

A central result of the VAMP theory is that the best finite-dimensional linear model, i.e., the best approximation in Eq. (1), is found when the subspaces spanned by χ0 and χ1 are identical to those spanned by the top m left and right singular functions, respectively, of the so-called Koopman operator47. For an introduction to the Koopman operator, please refer to refs. 24,30,54.

How do we choose χ0, χ1, and K from data? First, suppose we are given some feature transformations χ0 and χ1, and define the following covariance matrices:

$${\bf{C}}_{00} = {\Bbb E}_t\left[ {{\boldsymbol{\chi }}_0\left( {{\bf{x}}_t} \right){\boldsymbol{\chi }}_0\left( {{\bf{x}}_t} \right)^ \top } \right]$$
(2)
$${\bf{C}}_{01} = {\Bbb E}_t\left[ {{\boldsymbol{\chi }}_0\left( {{\bf{x}}_t} \right){\boldsymbol{\chi }}_1\left( {{\bf{x}}_{t + \tau }} \right)^ \top } \right]$$
(3)
$${\bf{C}}_{11} = {\Bbb E}_{t + \tau }\left[ {{\boldsymbol{\chi }}_1\left( {{\bf{x}}_{t + \tau }} \right){\boldsymbol{\chi }}_1\left( {{\bf{x}}_{t + \tau }} \right)^ \top } \right],$$
(4)

where \({\Bbb E}_t\left[ \cdot \right]\) and \({\Bbb E}_{t + \tau }\left[ \cdot \right]\) denote averages over time points and lagged time points within trajectories, respectively, and across trajectories. Then the optimal K that minimizes the least-squares error \({\Bbb E}_t\left[ {\left\| {{\boldsymbol{\chi }}_1\left( {{\bf{x}}_{t + \tau }} \right) - {\mathbf{K}}^ \top {\boldsymbol{\chi }}_0\left( {{\bf{x}}_t} \right)} \right\|^2} \right]\) is given by27,47,57:

$${\bf{K}} = {\bf{C}}_{00}^{ - 1}{\bf{C}}_{01}.$$
(5)
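As an illustration, Eqs. (2)–(5) translate directly into a few lines of NumPy; this is a bare sketch assuming the feature time series χ0(xt) and χ1(xt+τ) are already available as arrays, and it omits any regularization of C00.

```python
import numpy as np

def estimate_koopman_matrix(chi0, chi1):
    """Least-squares Koopman matrix K = C00^(-1) C01 from feature time series.

    chi0: (T, m) array of chi_0(x_t); chi1: (T, m) array of chi_1(x_{t+tau}).
    A bare sketch of Eqs. (2)-(5); regularization of C00 is omitted.
    """
    T = chi0.shape[0]
    C00 = chi0.T @ chi0 / T          # Eq. (2)
    C01 = chi0.T @ chi1 / T          # Eq. (3)
    C11 = chi1.T @ chi1 / T          # Eq. (4), kept for later use in the score
    K = np.linalg.solve(C00, C01)    # Eq. (5), without forming C00^(-1) explicitly
    return K, (C00, C01, C11)
```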

Now the remaining problem is how to find suitable transformations χ0, χ1. This problem cannot be solved by minimizing the least square error above, as is illustrated by the following example: suppose we define χ0(x) = χ1(x) = (1(x)), i.e., we just map the state space to the constant 1—in this case the least square error is 0 for K = [1], but the model is completely uninformative as all dynamical information is lost.

Instead, in order to seek χ0 and χ1 based on available simulation data, we employ the VAMP theorem introduced in ref. 47, which can be equivalently formulated as the following subspace version.

VAMP variational principle

For any two sets of linearly independent functions χ0(x) and χ1(x), let us call

$$\hat R_2\left[ {{\boldsymbol{\chi }}_0,{\boldsymbol{\chi }}_1} \right] = \left\| {{\bf{C}}_{00}^{ - \frac{1}{2}}{\bf{C}}_{01}{\bf{C}}_{11}^{ - \frac{1}{2}}} \right\|_F^2$$

their VAMP-2 score, where C00, C01, C11 are defined by Eqs. (2)–(4) and \(\left\| {\bf{A}} \right\|_F^2 = n^{ - 1}\mathop {\sum}\nolimits_{i,j} A_{ij}^2\) is the Frobenius norm of the n × n matrix A. The maximum value of the VAMP-2 score is achieved when the top m left and right Koopman singular functions belong to span(χ01, ..., χ0m) and span(χ11, ..., χ1m), respectively.

This variational theorem shows that the VAMP-2 score measures the consistency between the subspaces spanned by the basis functions and those spanned by the dominant singular functions; we can therefore optimize χ0 and χ1 by maximizing the VAMP-2 score. In the special case where the dynamics are reversible with respect to the equilibrium distribution, the theorem above specializes to the variational principle for reversible Markov processes17,18.
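A minimal NumPy sketch of the VAMP-2 score for given feature time series follows; truncating near-singular directions of C00 and C11 is an added numerical safeguard, not part of the definition.

```python
import numpy as np

def vamp2_score(chi0, chi1, eps=1e-10):
    """VAMP-2 score ||C00^(-1/2) C01 C11^(-1/2)||_F^2 for given feature time series.

    chi0: (T, m) array of chi_0(x_t); chi1: (T, m) array of chi_1(x_{t+tau}).
    Near-singular directions of C00 and C11 are truncated for numerical stability;
    the n^(-1) normalization of the Frobenius norm in the text is omitted here.
    """
    T = chi0.shape[0]
    C00 = chi0.T @ chi0 / T
    C01 = chi0.T @ chi1 / T
    C11 = chi1.T @ chi1 / T

    def inv_sqrt(C):
        s, U = np.linalg.eigh(C)
        keep = s > eps
        return U[:, keep] @ np.diag(s[keep] ** -0.5) @ U[:, keep].T

    M = inv_sqrt(C00) @ C01 @ inv_sqrt(C11)
    return np.sum(M ** 2)  # squared Frobenius norm
```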

Learning the feature transformation using VAMPnets

Here we employ neural networks to find an optimal set of basis functions, χ0(x) and χ1(x). Neural networks with at least one hidden layer are universal function approximators58, and deep networks can express strongly nonlinear functions with fairly few neurons per layer59. Our networks use VAMP as a guiding principle and are hence called VAMPnets. VAMPnets consist of two parallel lobes, each receiving the coordinates of time-lagged MD configurations xt and xt+τ as input (Fig. 1). The lobes have m output nodes and are trained to learn the transformations χ0(xt) and χ1(xt+τ), respectively. For a given set of transformations, χ0 and χ1, we pass a batch of training data through the network and compute the training VAMP score of our choice. VAMPnets bear similarities to auto-encoders60,61 using a time-delay embedding and are closely related to deep canonical covariance analysis (CCA)62. VAMPnets are identical to deep CCA with time-delay embedding when using the VAMP-1 score discussed in ref. 47; however, the VAMP-2 score has easier-to-handle gradients and is more suitable for time series data, owing to its direct relation to the Koopman approximation error47.

Fig. 1
figure 1

Scheme of the neural network architecture used. For each time step t of the simulation trajectory, the coordinates xt and xt+τ are inputs to two deep networks that conduct a nonlinear dimension reduction. In the present implementation, the output layer consists of a Softmax classifier. The outputs are then merged to compute the variational score that is maximized to optimize the networks. In all present applications, the two network lobes are identical clones, but they can also be trained independently

The first left and right singular functions of the Koopman operator are always equal to the constant function 1(x) ≡ 1 (ref. 47). We can thus add 1 to the basis functions and train the network by maximizing

$$\hat R_2\left[ {\left( {\begin{array}{*{20}{c}} 1 \\ {{\boldsymbol{\chi }}_0} \end{array}} \right),\left( {\begin{array}{*{20}{c}} 1 \\ {{\boldsymbol{\chi }}_1} \end{array}} \right)} \right] = \left\| {\overline {\bf{C}} _{00}^{\, - \frac{1}{2}}\overline {\bf{C}} _{01}\overline {\bf{C}} _{11}^{ \, - \frac{1}{2}}} \right\|_F^2 + 1,$$
(6)

where \(\overline {\bf{C}} _{00},\overline {\bf{C}} _{01},\overline {\bf{C}} _{11}\) are mean-free covariances of the feature-transformed coordinates:

$$\overline {\bf{C}} _{00} = (T - 1)^{ - 1}\overline {\bf{X}} \, \overline {\bf{X}} ^ \top$$
(7)
$$\overline {\bf{C}} _{01} = (T - 1)^{ - 1}\overline {\bf{X}} \, \overline {{\bf Y}} ^ \top$$
(8)
$$\overline {\bf{C}} _{11} = (T - 1)^{ - 1}\overline {\bf{Y}} \, \overline {\bf{Y}} ^ \top .$$
(9)

Here we have defined the matrices \({\bf{X}} = \left[ {X_{ij}} \right] = \chi _{0i}\left( {{\bf{x}}_j} \right) \in {\Bbb R}^{m \times T}\) and \({\bf{Y}} = \left[ {Y_{ij}} \right] = \chi _{1i}\left( {{\bf{x}}_{j + \tau }} \right) \in {\Bbb R}^{m \times T}\) with \(\left\{ {\left( {{\bf{x}}_j,{\bf{x}}_{j + \tau }} \right)} \right\}_{j = 1}^T\) representing all available transition pairs, and their mean-free versions \(\overline {\bf{X}} = {\bf{X}} - T^{ - 1}{\bf{X1}}\), \(\overline {\bf{Y}} = {\bf{Y}} - T^{ - 1}{\bf{Y1}}\). The gradients of \(\hat R_2\) are given by:

$$\nabla _{\bf{X}}\hat R_2 = \frac{2}{{T - 1}}\overline {\bf{C}} _{00}^{\, - 1}\overline {\bf{C}} _{01}\overline {\bf{C}} _{11}^{\, - 1}\left( {\overline {\bf{Y}} - \overline {\bf{C}} _{01}^{\rm T}\overline {\bf{C}} _{00}^{\, - 1}\overline {\bf{X}} } \right)$$
(10)
$$\nabla _{\mathbf{Y}}\hat R_2 = \frac{2}{{T - 1}}\overline {\bf{C}} _{11}^{ - 1}\overline {\bf{C}} _{01}^{\rm T}\overline {\bf{C}} _{00}^{ - 1}\left( {\overline {\bf{X}} - \overline {\bf{C}} _{01}\overline {\bf{C}} _{11}^{ - 1}\overline {\bf{Y}} } \right)$$
(11)

and are back-propagated to train the two network lobes. See Supplementary Note 1 for derivations of Eqs. (6), (10), and (11).
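In practice, Eq. (6) can be implemented directly as a differentiable loss so that automatic differentiation reproduces the gradients of Eqs. (10) and (11). The TensorFlow sketch below is one possible realization, not the authors' exact code; the eigenvalue floor eps is an added numerical safeguard.

```python
import tensorflow as tf

def neg_vamp2_loss(chi_t, chi_tau, eps=1e-6):
    """Negative VAMP-2 score of Eq. (6), used as a loss for gradient-based training.

    chi_t, chi_tau: (batch, m) network outputs chi_0(x_t) and chi_1(x_{t+tau}).
    The gradients of Eqs. (10) and (11) are obtained by automatic differentiation
    rather than being coded explicitly; eps floors near-zero eigenvalues.
    """
    T = tf.cast(tf.shape(chi_t)[0], chi_t.dtype)
    x = chi_t - tf.reduce_mean(chi_t, axis=0, keepdims=True)      # mean-free X
    y = chi_tau - tf.reduce_mean(chi_tau, axis=0, keepdims=True)  # mean-free Y
    c00 = tf.matmul(x, x, transpose_a=True) / (T - 1.0)           # Eq. (7)
    c01 = tf.matmul(x, y, transpose_a=True) / (T - 1.0)           # Eq. (8)
    c11 = tf.matmul(y, y, transpose_a=True) / (T - 1.0)           # Eq. (9)

    def inv_sqrt(c):
        # Symmetric inverse square root via eigendecomposition.
        s, u = tf.linalg.eigh(c)
        s = tf.maximum(s, eps)
        return u @ tf.linalg.diag(tf.math.rsqrt(s)) @ tf.transpose(u)

    m = inv_sqrt(c00) @ c01 @ inv_sqrt(c11)
    return -(tf.reduce_sum(m ** 2) + 1.0)  # maximizing the score = minimizing this loss
```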

For simplicity of interpretation, we may just use a unique basis set χ = χ0 = χ1. Even when using two different basis sets would be meaningful, we can unify them by simply defining χ = (χ0, χ1). In this case, we clone the lobes of the network and train them using the total gradient \(\nabla \hat R_2 = \nabla _{\bf{X}}\hat R_2 + \nabla _{\bf{Y}}\hat R_2\).

After training, we assess the quality of the learned features and select hyper-parameters (e.g., network size), while avoiding overfitting, using the VAMP-2 validation score

$$\hat R_{\mathrm{2}}^{{\mathrm{val}}} = \left\| {\left( {\overline {\bf{C}} _{00}^{{\mathrm{val}}}} \right)^{ - \frac{1}{2}}\overline {\bf{C}} _{01}^{{\mathrm{val}}}\left( {\overline {\bf{C}} _{11}^{{\mathrm{val}}}} \right)^{ - \frac{1}{2}}} \right\|_F^2 + 1,$$
(12)

where \(\overline {\bf{C}} _{00}^{{\mathrm{val}}},\overline {\bf{C}} _{01}^{{\mathrm{val}}},\overline {\bf{C}} _{11}^{{\mathrm{val}}}\) are mean-free covariance matrices computed from a validation data set not used during the training.

Dynamical model and validation

The direct estimate of the time-lagged covariance matrix C01 is generally nonsymmetric, hence the Koopman model or MSM K given by Eq. (5) is typically not time-reversible28. In MD, it is often desirable to obtain a time-reversible kinetic model; see ref. 39 for a detailed discussion. To enforce reversibility, K can be reweighted as described in ref. 28 and implemented in PyEMMA13. The present results do not depend on enforcing reversibility, because the VAMPnet structure automatically performs coarse-graining and thus avoids classical analyses such as PCCA+63.

Since K is a Markovian model, it is expected to fulfill the Chapman–Kolmogorov (CK) equation:

$${\bf{K}}(n\tau ) = {\bf{K}}^n(\tau ),$$
(13)

for any value of n ≥ 1, where K(τ) and K(nτ) indicate the models estimated at a lag time of τ and nτ, respectively. However, since any Markovian model of MD can only be approximate8,64, Eq. (13) can only be fulfilled approximately, and the relevant test is whether it holds within statistical uncertainty. We construct two tests based on Eq. (13): in order to select a suitable dynamical model, we proceed as for Markov state models by conducting an eigenvalue decomposition for every estimated Koopman matrix, K(τ)ri = λi(τ)ri, and computing the implied timescales9 as a function of lag time:

$$t_i(\tau ) = - \frac{\tau }{{{\mathrm{ln}}\left| {\lambda _i(\tau )} \right|}}.$$
(14)

We choose a value of τ at which ti(τ) are approximately constant in τ. After having chosen τ, we test whether Eq. (13) holds within statistical uncertainty65. For both the implied timescales and the CK test, we proceed as follows: we train the neural network at a fixed lag time τ*, thus obtaining the network transformation χ, and then compute Eq. (13) or Eq. (14) for different values of τ with the fixed transformation χ. Finally, the approximation of the ith eigenfunction is given by

$$\hat \psi _i^e({\bf{x}}) = \mathop {\sum}\limits_j r_{ij}\chi _j({\bf{x}}).$$
(15)

If dynamics are reversible, the singular value decomposition and eigenvalue decomposition are identical, i.e., σi = λi and \(\psi _i = \psi _i^e\).
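A minimal sketch of both tests follows, where estimate_K is a hypothetical helper that re-estimates the Koopman matrix at a given lag time using the fixed, pre-trained network transformation χ.

```python
import numpy as np

def implied_timescales(K, tau):
    """Implied timescales t_i(tau) of Eq. (14) from the eigenvalues of K(tau)."""
    eigvals = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
    return -tau / np.log(eigvals[1:])    # discard the stationary eigenvalue (= 1)

def ck_test(estimate_K, tau, n):
    """Chapman-Kolmogorov test of Eq. (13): compare K(tau)^n with K(n*tau).

    estimate_K(lag) is a hypothetical helper returning the Koopman matrix
    estimated at that lag time with the fixed, pre-trained transformation chi.
    """
    predicted = np.linalg.matrix_power(estimate_K(tau), n)
    estimated = estimate_K(n * tau)
    return predicted, estimated
```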

Network architecture and training

We use VAMPnets to learn molecular kinetics from simulation data of a range of model systems. While any neural network architecture can be employed inside the VAMPnet lobes, we chose the following setup for our applications: the two network lobes are identical clones, i.e., χ0 ≡ χ1, and consist of fully connected networks. In most cases, the networks have fewer output nodes than input nodes, i.e., the network conducts a dimension reduction. In order to divide the work equally between network layers, we reduce the number of nodes from each layer to the next by a constant factor. Thus, the network architecture is defined by two parameters: the depth d and the number of output nodes nout. All hidden layers employ rectified linear units (ReLU)66,67.

Here, we build the output layer with Softmax output nodes, i.e., χi(x) ≥ 0 for all i and \(\mathop {\sum}\nolimits_i \chi _i({\bf{x}}) = 1\). Therefore, the activation of an output node can be interpreted as the probability of being in state i. As a result, the network effectively performs featurization, dimension reduction, and finally a fuzzy clustering into metastable states, and the K(τ) matrix computed from the network-transformed data is the transition matrix of a fuzzy MSM36,37. Consequently, Eq. (1) propagates probability distributions in time.

The networks were trained with pairs of MD configurations (xt, xt+τ) using the Adam stochastic gradient descent method68. For each result, we repeated 100 training runs, each with a randomly chosen 90%/10% division of the data into training and validation sets. See the Methods section for details on network architecture, training, and choice of hyper-parameters.
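A possible Keras realization of this setup is sketched below (layer sizes follow the alanine dipeptide lobe of Fig. 3; neg_vamp2_loss refers to the loss sketch above; this is illustrative rather than the exact implementation used).

```python
from tensorflow import keras

def build_lobe(n_in, hidden_sizes, n_out):
    """One VAMPnet lobe: fully connected ReLU layers with a Softmax output."""
    lobe = keras.Sequential([keras.layers.InputLayer(input_shape=(n_in,))])
    for n in hidden_sizes:
        lobe.add(keras.layers.Dense(n, activation='relu'))
    lobe.add(keras.layers.Dense(n_out, activation='softmax'))
    return lobe

# The two lobes are identical clones: the same lobe is applied to x_t and x_{t+tau}.
n_out = 6
lobe = build_lobe(n_in=30, hidden_sizes=[22, 16, 12, 9], n_out=n_out)
x_t = keras.Input(shape=(30,))
x_tau = keras.Input(shape=(30,))
merged = keras.layers.Concatenate()([lobe(x_t), lobe(x_tau)])
model = keras.Model(inputs=[x_t, x_tau], outputs=merged)

# The concatenated output is split inside the loss into chi_0(x_t) and chi_1(x_{t+tau});
# the initial learning rate of 0.05 follows the Methods section.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.05),
              loss=lambda y_true, y_pred: neg_vamp2_loss(y_pred[:, :n_out],
                                                         y_pred[:, n_out:]))
# model.fit([X_t, X_tau], dummy_targets, batch_size=4000, ...) with X_t, X_tau being
# the (T, 30) arrays of current and time-lagged configurations (90%/10% random split).
```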

Asymmetric double-well potential

We first model the kinetics of a bistable one-dimensional process, simulated by Brownian dynamics (Methods) in an asymmetric double-well potential (Fig. 2a). A trajectory of 50,000 time steps is generated. Three-layer VAMPnets are set up with 1-5-10-5 nodes in each lobe. The single input node of each lobe is given the current and time-lagged mean-free x coordinate of the system, i.e., xt − μ1 and xt + τ − μ2, where μ1 and μ2 are the respective means, and τ = 1 is used. The network maps to five Softmax output nodes that we will refer to as states, as the network performs a fuzzy discretization by mapping the input configurations to the output activations. The network is trained by using the VAMP-2 score with the four largest singular values.

Fig. 2
figure 2

Approximation of the slow transition in a bistable potential. a Potential energy function U(x) = x4 − 6x2 + 2x. b Eigenvector of the slowest process calculated by direct numerical approximation (black) and approximated by a VAMPnet with five output nodes (red). Activation of the five Softmax output nodes define the state membership probabilities (blue). c Relaxation timescales computed from the Koopman model using the VAMPnet transformation. d Chapman–Kolmogorov test comparing long-time predictions of the Koopman model estimated at τ = 1 and estimates at longer lag times. c, d report 95% confidence interval error bars over 100 training runs

The network learns to place the output states so as to best resolve the transition region (Fig. 2b), which is known to be important for the accuracy of a Markov state model8,64. This placement minimizes the Koopman approximation error, as seen by comparing the dominant Koopman eigenfunction (Eq. (15)) with a direct numerical approximation of the true eigenfunction, obtained from a transition matrix computed on a uniform 200-state discretization of the x axis (see ref. 8 for details). The implied timescale and CK tests (Eqs. (13) and (14)) confirm that the kinetic model learned by the VAMPnet successfully predicts the long-time kinetics (Fig. 2c, d).

Protein-folding model

While the first example was one-dimensional, we now test if VAMPnets are able to learn reaction coordinates that are nonlinear functions of a multi-dimensional configuration space. For this, we simulate a 100,000 time step Brownian dynamics trajectory (Eq. (17)) using the simple protein-folding model defined by the potential energy function (Supplementary Fig. 1a):

$$U(r) = \begin{cases} -2.5\,(r - 3)^2, & r < 3 \\ 0.5\,(r - 3)^3 - (r - 3)^2, & r \ge 3 \end{cases}$$

The system has a five-dimensional configuration space, \({\mathbf{x}} \in {\Bbb R}^5\), however the energy only depends on the norm of the vector \(r = \left| {\bf{x}} \right|\). While small values of r are energetically favorable, large values of r are entropically favorable as the number of configurations available on a five-dimensional hypersphere grows dramatically with r. Thus, the dynamics are bistable along the reaction coordinate r. Four-layer network lobes with 5-32-16-8-2 nodes each were employed and trained to maximize the VAMP-2 score involving the largest nontrivial singular value.

The two output nodes successfully identify the folded and the unfolded states, and use intermediate memberships for the intersecting transition region (Supplementary Fig. 1b). The network excellently approximates the Koopman eigenfunction of the folding process, as apparent from the comparison of the values of the network eigenfunction computed by Eq. (15) with the eigenvector computed from a high-resolution MSM built on the r coordinate (Supplementary Fig. 1b). This demonstrates that the network can learn the nonlinear reaction coordinate mapping \(r = \left| {\bf{x}} \right|\) based only on maximizing the variational score (Eq. 6). Furthermore, the implied timescales and the CK test indicate that the network model predicts the long-time kinetics almost perfectly (Supplementary Fig. 1c, d).

Alanine dipeptide

As a next level, VAMPnets are used to learn the kinetics of alanine dipeptide from simulation data. It is known that the ϕ and ψ backbone torsion angles are the most important reaction coordinates that separate the metastable states of alanine dipeptide, however, our networks only receive Cartesian coordinates as an input, and are thus forced to learn both the nonlinear transformation to the torsion angle space and an optimal cluster discretization within this space, in order to obtain an accurate kinetic model.

A 250 ns MD trajectory generated in ref. 69 (MD setup described there) serves as a data set. The solute coordinates were stored every ps, resulting in 250,000 configurations that are all aligned on the first frame using a minimal root mean square deviation fit to remove global translation and rotation. Each network lobe uses the three-dimensional coordinates of the 10 heavy atoms as input, (x1, y1, z1, ..., x10, y10, z10), and the network is trained using a time lag of τ = 40 ps. Different numbers of output states and layer depths are considered, employing the layer sizing scheme described in the Methods section (see Fig. 3 for an example).

Fig. 3
figure 3

Representative structure of one lobe of the VAMPnet used for alanine dipeptide. Shown is the five-layer network with six output states used for the results in Fig. 4. Layers are fully connected, have 30-22-16-12-9-6 nodes, and use dropout in the first two hidden layers. All hidden neurons use ReLU activation functions, while the output layer uses a Softmax activation function in order to achieve a fuzzy discretization of state space

A VAMPnet with six output states learns a discretization in six metastable sets corresponding to the free energy minima of the ϕ/ψ space (Fig. 4b). The implied timescales indicate that given the coordinate transformation found by the network, the two slowest timescales are converged at lag time τ = 50 ps or larger (Fig. 4c). Thus, we estimated a Koopman model at τ = 50 ps, whose Markov transition probability matrix is depicted in Fig. 4d. Note that transition probabilities between state pairs 1 ↔ 4 and 2 ↔ 3 are important for the correct kinetics at τ = 50 ps, but the actual trajectories typically pass via the directly adjacent intermediate states. The model performs excellently in the CK test (Fig. 4e).

Fig. 4
figure 4

VAMPnet kinetic model of alanine dipeptide. a Structure of alanine dipeptide. The main coordinates describing the slow transitions are the backbone torsion angles ϕ and ψ, however the neural network inputs are only the Cartesian coordinates of heavy atoms. b Assignment of all simulated molecular coordinates, plotted as a function of ϕ and ψ, to the six Softmax output states. Color corresponds to activation of the respective output neuron, indicating the membership probability to the associated metastable state. c Relaxation timescales computed from the Koopman model using the neural network transformation. d Representation of the transition probability matrix of the Koopman model; transitions with a probability lower than 0.5% have been omitted. e Chapman–Kolmogorov test comparing long-time predictions of the Koopman model estimated at τ = 50 ps and estimates at longer lag times. c, e report 95% confidence interval error bars over 100 training runs excluding failed runs (see text)

Choice of lag time, network depth, and number of output states

We studied the success probability of optimizing a VAMPnet with six output states as a function of the lag time τ by conducting 200 optimization runs. Success was defined as resolving the three slowest processes, i.e., finding the three slowest timescales to be higher than 0.2, 0.4, and 1 ns, respectively. Note that the results shown in Fig. 4 are reported for runs that are successful according to this definition. There is a range of τ values from 4 to 32 ps where the training succeeds with a significant probability (Supplementary Fig. 2a). However, even in this range the success rate is still below 40%, which is mainly because many runs fail to find the rarely occurring third-slowest process that corresponds to the ψ transition in the positive ϕ range (Fig. 4b, states 5 and 6).

The breakdown of optimization success for small and large lag times can be most easily explained by the eigenvalue decomposition of Markov propagators8. When the lag time exceeds the timescale of a process, the amplitude of this process becomes negligible, making it hard to fit given noisy data. At short lag times, many processes have large eigenvalues, which increases the search space of the neural network and appears to increase the probability of getting stuck in suboptimal maxima of the training score.

We have also studied the success probability, as defined above, as a function of network depth. Deeper networks can represent more complex functions. Also, since the networks defined here reduce the input dimension to the output dimension by a constant factor per layer, deeper networks perform a less radical dimension reduction per layer. On the other hand, deeper networks are more difficult to train. As seen in Supplementary Fig. 2b, a high success rate is found for four to seven layers.

Next, we studied the dependency of the network-based discretization as a function of the number of output nodes (Fig. 5a–c). With two output states, the network separates the state space at the slowest transition between negative and positive values of the ϕ angle (Fig. 5a). The result with three output nodes keeps the same separation and additionally distinguishes between the α and β regions of the Ramachandran plot, i.e., small and large values of the ψ angle (Fig. 5b). For a higher number of output states, finer discretizations and smaller interconversion timescales are found, until the network starts discretizing the transition regions, such as the two transition states between the α and β regions along the ψ angle (Fig. 5c). We chose the lag time depending on the number of output nodes of the network, using τ = 200 ps for two output nodes, τ = 60 ps for three output nodes, and τ = 1 ps for eight output nodes.

Fig. 5
figure 5

Kinetic model of alanine dipeptide as a function of the number of output states. ac Assignment of input coordinates, plotted as a function of ϕ and ψ, to two, three, and eight output states. Color corresponds to activation of the respective output neuron, indicating the membership probability to this state (Fig. 4b). d Comparison of VAMPnet and MSM performance as a function of the number of output states/MSM states. Mean VAMP-2 score and 95% confidence interval from 100 runs are shown. e Mean squared values of the four largest singular values that make up the VAMPnets score plotted in d

A network output with k Softmax neurons describes a (k − 1)-dimensional feature space as the Softmax normalization removes one degree of freedom. Thus, to resolve k − 1 relaxation timescales, at least k output nodes or metastable states are required. However, the network quality can improve when given more degrees of freedom in order to approximate the dominant singular functions accurately. Indeed, the best scores using k = 4 singular values (three nontrivial singular values) are achieved when using at least six output states that separate each of the six metastable states in the Ramachandran plane (Fig. 5d, e).

For comparison, we investigated how a standard MSM would perform as a function of the number of states (Fig. 5d). For a fair comparison, the MSMs also used Cartesian coordinates as input, but then employed a state-of-the-art procedure using a kinetic map transformation that preserves 95% of the cumulative kinetic variance31, followed by k-means clustering, where the parameter k is varied. The MSM VAMP-2 scores obtained by this procedure are significantly worse than those of VAMPnets when fewer than 20 states are employed. Clearly, MSMs will succeed when sufficiently many states are used, but in order to obtain an interpretable model, those states must again be coarse-grained onto a fewer-state model, whereas VAMPnets directly produce an accurate model with few states.

VAMPnets learn to transform Cartesian to torsion coordinates

The results above indicate that the VAMPnet has implicitly learned the feature transformation from Cartesian coordinates to backbone torsions. In order to probe this ability more explicitly, we trained a network with 30-10-3-3-2-5 layers, i.e., including a bottleneck of two nodes before the output layer. We find that the activation of the two bottleneck nodes correlates excellently with the ϕ and ψ torsion angles that were not presented to the network (Pearson correlation coefficients of 0.95 and 0.92, respectively, Supplementary Fig. 3a, b). To visualize the internal representation that the network learns, we color data samples depending on the free energy minima in the ϕ/ψ space they belong to (Supplementary Fig. 3c), and then show where these samples end up in the space of the bottleneck node activations (Supplementary Fig. 3d). It is apparent that the network learns a representation of the Ramachandran plot—the four free energy minima at small ϕ values (αR and β areas) are represented as contiguous clusters with the correct connectivity, and are well separated from states with large ϕ values (αL area). The network fails to separate the two substates in the large ϕ value range well, which explains the frequent failure to find the corresponding transition process and the third-largest relaxation timescale.

NTL9 protein-folding dynamics

In order to proceed to a higher-dimensional problem, we analyze the kinetics of an all-atom protein-folding simulation of the NTL9 protein generated by the Anton supercomputer1. A five-layer VAMPnet was trained at lag time τ = 10 ns using 111,000 time steps, uniformly sampled from a 1.11 ms trajectory. Since NTL9 folds and unfolds, there is no unique reference structure to align Cartesian coordinates to; hence we use internal coordinates as the network input. We computed the nearest-neighbor heavy-atom distances dij for all non-redundant pairs of residues i and j and transformed them into contact maps using the definition cij = exp(−dij), resulting in 666 input nodes.
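A sketch of this featurization is given below; obtaining the residue minimum heavy-atom distances with mdtraj, as well as the file names, are assumptions made here for illustration.

```python
import numpy as np
import mdtraj as md

# Sketch of the contact-map featurization; computing the residue minimum
# heavy-atom distances with mdtraj (and the file names) is an assumption made
# for illustration, not necessarily the original preprocessing.
traj = md.load('ntl9_trajectory.xtc', top='ntl9.pdb')
dists, pairs = md.compute_contacts(traj, contacts='all', scheme='closest-heavy')
contacts = np.exp(-dists)  # c_ij = exp(-d_ij); one feature vector per time step
```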

Again, the network performs a hierarchical decomposition of the molecular configuration space when increasing the number of output nodes. Figure 6a shows the decomposition of state space for two and five output nodes, along with the corresponding mean contact maps and state probabilities. With two output nodes, the network finds the folded and unfolded states that are separated by the slowest transition process (Fig. 6a, middle row). With five output states, the folded state is decomposed into a stable and well-defined fully folded substate and a less stable, more flexible substate that is missing some of the tertiary contacts of the fully folded substate. The unfolded state decomposes into three substates: one largely unstructured, a second with residual structure, thus forming a folding intermediate, and a third, mis-folded state with an entirely different fold including a non-native β-sheet.

Fig. 6
figure 6

VAMPnet results of NTL9-folding kinetics. a Hierarchical decomposition of the NTL9 protein state space by a network with two and five output nodes. Mean contact maps are shown for all MD samples grouped by the network, along with the fraction of samples in that group. 3D structures are shown for the five-state decomposition, residues involved in α-helices or β-sheets in the folded state are colored identically across the different states. b Relaxation timescales computed from the Koopman model approximated using the transformation applied by a neural network with five output nodes. c Relaxation timescales from a Markov state model computed from a TICA transformation of the contact maps, followed by k-means clustering with k = 40. d Chapman–Kolmogorov test comparing long-time predictions of the Koopman model estimated at τ = 320 ns and estimates at longer lag times. bd report 95% confidence interval error bars over 100 training runs

The relaxation timescales found by a five-state VAMPnet model are on par with those found by a 40-state MSM using state-of-the-art estimation methods (Fig. 6b, c). However, the fact that only five states are required in the VAMPnet model makes it easier to interpret and analyze. Additionally, the CK test indicates excellent agreement between long-time predictions and direct estimates (Fig. 6d).

Discussion

We have introduced a deep learning framework for molecular kinetics, called VAMPnet. Data-driven learning of molecular kinetics is usually done by shallow learning structures, such as TICA and MSMs. However, the processing pipeline, typically consisting of featurization, dimension reduction, MSM estimation, and MSM coarse-graining is, in principle, a handcrafted deep learning structure. Here we propose to replace the entire pipeline by a deep neural network that learns optimal feature transformations, dimension reduction and, if desired, maps the MD time steps to a fuzzy clustering. The key to optimize the network is the VAMP variational approach that defines scores by which learning structures can be optimized to learn models of both equilibrium and non-equilibrium MD.

Although MSM-based kinetic modeling has been refined over more than a decade, VAMPnets perform competitively with it, or better, in our examples. In particular, they perform extremely well in the Chapman–Kolmogorov test that validates the long-time prediction of the model. VAMPnets have a number of advantages over models based on MSM pipelines: (i) they may be closer to optimal overall, because featurization, dimension reduction, and clustering are not explicitly separate processing steps. (ii) When using Softmax output nodes, the VAMPnet performs a fuzzy clustering of the MD structures fed into the network and constructs a fuzzy MSM, which is readily interpretable in terms of transition probabilities between metastable states. In contrast to other MSM coarse-graining techniques, it is thus not necessary to accept a reduction in model quality in order to obtain a few-state MSM; such a coarse-grained model is seamlessly learned within the same learning structure. (iii) VAMPnets require less user expertise to train than an MSM-based processing pipeline, and the formulation of molecular kinetics as a neural network learning problem enables us to exploit an arsenal of highly developed and optimized tools in machine-learning software such as TensorFlow, Theano, or Keras.

Despite these advantages, VAMPnets still miss many of the benefits that come with extensions developed for the MSM approach. These include multi-ensemble Markov models that are superior to single conventional MSMs in terms of sampling rare events by combining data from multiple ensembles70,71,72,73,74,75, augmented Markov models that combine simulation data with experimental observation data76, and statistical error estimators developed for MSMs77,78,79. Since these methods explicitly use the MSM likelihood, it is currently unclear how they could be implemented in a deep learning structure such as a VAMPnet. Extending VAMPnets toward these special capabilities is a challenge for future studies.

Finally, a remaining concern is that the optimization of VAMPnets can get stuck in suboptimal local maxima. In other applications of network-based learning, a working knowledge has been established of which types of network implementation and learning algorithm are most suitable for robust and reproducible learning. For example, it is conceivable that the VAMPnet lobes may benefit from convolutional filters80 or different types of transfer functions. Suitably chosen convolutions, as in ref. 81, may also lead to learned feature transformations that are transferable within a given class of molecules.

Methods

Neural network structure

Each network lobe in Fig. 1 has a number of input nodes given by the data dimension. According to the VAMP variational principle (Sec. A), the output dimension must be at least equal to the number of Koopman singular functions that we want to approximate, i.e., equal to the k used in the score function \(\hat R_2\). In most applications, the number of input nodes exceeds the number of output nodes, i.e., the network conducts a dimension reduction. Here, we keep the dimension reduction factor from layer i with ni nodes to layer i + 1 with ni+1 nodes constant:

$$\frac{{n_i}}{{n_{i + 1}}} = \left( {\frac{{n_{{\mathrm{in}}}}}{{n_{{\mathrm{out}}}}}} \right)^{1/d},$$
(16)

where d is the network depth, i.e., the number of layers excluding the input layer. Thus, the network structure is fixed by nout and d. We tested different values for d ranging from 2 to 11; for alanine dipeptide, Supplementary Fig. 2b reports the results in terms of the training success rate described in the Results section. Networks have a number of parameters that ranges between 100 and 400,000, most of which are between the first and second layer due to the rapid dimension reduction of the network. To avoid overfitting, we use dropout during training82, and select hyper-parameters using the VAMP-2 validation score.
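A small helper implementing Eq. (16), for illustration; rounding to integers is an added choice, so the resulting node counts may differ slightly from, e.g., the 30-22-16-12-9-6 lobe of Fig. 3.

```python
def layer_sizes(n_in, n_out, depth):
    """Node counts per layer following Eq. (16): a constant reduction factor
    n_i / n_{i+1} = (n_in / n_out)^(1/d) from the input to the output layer."""
    factor = (n_in / n_out) ** (1.0 / depth)
    sizes = [int(round(n_in / factor ** i)) for i in range(depth + 1)]
    sizes[-1] = n_out  # make sure the output layer matches exactly
    return sizes

# Example: layer_sizes(30, 6, 5) gives a geometric interpolation between 30 and 6 nodes.
```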

Neural network hyper-parameters

Hyper-parameters include the regularization factors for the weights of the fully connected and the Softmax layers, the dropout probabilities for each layer, the batch size, and the learning rate for the Adam algorithm. Since a grid search in the joint parameter space would have been too computationally expensive, each hyper-parameter was optimized using the VAMP-2 validation score while keeping the other hyper-parameters constant. We started with the regularization factors due to their large effect on the training performance, and observed optimal performance for a factor of 10−7 for the fully connected hidden layers and 10−8 for the output layer; regularization factors >10−4 frequently led to training failure. Subsequently, we tested dropout probabilities ranging from 0 to 50% and found 10% dropout in the first two hidden layers and no dropout otherwise to perform well. The results did not strongly depend on the training batch size; however, more training iterations are necessary for large batches, while small batches exhibit stronger fluctuations in the training score. We found a batch size of 4000 to be a good compromise, with tested values ranging between 100 and 16,000. The optimal learning rate strongly depends on the network topology (e.g., the number of hidden layers and the number of output nodes). In order to adapt the learning rate, we started from an arbitrary rate of 0.05. If no improvement on the validation VAMP-2 score was observed over 10 training iterations, the learning rate was reduced by a factor of 10. This scheme led to better convergence of the training and validation scores and better kinetic model validation compared to using a high learning rate throughout.
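This schedule can, for instance, be approximated with Keras' standard ReduceLROnPlateau callback, monitoring the negative validation VAMP-2 score as the validation loss; the sketch below is only a plausible realization, not necessarily the original mechanism.

```python
from tensorflow import keras

# Reduce the learning rate tenfold after 10 epochs without improvement of the
# monitored validation loss (here the negative validation VAMP-2 score).
lr_schedule = keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
                                                factor=0.1,
                                                patience=10)
# model.fit(..., callbacks=[lr_schedule])
```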

The time lag between the input pairs of configurations was selected depending on the number of output nodes of the network: larger lag times are better at isolating the slowest processes, and thus are more suitable with a small number of output nodes. The procedure for choosing the network structure and lag time is thus as follows: first, the number of output nodes n and the number of hidden layers are selected, which determines the network structure as described above. Then, a lag time is chosen at which the largest n singular values (corresponding to the n − 1 slowest processes) can be trained consistently.

VAMPnet training and validation

We pre-trained the network by minimizing the negative VAMP-1 score during the first third of the total number of epochs, and subsequently optimized the network using the VAMP-2 score (Sec. B). In order to ensure robustness of the results, we performed 100 network optimization runs for each problem. In each run, the data set was shuffled and randomly split into 90%/10% for training and validation, respectively. To exclude outliers, we then discarded the best 5% and the worst 5% of results. Hyper-parameter optimization was done using the validation score averaged over the remaining runs. Figures report training or validation means and 95% confidence intervals.

Brownian dynamics simulations

The asymmetric double well and the protein-folding toy model are simulated by over-damped Langevin dynamics in a potential energy function U(x), also known as Brownian dynamics, using a forward Euler integration scheme. The position xt is propagated by a time step Δt via:

$${\bf{x}}_{t + {\mathrm{\Delta }}t} = {\bf{x}}_t - {\mathrm{\Delta }}t\frac{{\nabla U({\bf{x}})}}{{kT}} + \sqrt {2{\mathrm{\Delta }}tD} {\bf{w}}_t,$$
(17)

where D is the diffusion constant and kT is the product of the Boltzmann constant and the temperature. Here, dimensionless units are used and D = 1, kT = 1. The elements of the random vector wt are sampled from a normal distribution with zero mean and unit variance.
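A minimal NumPy sketch of Eq. (17) follows; the time step value and starting position are arbitrary illustrative choices.

```python
import numpy as np

def brownian_dynamics(grad_U, x0, n_steps, dt=1e-3, D=1.0, kT=1.0, seed=0):
    """Forward Euler integration of over-damped Langevin dynamics, Eq. (17).

    grad_U: callable returning the gradient of the potential U at position x.
    Dimensionless units as in the text (D = kT = 1); dt and x0 are free choices.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    traj = np.empty((n_steps + 1, x.size))
    traj[0] = x
    for t in range(n_steps):
        w = rng.standard_normal(x.size)                      # zero mean, unit variance
        x = x - dt * grad_U(x) / kT + np.sqrt(2.0 * dt * D) * w
        traj[t + 1] = x
    return traj

# Example: the asymmetric double well U(x) = x^4 - 6 x^2 + 2 x of Fig. 2a.
grad_double_well = lambda x: 4.0 * x**3 - 12.0 * x + 2.0
traj = brownian_dynamics(grad_double_well, x0=[-1.0], n_steps=50000)
```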

Hardware used and training times

VAMPnets were trained on a single NVIDIA GeForce GTX 1080 GPU, requiring between 20 s (double-well problem) and 180 s (NTL9) per training run.

Code availability

TICA, k-means and MSM analyses were conducted with PyEMMA version 2.4, freely available at http://www.pyemma.org. VAMPnets are implemented using the freely available packages keras83 with tensorflow-gpu84 as a backend. The code can be obtained at https://github.com/markovmodel/deeptime.

Data availability

Data for NTL9 can be requested from the authors of ref. 1. Data for all other examples is available at https://github.com/markovmodel/deeptime.