Introduction

Tracking time evolution and characterizing phase transitions are fundamental tasks in nonequilibrium statistical mechanics1, with implications for a wide range of fields including quantum transport2, molecular machines3, quantitative biology4,5, and complex networks6. For example, glassy behavior has been explored in kinetically constrained models (KCMs) of spin flips on a lattice7, where each spin can facilitate the flips of its neighboring spins. The stochastic dynamics of spin flips give insights into the dynamical heterogeneity of the glass transition8. Another example is the voter model9 describing consensus formation, where voters located on a network choose their opinions based on their neighbors and can form characteristic spatial structures.

Despite tremendous efforts, studying dynamical phase transitions remains challenging in general. First, it requires tracking the evolving probability distribution over microscopic configurations, whose number grows exponentially with the system size, which can make sampling trajectories computationally prohibitive10. Second, the phase transition is induced by a control parameter, and the corresponding dynamical operator governing the time evolution, known as the “tilted” generator11, does not preserve the normalization of the distribution. Thus, one needs to estimate the normalization factor, i.e., the dynamical partition function, a central quantity in nonequilibrium systems12. Third, studying nonequilibrium dynamics at arbitrary times is even more challenging, because it demands estimating the whole spectrum of the tilted generator, rather than computing only the largest eigenvalue in the long-time limit13,14 based on large deviation theory11,15. Analytically, the problem is intractable except for rare cases16,17,18; numerically, the major difficulty of studying time evolution beyond the steady state is representing the high-dimensional probability distribution, whose complexity grows as correlations build up over time.

In this work, we develop a general framework to track nonequilibrium systems over time with variational autoregressive networks (VAN), which offer an ideal model for describing the normalized joint distribution of configurations19. The VAN was previously shown to be effective in investigating equilibrium statistical mechanics20,21,22, quantum many-body systems23,24,25, chemical reaction networks26 and computational biology27. However, evaluating the dynamical partition function and characterizing dynamical phase transitions have not been achieved in these applications of the VAN. Here, we leverage the VAN to propose an algorithm for tracking the evolving probability distribution, leading to an efficient computation of the dynamical partition function. The latter serves as the moment-generating function of dynamical observables, which uncovers the phase transition over time (Fig. 1).

Fig. 1: Uncovering dynamical phase transitions in nonequilibrium statistical mechanics by machine learning.
figure 1

a The major theme of equilibrium and nonequilibrium statistical mechanics. b Uncovering dynamical phase transitions requires studying the “tilted” dynamics under \({{\mathbb{T}}}_{s}\) with various values of the control parameter s, where the evolved distribution is no longer normalized (dashed light blue). As the normalization factor, the dynamical partition function needs to be learned over time to discover the dynamical phase diagram. c The proposed algorithm for learning the dynamical partition function by training the VAN (distributions with hat) over time. The loss function is given by the KL-divergence between the VAN at time t + δt with learnable parameters θt+δt and the evolved distribution learned at t with parameters θt fixed. The algorithm tracks nonequilibrium dynamics and reveals dynamical phase transitions, overcoming the three challenges in (b).

We apply the approach to representative models in nonequilibrium statistical mechanics. We first validate the method on the voter model by comparing with the analytical result6. To demonstrate that our approach can reveal unknown dynamical phenomena, we investigate KCMs of spin flips for glassy dynamics, including the Fredrickson-Andersen (FA) model28 and variants of the East model29, in one dimension (1D), two dimensions (2D), and three dimensions (3D). Previously, the dynamical active-inactive phase transition in space and time30,31,32 was investigated mainly at the steady state in the long-time limit, where KCMs have two phases with extensive or subextensive spin-flipping activities. However, the phase transition in the full-time regime was seldom investigated. For the 1D finite-time problem, our results agree with the recent study using matrix product states33. For the 2D and 3D cases, where the phase transition at arbitrary time had not been obtained either analytically or numerically, our method uncovers the dynamical phase diagram versus time and the control parameter (the counting field), as well as the critical exponent of the finite-time scaling. We also observe the emergence of characteristic spatial structures over time, extending the steady-state result34. We further discuss applications to nonequilibrium systems with other types of dynamics and topologies.

Results

Nonequilibrium statistical mechanics

We consider a continuous-time discrete-state Markovian dynamics of size N. Each variable has ds states (ds = 2 for binary spin systems), giving \(M={d}_{s}^{N}\) configurations in total. With configuration states \(\left\vert {{{{{{{\bf{x}}}}}}}}\right\rangle \equiv ({x}_{1},\, {x}_{2},\, \ldots,\, {x}_{N})\) forming an orthonormal basis, the system at time t is described by the probability vector \(\vert {P}_{t}\rangle={\sum }_{{{{{{{{\bf{x}}}}}}}}}{P}_{t}({{{{{{{\bf{x}}}}}}}})\left\vert {{{{{{{\bf{x}}}}}}}}\right\rangle\). It evolves under the stochastic master equation35:

$$\frac{d}{dt}\vert {P}_{t}\rangle={\mathbb{W}}\vert {P}_{t}\rangle,\,$$
(1)

where \({\mathbb{W}}={\sum }_{{{{{{{{\bf{x}}}}}}}},\, {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }\ne {{{{{{{\bf{x}}}}}}}}}{w}_{{{{{{{{\bf{x}}}}}}}},\, {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }}\left\vert {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }\right\rangle \left\langle {{{{{{{\bf{x}}}}}}}}\right\vert -{\sum }_{{{{{{{{\bf{x}}}}}}}}}{r}_{{{{{{{{\bf{x}}}}}}}}}\left\vert {{{{{{{\bf{x}}}}}}}}\right\rangle \left\langle {{{{{{{\bf{x}}}}}}}}\right\vert\) is the generator, \({w}_{{{{{{{{\bf{x}}}}}}}},\, {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }}\) is the transition rate from \(\left\vert {{{{{{{\bf{x}}}}}}}}\right\rangle\) to \(\left\vert {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }\right\rangle\), and \({r}_{{{{{{{{\bf{x}}}}}}}}}={\sum }_{{{{{{{{{\bf{x}}}}}}}}}^{{\prime} }\ne {{{{{{{\bf{x}}}}}}}}}{w}_{{{{{{{{\bf{x}}}}}}}},\, {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }}\) is the escape rate from \(\left\vert {{{{{{{\bf{x}}}}}}}}\right\rangle\).
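
As a minimal illustration of the generator structure (a single two-state spin, purely illustrative and not one of the models studied here), the sketch below builds \({\mathbb{W}}\) with flip-up rate c and flip-down rate 1 − c, checks that each column sums to zero so that Eq. (1) conserves probability, and evolves an initial distribution with the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Single two-state spin (illustrative): W[x', x] = w_{x -> x'} off the diagonal,
# and the diagonal holds minus the escape rates r_x.
c = 0.3
W = np.array([[-c,      1 - c],
              [ c,   -(1 - c)]])
assert np.allclose(W.sum(axis=0), 0.0)   # columns sum to zero: probability is conserved

P0 = np.array([1.0, 0.0])                # start from the down state
Pt = expm(1.0 * W) @ P0                  # |P_t> at t = 1 from Eq. (1)
print(Pt, Pt.sum())                      # the distribution stays normalized
```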

Dynamical partition function

Studying dynamical phase transitions emerging from microscopic interactions relies on evaluating dynamical observables. In general, a quantity of interest is the time-extensive dynamical observable \(\hat{K}\) incremented along a trajectory \({\omega }_{t}=\{{{{{{{{{\bf{x}}}}}}}}}_{{t}_{0}}\to {{{{{{{{\bf{x}}}}}}}}}_{{t}_{1}}\cdots \to {{{{{{{{\bf{x}}}}}}}}}_{t}\}\), with time-step length δt and total time t = Jδt over J time steps. The probability of observing the dynamical observable with a value K is obtained by summing over all possible trajectories: \({P}_{t}(K)={\sum }_{{\omega }_{t}}p({\omega }_{t})\delta [\hat{K}({\omega }_{t})-K]\), where p(ωt) is the probability of the trajectory ωt.

To extract the statistics of the dynamical observable, it is useful to estimate its moment-generating function, i.e., the dynamical partition function \({Z}_{t}(s)={\sum }_{K}{P}_{t}(K){e}^{-sK}={\sum }_{{\omega }_{t}}p({\omega }_{t}){e}^{-s\hat{K}({\omega }_{t})}\), where a control parameter s is introduced as the conjugate variable to the dynamical observable. Taking derivatives of Zt(s) with respect to s gives moments of the dynamical observable’s distribution. The dynamical partition function can be evaluated by30:

$${Z}_{t}(s)=\langle -| {e}^{t{{\mathbb{W}}}_{s}}| {{{{{{{\rm{ss}}}}}}}}\rangle,\,$$
(2)

where \(\left\langle -\right\vert={\sum }_{{{{{{{{\bf{x}}}}}}}}}\left\langle {{{{{{{\bf{x}}}}}}}}\right\vert\), and \(\left\vert {{{{{{{\rm{ss}}}}}}}}\right\rangle\) is the steady-state probability vector under the generator \({\mathbb{W}}\). Different from \({\mathbb{W}}\), \({{\mathbb{W}}}_{s}\) is termed the “tilted” generator, and its form depends on the dynamical observable \(\hat{K}\) and the control parameter s.

A representative dynamical observable

For the spin-flip dynamics, a characteristic dynamical observable is the “dynamical activity”30,36, measuring the number of spin flips. The observable quantifies how dynamically active the trajectories are: a more active trajectory has a higher value. The control parameter s is then termed the “counting field”. To evaluate the dynamical observable, the generator is modified correspondingly into the tilted generator30:

$${{\mathbb{W}}}_{s}=\mathop{\sum}\limits_{{{{{{{{\bf{x}}}}}}}},\, {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }\ne {{{{{{{\bf{x}}}}}}}}}{e}^{-s}{w}_{{{{{{{{\bf{x}}}}}}}},\, {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }}\vert {{{{{{{{\bf{x}}}}}}}}}^{{\prime} }\rangle \langle {{{{{{{\bf{x}}}}}}}}\vert -\mathop{\sum}\limits_{{{{{{{{\bf{x}}}}}}}}}{r}_{{{{{{{{\bf{x}}}}}}}}}\left\vert {{{{{{{\bf{x}}}}}}}}\right\rangle \left\langle {{{{{{{\bf{x}}}}}}}}\right\vert,\,$$
(3)

under which the probability is no longer normalized. Combining Eq. (2) and Eq. (3), the moments can be calculated, including the average dynamical activity per unit of time and site:

$${k}_{t}(s)=-\frac{1}{Nt}\frac{d}{ds}\ln {Z}_{t}(s).$$
(4)

Other dynamical observables can be investigated in a similar way by using their corresponding tilted generators.
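
For a numerically exact baseline on a tiny system (a single spin, purely illustrative and not the KCMs below), the tilted generator of Eq. (3), the partition function of Eq. (2) and the activity of Eq. (4) can be evaluated directly, with the derivative taken by a finite difference:

```python
import numpy as np
from scipy.linalg import expm

# Single-spin sketch: tilt the off-diagonal rates by exp(-s), evaluate
# Z_t(s) = <-| exp(t W_s) |ss>, and differentiate ln Z_t(s) numerically.
c, t, N = 0.3, 2.0, 1
rates = np.array([[0.0, 1 - c],          # w_{x,x'}: column = from-state, row = to-state
                  [c,   0.0]])
escape = np.diag(rates.sum(axis=0))      # escape rates r_x on the diagonal

def lnZ(s):
    W_s = np.exp(-s) * rates - escape    # tilted generator, Eq. (3)
    P_ss = np.array([1 - c, c])          # steady state of the untilted generator
    return np.log(np.ones(2) @ expm(t * W_s) @ P_ss)   # Eq. (2)

ds, s0 = 1e-4, 0.5
k = -(lnZ(s0 + ds) - lnZ(s0 - ds)) / (2 * ds) / (N * t)   # activity k_t(s), Eq. (4)
print(k)
```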

Tracking nonequilibrium statistical mechanics and dynamical phase transitions

A natural approach to studying nonequilibrium dynamics is to track the evolution equations, Eqs. (1) and (2). Unfortunately, the exact representation of \(\vert {P}_{t}\rangle\) requires a computational effort that is exponential in the system size. Hence we need an efficient method to approximately represent \(\vert {P}_{t}\rangle\). Here, we consider a neural-network model, the VAN19, as a variational ansatz for \(\vert {P}_{t}\rangle\). The VAN maps configurations to the probability distribution Pt(x) in \(\vert {P}_{t}\rangle\) as a product of conditional probabilities:

$${\hat{P}}_{t}^{\theta }({{{{{{{\bf{x}}}}}}}})=\mathop{\prod }\limits_{i=1}^{N}{\hat{P}}_{t}^{\theta }({x}_{i}| {x}_{1},\, \ldots,\, {x}_{i-1}),\,$$
(5)

where the hat symbol denotes the parameterization by the neural network with learnable parameters θ.

For lattice systems, we start from an initial site and traverse the lattice in a predetermined order to acquire all the conditional probabilities (Methods). Each conditional probability \({\hat{P}}_{t}^{\theta }({x}_{i}| {x}_{1},\, \ldots,\, {x}_{i-1})\) is parameterized by a neural network, whose input is the previously visited sites \(\{{x}_{{i}^{{\prime} } < i}\}\) and whose output is the properly normalized conditional probability of xi19. The parameters of the neural network can be shared among sites37 to increase computational efficiency. Since all the conditional probabilities are available, one can efficiently generate samples together with their normalized probabilities, which can be used to compute quantities such as energy and entropy and to construct loss functions to update the parameters.
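
A minimal sketch of the autoregressive construction in Eq. (5) is given below; it uses a single shared network acting on the zero-padded prefix, which is deliberately simpler than the MADE/PixelCNN/RNN architectures used in this work, but it illustrates how ancestral sampling returns configurations together with their normalized log-probabilities.

```python
import torch

# Hypothetical shared conditional network: P(x_i = 1 | x_{<i}) from the zero-padded prefix.
N, batch = 16, 1000
net = torch.nn.Sequential(torch.nn.Linear(N, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, 1), torch.nn.Sigmoid())

def sample_with_logprob():
    x = torch.zeros(batch, N)
    logp = torch.zeros(batch)
    for i in range(N):
        prefix = x.clone()
        prefix[:, i:] = 0.0                       # only already-sampled sites are visible
        p_up = net(prefix).squeeze(-1)            # conditional probability of x_i = 1
        xi = torch.bernoulli(p_up)
        x[:, i] = xi
        logp = logp + torch.log(torch.where(xi > 0, p_up, 1 - p_up) + 1e-12)
    return x, logp                                # log of the normalized joint, Eq. (5)

samples, log_probs = sample_with_logprob()
```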

The VAN is capable of expressing strongly correlated distributions20,22, including equilibrium distributions in statistical mechanics19, steady-state distributions of KCMs in nonequilibrium statistical mechanics34, and quantum systems21. Here, we find it effective for learning time-evolving distributions. The expressivity of the VAN and the training time depend on the architecture, as well as the depth and width of the neural network (Supplementary Information Sect. VD). Typical neural-network architectures can be employed, including MADE38, PixelCNN39,40 or RNN41. For example, we have used the RNN for KCMs in 1D and 2D, PixelCNN for 2D, and MADE for 3D. In 1D, we find the RNN more accurate than MADE. In 2D, the RNN has an accuracy comparable to PixelCNN but takes a longer computational time. The VAN can be further improved by incorporating more advanced neural-network architectures and sampling techniques.

The advantages of the VAN over tensor-network models in representing \(\vert {P}_{t}\rangle\) mainly come from its generality. The neural-network ansatz can be used in systems with various topologies, in contrast to matrix product states designed for one-dimensional systems. On the one hand, the VAN is suitable for systems with arbitrary topology without modifying its structure19; on the other hand, we can design the VAN to fit the topology. For example, in 2D or 3D, one can use convolutional networks42, as in image recognition; for sparse networks, graph neural networks are efficient43.

Based on the VAN representation, we evaluate Eq. (2) by applying the operator \({e}^{\delta t{{\mathbb{W}}}_{s}}\) sequentially at each of the total J time steps: \({Z}_{t}(s)\, \approx \, \langle -| {({{\mathbb{T}}}_{s})}^{\,J}| {{{{{{{\rm{ss}}}}}}}}\rangle\) (Suzuki-Trotter decomposition44), where the transition operator \({{\mathbb{T}}}_{s}=({\mathbb{I}}+\delta t{{\mathbb{W}}}_{s})\, \approx \, {e}^{\delta t{{\mathbb{W}}}_{s}}\), \({\mathbb{I}}\) denotes the identity operator. Without loss of generality, we consider the one-step evolution from a normalized probability vector \(\vert {\hat{P}}_{j}^{{\theta }_{j}}\rangle\) at time step j ( j = 0, …, J − 1) with parameters θj. Under the tilted generator, the evolved probability vector \({{\mathbb{T}}}_{s}\vert {\hat{P}}_{j}^{{\theta }_{j}}\rangle\) becomes unnormalized. Still, since the VAN provides a normalized probability vector, we can use it to represent \(\vert {\hat{P}}_{j+1}^{{\theta }_{j+1}}\rangle\) at time step j + 1 and approximate the normalized probability vector \(\vert {\hat{Q}}_{j}^{{\theta }_{j}}\rangle={{\mathbb{T}}}_{s}\vert {\hat{P}}_{j}^{{\theta }_{j}}\rangle /{Z}_{j+1}(s)\), by minimizing the Kullback-Leibler (KL) divergence,

$${D}_{KL}\left[{\hat{P}}_{j+1}^{{\theta }_{j+1}} \big| \big| {\hat{Q}}_{j}^{{\theta }_{j}}\right]={{{{{{{{\mathcal{L}}}}}}}}}_{j+1}+\ln {Z}_{j+1}(s),\,$$
(6)
$${{{{{{{{\mathcal{L}}}}}}}}}_{j+1}=\mathop{\sum}\limits_{{{{{{{{\bf{x}}}}}}}}}{\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}},\, s)\left[\ln {\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}},\, s)-\ln {{\mathbb{T}}}_{s}{\hat{P}}_{j}^{{\theta }_{j}}({{{{{{{\bf{x}}}}}}}},\, s)\right].$$
(7)

Minimizing the KL-divergence is equivalent to minimizing \({{{{{{{{\mathcal{L}}}}}}}}}_{j+1}\), which plays a role analogous to the variational free energy in equilibrium statistical mechanics. Since the VAN supports unbiased sampling in parallel to compute \({{{{{{{{\mathcal{L}}}}}}}}}_{j+1}\), we estimate the gradients with respect to parameters θj+1 by the REINFORCE algorithm45:

$${\nabla }_{{\theta }_{j+1}}{{{{{{{{\mathcal{L}}}}}}}}}_{j+1}=\mathop{\sum}\limits_{{{{{{{{\bf{x}}}}}}}}}{\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}},\, s)\left\{{\nabla }_{{\theta }_{j+1}}\ln {\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}},\, s) \right.\\ \left. \cdot\left[\ln {\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}},\, s)-\ln {{\mathbb{T}}}_{s}{\hat{P}}_{j}^{{\theta }_{j}}({{{{{{{\bf{x}}}}}}}},\, s)\right]\right\},\,$$
(8)

where the summation is over the samples from the VAN.
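
A sketch of one such update step is shown below, assuming three helper routines that are not spelled out here: `sample(model)` draws a batch from the VAN, `log_prob(model, x)` returns its log-probabilities, and `log_TsP(prev, x)` evaluates the logarithm of the evolved (unnormalized) probability from the frozen previous-step VAN and the connected configurations of each sample. The baseline subtraction is a common variance-reduction choice, not part of Eq. (8).

```python
import torch

def train_step(model, prev, optimizer, n_epoch=100):
    for _ in range(n_epoch):
        with torch.no_grad():
            x = sample(model)                    # unbiased batch from \hat{P}_{j+1}
        logp = log_prob(model, x)                # ln \hat{P}_{j+1}(x), with gradients
        with torch.no_grad():
            f = logp - log_TsP(prev, x)          # ln \hat{P}_{j+1} - ln T_s \hat{P}_j
            f = f - f.mean()                     # baseline for variance reduction (optional)
        loss = (logp * f).mean()                 # REINFORCE surrogate of Eq. (8)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    with torch.no_grad():                        # converged loss, Eq. (7)
        x = sample(model)
        return (log_prob(model, x) - log_TsP(prev, x)).mean()
```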

As an essential outcome of the algorithm, the dynamical partition function is computed as a product of the normalization constants at each time step \({Z}_{t}(s) \, \approx \mathop{\prod }\nolimits_{j=1}^{J}{Z}_{j}(s)\) (Supplementary Information Sect. IIA). For each normalization constant, the nonnegativity of the KL-divergence ensures that Eq. (7) provides a lower bound as:

$$\ln {Z}_{j+1}(s)\ge -{{{{{{{{\mathcal{L}}}}}}}}}_{j+1}.$$
(9)

The equality holds when the VAN faithfully learns the evolved distribution and achieves zero KL divergence. To evaluate the dynamical partition function, our algorithm starts from the steady state of the non-tilted generator and tracks the distribution under the tilted generator (Eq. (2)). The renormalization procedure over the time points enables the extraction of the dynamical partition function under the tilted generator, going beyond an algorithm that only tracks the evolving distribution under the non-tilted generator26.

Algorithm

The pseudocode of tracking the dynamical partition function and dynamical phase transitions by the VAN is summarized below:

  • Input: The system size and dimension, the model type, the boundary condition, time steps, values of the counting field s. Choose an initial distribution, such as the steady state of the non-tilted dynamics.

  • Every time step j = 0, …, J − 1:

    1. Learn the next-step VAN \({\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}})\): for every epoch,

      (a) Draw samples {x} from \({\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}})\);

      (b) Calculate relevant matrix elements of the transition operator \({{\mathbb{T}}}_{s}\) to get \({{\mathbb{T}}}_{s}{\hat{P}}_{j}^{{\theta }_{j}}({{{{{{{\bf{x}}}}}}}})\);

      (c) Train the VAN by minimizing the loss function: \({{{{{{{{\mathcal{L}}}}}}}}}_{j+1}={{\mathbb{E}}}_{{{{{{{{\bf{x}}}}}}}} \sim {\hat{P}}_{j+1}^{{\theta }_{j+1}}}[\ln {\hat{P}}_{j+1}^{{\theta }_{j+1}}({{{{{{{\bf{x}}}}}}}})-\ln {{\mathbb{T}}}_{s}{\hat{P}}_{j}^{{\theta }_{j}}({{{{{{{\bf{x}}}}}}}})]\).

    2. Calculate the variational free energy at each time step: \({{{{{{{{\mathcal{F}}}}}}}}}_{j+1}^{{\theta }_{j+1}}(s)={{{{{{{{\mathcal{L}}}}}}}}}_{j+1}\) after training.

  • Estimate the dynamical partition function: \({Z}_{t}(s)\, \approx \mathop{\prod }\nolimits_{j=1}^{J}{Z}_{j}(s)\), \(\ln {Z}_{j}(s)\, \approx \, -{{{{{{{{\mathcal{F}}}}}}}}}_{j}^{{\theta }_{j}}(s)\) (from Eq. (9) at convergence); and moments of dynamical observables.

  • Output: The evolved probability distributions, dynamical partition function, and dynamical observables.
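
A minimal sketch of the outer loop of this pseudocode is given below (names are illustrative; it assumes a VAN implemented as a PyTorch module and a `train_step` routine in the spirit of the one sketched after Eq. (8)).

```python
import copy
import torch

# Each step freezes the current VAN as \hat{P}_j, trains a copy as \hat{P}_{j+1},
# records the converged loss F_{j+1} = L_{j+1}, and accumulates
# ln Z_t(s) = sum_j ln Z_j(s) with ln Z_{j+1}(s) ~ -F_{j+1} (Eq. (9) at convergence).
def track_dynamics(model, train_step, n_steps, lr=1e-3):
    lnZ, lnZ_history = 0.0, []
    for j in range(n_steps):
        prev = copy.deepcopy(model)              # freeze parameters theta_j
        for p in prev.parameters():
            p.requires_grad_(False)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        F = train_step(model, prev, optimizer)   # converged loss L_{j+1}
        lnZ += -float(F)                         # ln Z_{j+1}(s)
        lnZ_history.append(lnZ)                  # ln Z_t(s) at t = (j + 1) * dt
    return lnZ_history
```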

The computational complexity of tracking the dynamical phase transition depends on the number of training steps Ntrain and the number of counting-field values Ns. For a fixed system size, the computational time of studying the steady-state phase transition34 is of order \({{{{{{{\mathcal{O}}}}}}}}({N}_{s}{N}_{{{{{{{{\rm{train}}}}}}}}})\), and that of the finite-time study without the phase transition25 is \({{{{{{{\mathcal{O}}}}}}}}({N}_{{{{{{{{\rm{train}}}}}}}}}J)\) for J total time steps. Here, to uncover the phase transition at any time, the computational complexity becomes \({{{{{{{\mathcal{O}}}}}}}}({N}_{s}{N}_{{{{{{{{\rm{train}}}}}}}}}J)\), where a larger Ntrain may be required to reach high accuracy for larger system sizes (Supplementary Fig. 4). Despite the high computational complexity, the results for the chosen lattice sizes suffice to uncover the finite-time scaling of the phase transition for 2D and 3D KCMs, as demonstrated below.

Applications

The present approach is generally applicable to systems in nonequilibrium statistical mechanics. Since the computational cost is proportional to the number of allowable state transitions at each time step in Eq. (2), it is greatly reduced compared with tracking the full distribution, whose size grows exponentially with the system size. The approach is computationally feasible when the number of allowable transitions scales polynomially with the system size, as in the voter model6, which we use to validate our method of tracking the distribution (Supplementary Fig. 1). Another representative class of systems is KCMs modeling glasses8. The KCMs exhibit rich phenomena of phase transitions (Fig. 2); the 2D and 3D cases were unexplored either analytically or numerically and are our focus.

Fig. 2: A schematic on kinetically constrained models and dynamical phase diagram.
figure 2

a The rule of spin flips for the 2D SE and FA models, as demonstrative examples. A filled (unfilled) circle is an up (down) spin. The flipping rate of each spin (red) depends on f, the number of up spins at certain nearest neighbors (green). The SE model counts the left and above neighbors, and the FA model counts all nearest neighbors. The flip-up probability is c. b An illustrative dynamical phase diagram, with active and inactive phases of spin flips depending on the counting field s and time. The active (inactive) phase has more (fewer) spin flips over time, giving large (small) dynamical activity kt(s) in Eq. (4). c The two architectures of the VAN used in 2D (Methods). The left panels show a schematic of the gated recurrent unit (GRU) cell57. For 2D lattice systems, a zigzag path for transmitting variables is employed, allowing the hidden state to propagate both vertically and horizontally. The right panels are PixelCNNs. The vanilla PixelCNN39 encounters the issue of the “blind spot”, i.e., the black lattice point is not conditioned on the lattice points within the blind-spot area under a 3 × 3 causal convolution layer. This issue is solved in the gated PixelCNN40 by splitting the model into two parts: vertical and horizontal stacks.

Active-inactive phase transition over time for KCMs

The previous studies of KCMs focused on equilibrium properties46 or the active-inactive phase transition in the long-time limit30,31,32,34. For the phase transition at arbitrary time, apart from the recent result in 1D33, the cases with lattice dimension greater than one have not been explored. Here, the VAN provides the phase transition over time of KCMs on 1D, 2D, and 3D lattices in a unified way.

We consider two paradigmatic KCMs, namely the FA28 and variants of the East29 model, on a lattice of size N = L in 1D, N = L^2 in 2D, and N = L^3 in 3D, with binary spins xi = 0, 1 for i = 1, …, N, ds = 2, and 2^N configurations in total. The Markovian generator is

$${\mathbb{W}}=\mathop{\sum }\limits_{i=1}^{N}{f}_{i}[c{\sigma }_{i}^{+}+(1-c){\sigma }_{i}^{-}-c(1-{x}_{i})-(1-c){x}_{i}],\,$$
(10)

where \({\sigma }_{i}^{\pm }\) are the Pauli raising and lowering operators flipping site i up and down, and c ∈ (0, 0.5] controls the rate of flipping up. The up-spin number xi acts as the operator \({x}_{i}={\sigma }_{i}^{+}{\sigma }_{i}^{-}\), and the terms 1 − xi and xi represent the escape transitions out of the down and up states of site i, respectively. The constraint fi equals the number of up spins at certain directions of the nearest neighbors (Fig. 2a): the FA model counts all directions, the 1D East counts the left, the 2D South-East (SE) counts the left and above, and the 3D South-East-Front (SEF) counts the left, above and back. We consider open boundary conditions for the convenience of comparing results with the literature. The boundary sites are up for 1D33 and down for 2D34 and 3D. For the 2D FA model, the configuration with all spins down is excluded because it is dynamically disconnected. To access the largest ergodic component of the configuration space, the first spin is fixed up for the 2D and 3D East models. Note that all the figures of configurations do not include boundary sites.

At each time step, the generator only connects configurations that differ by a single spin flip. Each configuration thus has N connected configurations that it can transition into or out of. The probability distribution is updated by using only the connected configurations of the batch samples in Eq. (8). This procedure reduces the complexity from multiplying the transition matrix with the probability vector, both of which are exponentially large, to counting \({{{{{{{\mathcal{O}}}}}}}}(N)\) transitions, linear in the system size.
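
As an illustration of the kinetic constraint and of the O(N) connected configurations used above, the following 1D sketch (boundary sites fixed up, as stated for 1D) computes fi, the single-flip rates entering Eq. (10), and the single-flip neighbors of a configuration; it is a simplified stand-in, not the code used for the 2D and 3D lattices.

```python
import numpy as np

c = 0.5

def constraint(x, model="East"):
    left = np.concatenate(([1], x[:-1]))            # left neighbors (boundary up)
    if model == "East":                             # 1D East: count the left neighbor
        return left
    right = np.concatenate((x[1:], [1]))            # FA: count both neighbors
    return left + right

def flip_rates(x, model="East"):
    f = constraint(x, model)                        # f_i
    # Flip-up rate f_i * c for a down spin, flip-down rate f_i * (1 - c) for an up spin.
    return f * np.where(x == 0, c, 1 - c)

def connected_configs(x):
    # The N configurations differing from x by a single spin flip (one per row).
    flips = np.tile(x, (len(x), 1))
    flips[np.arange(len(x)), np.arange(len(x))] ^= 1
    return flips

x = np.array([1, 0, 1, 1, 0])
print(flip_rates(x, "FA"))
print(connected_configs(x))
```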

The VAN shows a high accuracy in revealing the phase transition. The obtained dynamical partition function coincides with the numerically exact values available for small system sizes (Supplementary Fig. 5). Our result at long times matches the steady-state estimate from the variational Monte-Carlo method34 (Supplementary Fig. 6). For the finite-time regime, which was previously explored only in 1D, our result (Supplementary Figs. 2, 3) agrees with tensor networks33. For the unexplored 2D and 3D KCMs at finite time, we obtain the dynamical activity kt(s) as a function of time t and the counting field s with s > 0, showing two phases with extensive or subextensive activities (Fig. 3a). The critical point sc(N, t) as a function of system size and time can be identified numerically by the peak of the dynamical susceptibility χt(s) = dkt(s)/ds. We conduct a scaling analysis of the critical point sc(N, t). In the long-time regime, the scaling with system size sc(N) ~ N^−α gives the exponent α ≈ 1 for 2D and 3D (Fig. 3b). By inspecting the short-time regime, the critical point approximately scales as sc(t) ~ t^−1 for the two models. These two regimes motivate the approximation sc(N, t) ≈ sc(N) + sc(t). We then estimate the scaling with time by fitting \(\ln [{s}_{c}(N,\, t)-{s}_{c}(N)]\) versus \(-\ln (t)\), giving sc(t) ~ t^−β with β ≈ 1 for 2D and 3D, which confirms the speculated value. The finite-time scaling in 2D and 3D has a similar critical exponent as in 1D33, implying that the transition point evolves with time similarly in all dimensions for KCMs. After dividing sc(N, t) by sc(N) and scaling time as N^−α t^β, the sc(N, t) curves collapse together (Fig. 3c).
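
The post-processing that locates the critical point and extracts the finite-time exponent can be sketched as follows, assuming the learned ln Zt(s) values are stored on grids of s and t; the peak of |χt(s)| is used here as a simple numerical criterion for sc(N, t).

```python
import numpy as np

def critical_point(lnZ, s_grid, t_grid, N):
    # lnZ[i, j] holds ln Z_t(s) at s_grid[i] and t_grid[j] (assumed layout).
    k = -np.gradient(lnZ, s_grid, axis=0) / (N * t_grid[None, :])   # Eq. (4)
    chi = np.gradient(k, s_grid, axis=0)                            # chi_t(s) = dk/ds
    return s_grid[np.argmax(np.abs(chi), axis=0)]                   # s_c(N, t) per time

def fit_beta(s_c_t, s_c_N, t_grid):
    # Fit ln[s_c(N, t) - s_c(N)] against -ln(t); the slope is beta in s_c(t) ~ t^(-beta).
    slope, _ = np.polyfit(-np.log(t_grid), np.log(s_c_t - s_c_N), 1)
    return slope
```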

Fig. 3: Characterization of the dynamical active-inactive phase transition of kinetically constrained models.
figure 3

The top, middle, and bottom panels are the 2D SE model, 2D FA model and 3D SEF model. a The dynamical activity kt(s), denoted by color, reveals the phase transition versus time t and the counting field s, with L = 5 for 2D and L = 3 for 3D. b The critical point sc(N, t) over time, giving the critical exponents α from sc(N) ~ N^−α at the steady state (the horizontal dotted lines and the inset) and β from the finite-time scaling t^−β (the black dashed line). c The scaled phase-transition lines, with sc(N, t)/sc(N) plotted against time scaled as N^−α t^β, collapse together, indicating the proper scaling relation. Parameters: c = 0.5, L = 4, 5, 6 for 2D and L = 2, 3, 4 for 3D.

The emergence of active phases for 2D KCMs

We further analyze the emergence of characteristic spatial structures over time, beyond the steady state34. For the 2D SE and FA models with a small c = 0.05, we find that the average density of up spins 〈n(t)〉ν (versus ν ≡ 1 − e^s, introduced for s < 0) of the 2D SE model gradually develops piecewise density plateaus (Fig. 4a), which are absent in the 2D FA model (Fig. 4c). For the average spatial profile of configurations, the 2D SE model forms structures of up-spin diagonal bands separated by down-spin bands, and the number of bands varies with ν (Fig. 4b). The 2D FA model does not form such characteristic bands (Fig. 4d), even when its first spin is also fixed up.

Fig. 4: The emergence of the active phases.
figure 4

The 2D SE (a, b) and 2D FA (c, d) models with c = 0.05, L = 6 and s < 0. a, c The average density of up spins in the active phase at various time points. The grey arrows point to the chosen ν (introduced for s < 0) in (b, d). b, d Time evolution of the spatial profile of configurations at various time points (rows) for different ν values (columns). For the comparison between the two models, the upper-left corner spin of both models is fixed up. The color denotes the average number of up spins, with the same color bar as in (d).

Discussion

We have presented a general framework to track the dynamics of nonequilibrium statistical systems based on neural networks, and developed an efficient algorithm to estimate the dynamical partition function. This extends the applications of the VAN from equilibrium19 to nonequilibrium, from the steady state34 to arbitrary times, and from the absence of phase transitions25 to phase transitions. The approach reveals the previously unexplored active-inactive phase transition at finite time and its scaling relations in KCMs on 2D and 3D lattices, as well as the emergence of characteristic spatial structures.

The error, as quantified by the relative error of the dynamical partition function in Eq. (16), does not keep increasing over time (Supplementary Figs. 4, 5), because the distribution does not change dramatically between time points. The accumulation of error mainly occurs when the dynamics change dramatically over time, e.g., when the active-inactive phase transition occurs. Identifying these time points by trial and error helps find the most efficient way of increasing the number of epochs to maintain accuracy at those time points. An alternative way of resolving the issue is to split the evolution into two parts: one follows the largest eigenvalue of the tilted generator, and the other follows a modified tilted generator (Supplementary Sect. IIA). When the modified generator reaches a steady value, one can stop the simulation and extrapolate using the largest eigenvalue.

Capturing the active-inactive transition requires an efficient sampling of rare inactive configurations with few up spins, which is accessible by importance sampling (Supplementary Sect. IIIA). This importance sampling acts on configurations drawn from the distribution at each time point, in contrast to sampling trajectories, which may be harder because the trajectory space grows exponentially with the number of time points. Besides, learning distributions with multiple probability peaks can be affected by mode collapse: for rugged distributions, not all modes of the target distribution may be directly captured by the VAN47. To alleviate mode collapse, besides the importance sampling used here, temperature annealing19 and variational annealing48 can be employed.

When the system size increases, the computational time reaches the order of \({{{{{{{\mathcal{O}}}}}}}}(1{0}^{2})\) h for one value of the counting field (Supplementary Table 1). Larger system sizes require longer computational time to reach high accuracy (Supplementary Fig. 4). Evaluating the phase transition requires scanning various values of the counting field. Thus, although the method is generally applicable, computing the scaling relation for larger system sizes meets practical challenges of available computational resources. This can be alleviated by optimizing the efficiency of tracking the distribution, with the help of the time-dependent variational method49 and by parallelizing over multiple GPUs.

The present approach is applicable to other types of Markovian dynamics, including stochastic reaction networks26,50, where phase transitions can be analyzed after adding a control parameter. Owing to the generality of the VAN, it is adaptable to other topologies, such as the voter model on graphs51 and epidemic spreading on networks52, where the architecture of the VAN can be a graph neural network43. It may also be generalized to KCMs of various dimensions in the quantum regime53. Another direction is to leverage the VAN to sample rare trajectories, with the help of the Doob operator54, active learning55 and reinforcement learning56.

Methods

Variational autoregressive networks for spin-lattice systems

We use the variational autoregressive network19 to parameterize the probability distribution. The VAN factorizes the joint probability into a product of conditional probabilities as in Eq. (5), where xi denotes the ds-state variable of site i (ds = 2 for binary spin systems) and θ represents the learnable parameters. The parameterized distribution is automatically normalized; this construction is called autoregressive modeling in the machine-learning community. Since each conditional probability only depends on previous sites, it supports efficient ancestral sampling in parallel.

Below, we briefly describe the architectures of the RNN and the gated PixelCNN for our problem. The setting of MADE is the same as in ref. 19.

Recurrent neural networks

For the recurrent neural network (RNN), we use a gated recurrent unit57 as the recurrent cell, which is capable of learning distributions with long-range correlations. It is more efficient than the long short-term memory (LSTM) model and avoids the vanishing-gradient problem of vanilla recurrent neural networks.

For the 1D RNN, the conditional probability is iteratively obtained over the one-dimensional sites. A recurrent cell processes the information from the previous hidden state hi−1 and the input data xi−1 in the current cell, generates a new hidden state hi, gives the conditional probability \({\hat{P}}^{\theta }({x}_{i}| {x}_{1},\, \ldots,\, {x}_{i-1})\) based on hi, and passes on the information of hi to the next cell. The dimension of the hidden states is denoted by dh. The GRU has a candidate hidden state \({\hat{h}}_{i}\), an update gate zi interpolating between the previous and candidate hidden states, and a reset gate ri setting the extent of forgetting for the previous hidden state. It updates by the following gates:

$${z}_{i}=\sigma ({W}_{zx}{x}_{i-1}+{W}_{zh}{h}_{i-1}+{b}_{z}),\,$$
(11)
$${r}_{i}=\sigma ({W}_{rx}{x}_{i-1}+{W}_{rh}{h}_{i-1}+{b}_{r}),\,$$
(12)
$${\hat{h}}_{i}=\tanh ({W}_{hx}{x}_{i-1}+{W}_{hh}({r}_{i}\odot {h}_{i-1})+{b}_{h}),\,$$
(13)
$${h}_{i}=(1-{z}_{i})\odot {h}_{i-1}+{z}_{i}\odot {\hat{h}}_{i},\,$$
(14)

where the W's are weight matrices, the b's are bias vectors, σ is the sigmoid activation function, and ⊙ denotes the Hadamard product.

The conditional probability is obtained from the hidden states. The output is acted on by a linear transform and a softmax operator, \({\hat{P}}^{\theta }({x}_{i}| {x}_{1},\, \ldots,\, {x}_{i-1})={{{{{{{\rm{Softmax}}}}}}}}(W{h}_{i}+b)\), which ensures the normalization of the output probability vector. Given an initial hidden state h0 and variable x0 (chosen as a zero vector here), the full probability is obtained by Eq. (5) with the iteratively generated conditional probabilities. Sampling from the probability distribution is conducted similarly: given the initial hidden state and variable, x1 is sampled from the estimated conditional probability, and the procedure is repeated up to the last site.
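
A compact sketch of this 1D RNN ansatz is shown below, with torch.nn.GRUCell standing in for the gates of Eqs. (11)–(14); the dimensions and class name are illustrative.

```python
import torch
import torch.nn as nn

class RNNVan(nn.Module):
    def __init__(self, N, d_h=32, d_s=2):
        super().__init__()
        self.N, self.d_s = N, d_s
        self.cell = nn.GRUCell(d_s, d_h)        # GRU gates z_i, r_i, candidate state
        self.out = nn.Linear(d_h, d_s)          # linear map before the softmax

    def sample(self, batch):
        x = torch.zeros(batch, self.N, dtype=torch.long)
        logp = torch.zeros(batch)
        h = torch.zeros(batch, self.cell.hidden_size)     # h_0
        inp = torch.zeros(batch, self.d_s)                # x_0 as a zero vector
        for i in range(self.N):
            h = self.cell(inp, h)
            p = torch.softmax(self.out(h), dim=-1)        # P(x_i | x_{<i})
            xi = torch.multinomial(p, 1).squeeze(-1)
            logp = logp + torch.log(p.gather(1, xi[:, None]).squeeze(-1) + 1e-12)
            x[:, i] = xi
            inp = torch.nn.functional.one_hot(xi, self.d_s).float()
        return x, logp
```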

For the 2D RNN, the implementation is more involved. A zigzag path34,41 is used to transmit the lattice variables, both vertically and horizontally. For both the hidden state h and the variable x, we first concatenate their vertical and horizontal versions into one vector and apply a linear transform (without the bias term) to obtain an intermediate variable with the original dimension. These are then passed to the next GRU cell to continue the iteration.

The gated PixelCNN

For 2D lattice systems, we find that the vanilla PixelCNN model39 suffers from the blind-spot problem. In the worst case, the receptive field covers only about half of the sites above and to the left of the current site. To circumvent the blind-spot problem, we use the gated PixelCNN40 that combines two convolutional-network stacks: the vertical stack and the horizontal stack. The vertical stack conditions on all the rows above, and the horizontal stack conditions on the sites to the left in the current row. The activation function is also replaced by a gated activation unit:

$${{{{{{{{\bf{h}}}}}}}}}^{l+1}=\tanh ({W}_{1}^{l}\,*\,{{{{{{{{\bf{h}}}}}}}}}^{l})\odot \sigma ({W}_{2}^{l}\,*\,{{{{{{{{\bf{h}}}}}}}}}^{l})$$
(15)

where l indexes the layer, hl is the feature map at the l-th layer, ⊙ is the Hadamard product, and * denotes the stacked convolution operation.
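
The gated activation unit of Eq. (15) can be sketched as follows; the causal masked convolutions of the vertical and horizontal stacks are omitted, and an ordinary convolution stands in for the stacked convolution *.

```python
import torch
import torch.nn as nn

class GatedActivation(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.conv_tanh = nn.Conv2d(channels, channels, kernel_size, padding=pad)
        self.conv_gate = nn.Conv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, h):
        # h^{l+1} = tanh(W_1 * h^l) ⊙ sigmoid(W_2 * h^l), Eq. (15)
        return torch.tanh(self.conv_tanh(h)) * torch.sigmoid(self.conv_gate(h))
```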

Details of training neural networks

When tracking the dynamics in Eq. (1), a shorter Trotter time-step length generally gives higher accuracy, at the cost of a longer simulation time. For KCMs, the range of time-step lengths δt ∈ [0.01, 0.1] is found suitable, as also reported in the 1D case33. Considering both accuracy and efficiency, δt is here chosen as δt = 0.1 for 1D and δt = 0.05 for 2D and 3D. The loss usually takes \( > \,{{{{{{{\mathcal{O}}}}}}}}(1{0}^{3})\) epochs to converge for the first time step of evolving the system but requires only \({{{{{{{\mathcal{O}}}}}}}}(1{0}^{2})\) epochs for the following time steps, because the probability distribution changes only slightly after each time step. The smaller number of epochs after the first time step saves training time when tracking the evolution.

Learning rates affect the accuracy of training. Among the tested learning rates 10^−5, 10^−4, 10^−3, and 10^−2, we found that 10^−3 typically leads to relatively lower loss values and better training. It is possible to encounter general optimization issues such as becoming trapped in local minima, which may be alleviated by designing schedulers for the learning rate.

We used the Adam optimizer58 to perform the stochastic gradient descent. The batch size was usually set to 1000 for each epoch. To better estimate the variational free energy, we used the batches from the last 20% of the training epochs, where the loss has converged. This average gives a more accurate estimate by using approximately 1000 × 100 × 20% samples (batch size × epochs at each time point × fraction of final epochs).

The relative error

We estimate the error of the VAN for each system separately. To quantify the accuracy of the VAN, we calculated the relative error of the dynamical partition functions between the VAN and the numerically exact result. The numerically exact result of the dynamical partition function was obtained by summing up probabilities of all possible states, which is feasible only for systems with small sizes, e.g., approximately L ≤ 10 for 1D, L ≤ 4 for 2D, and L ≤ 2 for 3D.

The relative error er is defined as:

$${e}_{r}=\left| \frac{\ln {Z}_{t}(s)-\ln {Z}_{t}{(s)}_{{{{{{{{\rm{exact}}}}}}}}}}{\ln {Z}_{t}{(s)}_{{{{{{{{\rm{exact}}}}}}}}}}\right|,\,$$
(16)

where \(\ln {Z}_{t}{(s)}_{{{{{{{{\rm{exact}}}}}}}}}\) is the numerically exact result obtained by summing up the probabilities of all possible states. The error is shown in the inset of Supplementary Fig. 2a for 1D and in Supplementary Fig. 5 for 2D and 3D. All the figures show that the relative error is usually smaller than \({{{{{{{\mathcal{O}}}}}}}}(1{0}^{-3})\) compared with the numerically exact result, validating the accuracy of the VAN. Based on accuracy and efficiency, we chose the most appropriate VAN architecture for each dimension: the RNN in 1D, the gated PixelCNN in 2D, and MADE in 3D.

For larger system sizes, the numerically exact result is not accessible, which demands the use of the present algorithm. Thus, we can only evaluate the error from the loss function Eq. (7) based on the KL-divergence: a lower value of the loss function implies more accurate training, with its bound set by the dynamical partition function (Eq. (9)). We remark that the loss function does not reach zero even when the VAN is learned accurately, because the probability is not normalized under the tilted generator. In this case, it is not feasible to use the KL divergence between the distributions at consecutive time points as a direct quantification of accuracy. Besides, the value of the loss function decreases with a larger number of epochs and increases with the system size (Supplementary Fig. 4), indicating the need for more epochs and longer computational time as the system size increases.