Nonequilibrium thermodynamics of the asymmetric Sherrington-Kirkpatrick model

Most natural systems operate far from equilibrium, displaying time-asymmetric, irreversible dynamics characterized by a positive entropy production while exchanging energy and matter with the environment. Although stochastic thermodynamics underpins the irreversible dynamics of small systems, the nonequilibrium thermodynamics of larger, more complex systems remains unexplored. Here, we investigate the asymmetric Sherrington-Kirkpatrick model with synchronous and asynchronous updates as a prototypical example of large-scale nonequilibrium processes. Using a path integral method, we calculate a generating functional over trajectories, obtaining exact solutions of the order parameters, path entropy, and steady-state entropy production of infinitely large networks. Entropy production peaks at critical order-disorder phase transitions, but is significantly larger for quasi-deterministic disordered dynamics. Consequently, entropy production can increase under distinct scenarios, requiring multiple thermodynamic quantities to describe the system accurately. These results contribute to developing an exact analytical theory of the nonequilibrium thermodynamics of large-scale physical and biological systems and their phase transitions.


INTRODUCTION
While isolated systems tend toward thermodynamic equilibrium, many physical, chemical, and biological processes operate far from equilibrium. Such nonequilibrium systems, from molecules to organisms and machines, persist by exchanging matter and energy with their surroundings, being causally driven by time-varying external stimuli or by their past states (e.g., the adaptive action of sensor and effector interfaces [1]). Nonequilibrium processes inherently break time-reversal symmetry, describing spatial and temporal patterns with a definite past-future order, and are thus strikingly different from the reversible dynamics found at thermodynamic equilibrium. Understanding these dissipative processes, from chemical reactions to neural dynamics or flocks of birds, brings critical insights into the self-organization of open systems [2]. Although these ideas have attracted the interest of disparate fields, from evolutionary dynamics [3] to neuroscience [4][5][6][7], little is known about the thermodynamic description of nonequilibrium systems comprising many interacting particles. While stochastic thermodynamics has been greatly influential in the study of small systems with appreciable fluctuations [8], the thermodynamics of large-scale nonequilibrium systems and their phase transitions has attracted attention only very recently [9][10][11].
When the elements of a system are numerous, characterizing its nonequilibrium states is challenging due to the expansion of its state space. Inspired by the success of the equilibrium Ising model in investigating disordered systems in the thermodynamic limit, we study the nonequilibrium thermodynamics of a stochastic, kinetic Ising model with both synchronous and asynchronous updates. The Ising model is a cornerstone of statistical mechanics, originally conceived as a model describing phase transitions in magnetic materials [12]. A natural extension of the model introducing Markovian dynamics, either in discrete or continuous time, is the kinetic Ising model, a prototypical model of both equilibrium and nonequilibrium systems such as recurrent neural networks [13] or genetic regulatory networks [14]. With time-independent parameters and symmetric couplings (under synchronous or asynchronous updates in the absence of lagged self-couplings), the kinetic Ising model results in an equilibrium process exhibiting a variety of complex phenomena, including ordered (ferromagnetic), disordered (paramagnetic), and quenched disordered states (known as spin glasses). The celebrated Sherrington-Kirkpatrick (SK) model, characterized by quenched random couplings resulting in a spin-glass phase [15], can be solved using the replica mean-field method [16]. A kinetic version of this symmetric-coupling model has been represented as a bipartite network, also solved using the replica trick [17].
The kinetics of equilibrium Ising systems are indistinguishable when observed in a forward or backward direction in time, i.e., they are invariant under the reversal of the arrow of time. This time symmetry breaks down under time-varying external fields or asymmetric couplings comprising history-dependent, non-conservative forces. Such time-asymmetric processes violate detailed balance, leading to nonequilibrium dynamics yielding a positive entropy production [8,18-20]. In the latter case of asymmetric couplings with constant fields, the system may relax after some time towards a steady state known as a nonequilibrium steady state. Time-asymmetric trajectories in the steady state are linked with the entropy change of the heat baths under 'local detailed balance' for a system coupled to equilibrium reservoirs or heat baths [21,22], suggesting that steady-state entropy production is critical for unveiling the interaction of out-of-equilibrium systems with their environments. Yet, unlike its equilibrium counterpart, the properties of irreversible Ising dynamics remain unclear due to the lack of theoretical underpinnings for its entropy production.
Here, we study the kinetics of the SK model with asymmetric connections under synchronous and asynchronous updates as a prototypical model of nonlinear and nonequilibrium processes. As the model does not have a free energy defined in classical terms, we resort to a dynamical equivalent in the form of a generating functional. We apply a path integral approach to obtain exact solutions for its statistical moments and nonequilibrium thermodynamic properties. Unlike the replica method, the generating functional for fully asymmetric couplings has exact solutions in the thermodynamic limit without additional assumptions like analytic continuation and replica symmetry breaking [23]. Previous studies using this method [24,25] have shown that the asymmetric kinetic Ising model with asynchronous updates does not have a spin-glass phase. In this manuscript, we extend the generating functional path integral method to confirm this result in both cases of synchronous and asynchronous updates and further retrieve an exact solution for the entropy production of the system.
One of the open questions in empirical studies is whether an increase in entropy production observed in specific nonequilibrium systems under investigation is linked with the critical properties of systems approaching continuous phase transitions [5,6]. Entropy production is not necessarily maximized under such conditions and can display a continuous change [26] or discontinuities in its derivative [27]. However, a number of simple nonequilibrium systems maximize their entropy production at a critical point. Examples are the entropy production of an Ising model with an oscillatory field and a mean-field majority vote model [28][29][30]. It is therefore important to investigate the case of the kinetic Ising system as a general model of physical and biological networks. We previously showed that the entropy production of the stationary asymmetric SK model with finite size takes a maximum around a critical point by applying mean-field approximations preserving fluctuations in the system [31]. However, this result (and the aforementioned references) relies on approximations and numerical simulations. Therefore, the assumption that entropy production is maximized near continuous phase transitions has not yet been ratified by exact solutions of spin models in the thermodynamic limit. In this study, we show analytically that the entropy production is locally maximized at critical phase transition points, representing a potentially useful phase transition correlate for systems without a globally defined free energy or heat capacity. Nevertheless, we also show that entropy production can take larger values for largely heterogeneous couplings in low-temperature regimes exhibiting disordered but nearly deterministic dynamics. Thus, entropy production must be examined carefully, as its increase does not necessarily indicate that the system is approaching a critical state. Instead, combining the entropy rate and entropy production yields a more precise picture of the irreversible processes.
The paper is organized as follows. First, we introduce maximum entropy Markov processes, their entropy production, and a generating functional method used to compute the system's moments and the entropy production in both discrete and continuous time. Next, we describe the asymmetric SK model with synchronous and asynchronous updates and a path integral method calculating the configurational average of the generating functional. This yields an exact solution of the entropy production, magnetization, and correlations in an infinite system. We employ our theoretical results to draw phase diagrams of the order parameters and entropy production for synchronous and asynchronous dynamics with and without randomly sampled external fields. The theoretical predictions are then corroborated by numerical simulation. We also examine the critical line of nonequilibrium phase transitions, the temporal structure of the dynamics, and their relations to the entropy production. Finally, we conclude the paper by discussing the implications of our results for the study of biological systems.

Maximum entropy Markov chains
The principle of maximum entropy is a foundation of equilibrium statistical mechanics [32]. The principle was later generalized to treat time-dependent phenomena as the principle of maximum caliber or maximum path entropy [33,34]. Under consistency requirements preserving causal interactions, the maximum caliber principle yields a Markov process [35]. To see this, we start with a discrete-time stochastic process with N discrete-state elements defined at time u as s_u = {s_{1,u}, ..., s_{N,u}}, for discrete-time trajectories of length t + 1 described by a path probability p(s_{0:t}). Later, we will show that this discrete-time formulation can be generalized to an equivalent continuous-time formulation under appropriate assumptions.
Path entropy is defined as

S_{0:t} = -\sum_{s_{0:t}} p(s_{0:t}) \log p(s_{0:t}).   (1)

Maximizing Eq. 1, subject to constraints, yields the least structured distribution p(s_{0:t}) consistent with observations [36]. In causal network models, entropy maximization has to be constrained with a set of temporal consistency requirements [35], as was first established by [37]. Specifically, for any positive integer u (≤ t), we impose

\sum_{s_u} p_{0:u}(s_{0:u}) = p_{0:u-1}(s_{0:u-1}),   (2)

where p_{0:u}(s_{0:u}) is the maximum entropy distribution for paths of length u + 1. That is, we impose consistency between the marginal distribution of the path s_{0:u-1} in p_{0:u}(s_{0:u}) and the maximum entropy distribution of the path s_{0:u-1}, p_{0:u-1}(s_{0:u-1}). This constrains path distribution dependencies between consecutive states. We will drop the subscript in the path probability when not needed. Maximizing Eq. 1 with constraints \langle f_n(s_u, s_{u-1}) \rangle = C_{n,u} (where C_{n,u} is a constant for the n-th constraint at time u), an initial distribution p(s_0), and Eq. 2 results in a Markovian process (cf. [35])

p(s_{0:t}) = p(s_0) \prod_{u=1}^{t} p(s_u | s_{u-1}),   with   p(s_u | s_{u-1}) = \frac{1}{Z_u(s_{u-1})} \exp\Big( \sum_n \lambda_{n,u} f_n(s_u, s_{u-1}) \Big).   (4)

The path entropy can then be decomposed into

S_{0:t} = S_0 + \sum_{u=1}^{t} S_{u|u-1},   (5)

where S_0 is the entropy of the initial distribution and S_{u|u-1} is a conditional entropy, defined as

S_{u|u-1} = -\sum_{s_u, s_{u-1}} p(s_u, s_{u-1}) \log p(s_u | s_{u-1}),   (6)

which, at the steady state described in the following, corresponds to the Kolmogorov-Sinai entropy or entropy rate, lim_{t→∞} (1/t) S_{0:t}.
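As a sanity check, the decomposition of the path entropy into the initial entropy plus a sum of conditional entropies can be verified numerically. The following Python sketch uses a toy two-state Markov chain of our own choosing (not a system from the paper) and compares a brute-force path entropy against the decomposition:

```python
import itertools
import numpy as np

# Toy 2-state Markov chain: P[a, b] = p(s_u = b | s_{u-1} = a).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
p0 = np.array([0.5, 0.5])  # initial distribution
t = 4                      # trajectory has t + 1 states

# Marginal distributions p(s_u) at each time step.
marginals = [p0]
for _ in range(t):
    marginals.append(marginals[-1] @ P)

# Path entropy by brute-force enumeration of all 2^(t+1) trajectories.
S_path = 0.0
for traj in itertools.product([0, 1], repeat=t + 1):
    p = p0[traj[0]]
    for a, b in zip(traj, traj[1:]):
        p *= P[a, b]
    if p > 0:
        S_path -= p * np.log(p)

# Decomposition: S_{0:t} = S_0 + sum_u S_{u|u-1}.
S0 = -np.sum(p0 * np.log(p0))
S_cond = sum(-np.sum(marginals[u - 1][:, None] * P * np.log(P))
             for u in range(1, t + 1))
assert np.isclose(S_path, S0 + S_cond)
```

The equality is exact for any Markov chain; it is simply the chain rule of entropy applied to the factorized path probability.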

Nonequilibrium steady state
A Markov chain converges to a unique stationary distribution if the system is irreducible (all states are accessible from any state in finite time) and aperiodic (the greatest common divisor of the numbers of steps for returning to the same state with non-zero probability is one) [38]. We can confirm that these requirements are satisfied by Eq. 4 with finite transition probabilities, thus warranting the existence of a steady-state distribution π(s_u), which can be either in or out of thermodynamic equilibrium, as explained in the following.
For a discrete-time Markov chain, the evolution of the state probability distribution follows a master equation:

p_u(s_u) - p_{u-1}(s_u) = \sum_{s_{u-1}} j^u_{s_{u-1} \to s_u}.   (7)

Here p_v(s_u) is the marginal probability distribution of a state s_u calculated from the distribution at time v. For simplicity, we will omit the subscript and write p(s_u) when v = u. The j^u_{s_{u-1} \to s_u} are the system's probability fluxes:

j^u_{s_{u-1} \to s_u} = p(s_u | s_{u-1}) p(s_{u-1}) - p(s_{u-1} | s_u) p(s_u).   (8)

In the limit of small probability fluxes, the system can be described by an equivalent continuous-time process:

\frac{dp(s)}{dt} = \sum_{s'} \big[ w(s | s') p(s') - w(s' | s) p(s) \big],   (9)

where t refers to the continuous time and w(s | s') are transition rates.
The system is stationary or in a steady state if the sum of all probability fluxes is zero for all s_u, i.e., \sum_{s_u, s_{u-1}} j^u_{s_{u-1} \to s_u} = 0. In addition, this will be an equilibrium steady state if j^u_{s_{u-1} \to s_u} = 0 for all pairs s_{u-1}, s_u, resulting in the detailed balance condition

p(s_u | s_{u-1}) \pi(s_{u-1}) = p(s_{u-1} | s_u) \pi(s_u),

where π(s_u) is the steady-state distribution. When detailed balance is broken under the stationary condition, i.e., some j^u_{s_{u-1} \to s_u} ≠ 0 but their sum is equal to zero, the stationary system is in a nonequilibrium steady state.
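The difference between stationarity and detailed balance can be made concrete numerically. The Python sketch below uses a hypothetical three-state chain of ours with a circulating probability current: the net flux into every state vanishes (the chain is stationary), while individual fluxes do not (detailed balance is broken, so the steady state is a nonequilibrium one):

```python
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector of P with eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

# Three-state chain with a circulating current (doubly stochastic,
# so the stationary distribution is uniform).
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])

pi = stationary(P)
# Steady-state fluxes j_{a->b} = P[a,b] pi[a] - P[b,a] pi[b].
J = P * pi[:, None] - P.T * pi[None, :]

# Stationarity: the net flux into every state vanishes ...
assert np.allclose(J.sum(axis=0), 0.0)
# ... but individual fluxes do not: detailed balance is violated.
assert not np.allclose(J, 0.0)
```

Reversing any one of the dominant transitions (making P symmetric) would set all fluxes to zero and recover an equilibrium steady state.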

Steady-state entropy production
Stochastic thermodynamics describes a link between time-irreversible stochastic trajectories and the surroundings in the form of heat (entropy) dissipation. As the system evolves, it experiences an entropy change σ^{sys}_u:

\sigma^{sys}_u = S_u - S_{u-1},

where S_u = -\sum_{s_u} p(s_u) \log p(s_u). Nonequilibrium systems maintain irreversible dynamics by continuously dissipating heat (entropy) to their environments. Under local detailed balance [21,22,39] in a system coupled to a heat bath, the entropy change results from subtracting the entropy dissipated to the heat bath σ^{bath}_u from the (total) entropy production σ_u:

\sigma^{sys}_u = \sigma_u - \sigma^{bath}_u,

where the entropy change of the heat bath is given as

\sigma^{bath}_u = \sum_{s_u, s_{u-1}} p(s_u, s_{u-1}) \log \frac{p(s_u | s_{u-1})}{p(s_{u-1} | s_u)}.

Here p(s_{u-1} | s_u) is a transition probability (from Eq. 4) but evaluated by the reverse trajectory [21,40]; that is, we define it using the transition function at time u, but switch s_u and s_{u-1}. This equation relates the system's time asymmetry with the entropy change of the reservoir. The entropy production σ_u at time u is then given as

\sigma_u = \sum_{s_u, s_{u-1}} p(s_u, s_{u-1}) \log \frac{p(s_u | s_{u-1}) p(s_{u-1})}{p(s_{u-1} | s_u) p(s_u)},

which is the Kullback-Leibler divergence between the forward and backward trajectories [8,18,20,41]. Due to the non-negativity of the divergence, the entropy production is non-negative, σ_u ≥ 0. This entropy production vanishes if the probability of forward trajectories is identical to a posterior of past states given the future state [20], i.e., when the process loses time-asymmetry in prediction and postdiction [42].
Alternatively, the dissipation function σ̃_u [8,43,44] quantifies the difference between incoming and outgoing fluxes in Eq. 8, directly assessing violations of detailed balance. The entropy production σ_u and dissipation function σ̃_u are equivalent under steady-state conditions. Furthermore, both quantities become equivalent in the continuous-time limit [8,42,44] and converge to the entropy production rate [39]:

\frac{d\sigma}{dt} = \sum_{s, s'} w(s | s') p(s') \log \frac{w(s | s') p(s')}{w(s' | s) p(s)}.   (17)

In a steady state, the entropy production is caused by dissipation only and becomes equivalent to the housekeeping entropy production caused by the non-conservative forces [45,46]. Both σ_u and σ̃_u then result in:

\sigma_u = S^r_{u|u-1} - S_{u|u-1}.   (18)

Here S^r_{u|u-1} is the entropy of the time-reversed conditional distribution:

S^r_{u|u-1} = -\sum_{s_u, s_{u-1}} p(s_u, s_{u-1}) \log p(s_{u-1} | s_u).   (19)

In this paper, we study the steady-state entropy production in Eq. 18, which is critical for evaluating the interaction of nonequilibrium processes with their environment.
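The equivalence at a steady state between the trajectory-based entropy production (the Kullback-Leibler form) and the difference of the time-reversed and forward conditional entropies (Eq. 18) can be checked numerically. The three-state chain below is an illustrative example of ours, not a system from the paper:

```python
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector of P with eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

# Chain with a circulating current, hence a nonequilibrium steady state.
P = np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]])
pi = stationary(P)

joint = pi[:, None] * P  # steady-state joint p(s_{u-1}=a, s_u=b)

# Entropy production as a KL divergence between forward and backward paths.
sigma_kl = np.sum(joint * np.log((pi[:, None] * P) / (pi[None, :] * P.T)))

# Equivalently, sigma = S^r_{u|u-1} - S_{u|u-1} at the steady state.
S_fwd = -np.sum(joint * np.log(P))    # conditional entropy
S_rev = -np.sum(joint * np.log(P.T))  # time-reversed conditional entropy
assert np.isclose(sigma_kl, S_rev - S_fwd)
assert sigma_kl > 0  # detailed balance is broken, so sigma is positive
```

The marginal terms π(s_{u-1}) and π(s_u) cancel on average at the steady state, which is why the two expressions coincide there and only there.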

Generating functional
Consider a maximum caliber path probability (Eq. 4). For simplicity, we will assume p_0(s) = \prod_i \delta[s_i, s_{i,0}], that is, the initial distribution is a Kronecker delta with a unique initial state, and ignore the S_0 term. However, the following steps are general for any p_0(s_0). In equilibrium systems, the partition function retrieves the statistical moments. A nonequilibrium equivalent is a generating functional or dynamical partition function. To obtain not only the statistical properties averaged over trajectories but also the forward and time-reversed conditional entropies (Eqs. 6, 19), we define the following generating functional:

\Gamma(g) = \Big\langle \exp\Big( \sum_{i,u} g_{i,u} s_{i,u} - \sum_u g^S_u \, \ell(s_u | s_{u-1}) - \sum_u g^{S^r}_u \, \ell(s_{u-1} | s_u) \Big) \Big\rangle,   (21)

where \ell(s_u | s_{u-1}) \equiv -\log p(s_u | s_{u-1}). In the limit t → ∞, the logarithm of the generating functional converges to the large deviation function [47][48][49], which plays the role of a free-energy function for nonequilibrium trajectories [50]. The vector g is composed of parameters g_{i,u} (i = 1, ..., N; u = 1, ..., t) and g^S_u, g^{S^r}_u (u = 1, ..., t) retrieving the system's statistical properties. The parameters g_{i,u} recover the statistical moments of the system, such as the average rates and correlations:

\langle s_{i,u} \rangle = \frac{\partial \Gamma}{\partial g_{i,u}} \Big|_{g=0},   (24)

\langle s_{i,u} s_{j,v} \rangle = \frac{\partial^2 \Gamma}{\partial g_{i,u} \partial g_{j,v}} \Big|_{g=0},   (25)

where angle brackets are defined as \langle f(s_{0:t}) \rangle = \sum_{s_{0:t}} f(s_{0:t}) p(s_{0:t}).
In addition, derivatives with respect to g^S_u and g^{S^r}_u retrieve the conditional and time-reversed conditional entropy terms S_{u|u-1}, S^r_{u|u-1} (Eqs. 28, 29), and thus the steady-state entropy production (Eq. 30, cf. Eq. 18).

Synchronous and asynchronous, asymmetric Sherrington-Kirkpatrick model

We consider N interacting elements (spins or neurons), with each element i at time u taking a binary state s_{i,u} ∈ {−1, 1}. Constraints take the form of delayed pairwise couplings (i.e., f_{i,j}(s_u, s_{u-1}) = s_{i,u} s_{j,u-1} in Eq. 4). This results in the dynamics:

p(s_u | s_{u-1}) = \prod_i \frac{e^{\beta s_{i,u} h_{i,u}}}{2 \cosh(\beta h_{i,u})},   with   h_{i,u} = H_{i,u} + \sum_j J_{ij} s_{j,u-1},

where β is the inverse temperature. The system's state at time u depends on the previous time step (Fig. 1(a)).
The equation above is a general formulation of a kinetic Ising model with time-dependent fields H_{i,u}. The dynamics can include both synchronous and asynchronous Ising systems by introducing a set of independent Bernoulli random variables τ_{i,u} ∈ {0, 1} with probabilities 1 − α and α (i.e., τ_{i,u} ∼ Bernoulli(α)) and making H_{i,u} stochastic processes:

H_{i,u} = \tau_{i,u} \Theta_{i,u} + (1 - \tau_{i,u}) K s_{i,u-1}.   (33)

Note that in the limit of K → ∞, the state s_{i,u} is tightly coupled to the previous state s_{i,u-1} whenever τ_{i,u} = 0. Therefore, the state changes only if τ_{i,u} = 1. We have the following transition probability in the K → ∞ limit:

p(s_{i,u} | s_{u-1}) = (1 - \alpha) \, \delta[s_{i,u}, s_{i,u-1}] + \alpha \, w_{i,u}(s_{i,u} | s_{u-1}),

with transition rate

w_{i,u}(s_{i,u} | s_{u-1}) = \frac{e^{\beta s_{i,u} h^1_{i,u}}}{2 \cosh(\beta h^1_{i,u})},

where h^1_{i,u} = \Theta_{i,u} + \sum_j J_{ij} s_{j,u-1} and Θ_{i,u} is an external field. With α = 1 we have a kinetic Ising system under parallel or synchronous updates. In the limit α → 0, we have in turn a kinetic Ising system with asynchronous updates (i.e., at most one spin is updated at each time step), converging to a continuous-time master equation.
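The update rule just described can be sketched in a few lines of Python. The parameter values below are illustrative choices of ours (an ordered-phase setting): each spin is refreshed with probability α from the Glauber distribution and otherwise keeps its previous state, so α = 1 gives synchronous updates and small α approximates the asynchronous limit:

```python
import numpy as np

rng = np.random.default_rng(0)

N, beta, J0, dJ = 200, 2.0, 1.0, 0.2
# Fully asymmetric SK couplings: mean J0/N, variance dJ^2/N, no symmetry.
J = rng.normal(J0 / N, dJ / np.sqrt(N), size=(N, N))
Theta = np.zeros(N)  # no external fields

def step(s, alpha):
    """One update of the kinetic Ising dynamics.

    Each spin is refreshed with probability alpha from the Glauber
    distribution p(s_i = +1 | s) = (1 + tanh(beta * h_i)) / 2 and
    otherwise keeps its previous state.
    """
    h = Theta + J @ s
    p_up = 0.5 * (1.0 + np.tanh(beta * h))
    refreshed = np.where(rng.random(N) < p_up, 1.0, -1.0)
    update = rng.random(N) < alpha
    return np.where(update, refreshed, s)

s = rng.choice([-1.0, 1.0], size=N)
for _ in range(500):              # relax to the steady state
    s = step(s, alpha=1.0)
ms = [s.mean()]
for _ in range(500):              # accumulate steady-state statistics
    s = step(s, alpha=1.0)
    ms.append(s.mean())
m_est = np.mean(ms)               # steady-state magnetization estimate
```

With these illustrative parameters the system sits in the ordered phase, so |m_est| should be large; lowering β or widening the coupling distribution drives it toward the disordered phase.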
The generating functional of the kinetic Ising system (Eq. 21) is defined through these transition probabilities, with \ell(s_u | s_{u-1}) = -\log p(s_u | s_{u-1}) given by the dynamics above. The equilibrium Ising model with symmetric random Gaussian couplings is referred to as the SK model. In the fully-asymmetric SK model, the couplings J_{ij} are quenched independent variables, each following a Gaussian distribution with mean J_0/N and variance ΔJ²/N (both scaled by N).
The asymmetric SK model shows a variety of population dynamics. Fig. 1 shows exemplary dynamics under asynchronous updates without fields (Θ_{i,u} = 0): disordered dynamics for large coupling variance ΔJ² both at high and low temperatures (i.e., low and high β, Fig. 1(b,c)), ordered dynamics for low temperatures and low ΔJ² (Fig. 1(d)), and critical dynamics at the phase transition (Fig. 1(e)).

Solution of the asymmetric Sherrington-Kirkpatrick model
The solution of the kinetic version of the SK model with asymmetric and quenched couplings can be obtained by computing the generating functional averaged over the couplings (referred to as the configurational average):

[\Gamma(g)]_J = \int \prod_{ij} dJ_{ij} \, p(J_{ij}) \, \Gamma(g).

This integral cannot be solved directly because of the \log[2\cosh[\cdot]] terms in Eqs. 36 and 37, which depend nonlinearly on J_{ij}. A path integral method [51] to find a solution introduces a delta integral representing βh_{i,u} with auxiliary variables \theta_{i,u} = \beta(H_{i,u} + \sum_j J_{ij} s_{j,u-1}), as well as βh^r_{i,u} with auxiliary variables \vartheta_{i,u} = \theta_{i,u+1} + \beta(H_{i,u} - H_{i,u+1}). Let θ = {θ_{i,u}} (note u = 1, ..., t + 1) and ϑ = {ϑ_{i,u}} (u = 0, ..., t) denote the sets of auxiliary variables. Using conjugate variables θ̂ = {θ̂_{i,u}} to represent the delta functions in integral form, the configurational average can be written as a path integral over θ, ϑ, and θ̂ (Eq. 40). Note that the summation of the θ̂_{i,u} terms is performed over u = 1, ..., t + 1 to retrieve the fields of both the forward and backward trajectories.
The integral over J_{ij} can now be performed directly over linear exponential terms (see Supplementary Note 1). After integration, Eq. 40 incorporates quadruple-wise interactions among the spins s_{0:t} and conjugate variables θ̂ (Eq. S1.10), similar to replica interactions in the equilibrium SK model [12]. These interactions are simplified by introducing Gaussian integrals and a saddle-point approximation in the thermodynamic limit (Eq. S1.26). The saddle-point solution can be simplified by introducing four types of order parameters (Eq. S1.31). In fully-asymmetric networks, two of these order parameters are found to be zero at g = 0, yielding a solution in terms of the order parameters m_u (the mean activation) and q_{u,v} (the delayed self-correlation) (see Eqs. S1.48, S1.49). Finally, the conjugate variables θ̂ in the saddle-point solution can be substituted with a multivariate Gaussian integral (Eq. S1.57), leading to a factorized generating functional where the stochastic elements ξ = (ξ_1, ..., ξ_{t+1}) affecting each spin i follow a multivariate normal distribution p(ξ) = N(0, q), with q defining q_{u-1,v-1} as the covariance of each pair ξ_u, ξ_v for u, v ∈ 1, ..., t + 1. Here, at g = 0, spin interactions are effectively substituted by same-spin temporal couplings in the mean effective fields

h^{eff}_{i,u}(\xi) = H_{i,u} + J_0 m_{u-1} + \Delta J \, \xi_u.

Applying Eqs. 24 and 25 to the configurational average in Eq. 44, we obtain the order parameters m_u and q_{u,v}:

m_u = \big\langle \tanh[\beta(H_u + J_0 m_{u-1} + \Delta J \, \xi_u)] \big\rangle_\xi,   (48)

q_{u,v} = \big\langle \tanh[\beta(H_u + J_0 m_{u-1} + \Delta J \, \xi_u)] \tanh[\beta(H_v + J_0 m_{v-1} + \Delta J \, \xi_v)] \big\rangle_\xi,   (49)

where the Gaussian averages \langle \cdot \rangle_\xi are taken over p(ξ) = N(0, q), so that \langle \xi_u \xi_v \rangle = q_{u-1,v-1} with q_{u-1,u-1} = 1. Note that, in contrast with the equilibrium SK model, m_u is independent of q_{u,v}, resulting in the lack of a spin-glass phase, as suggested by previous studies [25].
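Assuming the steady-state self-consistency for the magnetization takes the standard mean-field form m = ∫ Dz tanh[β(Θ + J_0 m + ΔJ z)] with z a standard Gaussian (our reading of the saddle-point solution; the exact expressions are Eqs. 48 and 49), the fixed point can be found by simple iteration with Gauss-Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Gauss-Hermite nodes and weights for the standard Gaussian measure Dz.
z, w = hermegauss(100)
w = w / np.sqrt(2 * np.pi)  # normalize weights to sum to 1

def m_update(m, beta, J0=1.0, dJ=0.5, Theta=0.0):
    """One step of the assumed mean-field map
    m <- int Dz tanh[beta (Theta + J0 m + dJ z)]."""
    return np.sum(w * np.tanh(beta * (Theta + J0 * m + dJ * z)))

def steady_m(beta, m0=0.5, iters=2000):
    """Iterate the map to its fixed point (steady-state magnetization)."""
    m = m0
    for _ in range(iters):
        m = m_update(m, beta)
    return m

m_dis = steady_m(beta=0.8)  # below the transition: m vanishes
m_ord = steady_m(beta=2.0)  # above the transition: nonzero m
```

For J_0 = 1 and ΔJ = 0.5 the map is contracting toward m = 0 at small β and develops a nonzero fixed point at larger β, reproducing the disorder-order transition discussed below.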
The configurational average of Eqs. 28 and 29 yields closed-form expressions for the conditional entropy and the time-reversed conditional entropy. Up to this point, our results are general for time-dependent fields H_{i,u}, covering synchronous and asynchronous updates by Eq. 33. We obtain the results for the synchronous SK model by setting α = 1 or, equivalently, H_{i,u} = Θ_{i,u}. For time-independent fields (Θ_{i,u} = Θ_i), the system converges to a steady state determined by the solution of the self-consistent equations given by Eqs. 48 and 49. Finally, using Eq. 30, the steady-state entropy production under synchronous updates is obtained (Eq. 54) as a function of m_{u-1} and q_{u,u-2} evaluated at their steady-state values (i.e., independent of u). Note that for the synchronous system the steady-state solution of q_{u,v} is the same for all u, v (u ≠ v). In the following, we will use m and q to represent these steady-state solutions.
To calculate the steady-state solutions for the asynchronous SK model, we calculate the generating functional [Z_t(g)]_{J,τ}, additionally averaged over the independent random variables τ_{i,u} in Eq. 33. We show in Supplementary Note 2 that the resulting order parameters in continuous time, m(t) and q(t', t), obey dynamical equations whose driving terms involve q_1(t', t), the spin correlation conditioned on spins being updated at time t. The steady-state solutions of m(t) and q(t', t) (assuming t' < t) converge to the same steady-state values m and q found for the synchronous SK model (see Supplementary Note 2).
In the continuous-time limit, the steady-state entropy production converges to a steady-state entropy production rate (Eq. 17), given in Eq. 58. We note that the delayed self-correlation q(t', t) is discontinuous at t' = t (i.e., lim_{ε→0} q(t + ε, t − ε) ≠ q(t, t) = 1; see Fig. S1), warranting that the entropy production rate can be non-zero for appropriate parameters.
Given the analytical solutions of the system, we will now study the phase diagram of the SK model. In contrast with the naive replica-symmetric solution of the equilibrium SK model, the equations above are exact for the model with asymmetric couplings in the thermodynamic limit under both synchronous and asynchronous updates.
The SK model without external fields

Fig. 2(a) and 2(b) display the phase diagram of the steady-state order parameters, m and q, for synchronous and asynchronous updates, respectively, derived from Eqs. 48 and 49 as a function of the inverse temperature β and the width of the coupling distribution ΔJ, when the external fields are fixed at zero (Θ_{i,u} = 0) and the mean coupling is J_0 = 1. In this setting, the inverse temperature β controls the magnitude of the couplings. The phase diagram shows two distinct regions: one in which the order parameters are fixed at zero (zero magnetization and zero self-correlations, m = 0 and q = 0), indicating disordered states, and another in which the order parameters become positive (m > 0 and q > 0), indicating ordered states. Therefore, the system exhibits a nonequilibrium analogue of the paramagnetic-ferromagnetic (disorder-order) phase transition controlled by the parameters β and ΔJ. The dashed line in each panel shows the critical values of ΔJ as a function of β, obtained by solving a criticality condition derived from Eqs. 48 and 49 (see Supplementary Note 3). The solution will be denoted as ΔJ_c(β). As studied in Supplementary Note 3, this critical phase transition corresponds to the mean-field universality class, as in the order-disorder phase transition of the equilibrium SK model (note that the spin-glass phase has different exponents [52]). Note that, depending on the coupling variance ΔJ, the dynamics may or may not undergo the nonequilibrium phase transition as the inverse temperature β is varied. The critical value at β → ∞ is given as ΔJ_c(∞) = 0.79501 (dotted horizontal line). If the distribution is narrower than the critical value ΔJ_c(∞), the process undergoes the phase transition by changing β. If the distribution is wider than the critical value, the order parameters are fixed at zero (m = 0, q = 0) for any β. Note that, for β → ∞ (zero temperature), the activation function approaches the threshold nonlinearity given by the sign
function; therefore, the process becomes deterministic. That is, for large values of β, the process approaches deterministic dynamics yielding either ordered or disordered states for smaller or larger ΔJ, respectively. We remark that the disordered state with m = 0 and q = 0 at high β (low temperature) does not indicate the spin-glass phase expected for the equilibrium Ising system (see Supplementary Note 4). We confirmed the non-existence of a spin-glass phase for the asymmetric kinetic SK model by finding that correlations in this region decay exponentially (Supplementary Note 5).
The reduction in uncertainty at higher β is indicated by the decrease of the conditional entropy (the path entropy rate) S_{u|u-1} with increasing β (Fig. 3(a)). This figure additionally shows that the conditional entropy decreases slowly with increasing β along the critical line of the phase transitions. This means that strong couplings and diverse patterns co-exist along the critical line. In contrast, the time-reversed conditional entropy S^r_{u|u-1} (Fig. 3(b)) displays opposite dependencies on β for broader or narrower coupling distributions. The time-reversed conditional entropy quantifies how surprising the reverse process is under the forward model. With coupling distributions narrower than the critical value ΔJ_c(∞), the time-reversed conditional entropy diminishes with increasing β, indicating that the reverse process takes place with increasingly high probability. This is because the spin state is fixed at all up or all down in the ferromagnetic-like state for all time, losing temporal asymmetry. In contrast, the reverse process becomes less likely as the dynamics become more deterministic with increasing β yet remain disordered, as long as the coupling distribution is broader than ΔJ_c(∞). This distinct behavior between the conditional entropy and its time-reversed version, found at wider coupling distributions and high inverse temperatures, yields the strong time-asymmetry in this regime.
The entropy production under the steady-state condition quantifies the difference between the conditional and time-reversed conditional entropies. Fig. 3(c) displays the phase diagram of the entropy production for the synchronous Ising model (the asynchronous Ising model shows similar behavior but different scaling; see Supplementary Note 2 or Fig. 4). The entropy production is largest at high β under the broader coupling distributions, where we find a significant difference between these two conditional entropies. Namely, strong time-asymmetry appears when the dynamics are disordered, nearly deterministic processes. Note that the entropy production increases with β if the coupling distribution is wider than ΔJ_c(∞). In contrast, the entropy production is locally maximized at the critical point (white dashed line) when the coupling distribution is narrower than ΔJ_c(∞) (see also Fig. 3(d)). For narrowly distributed couplings, the process exhibits a paramagnetic-like (randomized or disordered) phase at smaller β and a ferromagnetic-like (ordered) phase at higher β (Fig. 2), neither of which can exhibit adequately asymmetric dynamics in time. Time-asymmetry appears between the ordered and disordered phases, namely at the critical point. As a consequence, the steady-state entropy production can be a measure of criticality in this regime. More importantly, however, the magnitude of the entropy production is far more significant in the regime of large ΔJ and β than near the critical states, due to the strong time-asymmetry caused by the combination of disordered and quasi-deterministic dynamics.
To verify our theoretical predictions for the order parameters and steady-state entropy production, we compared them with values computed from sample trajectories by numerically simulating the kinetic SK models (see Supplementary Note 6 for details). We constructed the kinetic Ising model with parameters Θ_{i,u} = 0 and randomly generated J_{ij} with ΔJ = 0.5 and J_0 = 1 while changing the inverse temperature β. We ran simulations of the model for t = 128 time steps and repeated the simulation 10^6 times at each β. We computed the mean activation rate, the delayed self-correlations, and the normalized entropy production and entropy production rate from trajectory and parameter sampling. We used the values at the last time step (u = t), where we confirmed that the statistics approached their steady-state values. Fig. 4 compares the theoretical order parameters m and q with the mean activation rate and delayed self-correlations computed from the simulated trajectories for system sizes N = 32, ..., 1024 under synchronous (Fig. 4(a,b)) and asynchronous (Fig. 4(d,e)) updates. The simulated values approach the theoretical prediction as the size increases, although the convergence slows down near the critical temperature, as expected. Similarly, we confirm in Fig. 4(c,f) that the entropy production from the sample trajectories for synchronous and asynchronous systems converges to the mean-field value in the thermodynamic limit as we increase the system size. Note that the entropy production for synchronous updates differs from the entropy production rate of the asynchronous update in the continuous-time limit due to different values of the delayed correlation term q in Eqs. 54 and 58. These results corroborate our theoretical predictions that the steady-state entropy production peaks at the critical nonequilibrium phase transitions. We further verified by simulations with ΔJ = 1 that the steady-state entropy production increases monotonically with β when the couplings are significantly heterogeneous.
FIG. 4. Verification of the exact mean-field solutions by simulating the kinetic Ising systems with synchronous (top) and asynchronous (bottom) updates. We repeated 400,000 simulations with Θ_{i,u} = 0 and ΔJ = 0.5. (a,d) Sample estimates of the mean activation rate compared with the theoretical order parameter m. (b,e) Sample estimates of the average delayed self-covariances compared with the theoretical order parameter q − m². (c,f) Sample estimates of the entropy production and entropy production rate (Eqs. S6.6 and S6.7) compared with their mean-field values in the thermodynamic limit (Eqs. 54, 58).
Finally, knowing the order parameters of the system under the configurational average, we can investigate the structure of the patterns emerging from the dynamics of a sufficiently large but finite system under certain conditions. We calculate the probability Ω(n) of observing a state s_{u+n} again for the first time after n steps, starting from the same pattern s_u = s_{u+n} (Supplementary Note 7). For zero temperature (β → ∞) and synchronous updates, Ω(n) describes the probability of observing a periodic pattern of length n, since transitions become deterministic and can result in periodic patterns. In general, the distribution of these patterns depends on the higher-order correlations between spins across time steps. However, we observe that for the disordered phase (m = 0) as well as in the deep ordered phase (m ∼ 1), these correlations disappear under the configurational average in the asynchronous model, or in the synchronous model for large n (see Fig. S5). In these regions of the phase diagram, we can approximate the probability Ω(n) of observing a pattern of length n, from which the expected length until a repeated pattern is observed follows (see Supplementary Note 7). At the disordered phase (m = 0), the average length of observed patterns exhibits a maximum value, growing exponentially at a rate of 2^N. In contrast, when the system enters the ordered phase (m ∼ 1), the growth rate decreases as m increases. In the limit m = 1, the system reaches a static equilibrium of average length 1, where the same pattern is repeated indefinitely. These results are consistent with the expected dynamics under order-disorder phase transitions. Thus, bringing a quasi-deterministic system to a more stochastic regime by decreasing β to the critical value β_c (with ∆J smaller than ∆J_c(∞)) increases the diversity of irreversible patterns and hence the entropy production. However, further reduction of β makes the system more random (i.e., its transitions less irreversible), leading to a decrease in entropy production. In contrast, the large entropy production found at the disordered phase at large β and ∆J (wider than ∆J_c(∞)) is caused by diverse oscillatory dynamics whose average pattern length is 2^N, as in the random dynamics (β = 0). Adding stochasticity to the dynamics by decreasing β in this regime reduces the entropy production monotonically.
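At zero temperature the synchronous dynamics become the deterministic map s_u = sign(J s_{u−1}), so the pattern length discussed above can be measured directly as the period of the limit cycle. A small hedged sketch (the function, sizes, and seeds are illustrative, and N is kept tiny so exhaustive cycle detection is cheap):

```python
import math
import random

def cycle_length(N=12, J0=1.0, dJ=3.0, seed=0):
    """Period of the limit cycle of the beta -> infinity synchronous update
    s' = sign(J s), reached from a random initial state."""
    rng = random.Random(seed)
    J = [[rng.gauss(J0 / N, dJ / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
    s = tuple(rng.choice((-1, 1)) for _ in range(N))
    seen = {}  # state -> first time it was visited
    t = 0
    while s not in seen:
        seen[s] = t
        h = [sum(J[i][j] * s[j] for j in range(N)) for i in range(N)]
        s = tuple(1 if hi >= 0 else -1 for hi in h)
        t += 1
    return t - seen[s]
```

With a small ∆J the map typically falls onto an m ∼ 1 fixed point (period 1); with ∆J much larger than J_0, the quasi-deterministic disordered regime typically produces much longer cycles, in line with the exponential pattern-length growth discussed above.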
We now consider external fields drawn from a uniform distribution H_i ∼ U(−∆H, ∆H). Fig. 5(a) and 5(b) show the β-∆J phase diagram for the order parameters. With this change, we observe a non-zero correlation q in the area where we previously saw the disordered states (m = 0 and q = 0, Fig. 2(b)). Fig. 5(c) and 5(d) display the order parameters as a function of the inverse temperature and ∆H, where we examine the effect of heterogeneity in the external fields while fixing the coupling variability, ∆J = 0.2. The critical line ∆H_c(β) is obtained in this case as a solution of a self-consistent equation (Eq. S3.9). As studied in Supplementary Note 3, this critical phase transition corresponds to the mean-field universality class. Since the right-hand side of the equation is less than or equal to 1 regardless of β and ∆J, the phase transition occurs only when ∆H < J_0 is satisfied. Intuitively, there is a competition between the dispersion induced by the field diversity ∆H and the cohesion induced by the mean coupling strength J_0. The ordered phase takes place only if J_0 counteracts the dispersion induced by the heterogeneity of the external fields. More precisely, the critical ∆H_c(∞) at the low-temperature limit (β → ∞) is obtained by solving ∆H/J_0 = ∫ Dz sign[∆H + ∆J z]. Here we have ∆H_c(∞) = 1. We observe a phase transition by varying β if ∆H < ∆H_c(∞), and no phase transition if ∆H > ∆H_c(∞). Note that q increases monotonically with β even for ∆H > ∆H_c(∞) when the distributed fields are introduced. We now examine the conditional entropy, its time reverse, and the entropy production for the synchronous system with distributed fields, using the β vs. ∆H phase diagram. Similarly to the model without fields, the conditional entropy decreases with higher β (the process becomes more deterministic, see Fig. 6(a)). The time-reversed conditional entropy also decreases with increasing β for all ∆H, indicating that the reverse process becomes more and more likely regardless of ∆H (Fig. 6(b)). As seen previously, the time-reversed conditional entropy diminishes in the ferromagnetic-like states (∆H < ∆H_c(∞)). In addition, we observe a reduction of the time-reversed conditional entropy at higher β for ∆H > ∆H_c(∞). Note that we observed increased correlations q at higher β for ∆H > ∆H_c(∞) when we introduced the non-zero external fields (see Fig. 5(d)), which reduced the reversed entropy similarly to the ferromagnetic-like states. Both the conditional and time-reversed conditional entropies decrease much more slowly along the critical line than in other regions, although with different magnitudes. As a result, we see the maximization of the entropy production around the critical points more clearly than in the β-∆J phase diagram (Fig. 6(c) and 6(d)). Finally, at the zero-temperature limit (β → ∞), the entropy production peaks at ∆H_c(∞) = 1 (Fig. 6(e)).
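The zero-temperature condition ∆H/J_0 = ∫ Dz sign[∆H + ∆J z] can be evaluated in closed form, since ∫ Dz sign[∆H + ∆J z] = erf(∆H/(√2 ∆J)). A short sketch solving the nontrivial root by bisection (the function name and bracketing interval are illustrative):

```python
import math

def critical_dH(J0=1.0, dJ=0.2, tol=1e-10):
    """Solve dH/J0 = erf(dH / (sqrt(2)*dJ)) for the nontrivial root,
    i.e., the zero-temperature critical field width dH_c(inf)."""
    f = lambda x: J0 * math.erf(x / (math.sqrt(2.0) * dJ)) - x
    # bracket assumes dJ is well below dH_c, as in Fig. 5: f(lo) > 0 > f(hi)
    lo, hi = 0.5 * J0, 2.0 * J0
    assert f(lo) > 0.0 > f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For J_0 = 1 and ∆J = 0.2 this returns ∆H_c(∞) ≈ 1, reproducing the value quoted above.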

DISCUSSION
In this paper, we studied in detail the nonequilibrium thermodynamics of the kinetic, asymmetric SK model for both synchronous and asynchronous dynamics. As expected, the order parameters reveal that the model exhibits order-disorder nonequilibrium phase transitions analogous to the paramagnetic-ferromagnetic phase transition in the equilibrium Ising model. There is, however, no phase transition akin to a spin glass (which does not emerge due to the coupling asymmetry, as previously reported for continuous-time asymmetric SK models in [25]). In addition, we show that the steady-state entropy production is maximized near the nonequilibrium phase transition points, with its first derivative discontinuous at these points (Fig. 3(d)). This result is similar to the previously reported critical behavior of the entropy production caused by external stimulation or inertial dynamics (via self-coupling) in homogeneous systems with asynchronous updates, obtained by means of naive mean-field approximations or numerical simulations [28-30]. Nevertheless, our result is novel in that it establishes the critical behavior of the entropy production caused by asymmetric, heterogeneous couplings using an exact analytical solution for such complex systems. In addition, the studied model displays a region with disordered oscillations in its phase diagram, where the entropy production takes even larger values than in the critical regime. This phase takes place for disordered systems with low entropy rates, i.e., when the heterogeneous connections are strong enough to make the dynamics disordered but quasi-deterministic (Fig. 3(c), top-right). In contrast, the entropy production does not increase when we increase the heterogeneity of the external fields (Fig. 6(c)).
Taken together, our results indicate that entropy production peaking at a critical point is more general than in simple mean-field, homogeneous models; therefore, a non-smooth change of the steady-state entropy production (or of the entropy dissipated to an external reservoir) can be a useful indicator of a number of nonequilibrium phase transitions. At the same time, our results demonstrate that an increase in entropy production does not necessarily mean that the system is approaching a phase transition point. Instead, combining the order parameters, entropy rate, and entropy production yields a more precise picture of complex systems and their phase transitions.
Typically, solutions of the symmetric (equilibrium) SK model involve the replica trick to calculate the configurational average of the logarithm of the partition function [12]. This method introduces an integer number of replicas of a system for averaging the disorder and then recovers the solution using a continuous number of replicas in the zero limit under the replica-symmetry or replica-symmetry-breaking ansatz. This treatment forces researchers to check the validity of solutions before reaching correct ones [16,53]. As an alternative to replica methods, path integral methods have been widely used in analyzing the symmetric SK model [54,55]. However, for partially or fully symmetric SK models, the path integral method does not give a definite analytical solution and needs to be computed with Monte Carlo approaches [56]. Fortunately, the path integral method yields an exact analytical solution for the fully asymmetric nonequilibrium SK model [25], which we extended to cover synchronous and asynchronous updates, theoretically underpinning their nonequilibrium properties by deriving the exact solution of the steady-state entropy production and entropy rates of the system.
Nonequilibrium properties of biological and adaptive systems have received the attention of the neuroscience and biological science communities. For example, increased entropy production in macroscopic neural activity has been suggested as a signature of physically and cognitively demanding tasks [4], conscious activity [5,6], or neuropsychiatric diseases like schizophrenia, bipolar disorder, and ADHD [7]. While it is not easy to contrast these findings, based on coarse-grained analyses of ECoG or fMRI data, with the present results, our precise characterization of the entropy production of a prototypical system sheds light on what kind of behaviors we might expect from these complicated systems. Most importantly, our results indicate two scenarios for increasing entropy production by controlling the connection heterogeneity (∆J) and neuronal nonlinearity (β). Such global changes in the model parameters can be achieved in the brain as gain modulation, often mediated by neuromodulators [57]. One scenario for increasing entropy production is that the system approaches a critical state, as seen at low ∆J in Fig. 2 or Fig. 5. The other scenario is to make the system more heterogeneous and sensitive by increasing ∆J and β. A significant difference is that the former process maintains the stochastic nature of the dynamics while the latter yields quasi-deterministic disorder, as indicated by the high or reduced entropy rate. Therefore, the results suggest that it is crucial to investigate the multiple possibilities of nonequilibrium states to underpin the unconscious (sleep or anesthesia), awake, and engaged states more precisely.
Neuroscientists have often discussed the role of temporal patterns in the spiking activities of neurons in computation or in memory consolidation and retrieval. One central topic of the debate is whether neurons, e.g., cortical or hippocampal ones, exhibit precise sequential patterns in a repeated manner [58-61]. Such precise sequences should result in a large entropy production similar to the low-temperature disordered phase of the kinetic Ising system. Alternatively, one may explain a broad range of irreversible temporal patterns, including avalanche dynamics [62,63], by dynamics near the nonequilibrium phase transitions without the precise sequential structure. As suggested by Eq. 60, the same network that shows simple periodic patterns at zero temperature can retrieve diverse patterns yielding large entropy production by being poised near the critical phase transition point. The current study highlights the need to dissociate the two scenarios, characterizing different temporally irreversible spiking patterns, to understand their distinct roles in neural computation using multiple thermodynamic quantities. Finally, our analytical solutions offer a benchmark for the aforementioned and other methods for estimating thermodynamic quantities. For example, characterizing entropy production from brain imaging data requires methods for coarse-graining the phase diagram [4,5]. The kinetic SK model can serve as a test bench for such methods, as it is an analytically tractable system with a well-known phase diagram. Moreover, we can use it to examine both established and novel mean-field theories for estimating the thermodynamic properties of large-scale systems. For example, one can directly fit the Ising model to neuronal spiking data using mean-field methods for finite-size networks, from which one can estimate various thermodynamic quantities of the system [31,64]. Accurately estimating these quantities in large networks gives deeper insights into the nonlinear computations of cortical circuitries. The exact solutions provided here can serve to evaluate the accuracy of these approximation methods applied to large-scale networks and provide a benchmark for the thermodynamic quantities of infinitely large networks.

S1. CONFIGURATIONAL AVERAGE OF THE GENERATING FUNCTIONAL

In this Supplementary Note, we compute the generating functional of the asymmetric kinetic Ising model averaged over the quenched couplings with Gaussian distributions, known as the configurational average.
The probability density of a specific trajectory of the kinetic Ising model, s_{0:t} = {s_0, s_1, ..., s_t}, is defined as

P(s_{0:t}) = p(s_0) ∏_{u=1}^{t} p(s_u | s_{u−1}),  p(s_u | s_{u−1}) = ∏_i exp[β s_{i,u} h_{i,u}] / (2 cosh[β h_{i,u}]),

where h_{i,u} = H_{i,u} + Σ_j J_{ij} s_{j,u−1} are the effective fields. Here the summation over u is taken for u = 1, ..., t. For simplicity, we will assume that p(s_0) admits only one possible value, that is, the initial distribution is a Kronecker delta p_0(s) = ∏_i δ[s_i, s_{i,0}]. Although this enables us to ignore the initial term, the next steps generalize to any initial distribution. The dynamics above straightforwardly describe a synchronous kinetic Ising model under time-dependent fields H_{i,u} and asymmetric couplings J_{ij}. In Supplementary Note 2, we show how this model and its solution can be generalized to cover both synchronous and asynchronous dynamics.
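As a sanity check on this definition, the per-step transition probability can be coded directly and verified to normalize over all 2^N successor states (a minimal sketch; the helper name and the small test parameters are illustrative):

```python
import math

def p_transition(s_new, s_old, J, H, beta):
    """p(s_u | s_{u-1}) = prod_i exp(beta*s_i*h_i) / (2*cosh(beta*h_i)),
    with effective fields h_i = H_i + sum_j J_ij s_j(prev)."""
    p = 1.0
    for i, si in enumerate(s_new):
        h = H[i] + sum(J[i][j] * sj for j, sj in enumerate(s_old))
        p *= math.exp(beta * si * h) / (2.0 * math.cosh(beta * h))
    return p
```

Because each factor is a normalized two-state distribution, summing p_transition over every s_new in {−1, 1}^N returns 1 for any s_old, J, H, and β.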
In equilibrium systems, the partition function provides the statistical moments of the equilibrium distribution. Here, to find the statistical properties expected from ensemble trajectories of the asymmetric SK model as well as its steady-state entropy production (Eq. 18), we introduce a generating functional, or dynamical partition function. Note that the fields h_{i,u} have to be defined up to t + 1 to recover the backward trajectory. The terms g_{i,u} are designed to obtain the moments and other statistics of the system, and g^S_u and g^{S_r}_u are for its conditional and reversed conditional entropy terms at time u. We will use this generating functional to calculate the statistical moments of various random variables, denoting them with an average ⟨f(•)⟩_g. For simplicity, we will denote ⟨f(•)⟩_0 simply as ⟨f(•)⟩, recovering the statistical moments of the original system. The configurational average over the Gaussian couplings (Eq. 38) of the generating functional can be solved using a path integral method. To obtain the path integral form, we first insert an appropriate delta integral for the effective fields of each unit for the time steps u = 1, ..., t + 1, where θ is the N(t+1)-dimensional vector composed of the effective fields θ_{i,u} (i = 1, ..., N and u = 1, ..., t + 1), θ̂ is the N(t+1)-dimensional conjugate effective field, and we used the Fourier representation of the Dirac delta function, δ[ζ − a] = (1/2π) ∫ exp[iζ̂(ζ − a)] dζ̂. Note that, from now on, all summations and products involving the conjugate effective field θ̂ (as well as the order parameters we introduce later) will be performed over the range u = 1, ..., t + 1.
Next, we replace βh_{i,u} in Eq. S1.3 with the auxiliary variable θ_{i,u}, as well as βh^r_{i,u} of the reversed couplings at time u with an auxiliary variable ϑ_{i,u} = θ_{i,u+1} + β(H_{i,u} − H_{i,u+1}), and place them inside the integral with respect to θ (i.e., we perform the operation f(a) = ∫ f(x) δ[x − a] dx). Using the Gaussian integral formula ∫ (2πb)^{−1/2} exp[−(x − c)²/(2b) + ax] dx = exp[ac + a²b/2], the expectation of exp[aJ_{ij}] over the coupling distribution is computed, and hence the integral related to J_{ij} in Eq. S1.7 follows. Using this result, the Gaussian integral form of the partition function is given by Eq. S1.10. Note that, for the term with the summation over u, v, we separated the u = v terms from the rest, resulting in the elimination of the spin variables because s_{j,u−1} s_{j,u−1} = 1.
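The Gaussian identity used here, E[exp(aJ)] = exp(ac + a²b/2) for J ~ N(c, b), is easy to confirm numerically. A small Monte Carlo sketch (sample size and parameter values are arbitrary):

```python
import math
import random

def check_gaussian_identity(a=0.7, c=0.2, b=0.09, n=200000, seed=0):
    """Monte Carlo estimate of E[exp(a*J)] for J ~ N(c, b) (b = variance),
    compared with the closed form exp(a*c + a^2*b/2)."""
    rng = random.Random(seed)
    sd = math.sqrt(b)
    est = sum(math.exp(a * rng.gauss(c, sd)) for _ in range(n)) / n
    exact = math.exp(a * c + a * a * b / 2.0)
    return est, exact
```

In the configurational average, a plays the role of the term multiplying J_{ij}, while c = J_0/N and b = ∆J²/N.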

S1.1. Gaussian integral and saddle-point approximation
We will evaluate the aforementioned expression with a Gaussian integral and a saddle-point approximation, and show that the saddle-point solutions become the order parameters. For this goal, we first outline the derivation and then apply the steps to the above equation.
Let C be a real value, and x and y complex values. Eq. S1.10 contains terms of the form exp[Cxy]. We can represent such a term by a double Gaussian integral (a pair of Gaussian integral formulas) of the form of Eq. S1.11. Because the integrand is an analytic function, we can deform the contour of the path integral in the complex plane so that it includes the saddle-point solution. This contour integration reproduces the original value, exp[Cxy]; therefore, z_R and z_I are no longer restricted to real values but can be complex. When x and y are random variables, we can approximate the expectation of exp[Cxy] by the saddle-point solutions when the constant C is large (Eq. S1.12), where z*_R and z*_I are the saddle-point solutions that extremize the contents of the braces {} in Eq. S1.12. These solutions are given by the self-consistent equations Eqs. S1.13 and S1.14, written in terms of an average ⟨•⟩. We reiterate that the saddle-point solution z*_I is derived by substituting the exponent with its Taylor expansion around the minimum, i.e., f(z*), which in this case takes an imaginary value.
To obtain a more intuitive saddle-point solution, we can perform a change of variables (Eq. S1.16). In summary, the process described above consists of (1) introducing a pair of Gaussian integrals, (2) finding a saddle-point solution, and (3) performing a change of variables to recover a solution in terms of expectations of the original variables. We now repeat this process for the integral of the partition function.
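The saddle-point step is an instance of Laplace's method: for an integrand exp[N f(z)] with N large, the integral is dominated by the maximum z* and is approximated by exp[N f(z*)] √(2π/(N|f''(z*)|)). A toy numerical check with a stand-in function f(z) = sin z − z²/2 (not the actual exponent of Eq. S1.12):

```python
import math

def laplace_check(N=50):
    """Compare numerical integration of exp(N*f(z)) with the saddle-point
    estimate exp(N*f(z*)) * sqrt(2*pi / (N*|f''(z*)|)) for f(z) = sin z - z^2/2."""
    f = lambda z: math.sin(z) - 0.5 * z * z
    # the maximum solves cos z* = z*; bracket it in [0, 1] and bisect
    g = lambda z: math.cos(z) - z
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    zs = 0.5 * (lo + hi)
    fpp = -math.sin(zs) - 1.0  # f''(z*) < 0 at a maximum
    saddle = math.exp(N * f(zs)) * math.sqrt(2.0 * math.pi / (N * abs(fpp)))
    # brute-force trapezoidal integration on a wide grid
    h, a, b = 1e-3, -3.0, 4.0
    n = int(round((b - a) / h))
    total = 0.5 * (math.exp(N * f(a)) + math.exp(N * f(b)))
    for k in range(1, n):
        total += math.exp(N * f(a + k * h))
    return total * h, saddle
```

The two values agree to O(1/N), which is why the steepest-descent evaluation becomes exact in the thermodynamic limit N → ∞.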
(i) Gaussian integrals. First, we introduce Gaussian integrals by applying Eq. S1.11 to the quadratic terms in the partition function. Using C = NβJ_0, we introduce the real-valued integration variables M⁺_u and M⁻_u; applying the same identity to the remaining quadratic term introduces the real-valued variables Q⁺_{u,v} and Q⁻_{u,v}. Note that the products over u and v are performed over the range 1, ..., t + 1.
With these double Gaussian integrals, and defining dM = ∏_u dM⁺_u dM⁻_u and dQ = ∏_{u,v} dQ⁺_{u,v} dQ⁻_{u,v}, we can rewrite the partition function as Eq. S1.23, where the remaining terms involving the random variables s and θ, θ̂ from the Gaussian integral are separated into the terms Φ(s_{0:t}, θ, g) and Ω(θ̂, θ, g). The next two steps for solving the integral are to find a saddle-point solution and to perform a change of variables. In the next section, we find that the solutions correspond to the order parameters of the system.
(ii) Saddle-point solution. The exponent of the integrand above is proportional to N, making it possible to evaluate the integral by steepest descent, giving the saddle-point solution of Eq. S1.26, where the optimal values M(g), Q(g) are chosen to extremize (maximize or minimize) the quantity between the braces {}. We write the dependence on g to denote that the optimal values of M, Q differ for different values of g. As in the solutions of Eqs. S1.13 and S1.14, the solutions M(g), Q(g) are combinations of the average statistics of the variables of interest (multiplied by the imaginary unit for the latter).
(iii) Change of variables. Having order parameters in this form can be cumbersome. We simplify them to directly capture the average statistics of the system by performing a change of variables at the saddle-point solution, as exemplified in Eq. S1.16, where we now expect all of m(g), µ(g), q(g), ρ(g) to be real-valued.
This results in Eq. S1.31, where now m(g), µ(g), q(g), ρ(g) are chosen to extremize the quantity between the braces, and we define the terms Φ and Ω accordingly. Note that the summations over u or v related to the order parameters are performed over the range 1, ..., t + 1. Note also that the integration over the disordered connections has removed the couplings between units and replaced them with same-unit temporal couplings ρ(g) and fluctuating effective fields, which are also independent between units, resulting in a mean-field solution where the activity of different spins is independent.
In the next section, we specify the conditions of the extrema, from which we find that some of the extrema are the order parameters.
On the other hand, the configurational average of the generating functional obeys relations similar to Eqs. 24 and 25. In addition, we have identities that will be helpful in eliminating spurious solutions. Note that the equations above are equal to zero for g = 0.
To derive the order parameters, we calculate the same partial derivatives using Eq. S1.31. The order parameter of the system given by Eq. S1.39 is calculated directly, and we similarly obtain the remaining derivatives. Here we should note that, as there is no coupling between units, for i ≠ j we have a factorized solution [⟨s_{i,u} s_{j,v}⟩]_J = ⟨s_{i,u} s_{j,v}⟩_{*,g} = ⟨s_{i,u}⟩_{*,g} ⟨s_{j,v}⟩_{*,g}.
Finally, by comparing the above derivatives with Eqs. S1.39, S1.40, S1.42, and S1.43, we obtain the order parameters. Note that, at g = 0, µ_u(0) = ρ_u(0) = 0. Notice also that [Z_t(0)]_J = 1, retrieving the activation rates and delayed self-correlations as the order parameters of the system, m_u(0) and q_{u,v}(0). Below, we will drop the parentheses for the order parameters at g = 0, referring to these quantities as m_u and q_{u,v}.
Similarly, the forward and reverse entropy rates are calculated from the functions evaluated at g = 0.

S1.3. Mean-field solutions
After solving the saddle-point integral, we have the following expression for computing the relevant quantities of the system:

[Z_t(g)]_J = Σ_{s_{1:t}} ∫ dθ dθ̂ exp[Φ(s_{0:t}, θ, g) + Ω(θ̂, θ, g)]. (S1.54)

At this point, we want to remove the effective fields θ and the effective conjugate fields θ̂. For this goal, (i) we first remove the effective conjugate fields θ̂ by recovering the delta functions from their integral forms; then, (ii) we revert the effective fields θ by removing the delta functions. This results in the mean-field (factorized) generating functional, from which we obtain the mean-field solutions of the order parameters, conditional entropy, and entropy production.
(i) Removing the effective conjugate fields. We first remove the conjugate fields by recovering a delta function. We rewrite e^{Ω(θ̂,θ,g)}, defining q_{u−1,u−1} = 1 and q_{u−1,v−1} = q_{v−1,u−1} to obtain a symmetric matrix. Note that the saddle-point solution Eq. S1.36 was defined only for u > v.
We can remove the quadratic terms of θ̂ by applying N(t+1)-dimensional multivariate Gaussian integrals. Similarly, the reversed conditional entropy results in an average over Gaussian fields, where ξ_{u+1} is decomposed as a conditional Gaussian for a given ξ_{u−1} as ξ_{u+1} = q_{u,u−2} ξ_{u−1} + (1 − q²_{u,u−2})^{1/2} ζ_{u+1}, where ζ_{u+1} is a normalized Gaussian variable independent of ξ_{u−1}, and q_{u,u−2} is the covariance between the two variables.
Note that if the fields are constant, H_{i,u} = Θ_i, the system converges to a steady state with m_u = m and q_{u,v} = q, and the entropy production simplifies accordingly, where m and q are the steady-state solutions of Eqs. S1.69 and S1.70, respectively. Note that, since every q_{u,v} depends only on m_u and m_v, all q_{u,v} converge to the same solution q in the steady state.
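For zero fields, the steady-state order parameters can be obtained by fixed-point iteration of the mean-field equations. A minimal sketch, assuming the steady-state forms m = ∫ Dz tanh[β(Θ + J_0 m + ∆J z)] and q = ∫ Dz tanh²[β(Θ + J_0 m + ∆J z)] (consistent with the tanh² term appearing in the entropy production below); the quadrature grid is a crude stand-in for the Gaussian measure:

```python
import math

def steady_state(beta=2.0, J0=1.0, dJ=0.5, theta=0.0, iters=500):
    """Fixed-point iteration of the assumed steady-state mean-field equations
    m = int Dz tanh[beta(theta + J0*m + dJ*z)], q = int Dz tanh^2[...]."""
    # discretize the Gaussian measure Dz on z in [-6, 6]
    h = 0.01
    zs = [-6.0 + k * h for k in range(int(12.0 / h) + 1)]
    w = [h * math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi) for z in zs]
    m = 1.0  # start on the ordered branch
    for _ in range(iters):
        m = sum(wi * math.tanh(beta * (theta + J0 * m + dJ * z))
                for z, wi in zip(zs, w))
    q = sum(wi * math.tanh(beta * (theta + J0 * m + dJ * z)) ** 2
            for z, wi in zip(zs, w))
    return m, q
```

At β = 2, J_0 = 1, ∆J = 0.5 the iteration converges inside the ordered phase (m ≈ 0.7); at high temperature (e.g. β = 0.5) it collapses to the disordered solution m = 0.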
This means that the time t is equivalent to the expected number of spin updates up to the u-th bin. Note that t stands for the continuous time and should not be taken as the last discrete-time step that we use in the subscripts of the variables. In the limit α → 0, the system is equivalent to a continuous-time description in the new time variable t. Using Eq. S2.9, the entropy change can be written further. To obtain the third equality, we applied the change of variables from s to s^[i] in the second term, which turns s_i into −s_i. We then reverted the summations over i and s^[i] to s and i, since both sum over all the states and flips, to obtain the fourth equality. From this form, the entropy change can be decomposed into an entropy production rate term and an entropy flow rate term [39], and we define the continuous-time steady-state entropy production rate accordingly (Eq. S2.13). Introducing the Bernoulli update variables τ_{i,u} ∼ Bernoulli(α), we further separate m_{i,u}, q_{i,u,v} into the cases where the spins at u and v are updated or not, where m⁰_{i,u} = m_{i,u−1} and q^{0,0}_{i,u,v} = q_{i,u−1,v−1}.

Mean activation rate order parameter
First, we look into the mean activation rate. Since p(τ_{i,u} = 1) = α, we take the expectation of this order parameter over τ_i, assuming K → ∞ to obtain the second term. This gives the dynamical mean-field equation in discrete time steps. We now place the discrete time steps u = 1, 2, ... on the real-valued line of continuous time using t = (u − 1)α. Representing the mean activation rate at the (u − 1)-th bin by m(t), i.e., m(t) = m_{u−1}, we obtain a differential equation in the limit α → 0 (Eq. S2.25). Note that Eq. S2.25 converges to the same formula given by Eq. S1.69 when the magnetic fields are equal to Θ_{i,u}.

Delayed self-correlation order parameter
Next, we compute the order parameter of the delayed self-correlation. We can simplify the decomposition of q_{i,u,v} in two steps, where the variable q^{τ_{i,v}}_{i,u,v} captures the marginal order parameter when we know τ_{i,v}, i.e., whether the spin at time v has been updated, regardless of whether the spin at time u has been updated. The variable q^{τ_{i,u},τ_{i,v}}_{i,u,v} is the order parameter given that we know whether the updates occurred at both u and v.
We can decompose q_{i,u,v} into the two cases where the spin at time v is or is not updated. Here we used the equivalence q⁰_{i,u,v} = q_{i,u,v−1} under K → ∞ because, if the spin at time v is not updated, it takes the value of the spin at the previous time v − 1, at which we do not know whether the spin was updated. We then calculate the other variable q¹_{i,u,v} in terms of the updates at u and v, using q^{0,1}_{i,u,v} = q¹_{i,u−1,v} for the same reason as before, applied at time u. Finally, q^{1,1}_{i,u,v} is the delayed self-correlation q_{i,u,v} as in Eq. S2.20, with H_{i,u} = Θ_{i,u} and H_{i,v} = Θ_{i,v}, because the spin was updated at these time steps with correlation q_{u−1,v−1} between the effective fields and no constraints from the previous spins.
Similarly to the mean activation rate, in the α → 0 limit, with t ≡ α(u − 1) and t′ ≡ α(v − 1), the previous equations lead to differential equations that are solved iteratively with the appropriate boundary conditions; similar boundary conditions apply for the discrete-time description in u, v. The delayed self-correlation order parameter q(t′, t) is obtained as the average of q_i(t′, t) over the spins. As with the mean activation rate, the delayed self-correlation converges to Eq. S1.70 obtained under the synchronous update.

Long-range limit
Assuming a time-independent Θ_{i,u} (= Θ_i), we can calculate the convergence values of q_{i,u,v} in two steps, separating the dynamics in u from the dynamics in v. First, from Eq. S2.31, it is easy to see that, knowing q_{u−1,v−1} for all values of u and a fixed v, the convergence point for u ≫ v follows, both in discrete and continuous time. Now we can solve the dynamics in v independently of u, writing Eq. S2.30 for u ≫ v, as well as its continuous-time equivalent. If we assume a starting point q_{i,∞,v} for a given v, then we find that q_{i,∞,v} converges for large v (still under u ≫ v).

FIG. S1. Result of simulating Eqs. S2.32 and S2.33 to calculate q(t + d, t) (blue line) and q¹(t + d, t) (purple line) for 0 ≤ d ≤ 10, with α = 0.01, β = 2, Θ_{i,u} = 0, J_0 = 1, and ∆J = 0.5. We also calculated the long-range limit using Eq. S2.40 (dashed orange line). We observe that both variables converge for large d to the result given by Eq. S2.40. In addition, we find that while q¹(t + d, t) decays smoothly, q(t + d, t) has a discontinuity at d = 0, since q(t, t) = 1.
Similarly, we define the continuous-time equivalent function. Note that Eqs. S2.27 and S2.40 converge to the same values as Eqs. S1.69 and S1.70 when the magnetic fields are equal to Θ_i, proving that the synchronous and asynchronous asymmetric SK models have identical solutions for their order parameters.
In Fig. S1, we numerically confirm our theoretical result by simulating an exemplary dynamics of the self-correlation order parameters in the continuous-time domain. The procedure is as follows. We select an Euler step of α = 0.01 and initial values q_i(t′, 0) = δ[d]. Given q¹_i(0, 0) = 1, we compute a forward pass of the values of q¹_i(d, 0) for larger values of d (d ≥ 0) using Eq. S2.33. Then, given t′ = t + d, we calculate one Euler step of Eq. S2.32 in t, updating the value of q_i(t + d, t) for all values of d. We iterate the above procedure until the functions q_i(t + d, t) and q¹_i(t + d, t) converge to fixed values. We confirmed that the convergence value of the process was the same as directly calculating q_i(∞, ∞).

Entropy production
Assume time-constant fields Θ_i, i.e., the fields H_{i,u} = Θ_i + (1 − τ_{i,u}) K s_{i,u−1} are stochastic processes driven by the time-independent τ_{i,u}. In this case, the average difference of the forward and reverse entropy rates in Eqs. S1.71 and S1.72 converges to an expression in the steady-state values of m_{u−1} and q_{u,u−2} (independent of u).
For asynchronous updates in a steady state, non-zero contributions to the entropy production at spin i occur only during spin updates, i.e., when τ_{i,u} = 1, which happens with probability α (this can be seen in the equation above, where the tanh²[β(H_{i,u} + J_0 m + ∆J z)] term becomes equal to 1 when τ_{i,u} = 0). This leads to the steady-state entropy production value given above. Note that, due to the discontinuity of q(t + d, t) (or, equivalently, q(t + d, t − d)), the limit d → 0 differs from q(t, t) = 1 (see Fig. S1). This property, lim_{d→0} q(t + d, t − d) < 1, guarantees that the entropy production rate can be non-zero for adequate parameters. Similarly, we can find the critical value of ∆J at the zero-temperature limit by solving the equation in the β → ∞ limit: (J_0/∆H) ∫ Dz sign[∆H + ∆J z] = 1. (S3.12)

S3.2. Critical exponents
We can characterize the critical exponents of the system using the normalized inverse temperature τ = −(β − β_c)/β_c. We first note that the first-order Taylor expansion of the self-consistent equation around the critical β_c yields a scaling of the order parameter m ∝ |τ|^{1/2}. Thus we have a critical exponent 1/2, which is consistent with the scaling exponent of the order parameter in the mean-field universality class, typically denoted by the symbol 'β' in the literature.
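The exponent 1/2 can be checked numerically in the homogeneous limit ∆J → 0, where the self-consistent equation reduces to the Curie-Weiss form m = tanh(βm) with β_c = 1 (a toy check of the scaling m ∝ |τ|^{1/2}, not the full heterogeneous equation):

```python
import math

def magnetization(beta, tol=1e-12):
    """Positive solution of m = tanh(beta*m) (Curie-Weiss limit), by bisection."""
    if beta <= 1.0:
        return 0.0  # only the disordered solution exists
    f = lambda m: math.tanh(beta * m) - m
    lo, hi = 1e-9, 1.0  # f(lo) > 0 just above zero for beta > 1; f(1) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Quadrupling the distance to the critical point doubles m, so the log-log slope of m versus τ comes out at 0.5, in agreement with the mean-field exponent.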
Similarly, we can compute the susceptibility to a uniform magnetic field B added on top of H_i, retrieving the exponent γ = 1, which is consistent with the mean-field universality class. Note that these critical exponents, corresponding to the mean-field universality class, are also those found in the order-disorder phase transition of the equilibrium SK model [12]. The spin-glass phase, which is not present for asymmetric couplings, has different exponents and does not correspond to this universality class [52].

FIG. S4.
Verification of the exact mean-field solutions by simulating the kinetic Ising systems. We repeated the simulations for systems of size N = 32, 64, 128, 256, 512, 1024 with synchronous (top) and asynchronous (bottom) updates with Θ_{i,u} = 0 and ∆J = 1. (a, d) Sampling estimates of the mean activation rate compared with the theoretical order parameter m (black lines). (b, e) Sampling estimates of the average delayed self-correlations compared with the theoretical order parameter q − m² (black lines). (c, f) Sampling estimates of the entropy production and entropy production rate σ, lim_{α→0} σ/α, computed from the sample trajectories (Eqs. S6.6, S6.7), compared with their mean-field values at the thermodynamic limit.

The steady-state entropy production of the synchronous update is obtained by setting α = 1, effectively removing the average over τ. Note that in our results we normalize this entropy production by the number of spins N to make it independent of the system size.
The steady-state entropy rate for asynchronous updates is given by lim_{α→0} σ_t/α, which makes it independent of the update rate α:

As in the synchronous case, we normalize this entropy production rate by the number of spins N to make it independent of the system size.
To verify our exact mean-field solutions, we simulated networks of different sizes with synchronous and asynchronous updates for the parameters Θ_{i,u} = 0 and ∆J = 0.5 (Fig. 4) or ∆J = 1 (Fig. S4). These results corroborate our theoretical predictions and confirm that the steady-state entropy production peaks at the phase transitions and increases as significantly heterogeneous systems approach the quasi-deterministic regime.
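For readers wishing to reproduce such a check, the following is a minimal sketch (not the authors' code; system size, temperature, and heterogeneity values are arbitrary) of the synchronous kinetic SK dynamics with couplings of mean J_0/N and variance ∆J²/N. It samples m, the delayed self-correlation, and the steady-state entropy production estimated as the trajectory average of ln P(s_u | s_{u−1}) − ln P(s_{u−1} | s_u), normalized by N:

```python
import numpy as np

rng = np.random.default_rng(0)
N, J0, dJ, beta = 512, 1.0, 0.5, 2.0  # illustrative parameters (ordered phase)
T, burn = 2000, 200

# Asymmetric SK couplings: independent entries, mean J0/N, variance dJ^2/N
J = J0 / N + dJ / np.sqrt(N) * rng.standard_normal((N, N))

def step(s):
    """One synchronous Glauber update; returns the new state and the fields."""
    h = J @ s
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # P(s_i' = +1 | s)
    return np.where(rng.random(N) < p_up, 1.0, -1.0), h

s = rng.choice([-1.0, 1.0], N)
ms, qs, sigmas = [], [], []
for t in range(T):
    s_new, h = step(s)
    h_rev = J @ s_new  # fields of the time-reversed transition s_new -> s
    # log P(s_new | s) and log P(s | s_new), summed over spins
    lf = beta * s_new * h - np.log(2.0 * np.cosh(beta * h))
    lr = beta * s * h_rev - np.log(2.0 * np.cosh(beta * h_rev))
    if t > burn:  # discard the transient before sampling
        ms.append(s_new.mean())
        qs.append((s_new * s).mean())
        sigmas.append((lf - lr).sum() / N)
    s = s_new
print(np.mean(ms), np.mean(qs), np.mean(sigmas))
```

The average of the forward-minus-reverse log-likelihood is non-negative by construction (it is a Kullback-Leibler divergence), and in this heterogeneous ordered regime it is strictly positive.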

FIG. 1. Asymmetric kinetic SK model. (a) The asymmetric kinetic Ising model describes a Markov chain where the state s_u depends on pairwise couplings to the state s_{u−1}. (b-e) The model shows disordered dynamics for a large coupling variance at both high and low temperatures (b and c), ordered dynamics for low temperatures and a low coupling variance (d), and critical dynamics at the phase transition (e).

FIG. 2. Order parameters of the asymmetric SK model with zero fields under synchronous and asynchronous updates. The average magnetization m and the average delayed self-correlation q are shown in the phase diagram of the inverse temperature β and the coupling heterogeneity ∆J for a model with fixed parameters J_0 = 1 and ∆H = 0. The dashed line represents the critical line separating the ordered and disordered phases. The dotted line represents the critical value at zero temperature (β → ∞).

FIG. 3. Steady-state entropy rate and entropy production of the asymmetric SK model under synchronous updates. (a) The phase diagram of the conditional entropy S^J_{u|u−1} (equivalent to the entropy rate) as a function of the inverse temperature β and the coupling heterogeneity ∆J. (b) The conditional entropy of the reverse dynamics S^{r J}_{u|u−1}. (c) The entropy production at a steady state. The white dashed line is the critical line of the nonequilibrium phase transition. (d, inset) Horizontal sections of the entropy production (∆J = 0.4, 0.5, 0.6, 0.7, and 0.7950), showing that it peaks at the critical line. All panels use fixed parameters H_{i,u} = 0 and J_0 = 1.

FIG. 5. Order parameters of the asymmetric SK model with heterogeneous fields. (a, b) The average magnetization m and the average delayed self-correlation q as a function of ∆J and β, with fixed parameters J_0 = 1 and ∆H = 0.5. The dashed line represents the critical line separating the ordered and disordered phases; the horizontal dotted line represents the critical ∆J at zero temperature (β → ∞). (c, d) The phase diagram of the order parameters as a function of ∆H and β, with fixed parameters J_0 = 1 and ∆J = 0.2 and variable ∆H. The dashed line is the critical ∆H at zero temperature.

FIG. 6. Conditional entropies and entropy production of the asymmetric SK model with heterogeneous fields under synchronous updates. (a) The normalized conditional entropy S_{u|u−1} (equivalent to the entropy rate at a steady state). (b) The normalized conditional entropy of the reverse dynamics S^r_{u|u−1}. (c) The normalized entropy production at a steady state. (d) Horizontal sections of the entropy production (∆H = 0.2, 0.4, 0.6, 0.8, 1.0, and 1.2), which peaks at the critical line. (e) A vertical section of the entropy production at zero temperature (β → ∞). All plots use fixed parameters J_0 = 1 and ∆J = 0.2 with variable ∆H and β.
CONTRIBUTIONS

M.A. and H.S. designed and reviewed the research; M.A. contributed the analytical results; M.A. and M.I. contributed the numerical results; M.A., M.I., and H.S. contributed the theoretical review and wrote the paper.