Irreversibility in dynamical phases and transitions

Living and non-living active matter consumes energy at the microscopic scale to drive emergent, macroscopic behavior including traveling waves and coherent oscillations. Recent work has characterized non-equilibrium systems by their total energy dissipation, but little has been said about how dissipation manifests in distinct spatiotemporal patterns. We introduce a measure of irreversibility we term the entropy production factor to quantify how time reversal symmetry is broken in field theories across scales. We use this scalar, dimensionless function to characterize a dynamical phase transition in simulations of the Brusselator, a prototypical biochemically motivated non-linear oscillator. We measure the total energetic cost of establishing synchronized biochemical oscillations while simultaneously quantifying the distribution of irreversibility across spatiotemporal frequencies.

In many-body systems, collective behavior that breaks time-reversal symmetry can emerge due to the consumption of energy by the individual constituents [1-3]. In biological, engineered, and other naturally out-of-equilibrium processes, entropy must be produced so as to bias the system in a forward direction [4-9]. This microscopic breaking of time-reversal symmetry can manifest at different length and time scales in different ways. For example, bulk order parameters in complex reactions can switch from exhibiting incoherent, disordered behavior to stable static patterns [10,11] or traveling waves of excitation [12,13] that break time-reversal symmetry in both time and space, simply by altering the strength of the microscopic driving force. Recent advances in stochastic thermodynamics have highlighted entropy production as a quantity that measures a system's distance from equilibrium [14-19]. While much work has investigated the critical behavior of entropy production at continuous and discontinuous phase transitions [20-28], dynamical phase transitions in spatially extended systems have only recently been investigated, and to date no non-analytic behavior in the entropy production has been observed [29,30].
To address this, we introduce what we term the entropy production factor (EPF), a dimensionless function of frequency and wavevector that measures time-reversal symmetry breaking in a system's spatial and temporal dynamics. The EPF is a strictly non-negative quantity that is identically zero at equilibrium, quantifying how far individual modes are from equilibrium. Integrating the EPF produces a lower bound on the entropy production rate (EPR) of the system. We illustrate how to calculate the EPF directly from data using the analytically tractable example of Gaussian fields obeying purely relaxational dynamics supplemented with an out-of-equilibrium coupling [31]. We then turn to the Brusselator reaction-diffusion model for spatiotemporal biochemical oscillations to study the connections between pattern formation and irreversibility. As the Brusselator undergoes a Hopf bifurcation far from equilibrium, its behavior changes from incoherent, localized dynamics to coordinated, system-spanning oscillations in a discontinuous transition. The EPF quantifies the shift in irreversibility from high to low wavenumber as this transition occurs, but the EPR is indistinguishable from that of the well-mixed Brusselator, where synchronization cannot occur. Importantly, the EPF can be calculated in any number of spatial dimensions, making it broadly applicable to a wide variety of data types, from particle tracking to 3+1 dimensional microscopy time series.

Results
Entropy production factor derivation. Consider a system described by a set of M real random variables obeying some possibly unknown dynamics. A specific trajectory of the system over a total time T is given by X = {X_i(t) | t ∈ [0, T]}. Given an ensemble of stationary trajectories, the average EPR, Ṡ, is bounded by [5,6,32]

$$\dot{S} \geq \frac{1}{T}\, D_{\mathrm{KL}}\!\left(\mathcal{P}[X] \,\middle\|\, \mathcal{P}[\tilde{X}]\right), \qquad (1)$$

where we have set k_B = 1 throughout and D_KL denotes the Kullback-Leibler (KL) divergence, which measures the distinguishability between two probability distributions. P[X] and P[X̃] are the steady-state probability distribution functionals of observing the path X(t) of length T and of observing its reverse path, respectively. The KL divergence in Eq. (1) therefore measures the statistical irreversibility of a signal, and it saturates the bound when X contains all relevant non-equilibrium degrees of freedom.
We further bound the irreversibility itself by assuming the paths obey a Gaussian distribution. Writing the Fourier transform of X_i(t) as x_i(ω), where ω is the temporal frequency, and collecting the transforms into the column vector x(ω) = (x_1(ω), x_2(ω), …)^T, the path probability takes the form

$$\mathcal{P}[x] = \frac{1}{Z} \exp\!\left[-\frac{1}{2T} \sum_n x^\dagger(\omega_n)\, C^{-1}(\omega_n)\, x(\omega_n)\right], \qquad (2)$$

where x† denotes the conjugate transpose of the vector x evaluated at the discrete frequencies ω_n = 2πnT⁻¹, C(ω_n) is the covariance matrix in Fourier space with elements C_ij(ω_n) = ⟨x_i(ω_n) x_j(−ω_n)⟩ T⁻¹, and Z is the partition function. The expression for P[x̃] is identical but with C⁻¹(ω_n) → C⁻¹(−ω_n) [see Supplementary Note 1]. Combining Eq. (1) with Eq. (2) and taking T → ∞, we arrive at our main result:

$$\dot{S} \geq \int \frac{d\omega}{2\pi}\, \mathcal{E}(\omega). \qquad (3)$$

This defines the EPF, E(ω), which measures time-reversal-symmetry-breaking interactions between M ≥ 2 variables, while integrating E gives the bound on Ṡ. E(ω) = D_KL(P[x(ω)] ‖ P[x̃(ω)]) measures the Kullback-Leibler divergence between the joint distributions of the M modes at a single frequency ω. While this quantity does not scale with trajectory length, the density of modes near a particular frequency is related to the total trajectory time by Δω = 2πT⁻¹. Since the ±ω modes must be complex conjugates of each other and an overall average phase is prohibited by time-translation invariance, asymmetry between these distributions can only be captured by relative phase relationships, quantified by their correlation functions. E is large when one variable tends to lead another in phase, implying a directed rotation between these variables in the time domain.
As mentioned above, P[x(ω)] describes the dynamics of a non-equilibrium steady state, and no reversal of an external protocol is assumed. Further, in writing an expression for P[x̃(ω)], we assume that the observables are scalar, time-reversal-symmetric quantities, such as the chemical concentrations we analyze below.
The Gaussian assumption we make here renders Eq. (3) exact only for systems obeying linear dynamics. Nevertheless, E is still defined for non-linear systems, where the integrated E lower bounds the true Ṡ. To see this, consider projecting the complex dynamics onto Gaussian dynamics by choosing a data-processing procedure that preserves two-point correlations but removes higher-order ones. This can be accomplished by multiplying every frequency by an independent random phase, a post-processing step that can be applied to individual trajectories. After this procedure, the integrated EPF equals the KL divergence rate between the forward and backward path probabilities. From the data processing inequality, the KL divergence rate of the true fields must be higher, so that the integrated EPF lower bounds the true entropy production rate [see Supplementary Note 2]. In addition to bounding the true Ṡ, we expect the integral of E to be a good approximation for the wide class of systems where linearization is reasonable. Such Gaussian approximations are starting points in many field theories, with higher-order interactions accounted for by adding anharmonic terms to the action of Eq. (2). While this is not our focus here, we expect these additional terms to systematically capture corrections to Ṡ that do not appear in Eq. (3). As C_ii(ω) = C_ii(−ω), the only contributions to E come from the cross-covariances between the random variables of interest. As such, this bound yields exactly zero for a single variable, even though higher-order terms may contribute to Ṡ.
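The random-phase projection described above can be sketched numerically. Below is a minimal, hypothetical implementation for a single real signal; for M > 1 variables, the same random phase would have to multiply every component at a given frequency so that cross-covariances are preserved.

```python
import numpy as np

def phase_randomize(x, seed=None):
    """Scramble higher-order correlations of a real signal while exactly
    preserving its power spectrum, by multiplying each Fourier mode by an
    independent random phase (the Gaussian projection described in the text)."""
    rng = np.random.default_rng(seed)
    X = np.fft.rfft(x)
    phases = np.exp(2j * np.pi * rng.random(X.shape))
    phases[0] = 1.0                  # keep the zero-frequency (mean) mode real
    if x.size % 2 == 0:
        phases[-1] = 1.0             # Nyquist mode must stay real for even N
    return np.fft.irfft(X * phases, n=x.size)

x = np.sin(np.linspace(0, 20 * np.pi, 1024)) ** 3   # strongly non-Gaussian signal
y = phase_randomize(x, seed=0)
# the two-point statistics (power spectrum) are untouched
assert np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y)))
```

The surrogate y has the same covariance function as x but Gaussian statistics, which is the sense in which the data processing inequality applies.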
This formulation extends naturally to random fields. For M random fields in d spatial dimensions, φ = {φ_i(x, t) | t ∈ [0, T], x ∈ ℝ^d}, the EPR density, ṡ ≡ Ṡ/V, where V is the system volume, obeys

$$\dot{s} \geq \int \frac{d\omega}{2\pi} \int \frac{d^d q}{(2\pi)^d}\, \mathcal{E}(q, \omega), \qquad (4)$$

where E(q, ω), now a function of wavevector q and frequency ω, is computed from the dynamic structure factor C_ij(q, ω) [see Supplementary Note 1]. Even without an explicit, analytic expression for the structure factor C, we can estimate E from data. To use Eq. (4), we consider data comprising N finite-length trajectories of M variables over a time T in d spatial dimensions, each dimension having length L_i. We create an estimate of the covariance matrix, C̃(q, ω), from the time series using standard methods [see Methods]. These measurements inevitably contain noise that is not necessarily time-reversal symmetric, even for an equilibrium system. Noise due to thermal fluctuations and finite trajectory lengths in the estimate of C̃ from a single experiment (N = 1) systematically biases our estimated E by ΔE = M(M − 1)/2 at each frequency, thereby introducing bias and variance into our measurement of ṡ. We can simply subtract the bias from our measured E, but to reduce the variance we smooth C̃ by component-wise convolution with a multivariate Gaussian of width σ = (σ_{q_1}, …, σ_{q_d}, σ_ω) in frequency space, giving Ĉ. This is equivalent to multiplying each component of the time-domain C̃(r, t) by a Gaussian, cutting off the noisy tails of the real-space covariance functions at large lag times. We then use Ĉ in Eq. (4) to construct our final estimator for the EPF, Ê, and thereby for the EPR, $\hat{\dot s}$. We calculate and remove the bias in Ê and $\hat{\dot s}$ in all results below [see Methods]. Smoothing C̃ with increasingly wide Gaussians in ω and q leads to a systematic decrease in $\hat{\dot s}$ due to reduced amplitudes in C̃ (Supplementary Notes 3 and 4, Supplementary Fig. 1).
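The estimation pipeline just described can be sketched for the simplest case of M time series with no spatial dimensions: FFT, periodogram, Gaussian smoothing, then the per-frequency Gaussian KL divergence. The smoothing width below and the trace-based form of the complex-Gaussian KL divergence are implementation choices, not prescriptions from the text; bias subtraction is omitted for brevity.

```python
import numpy as np

def estimate_epf(X, dt, sigma=8):
    """Estimate the EPF E(omega) and the integrated EPR bound from an
    (M, N) array of M real time series sampled at interval dt.
    sigma is a smoothing width in frequency bins (illustrative choice)."""
    M, N = X.shape
    T = N * dt
    x = np.fft.fft(X - X.mean(axis=1, keepdims=True), axis=1) * dt
    # periodogram: C_ij(w_n) = x_i(w_n) x_j(-w_n) / T, stored as (N, M, M)
    C = np.einsum('in,jn->nij', x, x.conj()) / T
    # smooth each component along frequency to tame periodogram noise
    w = np.arange(-4 * sigma, 4 * sigma + 1)
    g = np.exp(-w**2 / (2 * sigma**2)); g /= g.sum()
    C = np.apply_along_axis(lambda v: np.convolve(v, g, mode='same'), 0, C)
    # Gaussian KL per frequency: E(w) = tr[C(-w)^{-1} C(w)] - M,
    # using C(-w) = C(w)^T for real signals
    E = np.einsum('nij,nji->n', np.linalg.inv(np.transpose(C, (0, 2, 1))), C).real - M
    # +w and -w modes are conjugate pairs: count each physical mode once
    epr = 0.5 * E.sum() / T
    return E, epr
```

For M = 2 this reduces to E(ω) ∝ (Im C_12)²/det C, making explicit that only phase lead/lag between variables, carried by the imaginary part of the cross-spectrum, contributes.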
To illustrate the information contained in E, its numerical estimation, and the accuracy of $\hat{\dot s}$, we analyze simulations of coupled, one-dimensional Gaussian stochastic fields for which E and ṡ can be calculated analytically. We then study simulations of the reaction-diffusion Brusselator, a prototypical model for non-linear biochemical oscillators, and use E to study how irreversibility manifests at different time and length scales as the system undergoes a Hopf bifurcation [33].
Driven Gaussian fields. Consider two fields, φ(x, t) and ψ(x, t), obeying Model A dynamics [31] with non-equilibrium driving parametrized by α:

$$\partial_t \phi = -D \frac{\delta F}{\delta \phi} - \alpha\, \psi + \sqrt{2D}\, \xi_\phi, \qquad \partial_t \psi = -D \frac{\delta F}{\delta \psi} + \alpha\, \phi + \sqrt{2D}\, \xi_\psi, \qquad (5)$$

where ξ(x, t) is Gaussian white noise with variance ⟨ξ_i(x, t) ξ_j(x′, t′)⟩ = δ_ij δ(x − x′) δ(t − t′), D is a relaxation constant, and δF/δφ is the functional derivative with respect to φ of the free energy F, given by

$$F = \frac{1}{2} \int dx \left[ r\left(\phi^2 + \psi^2\right) + (\partial_x \phi)^2 + (\partial_x \psi)^2 \right], \qquad (6)$$

so that the fields have units of ℓ^{1/2} and r penalizes large amplitudes.
The EPR density, ṡ, is calculated analytically in two ways. First, we solve Eq. (1) directly using the Onsager-Machlup functional for the path probability of η(x, t) = (φ(x, t), ψ(x, t))^T [4,34]. Second, the covariance matrices are calculated analytically, used to find E through Eq. (4), and integrated to find ṡ. Both routes give the same result. The results for E and ṡ are [see Supplementary Note 5]

$$\mathcal{E}_{\mathrm{DGF}}(q, \omega) = \frac{8\,\alpha^2 \omega^2}{\left[(\omega - \alpha)^2 + D^2(r + q^2)^2\right]\left[(\omega + \alpha)^2 + D^2(r + q^2)^2\right]}, \qquad \dot{s}_{\mathrm{DGF}} = \frac{\alpha^2}{D\sqrt{r}}. \qquad (7)$$

We see that E_DGF ≥ 0 and exhibits a peak at (q, ω) = (0, ω_0(0)), with ω_0(q) = [α² + D²(r + q²)²]^{1/2}, indicating that the system is driven at all length scales with a driving frequency of α, damped by an effective spring constant Dr. In addition, it is clear that multiple combinations of α, r, and D can give the same value of ṡ, while E distinguishes between equally dissipative trajectories through the shape and location of its peaks. In this way, E conveys information about the form of the underlying dynamics that is not present in the total EPR. We note that E_DGF is also recovered using an appropriately modified version of the generalized Harada-Sasa relation introduced in ref. 34 [see Supplementary Note 6].
We perform simulations to assess how well E can be extracted from time-series data of fields [see Methods]. The estimated Ê shows excellent agreement with Eq. (7) (Fig. 1). Integrating Ê gives $\hat{\dot s}$, which also shows good agreement with ṡ_DGF. Our estimator gives exact results for the driven Gaussian fields because the true path probability functional for these fields is Gaussian. In contrast, the complex patterns seen in nature arise from systems obeying highly non-linear dynamics. For such dynamics, our Gaussian approximation is no longer exact but provides a lower bound on the total irreversibility. To investigate how irreversibility correlates with pattern formation, we study simulations of the Brusselator model for biochemical oscillations [35]. We begin by describing the various dynamical phases of the equations of motion. Next, we calculate E and Ṡ for the reactions alone, before adding diffusion to study the synchronized oscillations that arise in the one-dimensional reaction-diffusion system.
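A simulation of the driven Gaussian fields can be sketched with an Euler-Maruyama scheme on a periodic lattice. This is a minimal illustration assuming the nondimensionalization D = r = 1; the sign convention of the α coupling follows Eq. (5) as written above, and the lattice sizes and step counts are placeholders rather than the values used in the paper.

```python
import numpy as np

def simulate_driven_fields(alpha, L=64, dx=0.5, dt=0.01, steps=5000, seed=0):
    """Euler-Maruyama sketch of the two driven Model-A fields of Eq. (5)
    on a periodic 1D lattice, nondimensionalized so that D = r = 1."""
    rng = np.random.default_rng(seed)
    phi, psi = np.zeros(L), np.zeros(L)
    lap = lambda f: (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2
    amp = np.sqrt(2 * dt / dx)          # lattice amplitude of the white noise
    traj = np.empty((steps, 2, L))
    for n in range(steps):
        fphi = phi - lap(phi)           # delta F / delta phi with r = 1
        fpsi = psi - lap(psi)
        # compute both increments from the pre-update fields
        dphi = dt * (-fphi - alpha * psi) + amp * rng.standard_normal(L)
        dpsi = dt * (-fpsi + alpha * phi) + amp * rng.standard_normal(L)
        phi, psi = phi + dphi, psi + dpsi
        traj[n] = phi, psi
    return traj
```

The returned array of shape (steps, 2, L) is in the form expected by a frequency-space covariance estimator after Fourier transforming in both t and x.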
Reaction-diffusion Brusselator. We use a reversible Brusselator model [30,35-37] with dynamics governed by the reaction equations

$$A \underset{k_1^-}{\overset{k_1^+}{\rightleftharpoons}} X, \qquad B + X \underset{k_2^-}{\overset{k_2^+}{\rightleftharpoons}} Y + C, \qquad 2X + Y \underset{k_3^-}{\overset{k_3^+}{\rightleftharpoons}} 3X, \qquad (8)$$

where {A, B, C} are external chemical baths with fixed concentrations {a, b, c}, and all reactions occur in a volume V (Fig. 2a). The system is in equilibrium when the external chemical baths and reaction rates obey b k_2^+ k_3^+ = c k_2^- k_3^-. When this equality is violated, the system is driven away from equilibrium and exhibits cycles in the (X, Y) plane. Defining the chemical driving force

$$\Delta\mu = \ln\!\left(\frac{b\, k_2^+ k_3^+}{c\, k_2^- k_3^-}\right), \qquad (9)$$

the Brusselator is at equilibrium when Δμ = 0 and is driven into a non-equilibrium steady state when Δμ ≠ 0. We vary b and c to change Δμ while keeping the product (b k_2^+ k_3^+)(c k_2^- k_3^-) = 1, keeping the rate at which reactions occur constant for all Δμ [38].
As Δμ increases, the macroscopic version of Eq. (8) undergoes dynamical phase transitions. For all Δμ, there exists a steady state (X_ss, Y_ss), the stability of which is determined by the relaxation matrix, R [Supplementary Note 7]. The two eigenvalues of R, λ±, divide the steady state into four classes [33]: a stable node (λ± real and negative), a stable spiral (λ± complex with negative real part), an unstable spiral surrounded by a limit cycle (λ± complex with positive real part), and an unstable node (λ± real and positive). The eigenvalues pass through these classes as Δμ changes, allowing us to consider Δμ as a bifurcation parameter. We define Δμ_HB as the value of Δμ at which the macroscopic system undergoes the Hopf bifurcation.
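The eigenvalue classification above can be sketched in a few lines. For a concrete matrix we use the Jacobian of the classic irreversible Brusselator at its fixed point, a textbook stand-in for the relaxation matrix R of the reversible model (whose explicit form is in Supplementary Note 7); its Hopf bifurcation sits at b = 1 + a².

```python
import numpy as np

def classify_fixed_point(R):
    """Sort a 2x2 relaxation matrix into the four steady-state classes
    of the Hopf-bifurcation discussion, from its eigenvalues lambda_±."""
    lam = np.linalg.eigvals(R)
    stable = bool(np.all(lam.real < 0))
    spiral = bool(np.any(np.abs(lam.imag) > 0))
    if stable:
        return "stable spiral" if spiral else "stable node"
    return "unstable spiral (limit cycle)" if spiral else "unstable node"

# classic Brusselator Jacobian at the fixed point (a, b/a)
a, b = 1.0, 2.5
J = np.array([[b - 1, a**2], [-b, -a**2]])
state = classify_fixed_point(J)    # b > 1 + a^2: past the Hopf point
```

Sweeping b through 1 + a² moves the eigenvalues across the imaginary axis, which is the analogue of sweeping Δμ through Δμ_HB.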
Non-equilibrium steady states are traditionally characterized by their circulation in a phase space [39-43]. One may then question how it is possible to detect non-equilibrium effects in the Brusselator when the system's steady state is a stable attractor with no oscillatory component. While this is true for the macroscopic dynamics used to derive λ±, we simulate a system with finite numbers of molecules subject to fluctuations. These stochastic fluctuations give rise to circulating dynamics even when the deterministic dynamics do not [36]. We see persistent circulation in the (X, Y) plane even when λ± are real and negative, with the vorticity changing sign around Δμ = 0 (Supplementary Fig. 2).
In order to assess the accuracy of our estimated EPR, $\hat{\dot S}$, we calculate an estimate of the true EPR, Ṡ_true, for a simulation of Eq. (8) by computing the exact entropy produced by each reaction that occurs in the trajectory [44], and then fitting a line to the cumulative sum (Supplementary Fig. 3, Methods). We find that $\hat{\dot S}$ significantly underestimates Ṡ_true (note the logarithmic axes in Fig. 2c) due to the Brusselator's hidden dynamics. In the Brusselator, information is lost because the observed trajectories are coarse-grained: they do not distinguish between transitions that occur forward through the second reaction or backward through the third reaction in Eq. (8). These pathways would be distinguishable if the trajectories of B and C were also observable.
Our method relies purely on the system's dynamics to give $\hat{\dot S}$. Eq. (1) is tight only if all microscopic details are captured by the trajectories X. If X is already coarse-grained, multiple microscopic trajectories will be indistinguishable and Eq. (1) will underestimate the true entropy production rate due to the data processing inequality [19,45,46].
In order to account for this, we recalculate Ṡ by considering the rate at which a given transition can occur as the sum over all chemical reactions that give the same dynamics [see Methods]. For example, a transition from (X, Y) → (X − 1, Y + 1) can occur via reaction k_2^+ or k_3^- in the Brusselator, each of which in general produces a different amount of entropy. Looking only in the (X, Y) plane, it is impossible to tell which reaction took place. When calculating the entropy produced by only the observable dynamics, the rate of making the transition (X, Y) → (X − 1, Y + 1) is k_f = k_2^+ + k_3^-, while the rate of making the reverse transition is k_r = k_2^- + k_3^+, and the entropy produced is log(k_f/k_r). This estimate of the EPR, which we name Ṡ_blind, is a coarse-graining of Ṡ_true, giving the relation Ṡ_blind ≤ Ṡ_true [47]. We find that Ṡ_blind shows excellent agreement with $\hat{\dot S}$, indicating that the Gaussian approximation provides a good estimate of the observable dynamics even when the system is highly non-linear. To further benchmark our estimator, we calculate Ṡ using two alternative methods, one based on the thermodynamic uncertainty relation (TUR) [7,48] and one based on measuring first passage times (MFPT) [49]. Similarly to $\hat{\dot S}$, both of these methods saturate to the true Ṡ for systems obeying linear dynamics. As such, they also approximate Ṡ_blind, but we find that they provide a looser bound than $\hat{\dot S}$ (Supplementary Fig. 4). Prior to Δμ_HB, both $\hat{\dot S}$ and Ṡ_blind show a shift in their trends, but Ṡ_true does not. The smooth transition is due to the finite system size we employ, and it sharpens as a power law as the system gets larger (Fig. 3a). The power-law exponent measured from $\hat{\dot S}$ is nearly one, consistent with the Gaussian assumption. The exponent differs from that of Ṡ_blind because our Gaussian assumption breaks down at the high values of Δμ where the maximum slope occurs (Fig. 3b).
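The coarse-graining argument above has a compact numerical illustration. When two hidden pathways produce the same observed jump, only the summed rates are accessible, and the log-sum inequality guarantees that the blind entropy never exceeds the pathway-resolved average. The rate values below are illustrative numbers, not parameters from the paper.

```python
import numpy as np

def blind_entropy_per_jump(kf_paths, kr_paths):
    """Entropy assigned to one observed (X, Y) transition when several
    hidden pathways (e.g. k2+ and k3-) produce the same jump: only the
    summed rates k_f and k_r are observable."""
    return np.log(sum(kf_paths) / sum(kr_paths))

kf_paths, kr_paths = [0.8, 0.3], [0.4, 0.1]   # per-pathway forward/reverse rates
s_blind = blind_entropy_per_jump(kf_paths, kr_paths)

# pathway-resolved entropy, averaged over which pathway actually fired
p = np.array(kf_paths) / sum(kf_paths)
s_true = sum(pi * np.log(f / r) for pi, f, r in zip(p, kf_paths, kr_paths))
assert s_true >= s_blind - 1e-12   # coarse-graining can only lose entropy
```

This is the per-jump version of the relation Ṡ_blind ≤ Ṡ_true quoted in the text.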
The Hopf bifurcation for the Brusselator is supercritical [23], meaning the limit cycle grows continuously from the fixed point when Δμ − Δμ_HB ≪ 1. Further from the transition point, the trajectory makes a discontinuous transition. At our resolution in Δμ, this discontinuous transition is what underlies the shift in Ṡ_blind of the Brusselator. This same transition is present in Ṡ_true, but is difficult to detect numerically for reasons we explain here. In the deterministic limit, Ṡ_true = Δμ(J_F − J_R), where J_F = b⟨x⟩k_2^+ and J_R = c⟨y⟩k_2^- are the forward and reverse fluxes for transforming a B molecule into a C molecule. ⟨x⟩ is a constant, but by numerically integrating the deterministic version of Eq. (8) we observe a discontinuity in ⟨y⟩ above the Hopf bifurcation. However, J_F ≫ J_R, obscuring the discontinuity in Ṡ_true (Fig. 3c). Upon coarse-graining, we have Ṡ_blind = Δμ(J_F^blind − J_R^blind), with J_F^blind = b⟨x⟩k_2^+ + ⟨x⟩³k_3^- and J_R^blind = c⟨y⟩k_2^- + ⟨x⟩²⟨y⟩k_3^+. These two terms are equal to each other for Δμ < Δμ_HB and diverge continuously when Δμ ⪆ Δμ_HB, followed by the relatively large discontinuity in J_R^blind (Fig. 3c, inset). One gains further insight into the dynamics through the transition by studying Ê (Fig. 2b). For Δμ < Δμ_HB, Ê exhibits a single peak that increases in amplitude while decreasing in frequency as Δμ increases. Above Δμ_HB, the peak frequency makes a discontinuous jump, the magnitude of the peak grows rapidly, and additional peaks appear at integer multiples of the peak frequency due to the non-linear shape of the limit cycle attractor. These harmonics are expected for dynamics on a non-circular path. For Δμ < Δμ_HB, the magnitude of the peak is independent of system volume, while it gains a linear volume dependence in the limit cycle. The width of the peak is also maximized near the transition, reflecting the superposition of frequencies present in the trajectories (Supplementary Fig. 5).
To investigate how dynamical phase transitions manifest in the irreversibility of spatially extended systems, we simulate a reaction-diffusion Brusselator on a one-dimensional periodic lattice with L compartments, each with volume V, spaced a distance h apart. In addition to the reactions of Eq. (8) occurring in each compartment i, the full set of reactions now includes diffusive hops between neighboring compartments,

$$X_i \underset{d_X}{\overset{d_X}{\rightleftharpoons}} X_{i \pm 1}, \qquad Y_i \underset{d_Y}{\overset{d_Y}{\rightleftharpoons}} Y_{i \pm 1}, \qquad (10)$$

where d_j = D_j/h², and D_j is the diffusion constant of chemical species j = {X, Y}. Qualitatively different dynamics occur based on the ratio D_X/D_Y: D_X/D_Y ≪ 1 yields static Turing patterns [10,29], while we focus on the D_X/D_Y ≫ 1 regime, which exhibits dynamic, excitable waves. All values of {a_i, b_i, c_i} are kept constant across compartments.
In the steady state, the reaction-diffusion Brusselator has the same dynamics as the well-mixed Brusselator, so it is not surprising that its EPR curve as a function of Δμ is similar (Supplementary Fig. 6). However, unlike in the well-mixed system, the Hopf bifurcation signals the onset of qualitatively distinct dynamics in the reaction-diffusion system. Prior to the Hopf bifurcation, there are no coherent spatial patterns in the system's dynamics (Fig. 4a). Above the Hopf bifurcation, system-spanning waves begin to emerge that synchronize the oscillations across the system (Fig. 4b). Following standard methods [50,51], we define the synchronization order parameter, 0 ≤ r < 1, through

$$r(t)\, e^{i\psi(t)} = \frac{1}{M} \sum_{j=1}^{M} e^{i\theta_j(t)}, \qquad (11)$$

where θ_j(t) is the phase of the oscillator at position x_j and time t, M is the number of oscillators (here, the number of lattice sites in our simulation), and r is averaged over the temporal extent T of the data [Methods]. ψ denotes the overall phase; r is close to zero in the asynchronous phase and approaches one as the oscillators synchronize. Below Δμ_HB, r is low, and it rapidly approaches one as the system approaches the macroscopic bifurcation point (Fig. 4c). Like Ṡ, this transition occurs more sharply and closer to Δμ_HB as the system size increases, approaching the discontinuous transition to limit-cycle behavior (Fig. 4c, inset) [52]. Throughout these changes, the system is driven further from equilibrium, as reflected in the increasing $\hat{\dot s}$ (Fig. 4d). The shift to collective behavior is not reflected in ṡ, which is almost identical to the Ṡ found for the well-mixed Brusselator (Supplementary Fig. 6). Instead, E carries the signature of the dynamical phase transition. For Δμ < Δμ_HB, Ê shows peaks at high wavenumbers, reflecting irreversibility occurring incoherently over short length scales.
Above Δμ_HB, as the system shows synchronized oscillations, there is an abrupt shift in the peaks of Ê to low q, indicating that this collective behavior carries the majority of the irreversibility (Fig. 5b, c). We also infer that the collective behavior is partially composed of traveling waves, reflected in the streaks in Ê (Fig. 5b). The slight offset in the transition occurs because, at high values of Δμ < Δμ_HB, small regions synchronize for short periods of time but system-wide oscillations are not observed (Supplementary Fig. 7). Furthermore, the transition moves closer to the macroscopic transition point as the volume of the individual compartments increases (Supplementary Fig. 7).

Discussion
Previous work has investigated the behavior of Ṡ at thermodynamic phase transitions, with ref. 22 finding general signatures of discontinuous phase transitions in Ṡ that agree with our results. While ref. 26 found Ṡ to have a discontinuous first derivative with respect to Δμ in a slightly modified version of the well-mixed Brusselator, work on the same system studied here did not find any non-analytic behavior in Ṡ_true [30]. We show that a discontinuous phase transition exists in our model, but the magnitude of the discontinuity is small: it is difficult to detect in Ṡ_true and more easily seen in the coarse-grained Ṡ_blind (Fig. 3). Further, other spectral decompositions of the dissipation rate either assume a particular form for the underlying dynamics [27] or require the measurement of a response function in addition to the correlation function [34], which is often difficult to perform in experiments.
Here, we illustrated that the total irreversibility rate cannot distinguish between the dynamical phase transitions in the well-mixed and the spatially extended Brusselator (Supplementary Fig. 6). While the EPR quantifies the emergence of oscillations, the synchronization of the oscillations across space is captured only in E, through its peak shifting from high to low wavenumber (Fig. 5). In simulations with increasing compartment volumes, this shift occurs closer to the macroscopic transition point (Supplementary Fig. 7), similarly to the increasing sharpness of the shift in Ṡ for the well-mixed Brusselator (Fig. 3). Thus, synchronization is intimately related to the emergence of oscillations. We hypothesize that synchronization occurs due to the presence of a slow segment of the Brusselator dynamics (Fig. 2a). The time spent in the slow portion of the dynamics allows neighboring oscillators to reduce their relative phase through their diffusive coupling, allowing previously out-of-sync lattice sites to synchronize via the low-cost mechanism of diffusion. This is further supported by the higher value of ṡ_blind for the reaction-diffusion Brusselator compared to Ṡ_blind for the well-mixed Brusselator when Δμ < Δμ_HB, but not for Δμ > Δμ_HB (Supplementary Fig. 6). Once the oscillations are synchronized, diffusion between lattice sites at equal concentrations is an equilibrium process and does not produce entropy.
In summary, we have introduced the entropy production factor, E, a dimensionless scalar function that quantifies irreversibility in macroscopic non-equilibrium dynamics by measuring time-reversal symmetry breaking in the cross-covariances between multiple variables. Integrating E gives a lower bound on the net entropy production rate, ṡ. Calculating E does not require knowledge of the form of the underlying dynamics, and it is easy to compute for many types of data, including random variables, such as the positions of driven colloidal particles [53] (Supplementary Note 9, Supplementary Figs. 8 and 9, Supplementary Table 5), and random fields, such as spatially heterogeneous protein concentrations in cells [54]. Furthermore, we stress that we are only able to resolve the irreversibility present in the observable dynamics of our chemical example. As discussed above, the presence of hidden dynamics leads to underestimates of the irreversibility measured via Eq. (1) due to the data processing inequality [55]. Using other observable information, such as asymmetric transition rates [56] or the ratio of populations in observed states under stalled conditions [46] in Markov jump processes, can give tighter bounds on the entropy produced when unobserved, dissipative processes are present. While the examples considered here are simulations of 1+1 dimensional fields, there is nothing inherently different in the methodology if one were to analyze experimental data in two or three spatial dimensions, such as the 3+1 dimensional time series attained using lattice light-sheet microscopy [57].
In active matter, both living and non-living, the non-equilibrium dissipation of energy manifests in both time and space. With the method introduced here, which is compatible with widely used computational and experimental tools, we provide access to these underexplored modes of irreversibility that drive complex spatiotemporal dynamics.

Methods
Calculating E from data. Estimating E requires estimating frequency-space covariance functions, or cross-spectral densities (CSDs). Consider a set of M discrete, real variables measured over time, {X_i(t)}, where t = Δt, …, T with T = NΔt, and i = 1, …, M indexes the variables. We estimate the CSD using the periodogram,

$$\tilde{C}_{ij}(\omega_n) = \frac{1}{T}\, x_i(\omega_n)\, x_j(-\omega_n), \qquad (12)$$

where x_i(ω) = F{X_i(t) − ⟨X_i(t)⟩} are the Fourier transforms of the centered variables over the frequencies ω_n = 2πnT⁻¹ for n = [−N/2, N/2]. The periodogram is known to exhibit a systematic bias and considerable variance in estimating the true CSD. Both of these issues can be mitigated by smoothing C̃_ij via convolution with a Gaussian of width σ, which is equivalent to multiplying C̃_ij in the time domain by a Gaussian of width σ⁻¹. We then define our smoothed CSD as

$$\hat{C}_{ij}(\omega) = \left( G_\sigma * \tilde{C}_{ij} \right)(\omega), \qquad (13)$$

where G_σ is a normalized Gaussian with standard deviation σ. Once Ĉ is calculated, we use the discrete version of Eq. (3) to estimate E. The extension to higher-dimensional data proceeds analogously: we take into account the spatial lattice on which the data are measured in Eq. (12), convolve the result with a multivariate Gaussian in Eq. (13), and finally estimate ṡ using the discrete version of Eq. (4). The choice of smoothing width, σ, should be guided by the maximum curvature seen in the structure factor, C_ij [58].
Bias in Ê and $\hat{\dot S}$. Our estimates Ê and $\hat{\dot S}$ are biased. The bias is found by calculating the expected value of $\hat{\dot S}$ for a system in equilibrium. To do this, we assume that the true covariance function is C_ij = δ_ij and that measurement noise, together with the finite sampling time and rate, gives rise to Gaussian noise in both the real and imaginary parts of C̃_ij(ω), obeying the symmetries required for C_ij to be Hermitian.
We only quote the results here and refer the reader to Supplementary Note 4 for the full derivation. For random variables, the bias grows as M(M − 1), where M is the number of variables, and depends on the maximum frequency available, ω_max, the width σ of the Gaussian used to smooth C̃(ω), and the total time T. For random fields, the bias additionally depends on the length L_i, the maximum wavenumber q_i^max, and the smoothing width σ_{q_i} in the i-th spatial dimension.
Simulation details. To simulate the driven Gaussian fields, Eq. (5), we nondimensionalize the system of equations using a time scale τ = (Dr)⁻¹ and a length scale λ = r^{−1/2}. We use an Euler-Maruyama algorithm to simulate the dynamics of the two fields on a periodic, one-dimensional lattice. We simulate Eq. (8) using Gillespie's algorithm [59] to create a stochastic trajectory through the (X, Y) phase plane with a well-mixed volume of V = 100. We calculate the true ΔS of any specific trajectory z = {m_j | j = 1, …, N} as follows. For each state m′, there exists a probability per unit time of transitioning to a new state m via a chemical reaction μ, denoted by W^{(μ)}_{m,m′}. At steady state, the true entropy produced is [44]

$$\Delta S_{\mathrm{true}} = \sum_{j=1}^{N} \ln \frac{W^{(\mu_j)}_{m_j, m_{j-1}}}{W^{(\mu_j)}_{m_{j-1}, m_j}}.$$

Note that ΔS_true is itself a random variable that depends on the specific trajectory. We estimate Ṡ_true by fitting a line to an ensemble average of ΔS_true (Supplementary Fig. 3) and compare it to $\hat{\dot S}$. We calculate Ṡ_blind by considering the "rate" at which a transition can occur as the sum over all the rates that give rise to the observed transition in (X, Y), i.e.,

$$\Delta S_{\mathrm{blind}} = \sum_{j=1}^{N} \ln \frac{\sum_{\{\mu_j | m_{j-1} \to m_j\}} W^{(\mu_j)}_{m_j, m_{j-1}}}{\sum_{\{\mu_j | m_{j-1} \to m_j\}} W^{(\mu_j)}_{m_{j-1}, m_j}},$$

where Σ_{μ_j | m_{j−1} → m_j} denotes a sum over all reaction pathways μ that give rise to the transition m_{j−1} → m_j. This procedure coarse-grains ΔS_true, giving ΔS_blind ≤ ΔS_true [47]. ΔS_blind is the maximum entropy production that can be inferred by any method that observes trajectories in (X, Y) but does not have access to the reaction pathways followed.
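A Gillespie simulation of the well-mixed reversible Brusselator, Eq. (8), can be sketched as follows. The propensities use standard mass-action combinatorics; the rate constants, bath concentrations, and initial condition below are placeholders rather than the parameter values used in the paper's figures.

```python
import numpy as np

def gillespie_brusselator(T, V=100, a=1.0, b=1.0, c=1.0,
                          k1p=1.0, k1m=1.0, k2p=1.0, k2m=1.0,
                          k3p=1.0, k3m=1.0, seed=0):
    """Stochastic trajectory of the reversible Brusselator, Eq. (8),
    in the (X, Y) plane using Gillespie's algorithm."""
    rng = np.random.default_rng(seed)
    x, y = int(a * V), int(a * V)      # arbitrary initial condition
    # state changes (dx, dy), one per reaction, in the order of props below
    deltas = [(1, 0), (-1, 0), (-1, 1), (1, -1), (1, -1), (-1, 1)]
    t, times, states = 0.0, [0.0], [(x, y)]
    while t < T:
        props = np.array([
            k1p * a * V,                         # A -> X
            k1m * x,                             # X -> A
            k2p * b * x,                         # B + X -> Y + C
            k2m * c * y,                         # Y + C -> B + X
            k3p * x * (x - 1) * y / V**2,        # 2X + Y -> 3X
            k3m * x * (x - 1) * (x - 2) / V**2,  # 3X -> 2X + Y
        ], dtype=float)
        total = props.sum()
        t += rng.exponential(1.0 / total)        # time to next reaction
        mu = rng.choice(6, p=props / total)      # which reaction fired
        x += deltas[mu][0]; y += deltas[mu][1]
        times.append(t); states.append((x, y))
    return np.array(times), np.array(states)
```

Accumulating ln(W_forward/W_backward) for each fired reaction along such a trajectory gives ΔS_true; summing rates over indistinguishable pathways before taking the logarithm gives ΔS_blind.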
To simulate the reaction-diffusion Brusselator, Eq. (10), we take a compartment-based approach [60] in which we treat each chemical species in each compartment as a separate species and treat diffusion events as additional chemical reaction pathways. We nondimensionalize time by τ = (k_1^+)⁻¹ and use a Gillespie algorithm to simulate all reactions on a one-dimensional periodic lattice with L sites. See Supplementary Tables 1-5 for all simulation parameters used in each figure.
Synchronization order parameter. The synchronization order parameter given in Eq. (11), r, is a function of the oscillator phase at every lattice site j at time t, θ j (t).
In order to calculate θ from our data, we measure the oscillator's phase with respect to the trajectory's mean position over time, namely

$$\theta_j(t) = \arctan\!\left( \frac{Y_j(t) - \langle Y \rangle}{X_j(t) - \langle X \rangle} \right),$$

where the averages ⟨X⟩ and ⟨Y⟩ are taken over space and time. This phase is then used to calculate r as given in Eq. (11).
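The phase extraction and order parameter can be sketched together. This minimal implementation assumes trajectory arrays X, Y of shape (sites, times) and uses the two-argument arctangent to resolve the correct quadrant; the synthetic trajectories at the end are illustrative, not simulation output.

```python
import numpy as np

def phases(X, Y):
    """theta_j(t) = arctan[(Y_j(t) - <Y>) / (X_j(t) - <X>)], with the
    means <X>, <Y> taken over space and time."""
    return np.arctan2(Y - Y.mean(), X - X.mean())

def sync_order_parameter(theta):
    """Kuramoto-style r: the site average (1/M) sum_j exp(i theta_j(t))
    at each time, with its magnitude then averaged over the duration of
    the data.  Near 0 for incoherent sites, near 1 when synchronized."""
    z = np.exp(1j * theta).mean(axis=0)   # site average at each time
    return float(np.abs(z).mean())        # then time average

# perfectly synchronized sites all trace the same circle in the (X, Y) plane
t = np.linspace(0, 20 * np.pi, 2000)
X = np.tile(np.cos(t), (100, 1))
Y = np.tile(np.sin(t), (100, 1))
r_sync = sync_order_parameter(phases(X, Y))   # close to 1
```

For M incoherent sites with independent phases, r instead scales down toward zero as roughly 1/√M.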

Data availability
The data that support the findings of this study are available from the corresponding authors upon reasonable request.