Abstract
Statistical mechanics provides an effective framework to investigate information processing in biochemical reactions. Within such a framework, far-reaching analogies are established among (anti-)cooperative collective behaviors in chemical kinetics, (anti-)ferromagnetic spin models in statistical mechanics and operational amplifiers/flip-flops in cybernetics. The underlying modeling – based on spin systems – has been proved to be accurate for a wide class of systems matching classical (e.g. Michaelis–Menten, Hill, Adair) scenarios in the infinite-size approximation. However, current research in biochemical information processing has been focusing on systems involving a relatively small number of units, where this approximation is no longer valid. Here we show that the whole statistical mechanical description of reaction kinetics can be reformulated via a mechanical analogy – based on completely integrable hydrodynamic-type systems of PDEs – which provides explicit finite-size solutions, matching recently investigated phenomena (e.g. noise-induced cooperativity, stochastic bistability, quorum sensing). The resulting picture, successfully tested against a broad spectrum of data, constitutes a neat rationale for a numerically effective and theoretically consistent description of collective behaviors in biochemical reactions.
Introduction
Since the pioneering work by Hopfield^{1} on kinetic proofreading and the early applications of stochastic techniques to reaction kinetics by Chay and Ho^{2} or Wyman and Phillipson^{3}, the combination of a number of recent significant results, both experimental (see e.g. refs 4, 5, 6, 7) and theoretical (see e.g. refs 8, 9, 10, 11), has boosted the current understanding of biochemical information-processing systems, namely of how the thermodynamics of biochemical reactions spontaneously encodes information processing.
These results stem from investigations scattered over different fields of biological research involving, for instance, intercellular^{8,12} and intracellular signalling^{5,10,13}, enzymatic cycles^{7}, ribo- and toggle switches^{4,14,15,16,17}, ultrasensitive mechanisms^{18,19,20}, DNA computing^{13,21}, transcriptional and regulatory networks^{22,23,24}, and more.
The theoretical description of such systems is typically based on stochastic approaches, e.g., Fokker–Planck equations suitably adapted to the cases of interest, leading to the chemical extension of the master equation approach (see e.g. refs 9,24 and 25 and references therein).
Restricting to steady states, an alternative approach relies on statistical mechanics, as suggested by C.J. Thompson in his seminal work^{26}. Indeed, statistical mechanics turns out to be particularly effective for the description of universal behaviors of a wide range of biological systems (from the extracellular level of neural^{27,28,29} or immune networks^{12,30,31}, to the intracellular level of gene regulatory and protein networks^{30,32,33}). Moreover, as recently observed in the case of large systems^{34,35}, the statistical mechanical approach plays the role of a general stochastic framework that naturally highlights the structural and conceptual analogies between response functions in biochemical reaction kinetics and transfer functions in cybernetics (see Figs 1 and 2), thus tacitly working as a translator between these two worlds, which is crucial to show how information is handled by these biochemical systems.
However, the current research in biochemical information processing has been recently attributing particular importance to systems involving a relatively small number of units and this implies that the standard statistical mechanical picture, given in the thermodynamic limit (where the role of intrinsic noise can be suppressed^{36}), is not accurate. One of the goals of the present work is to extend the theoretical framework developed in refs 34 and 35 to allow for the description of systems of finite sizes.
Before proceeding it is worth stressing that the statistical mechanical description of chemical kinetics pursued here is based on the canonical ensemble and is accomplished at the mean-field level. It is therefore clearly different from the statistical mechanical description based on the grand canonical ensemble introduced in the '70s^{37}. The advantage of the present formulation is that the use of a canonical framework allows us to establish structural bridges between the phenomenology of these biochemical systems on one side and the well-consolidated theory of information-processing systems (i.e. cybernetics) on the other side, since the latter (inflected for instance in terms of neural networks and learning machines) is mainly developed within the canonical formalism^{28}.
Along the same lines we choose to adopt the mean-field perspective: admittedly, this implies a cost (as we give up a detailed description of the true architecture of the system under consideration and we deal with effective parameters to be properly renormalized), yet the reward lies in the possibility to directly compare the emerging response functions with transfer functions in cybernetics and therefore to understand how information is processed through a given reaction. Further, through direct calibration of the renormalized key parameters over a small subset of data, the whole theory becomes of immediate experimental applicability.
Let us now present in more detail the statistical-mechanical framework adopted in this work and the results we obtain through this path. We consider large macromolecules (e.g., polymers, proteins, etc.), whose binding sites are dichotomic variables (Boolean logical operators to be thought of as Ising spins) that can be either empty or occupied by a ligand (e.g., a substrate): the log-concentration of free ligands plays the role of the external field in spin systems. The binding sites are not necessarily independent: in the special case where they are independent, the emerging kinetics follows the so-called Michaelis–Menten law, and this is naturally captured by the lack of interaction between spins within the statistical mechanical route of formalization. On the contrary, if there is interaction, the kinetics can be cooperative (for positive values of the interaction) or anti-cooperative (for negative values of the interaction), and these behaviors are typically associated with various names, e.g., Hill, Koshland, and Adair reactions (see e.g. ref. 37). More precisely, binding sites with identical structure/function (e.g., the four oxygen-capturing arms of the hemoglobin) are modeled as a unique spin system, where each spin feels the external field (the oxygen log-concentration in this example) and interacts imitatively with other spins; conversely, binding sites with different structure/function are modeled as a spin system fragmented into more parties, where spins belonging to different parties compete to bind the ligand through anti-imitative interactions (e.g., for the two insulin-capturing α-subunits in the insulin receptor, the system describing the receptor is split into two main parties, i.e. arms, that anti-cooperate).
This scheme naturally leads to a statistical mechanical scenario consisting of multipartite spin systems (on which, in turn, there has also been recent interest in the statistical mechanics community^{38,39,40,41,42,43}), with (positive or null) intra-party interactions and (positive, negative or null) inter-party interactions; we will focus on the two-party case, as schematized in Fig. 3. The behavior of this system in the presence of an external magnetic field h (i.e., the input) can be captured in terms of the average magnetization m (i.e., the output), which keeps track of the underlying collective features among spins. From a kinetics perspective, the function m(h) recovers the saturation function of a system of binding sites as the concentration of free ligands is tuned. The isotherm of the magnetization and the saturation function are just two different inflections of the same transfer function, namely of the same input–output relation.
In this paper the exact, explicit solution of these magnetic systems (and of the reaction kinetics they encode) is given also at finite volumes. In particular, we derive a set of linear multidimensional partial differential equations for the partition function of the bipartite spin model, which are valid for any (either finite or infinite) number of spins N. The solution of the model is specified via a suitable initial datum. We show that in the thermodynamic limit N → ∞ the free energy related to these systems fulfils a set of Hamilton–Jacobi type equations that is completely integrable by the method of characteristics, and we provide the explicit solutions in terms of partial magnetizations, which in turn satisfy a set of two coupled equations of state. Via this route, the way these reactions handle information processing becomes transparent; such equivalence is summarized in Fig. 2 by means of illustrative examples.
Having established the general theory, we first successfully compare the expected behavior of our theoretical saturation function with experimental results for several examples of cooperative, anti-cooperative and ultrasensitive kinetics of large systems: we show that the key parameters of the theory always match the standard empirical indicators with remarkable accuracy (e.g., the coupling constant used in the Hamiltonian spin-like representation of the reaction recovers the Hill coefficient of the standard kinetics literature).
Then, we proceed by considering small systems (i.e. away from the thermodynamic limit) and we show that the solution at finite sizes can still be obtained explicitly (by separation of variables once an Ansatz on the initial state is given): the framework obtained in this way is used to explore and address several phenomena recently highlighted in the experimental literature on small-system kinetics (e.g., we recover the fact that systems devoid of cooperativity can still display a cooperative-like behavior and, possibly, bistability phenomena due to stochastic effects, as for instance discussed in ref. 44).
The paper is organized as follows. First, we provide a brief survey of the statistical mechanical formulation of reaction kinetics as proposed by Thompson in ref. 26 and later developed in refs 34 and 35. Then, we derive differential identities for the partition function and the free energy related to these models and construct their solutions both in the thermodynamic limit and for small systems. Next, we discuss implications of our theory such as noiseinduced cooperativity and bistability, signal amplification and noise suppression. Finally, we summarize and comment on our results and discuss further outlooks.
Results
Standard chemical kinetics: statistical mechanical formalization and cybernetic interpretation
Hereafter we review the main concepts of chemical kinetics, statistical mechanics and cybernetics which will be needed in the following; we refer to classical textbooks for a more extensive treatment (see e.g. refs 36, 45, 46, 47, 48).
Chemicalkinetics framework
In many macromolecules (i.e. polymers, proteins, etc.) ligands bind in a non-independent way. In particular, if, upon a ligand binding, the probability of further binding (by other ligands) is enhanced, as for example in the paradigmatic case of hemoglobin^{26}, the system is said to exhibit positive cooperativity; vice versa, cooperativity is negative when further binding of ligands is inhibited^{49}, as in the case of some insulin receptors^{50} (see Fig. 3 for a schematic representation). The fundamental mechanisms underlying cooperativity depend, in general, on the microscopic details of the system under consideration. For instance, in the case of polymers, if two neighboring docking sites bind charged ions, the electrostatic attraction/repulsion may be responsible for positive/negative cooperativity.
In complete generality, let us consider a model where several identical hosting (macro)molecules P can bind overall N identical small molecules S (whose concentration is denoted by [S] ≡ α) on their structure; calling P_{j} the complex of a molecule P with j ∈ [0, N] molecules attached, at the chemical equilibrium we have

P_{j−1} + S ⇌ P_{j},  j = 1, …, N.
For the sake of simplicity, let us consider the case j = 1. The time evolution of the concentration [P_{0}] of the unbound protein P_{0} is governed by the equation

d[P_{0}]/dt = k_{−}^{(1)}[P_{1}] − k_{+}^{(1)} α [P_{0}],
where k_{+}^{(1)} and k_{−}^{(1)} are, respectively, the forward and backward rate constants for the state j = 1, and their ratio defines the association constant K^{(1)} = k_{+}^{(1)}/k_{−}^{(1)}. In the steady state d[P_{0}]/dt = 0, we have

[P_{1}] = K^{(1)} α [P_{0}].
For generic j > 0 we have

[P_{j}] = K^{(j)} α [P_{j−1}].    (2)
Therefore, in general, one can write [P_{1}] = [P_{0}]K^{(1)}α, [P_{2}] = [P_{1}]K^{(2)}α = [P_{0}]K^{(1)}K^{(2)}α^{2}, and, by extension, [P_{j}] = [P_{0}] α^{j} ∏_{k=1}^{j} K^{(k)}. In many practical situations, a direct measure of [P_{j}] is not feasible. A convenient experimental observable is then given by the average number n̄ of ligands that, at equilibrium, are bound to the macromolecule(s), and this is given by

n̄ = Σ_{j=1}^{N} j [P_{j}] / Σ_{j=0}^{N} [P_{j}] = Σ_{j=1}^{N} j α^{j} ∏_{k=1}^{j} K^{(k)} / [1 + Σ_{j=1}^{N} α^{j} ∏_{k=1}^{j} K^{(k)}].    (3)
The last expression in the previous equation is the well-known Adair’s equation^{36}, obtained by iterating the relation (2).
In this work (as standard) we will focus on the normalized version of n̄, called the saturation function Y and defined as

Y = n̄/N.
Unless otherwise stated, Y represents the expected fraction of occupied sites in the whole set of molecules P under investigation, namely the global system’s response (i.e. the output signal) to the stimulus provided by the ligand’s concentration α (i.e. the input signal).
In a non-cooperative system, one expects independent and identical binding with microscopic association constant K, and one can write K^{(j)} = (N − j + 1)K/j. In this case, Adair’s equation (3) for n̄ and Y reads as

n̄ = N K α/(1 + K α),   Y = K α/(1 + K α).    (6)
The latter expression is the well-known Michaelis–Menten equation^{36}.
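As a sanity check, the reduction of Adair's expression to the Michaelis–Menten form under the non-cooperative choice K^{(j)} = (N − j + 1)K/j is easy to verify numerically. The sketch below (Python; the function names are ours, chosen for illustration, not notation from the paper) evaluates the Adair saturation directly from the sequence of association constants:

```python
def adair_Y(alpha, Ks):
    # Adair saturation: Y = (1/N) * sum_j j*c_j*alpha^j / sum_j c_j*alpha^j,
    # with c_j = prod_{k<=j} K^(k) and c_0 = 1.
    N = len(Ks)
    c, prod = [1.0], 1.0
    for K in Ks:
        prod *= K
        c.append(prod)
    num = sum(j * c[j] * alpha ** j for j in range(N + 1))
    den = sum(c[j] * alpha ** j for j in range(N + 1))
    return num / (N * den)

def michaelis_menten_Y(alpha, K):
    return K * alpha / (1 + K * alpha)

# Non-cooperative (independent sites): K^(j) = (N - j + 1) K / j
N, K = 4, 2.0
Ks = [(N - j + 1) * K / j for j in range(1, N + 1)]
for alpha in (0.01, 0.1, 1.0, 10.0):
    assert abs(adair_Y(alpha, Ks) - michaelis_menten_Y(alpha, K)) < 1e-12
```

The agreement is exact because ∏_{k=1}^{j} K^{(k)} = K^{j} C(N, j), so the Adair sums collapse into binomial identities.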
Clearly, the kinetics becomes far less trivial as soon as interactions among binding sites occur. For the sake of simplicity, if we consider the limiting case where intermediate steps can be neglected, i.e.

[P_{j}] ≈ 0  for 0 < j < N,

we get that

Y = K̃ α^{N}/(1 + K̃ α^{N}),  with K̃ = ∏_{k=1}^{N} K^{(k)}.
More generally, accounting also for some degree of sequentiality^{36}, one obtains the well-known Hill equation

Y = (K α)^{n_{H}}/[1 + (K α)^{n_{H}}],    (9)
where n_{H}, referred to as the Hill coefficient, represents the effective number of interacting binding sites. We note that, by posing n_{H} = 1, equation (9) recovers the Michaelis–Menten law (6). In fact, in the non-cooperative case, the number of binding sites effectively interacting is just one (i.e., there are no true interactions). For n_{H} > 1 the kinetics is said to be cooperative, while for n_{H} < 1 it is said to be anti-cooperative. If n_{H} ≫ 1 the kinetics is said to be ultrasensitive.
Given a set of experimental measures of the saturation function Y(α), n_{H} is estimated as the slope of log[Y/(1 − Y)] versus log α, measured at half-saturation (namely, when Y = 1/2).
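This estimation procedure is straightforward to implement. The following sketch (Python; names are illustrative) recovers the Hill coefficient from a sampled saturation curve via a finite-difference slope of the Hill plot at half-saturation:

```python
import math

def hill_Y(alpha, K, nH):
    # Hill saturation curve, used here to generate synthetic data
    return (K * alpha) ** nH / (1 + (K * alpha) ** nH)

def estimate_nH(Y, alpha_half, eps=1e-4):
    # slope of log[Y/(1-Y)] versus log(alpha), evaluated at half saturation
    a1, a2 = alpha_half * (1 - eps), alpha_half * (1 + eps)
    logit = lambda y: math.log(y / (1 - y))
    return (logit(Y(a2)) - logit(Y(a1))) / (math.log(a2) - math.log(a1))

K, nH = 2.0, 2.0
alpha_half = 1 / K  # Y = 1/2 when K*alpha = 1
est = estimate_nH(lambda a: hill_Y(a, K, nH), alpha_half)
assert abs(est - nH) < 1e-6
```

For an exact Hill curve the Hill plot is a straight line of slope n_{H}, so the finite-difference estimate is exact up to floating-point error; for real data the slope is taken from a local fit around Y = 1/2.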
The statistical mechanics formalization
Statistical mechanics can provide a convenient language to describe biochemical kinetics and frame it into a (noisy) logical scaffold (see refs 34 and 35 for details). In the present section we briefly review how the main concepts of mean-field cooperative statistical mechanics (i.e. ferromagnetism) apply to the scenario of cooperative reaction kinetics.
We start from the microscopic model described above, consisting of an ensemble of identical macromolecules carrying overall N binding sites labeled by i = 1, 2, …. Macromolecules are in a solution with smaller molecules (the substrate) and each binding site can accommodate only one molecule of the substrate. Notice that, following the mean-field approach, here we do not distinguish binding sites belonging to the same molecule, but we treat the whole set of binding sites, pertaining to the whole set of molecules, at once.
Let us associate to each binding site an Ising spin σ, where σ_{i} = +1 if the i-th site is occupied and σ_{i} = −1 if it is empty. A particular configuration of the system is specified by the set {σ}, through which all the classical observables can be introduced. For instance, the magnetization is defined as m = (1/N) Σ_{i=1}^{N} σ_{i} and is directly related to the occupation number of binding sites by

n̄ = N (1 + ⟨m⟩)/2,
in such a way that the saturation function (in terms of the magnetization) reads as

Y = (1 + ⟨m⟩)/2.    (11)
For simplicity, we first consider a system which does not exhibit collective behavior, that is, no interaction between binding sites is present and only the interaction between the binding sites and the ligand occurs. The external field of statistical mechanics, namely a scalar parameter h, is introduced as the log-concentration log(α) of free-ligand molecules^{26}. As there are no spin-spin interactions, this system is mapped into a system of free spins σ thermalizing in the presence of a magnetic field h, whose energy function (or Hamiltonian) is given by

H_{N}(σ|h) = −h Σ_{i=1}^{N} σ_{i}.    (12)
Note that h is assumed to be independent of the site index, meaning that binding sites interact uniformly and independently with the ligands; this also means that the timescale for diffusion is fast with respect to the time scale for thermalization.
Once the Hamiltonian representing the model under investigation is defined, we can apply the standard statistical mechanics machinery, namely the Maxwell–Boltzmann probability distribution P({σ}) associated to a generic system state {σ}

P({σ}) = e^{−βH({σ})}/Z,
where Z is the normalization, called the partition function, and β tunes the level of noise in the system, in such a way that for β → 0 the probability measure becomes flat and every state has the same chance to occur, while for β → ∞ the system collapses into the configurations corresponding to the minima of the Hamiltonian.
Throughout the paper, given a generic function of the spins f(σ), the bracket averages 〈f(σ)〉 represent the averages over the Maxwell–Boltzmann probability distribution.
An explicit evaluation of the partition function Z allows for a direct evaluation of the free energy F related to the model encoded by H, that is, F = (1/N) ln Z (notice that, in our derivation, just for mathematical convenience, we work with the negative free energy F = −βf, where f = −(1/(βN)) ln Z is the standard free energy; hence, max(F) just corresponds to min(f)). In general, F depends on the system configuration {σ} and on the system parameters, namely the external field, the level of noise, and the possible coupling between spins. Once these parameters are set, the maximization of F provides the related equilibrium state(s). This is because the free energy F is nothing but F = S/N − βU/N, where U = ⟨H⟩ is the internal energy of the system (i.e. the Maxwell–Boltzmann average of the Hamiltonian at given noise β) and S is the entropy; thus, maximizing F implies looking for the minimum of U and, simultaneously, the maximum of S (at given β).
In particular, for the system described by the Hamiltonian (12), the energy is minimized by those configurations where spins are aligned with the external field. Thus, in the absence of any source of noise, the system relaxes to a state where ⟨m⟩ = 1 (i.e., σ_{i} = +1, ∀ i) if h > 0, or to a state where ⟨m⟩ = −1 (i.e., σ_{i} = −1, ∀ i) if h < 0. In the presence of noise the equilibrium state requires, beyond energy minimization, also entropy maximization: setting for simplicity β = 1 (without loss of generality, since this is equivalent to rescaling h → hβ) one can find that the equilibrium state corresponds to ⟨m⟩ = tanh(h) (see e.g. ref. 46).
Actually, one could reach the same result without relying on statistical mechanics, as explained hereafter. First, we need to relate h to the ligand concentration α. As mentioned above, the energy is minimized by those configurations where spins are aligned with the external field: if h > 0 molecules tend to bind to diminish the energy, while if h < 0 bound molecules tend to leave occupied sites; this suggests that h plays the role of the chemical potential for the binding of the ligand molecules on the docking sites. Moreover, as observed in refs 26,34 and 35, the chemical potential can be expressed as the logarithm of the concentration of the substrate, upon proper normalization, namely

h = (1/2) log(α/α_{0}),    (13)
where α_{0} stands for the value of ligand concentration such that binding sites have the same probability of being occupied or unoccupied.
Crucially, under the mean-field approximation (and this is the only case, apart from linear-chain models which, still, do not exhibit phase transitions and are of moderate interest), the probability P({σ}) of the configuration {σ} can be factorized as P({σ}) = ∏_{i} P(σ_{i}). One can therefore focus on the single-spin probability and, following^{26}, state that P(+1) is proportional to the concentration α and that P(−1) is proportional to the inverse of the concentration α, that is, P(+1) = Ce^{+h} and P(−1) = Ce^{−h}, where C is fixed in such a way that P(+1) + P(−1) = 1, i.e. C = 1/(2cosh(h)). Then, the expression ⟨m⟩ = P(+1) − P(−1) implies that

⟨m⟩ = tanh(h).
Exploiting this result, the average saturation function (see Eq. (11)) reads as

Y = [1 + tanh(h)]/2 = α/(α + α_{0}).    (14)
Using Eq. (13), and recalling that tanh(x) = [exp(2x) − 1]/[exp(2x) + 1], it is immediate to check that Eq. (14) coincides with the Michaelis–Menten equation (6) with the particular choice K = 1/α_{0}. This is perfectly consistent with the underlying assumption of independent binding sites (i.e. no couplings among spins) tacitly made when we defined the system under study, via the Hamiltonian (12).
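This identification can also be confirmed numerically. A minimal sketch (Python), assuming only the mapping h = (1/2)log(α/α_{0}) and the choice K = 1/α_{0} stated above:

```python
import math

def Y_spin(alpha, alpha0):
    # saturation from the free-spin description
    h = 0.5 * math.log(alpha / alpha0)  # log-concentration as external field
    return (1 + math.tanh(h)) / 2

def Y_mm(alpha, alpha0):
    # Michaelis-Menten saturation with K = 1/alpha_0
    K = 1 / alpha0
    return K * alpha / (1 + K * alpha)

alpha0 = 0.5
for alpha in (1e-3, 0.1, 0.5, 3.0, 100.0):
    assert abs(Y_spin(alpha, alpha0) - Y_mm(alpha, alpha0)) < 1e-12
```

The two curves coincide identically, since (1 + tanh(h))/2 with h = (1/2)log(α/α_{0}) equals α/(α + α_{0}).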
Let us now generalize this scenario by introducing the simplest possible two-body interaction, given by the Hamiltonian

H_{N}(σ|h) = −(J/N) Σ_{i<j} σ_{i} σ_{j} − h Σ_{i=1}^{N} σ_{i}.    (15)
The Hamiltonian (15) represents the well-known Curie–Weiss theory of ferromagnetism^{46}. The first sum on the right-hand side of (15) accounts for all possible N(N − 1)/2 interacting pairs of spins, and the coupling is homogeneous, as the constant J is the same for all pairs. If J > 0 the configurations where spins are aligned are favored, and this choice naturally leads to a theory for cooperative kinetics, while J < 0 favors the configurations where spins compete and are misaligned, thus working for the anti-cooperative case. Clearly, models with different values of J represent different chemical systems.
As is well known, the condition of minimization for the Curie–Weiss free energy in the thermodynamic limit N → ∞ yields the so-called self-consistency equation (see e.g. ref. 46)

⟨m⟩ = tanh[β(J⟨m⟩ + h)].    (16)
Recalling the mapping (11), and reabsorbing β by the rescaling βJ → J and βh → h, the previous equation translates into the reaction kinetics vocabulary as

Y = (α/α_{0}) e^{2J(2Y−1)} / [1 + (α/α_{0}) e^{2J(2Y−1)}].    (17)
Eq. (16) implies that, at low noise levels, the Curie–Weiss model exhibits an abrupt change in the magnetization as a function of h. More precisely, a second-order phase transition occurs at h = 0 (i.e., α = α_{0}), so that m, and then Y, is continuous while its derivative diverges as J reaches the critical value J_{c} = 1. The phase transition is first order if, at fixed J > J_{c}, m (and then Y) is viewed as a function of h. In this case, for J > J_{c}, a jump occurs at h = 0 and the reaction is ultrasensitive (its transfer function mirrors that of an ON/OFF switch).
As a robustness check, we verify that, in the absence of interaction, the model is consistent with the classical Michaelis–Menten kinetics. Indeed, Eq. (17) can be written as

Y/(1 − Y) = (α/α_{0}) e^{2J(2Y−1)},    (18)
and, setting J = 0, the equation above clearly reduces to Eq. (6) with K = 1/α_{0}. In this case, no phase transitions occur and the model turns out not to be suitable to describe complex biochemical reactions.
As anticipated, in order to provide quantitative information about how a system actually departs from the simplest Michaelis–Menten framework, the so-called Hill coefficient is introduced, defined as the slope of log[Y/(1 − Y)] versus log α at half-saturation, which can be recast as

n_{H} = d log[Y/(1 − Y)]/d log α |_{Y=1/2} = 1/(1 − J),    (19)
where the last expression was obtained using Eq. (17). It is straightforward to check that the Hill coefficient for the Michaelis–Menten model, obtained for J = 0, is n_{H} = 1, as expected. On the other hand, a positive value of J implies positive cooperativity and the closer J is to the critical value J_{c} = 1 (from below), the stronger the cooperativity exhibited by the system; values of J larger than 1 correspond to discontinuous saturations (i.e. systems showing ultrasensitive responses).
We stress that the relation (19) provides a crucial link between the experimentally accessible quantity n_{H} and the theoretical parameter J, namely the coupling strength of the underlying meanfield spin description.
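The relation n_{H} = 1/(1 − J) can be checked directly by solving the self-consistency equation numerically and measuring the Hill-plot slope. A minimal sketch (Python; simple fixed-point iteration, valid in the cooperative regime J < 1, with α_{0} = 1 and β reabsorbed as in the text):

```python
import math

def solve_m(J, h, m0=0.0, tol=1e-14, max_iter=10000):
    # fixed-point iteration for the Curie-Weiss self-consistency m = tanh(J m + h)
    m = m0
    for _ in range(max_iter):
        m_new = math.tanh(J * m + h)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

def Y_of_alpha(alpha, J, alpha0=1.0):
    # saturation function via the mapping h = (1/2) log(alpha/alpha_0)
    h = 0.5 * math.log(alpha / alpha0)
    return (1 + solve_m(J, h)) / 2

def hill_coeff(J, eps=1e-5):
    # slope of log[Y/(1-Y)] vs log(alpha) at half saturation (alpha = alpha_0 = 1)
    logit = lambda y: math.log(y / (1 - y))
    y1, y2 = Y_of_alpha(1 - eps, J), Y_of_alpha(1 + eps, J)
    return (logit(y2) - logit(y1)) / (math.log(1 + eps) - math.log(1 - eps))

for J in (0.0, 0.25, 0.5, 0.8):
    assert abs(hill_coeff(J) - 1 / (1 - J)) < 1e-3
```

As J approaches 1 from below the numerically measured slope grows without bound, consistent with the onset of the ultrasensitive (discontinuous) regime.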
We finally observe that small-coupling expansions of the right-hand side of (17), or, equivalently, of (18), lead to suitable polynomial approximations for the saturation function, to be possibly compared with formulas obtained through classical (usually ODE-based) phenomenological approaches in chemical kinetics^{37}. For example, the first-order expansion of the expression (18) around J = 0 gives
which is equivalent to Adair’s equation (3) upon a suitable identification of the association constants and a rescaling of the parameters.
The cybernetic interpretation
The cybernetic interpretation of chemical kinetics, extensively treated in refs 34 and 35, is based on the behavioral analogy between the saturation curves (or binding isotherms) in chemical kinetics, the selfconsistencies (i.e. stateequations) in statistical mechanics and the transfer functions in electronics, see Fig. 2. Similarly to saturation curves, selfconsistency functions and transfer functions are nothing but relations between an input (the ligand concentration α in chemical kinetics, the magnetic field h in statistical mechanics and the input voltage V_{in} in electronics) and an output (the saturation function Y in chemical kinetics, the expected magnetization 〈m〉 in statistical mechanics and the output voltage V_{out} in electronics).
As an illustrative example, let us consider the paradigmatic operational amplifier. In a regime of small input voltage V_{in}, the output voltage V_{out} is described by the following transfer function

V_{out} = G V_{in},    (21)

where G = (1 + R_{2}/R_{1}) is referred to as the gain of the amplifier, R_{2} being the feedback resistor (allowing for true amplification as, if R_{2} = 0, then G = 1), as shown in Fig. 2 (left column). Similarities between this response function and the saturation curve in chemical kinetics can be clarified, using the statistical mechanics formalism, by comparing the mathematical structure of the response of these systems, namely comparing V_{out} in (21) with the average magnetization ⟨m⟩ of a ferromagnet in the linear regime [where x = (J⟨m⟩ + h) is small such that tanh(x) ~ x]:

⟨m⟩ ≈ J⟨m⟩ + h  ⇒  ⟨m⟩ ≈ h/(1 − J).
Observing that, for J < J_{c} ≡ 1, the Hill coefficient (19) can be approximated as n_{H} = 1/(1 − J) ~ (1 + J), we can write

⟨m⟩ ≈ n_{H} h ≈ (1 + J) h.
The conceptual term-to-term identification between inputs and outputs of the above transfer functions suggests that the Hill coefficient can be interpreted as the gain factor of the reaction (and we can already see why cooperativity is needed to amplify biochemical signals, as we require J > 0). Let us also note that Hill coefficients are typically of order ~10 or less, and this helps explain why the amplification of biochemical circuits is rather low^{51} compared to its electronic counterpart, where the gain can be of several orders of magnitude^{47}.
We emphasize that this behavioral analogy between biochemical kinetics, statistical mechanics and electronics goes far beyond the linear regime exploited above, because all the related response functions (i.e. binding isotherms, self-consistencies and transfer functions), far away from the linear regime, display intrinsic saturation effects (see Fig. 1, right panel): roughly speaking, if all the macromolecular binding sites are occupied, no matter how much further we increase the ligand concentration, the system will not vary its response, which is already maximal. The same applies to spin systems as, once all the spins are aligned with the magnetic field, a further increase in the latter cannot produce any shift in the system’s response; the same holds for operational amplifiers too as, once the output voltage reaches the collector tension, any further increase in the input voltage will no longer produce a variation in the response.
Moreover, apart from the operational amplifier, other devices also naturally fit the picture provided for describing biochemical reactions in different regimes. For example, if J > J_{c} (i.e., n_{H} ≫ 1 and G ≫ 1) the expected magnetization m(h) as well as the saturation function Y(α) develop a discontinuity at h = 0 and α = α_{0}, respectively (and we refer to ultrasensitive kinetics in biochemistry and first-order phase transitions in statistical mechanics). The corresponding limit for the operational amplifier is the analog-to-digital converter (ADC) at V_{in} = 0 (see Fig. 2, center column).
Another example is given by the simplest bistable flip-flop, constituted by two saturable operational amplifiers reciprocally inhibiting each other, in such a way that the output of one of the two amplifiers is used as the inverted input of the other: this is the simplest 1-bit memory device, as it is possible to assign a logical 0 (or 1) to one state and a logical 1 (or 0) to the other. As the two operational amplifiers (i.e. the two parties) interact by reciprocal inhibition, the translation of this circuit into a chemical circuit requires anti-cooperativity among the two parties, and it is thus naturally accounted for when anti-cooperativity is at work (see Fig. 2, right column).
When dealing with more complex reactions, such as reactions involving several components and several steps, our approach allows one to recognize the various basic ‘devices’ at work and to build the whole circuitry in a cascade fashion, so as to figure out how the equivalent ‘electronic circuit’ effectively processes information as the reaction proceeds. In particular, a suitable combination of the elementary bricks described above leads to the construction of (bio)logic chemical gates, as discussed in ref. 35.
Here, rather than exploring further that route, we keep the focus on the outlined fundamental bricks and analyze their behavior away from the thermodynamic limit.
The mechanical formulation of chemical kinetics
The analogies between the saturable systems described in the previous section have been developed in ref. 34 restricting to the thermodynamic limit. This is a plausible regime for experiments involving extensive solutions of reactants and, indeed, the thermodynamic limit also underlies standard approaches in early modeling (e.g., classical chemical-reaction kinetics)^{24,49,52,53}. However, thanks to novel experimental breakthroughs, a number of recent studies involve only small numbers of molecules, thus questioning the validity of any description in terms of such a large-scale limit^{54}. This broad class of novel experiments includes toggle switches^{4,44,55}, stochastic bistability^{8,22,23}, reactions with noise-induced cooperativity^{9,10,14} and the whole of quorum sensing^{12,25,56,57}, just to cite a few: interestingly, these novel experiments on small systems have highlighted complex behaviors which cannot be recovered from theories based on the thermodynamic limit. The scope of the current section is thus to extend the mapping described in the previous section in order to include small systems as well. This will be obtained in two steps. First, we frame the whole statistical mechanical treatment of chemical kinetics within the mathematical scaffold of classical mechanics; the latter, extensively relying on nonlinear PDE techniques, allows for an optimal mathematical control of these systems at finite sizes^{58,59,60,61,62}. Then, through this route, we extend the statistical mechanical treatment of reaction kinetics to the case of finite systems and solve for the latter. Finally, we check the overall robustness of our theoretical predictions by recovering the novel phenomena outlined above.
In the following we first present the general mapping, then we handle the simpler case N → ∞ to check that the known limit is properly recovered, and finally we address the finite-N case in detail. We emphasize that, in both regimes, the model is exactly solvable as a consequence of the complete integrability of the PDEs derived for the partition function and the (related) system’s free energy.
The differential identities for the partition function
Let us consider the statistical mechanical formulation of a system where binding sites are of two kinds and evaluate the exact partition function and the related free energy both in the thermodynamic limit (N → ∞) and at finite volumes (N < ∞). The present theory can be straightforwardly extended to account for an arbitrary number of different binding sites^{38,39,40} and to address chain reactions (whose implementation can be helpful in several biochemical information-processing systems^{21,51}).
The general model we consider is described by a Hamiltonian of the form
where σ and τ are Ising spins associated with the binding sites of type A and B respectively, J_{AB}, J_{AA}, J_{BB} are the pairwise coupling constants between sites (of type A and type B, both of type A, both of type B, respectively), and the external fields h_{A} and h_{B} correspond to the chemical potentials associated with party A and party B, respectively. The overall number of sites is N = N_{A} + N_{B}, in such a way that, setting γ = N_{A}/N, we have
The relative magnetizations are defined as
The partition function for the system can then be written as
where we have defined
By direct differentiation one can immediately verify that Z satisfies the following set of compatible PDEs
Based on formula (27), it can be proven that the solution Z(x_{A}, x_{B}, t_{A}, t_{AB}, t_{B}) must satisfy the initial condition
The free energy F of the system, defined as , consequently fulfils the set of equations
with initial condition F_{0}(x_{A}, x_{B}) = F(x_{A}, x_{B}, t_{A} = t_{B} = t_{AB} = 0), where
The thermodynamic limit
In this section we evaluate the free energy and the equations of state for the bipartite system (25) in the thermodynamic limit N → ∞, where the limit is taken keeping the ratio γ = N_{A}/N constant (that is, neither of the two parties is negligible w.r.t. the other). Hence, neglecting O(1/N) terms in equation (30), we have the following system of PDEs
The equations above are expected to provide an accurate description of the thermodynamic solution within regions of the thermodynamic variables {x_{A}, x_{B}, t_{A}, t_{AB}, t_{B}} where the second-order derivatives in equation (30) are bounded. We observe that the system of equations (32) is a completely integrable set of Hamilton–Jacobi type equations, and the general solution can be calculated via the method of characteristics (see e.g. ref. 63). Hence we find that the general solution can be written as follows
where 〈m_{A}〉 and 〈m_{B}〉 are functions of the thermodynamic variables x_{A}, x_{B}, t_{A}, t_{AB}, t_{B} defined by
which can also be written as
The functions 〈m_{A, B}〉(x_{A}, x_{B}, t_{A}, t_{AB}, t_{B}) are obtained extremizing the free energy F(x_{A}, x_{B}, t_{A}, t_{AB}, t_{B}), i.e.
or, equivalently,
where the function w(〈m_{A}〉,〈m_{B}〉) is uniquely fixed via the initial condition on 〈m_{A}〉 and 〈m_{B}〉
thus giving
By evaluating equation (34) for the function w given in (35), we obtain the selfconsistency equations in the form
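The explicit form of the self-consistency equations (36) is typeset in the original figures; as a minimal numerical illustration, the sketch below assumes the standard mean-field (hyperbolic-tangent) form for a bipartite system — the function name and coupling layout are ours, chosen for illustration, not taken from the paper.

```python
import math

def solve_self_consistency(J_AA, J_AB, J_BB, h_A, h_B,
                           m0=(0.1, 0.1), tol=1e-12, max_iter=10000):
    """Fixed-point iteration for bipartite mean-field equations of the
    assumed form
        m_A = tanh(J_AA*m_A + J_AB*m_B + h_A)
        m_B = tanh(J_BB*m_B + J_AB*m_A + h_B)
    (a schematic stand-in for the self-consistency equations (36))."""
    mA, mB = m0
    for _ in range(max_iter):
        new_A = math.tanh(J_AA * mA + J_AB * mB + h_A)
        new_B = math.tanh(J_BB * mB + J_AB * mA + h_B)
        if abs(new_A - mA) < tol and abs(new_B - mB) < tol:
            return new_A, new_B
        mA, mB = new_A, new_B
    return mA, mB

# With all couplings switched off the equations decouple: m_j = tanh(h_j)
mA, mB = solve_self_consistency(0.0, 0.0, 0.0, 0.5, -0.5)
```

A positive inter-party coupling J_{AB} drags the magnetization of the undriven party along with the driven one, which is the mechanism behind the cooperative saturation curves discussed below.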
We observe that the equations of state (34) can be interpreted as the hodograph solution (see e.g. ref. 64) of the following set of (1 + 1)-dimensional equations of hydrodynamic type for the partial magnetizations 〈m_{A}〉 and 〈m_{B}〉
The set of equations above is completely integrable as directly follows from the equation (32) with
By differentiating equations (36), where magnetizations 〈m_{A}〉 and 〈m_{B}〉 explicitly depend on x and t variables, one can show that all first and second derivatives
develop a gradient catastrophe on the hypersurface
The solutions to the algebraic equation (39) are singular points in the space of coupling constants (x_{A}, x_{B}, t_{A}, t_{B}, t_{AB}) where the partial magnetizations develop a classical shock. Such shocks, as they are known in hyperbolic wave theory, are associated with phase transitions in statistical mechanics^{39,58,60,61,62}. Equation (39) and the self-consistency equations (36) imply that the point of vanishing magnetization 〈m_{A}〉 = 〈m_{B}〉 = 0 develops a gradient catastrophe if
When later comparing these outcomes with recent experimental results, we will focus on models with no intra-party interactions (zero coupling between spins of the same type), i.e. t_{A} = t_{B} ≡ 0: this means that binding sites of the same type neither cooperate nor compete, while interactions between binding sites of different nature are retained and can be either cooperative or competitive. In this case the critical point simplifies into
By plugging the expression for w appearing in Eq. (35) into the equation (33), the required solution F reads as follows
Recalling (11), the relation between the average magnetizations 〈m_{A}〉 and 〈m_{B}〉 and the variables Y_{A}, Y_{B} is given by
Hereafter, in order to lighten the notation, the brackets for Y_{A, B} shall be dropped.
Before proceeding it is worth recalling that the free energy can be decomposed into , where the energetic and the entropic contributions are highlighted. In particular, the former takes the form
Following the minimum energy principle we look for the states that minimize . In particular, as long as J ≥ 0 and h_{A}h_{B} > 0, these states are given by Y_{A} = Y_{B} = 1 (if h_{A}, h_{B} > 0) and by Y_{A} = Y_{B} = 0 (if h_{A}, h_{B} < 0). Otherwise stated, if both types of binding sites display a positive affinity with the ligand and reciprocal cooperativity is either absent or positive, then, according to purely energetic prescriptions, both types of binding sites tend to bind (not to bind) the ligand if its concentration α is larger (smaller) than α_{0}.
Let us now consider the entropic contribution.
To see the effect of the maximum entropy principle at work instead, we differentiate with respect to both Y_{A} and Y_{B}, and impose the maximum condition, getting Y_{A} = Y_{B} = 1/2: as expected, the states favored by the entropic term are those corresponding to half saturation for both binding sites (the most disordered states available to the system).
In general, when both entropic and energetic contributions are considered, the equilibrium states stem from an interplay between the two as the various parameters are tuned, i.e. the noise level β, the relative size γ of the two parties, the half-saturation reference α_{0}, and the strength of the couplings J_{A}, J_{B}, J_{AB}.
As anticipated, the extremization of F accounts for both prescriptions at once, and this leads to (see the saturation curves in Eq. (36) and the mapping in Eq. (43))
These theoretical behaviors are successfully used to fit experimental data from positive-cooperative and negative-cooperative systems (see Fig. 4), and they are also tentatively tested on ultrasensitive kinetics (see Fig. 5), although ultrasensitivity requires particular care because of the breakdown of the self-averaging property of the saturation function.
In fact, within the statistical mechanical framework, we know that away from phase transitions the magnetization is a self-averaging order parameter: when N → ∞ the distribution of the magnetization becomes delta-peaked on its thermodynamic average value, while at finite N this distribution is only a Gaussian (still centered on 〈m〉) whose variance vanishes as N grows, scaling as O(1/N). However, when J → J_{c} = 1, namely when the system exhibits a phase transition (or when a shock develops, in the language of nonlinear waves), the variance diverges as N → ∞. In practice, this means that fluctuations are suppressed away from phase transitions but, exactly at the critical point (i.e. J = J_{c} and h = 0), this suppression no longer holds and fluctuations in the order parameter grow indefinitely with the size.
Analogous arguments apply to Y as well, since Y and m are linearly related. The criticality (whose signatures emerge even in real systems as N gets large, α = α_{0} and J → J_{c}) then has important consequences also from a practical perspective. For any J > J_{c}, as larger and larger systems are considered, the saturation function Y(α) displays a discontinuity at α = α_{0}: at that point the system jumps from an (almost) empty state to an (almost) fully occupied state (the jump is a true discontinuity of Y(α) at α = α_{0} only for N → ∞, and the degree of occupation of the two extremal states depends on the level of noise). The lack of smoothness in the function Y(α) makes the application of regression techniques awkward. Thus, although from a biochemical perspective ultrasensitive reactions are merely particularly strong cooperative reactions, from a mathematical perspective these two types of reactions are actually very different.
In the next subsection we develop the small-N theory, which offers a more refined level of description for these critical systems; in particular, we will succeed in evaluating, via shock theory, the “jump” that Y(α) experiences at α = α_{0} at finite volume N: this results in a practical instrument for modern experiments on reaction kinetics in small systems.
Finite-size solution
Here we assume that N is finite and fixed and we study the exact solution of the system (28) with the initial condition (29). Let us remark that the initial condition (29) can be equivalently written as
where, we recall, N = N_{A} + N_{B}. Observing that the initial condition is separable in the variables x_{A} and x_{B}, we then look for separable solutions of the form
Substituting the previous expression into the equation (28) we obtain
The required solution is
where
In particular, the finite-size magnetizations
take the following explicit expressions
Notice that, by fixing t_{A} = t, t_{AB} = t_{B} = 0, x_{B} = 0 and N_{A} = N, we recover the finite-size solution to the Curie–Weiss model, as expected. Also, it is immediate to recognize that exp[Λ_{k, l}(x_{A}, x_{B}, t_{A}, t_{B}, t_{AB})]/Z plays the role of the finite-size canonical probability of the configuration with k and l bound sites out of N_{A} and N_{B}, respectively (corresponding to magnetizations 2k − N_{A} and 2l − N_{B}).
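The Curie–Weiss reduction mentioned above can be made concrete: at finite N the partition function is a single sum over the N + 1 magnetization sectors. The sketch below assumes the standard Curie–Weiss weights exp(JM²/2N + hM) with M = 2k − N (couplings already rescaled by β); the function name is ours.

```python
import math

def finite_cw_magnetization(N, J, h):
    """Exact finite-N magnetization of a Curie-Weiss system, summing the
    canonical weights C(N,k) * exp(J*M^2/(2N) + h*M) over the sectors
    M = 2k - N, k = 0..N (the monopartite analogue of Eq. (50))."""
    Z = 0.0
    M_avg = 0.0
    for k in range(N + 1):
        M = 2 * k - N
        w = math.comb(N, k) * math.exp(J * M * M / (2 * N) + h * M)
        Z += w
        M_avg += M * w
    return M_avg / (Z * N)

# For J = 0 the spins are independent and <m> = tanh(h) at any N.
m = finite_cw_magnetization(50, 0.0, 0.3)
```

The same double-sum structure, with binomials C(N_A, k)·C(N_B, l), gives the bipartite finite-size probabilities discussed in the text.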
Fitting Eq. (50) to ultrasensitive kinetics (as shown in Fig. 5) produces significantly better results than using the infinite-size counterpart coded by Eq. (47).
Implications of the theory for small systems
The successful comparison between the experimental saturation plots and the predictions from the self-consistencies obtained in the previous section provides a sound check of the analogy developed. Now, we aim to go further and recover novel emerging phenomena whose experimental evidence is well documented. In particular, we focus on cooperative-like and bistable behaviors induced by noise and stochastic effects^{4,8,22,23,44,55}. Next, this phenomenology is further investigated in its relationship with quorum sensing^{12} and we finally offer a cybernetic interpretation of this kind of process: this allows us to provide a natural and robust explanation of how, during a biochemical reaction, the signal (carried by the input) is amplified, while any noise is spontaneously attenuated.
Noise-induced cooperativity and bistability
It is well known that cooperative binding in real systems (i.e. at finite sizes) induces a bistable behavior, which we call cooperativity-induced bistability. On the other hand, growing attention has recently been drawn by the evidence that bistability can also occur without cooperativity (i.e. it may happen in systems whose binding sites do not interact at all, simply as a consequence of intrinsic stochasticity, see e.g. ref. 44 and references therein). We call this phenomenon noise-induced bistability. Indeed, these two kinds of bistability display a significant empirical resemblance, which suggests that stochasticity can play a role in the effective cooperativity. However, the two phenomena are conceptually different, as can be inferred from the present statistical mechanical treatment. The simplest way to see this is to start from the monopartite system discussed in Sec. 2 and compare the self-consistent expression (17) with the saturation function of cooperative systems (9). Both expressions are recalled hereafter for clarity:
Notice that in Eq. (51) the explicit β is retained (i.e., the rescaling βJ → J and βh → h is no longer applied) since here we are focusing on the role of β.
Now, it is easy to see that stochastic effects may drift the system away from Michaelis–Menten behavior (toward a cooperative-like one), even if there is no cooperation among binding sites. In fact, assuming that the coupling is strictly zero, i.e. setting J ≡ 0 in the self-consistency (51), we get
hence, recalling that , we have
The last expression coincides with the Michaelis–Menten law (6) as long as β = 1 and α_{0} coincides with the Michaelis constant (defined as the ratio between the dissociation and association constants of the reaction considered, which corresponds to the concentration of the ligand at which the reaction rate is half its maximum value). More generally, letting β vary (hence raising or lowering stochastic effects in the system), we can recast the previous expression as
which recovers the Hill law (9) as long as we take . Focusing on the dependence of Y on the concentration α, we find that β plays the same role as the Hill coefficient: the smaller the stochasticity (i.e. the larger β) the more “cooperative” the system. The reason for this behavior is apparent in the statistical mechanical picture, as low levels of noise make the spins of the system align faster (driven by the external field, rather than by one another) by reducing the overall fluctuations. Of course, the displayed behavior depends globally, rather than locally, on the system energy, i.e. it cannot be ascribed to the mutual interaction of sites, as is the case for truly cooperative systems.
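The role of β as an effective Hill coefficient can be checked directly on the non-interacting saturation curve. The sketch below assumes the Hill-like form Y(α) = α^β/(α^β + α_{0}^β) implied by the discussion of Eq. (55); the slope at half saturation grows linearly with β, so lowering the noise steepens the sigmoid exactly as raising a Hill coefficient would.

```python
def hill_like(alpha, beta, alpha0=1.0):
    """Non-interacting (J = 0) saturation curve in the Hill-like form
    Y = alpha^beta / (alpha^beta + alpha0^beta): the noise parameter
    beta plays the role of an effective Hill coefficient."""
    return alpha**beta / (alpha**beta + alpha0**beta)

# Numerical slope at half saturation (alpha = alpha0 = 1); analytically
# it equals beta/4, the usual Hill-coefficient steepness.
eps = 1e-6
slope = lambda b: (hill_like(1 + eps, b) - hill_like(1 - eps, b)) / (2 * eps)
```

This makes quantitative the statement that the smaller the stochasticity (larger β), the more “cooperative” the curve looks, even with strictly zero coupling.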
In Fig. 6 (left panel) we show the qualitative behavior of the saturation function Y as a function of the concentration α and of the noise parameter β, according to Eq. (55).
Let us now proceed by showing that noise can even induce bistability. To this aim, we need to extend this approach to systems built of two parties (namely, to the general model coded in Eq. (25)); despite being mathematically more involved, the reasoning for multipartite systems proceeds much in the same way as above. Indeed, exploiting the mappings (43), the partial saturation curves can be shown to obey the Hill law for each party also in the case of two (or several) interacting sites.
To see this, let us set
where α_{j} denotes the concentration of free molecules for the party j = A, B and denotes the reference value representing equal probability of being occupied and unoccupied for the party j.
Assuming again J = 0 and N_{A} = N_{B} = N/2 for the sake of simplicity, the event of each site being bound (respectively unbound) is independent of the state of the other sites, thus, denoting with the i^{th} spin (i = 1, …, N_{j}) of the party j, the total concentration
factorizes, due to the independence of the probabilities (a major advantage of the mean-field approach), as
whence α_{tot} = α_{A} ⋅ α_{B} (and the same conclusion may be drawn for the total reference value ). As a consequence of the superposition principle for the external fields and of (13), we then have
whence, having set N_{A} = N_{B}, we can put by symmetry (see Eq. (13))
Exploiting again the assumption J = 0, that is, the absence of cooperativity, Eq. (47) reads
whence, replacing the hyperbolic tangent with its exponential expression, we recover the partial Hill laws
for the single parties. Notice that, in order for the analogy with Eq. (9) to be preserved, the partial Hill coefficients now coincide with 2β, rather than β, since each single party consists of precisely one half of the total number of spins. This is perfectly consistent with our previous remark on the global dependence on the noise β in the case of noise-induced “cooperativity”, and it is in sharp contrast with the fact that truly cooperative behavior is invariant with respect to the system size.
In order to graphically illustrate the properties of the partial saturation curves Y_{j} with respect to the noise β, fix e.g. h_{A} = h_{B} = h_{tot}/2, so that and . By fixing a value of α_{tot} (or, equivalently, the values of ), we can interpret the α_{j}’s as relative concentrations with respect to α_{tot}, hence rewriting Y_{B}(α_{B}) = Y_{B}(α_{tot}/α_{A}) ≡ Y_{B}(α_{A}) for α_{A} ∈ (0,1). This stochasticity-driven behavior for two-party systems is shown in Fig. 6 (right panel).
The physical reason for these switches lies in the intrinsic ergodicity of any dynamics involving finite-size systems: considering the simplest bistable system, its two free-energy minima are separated by a maximum (i.e., a barrier) whose height is NδF, and the characteristic time τ for the system to move from one minimum to the other typically scales exponentially with NδF; thus, solely in the thermodynamic limit does this barrier become infinite, so that the system gets trapped forever in one of these minima (and ergodicity becomes broken): this is clearly discussed from a biochemical perspective in e.g. ref. 16 (and references therein). We checked that our theory correctly reproduces these spontaneous switches in time, as reported in Figs 7 and 8 (left panel), where we recover the original plots of ref. 44 using the same parameters.
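The finite-size switching described above is easy to reproduce with a stochastic single-spin-flip dynamics. The sketch below runs heat-bath (Glauber) dynamics on a small Curie–Weiss system above the critical coupling — a generic illustration of barrier hopping, not the specific dynamics of refs 22, 44; all parameter values are ours.

```python
import math
import random

def glauber_trajectory(N=20, J=1.5, h=0.0, steps=200000, seed=7):
    """Heat-bath single-spin-flip dynamics for N mean-field spins with
    coupling J/N (beta absorbed into the couplings). At finite N the
    magnetization hops between the two free-energy minima; the mean
    residence time grows exponentially with N*deltaF."""
    rng = random.Random(seed)
    spins = [1] * N
    M = N  # total magnetization, tracked incrementally
    traj = []
    for t in range(steps):
        i = rng.randrange(N)
        s = spins[i]
        # local field from the other spins plus the external field
        heff = J * (M - s) / N + h
        # heat-bath probability of setting spin i to +1
        p_up = 1.0 / (1.0 + math.exp(-2.0 * heff))
        new = 1 if rng.random() < p_up else -1
        M += new - s
        spins[i] = new
        if t % 100 == 0:
            traj.append(M / N)
    return traj

traj = glauber_trajectory()
```

Over a long run at N = 20 the trajectory visits both basins (m ≈ ±0.96 for J = 1.5), reproducing the toggle-like spontaneous switches; increasing N rapidly suppresses them.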
In our vocabulary, this happens because the asymptotic evaluation of the magnetization, already for the simplest bistable monopartite spin model (i.e., a Curie–Weiss model), leads to an expansion of the form
for large N, where ξ is the solution to the selfconsistency equation
such that the free energy attains its minimum. Away from the thermodynamic limit, keeping only the first-order finite-volume correction, we can approximate
and η can be interpreted as a stochastic contribution. Indeed, the magnetization 〈m〉 has zero variance only in the limit N → ∞ (or in the pathological case of zero noise, η ≡ 0). Solving equation (59) for ξ and substituting into (58), we obtain (up to O(1/N))
Thus, as shown in Fig. 8 (right panel), a non-vanishing η breaks the symmetry of the isothermal curve for the average magnetization 〈m〉. In the absence of an external field, i.e. x = 0, a positive (negative) value of η selects a negative (positive) solution 〈m〉; hence random fluctuations of η are responsible for possible switches of the magnetization, and thus for the effective stochastic bistability typical of toggle switches.
To further check that our theory correctly reproduces such flip-flop (i.e. switch-like) behavior, we can study the nullclines of the simplest Langevin dynamics coupled to our system (25), namely
where τ is the typical time constant (see Fig. 7) and the randomness tuned by η is standard white noise, namely 〈η_{A}(t)η_{B}(t′)〉 ∝ δ(A − B)δ(t − t′): keeping C_{A} and C_{B} to label the two reactants’ concentrations as in the original paper^{22}, we can compare the nullclines of our system with those pertaining to the following archetypal toggle
where we have kept C_{A} and C_{B} to label the unnormalized reactants’ concentrations. The corresponding, by now well-known, results are illustrated in Fig. 9 (lower panel) for the case .
Letting (and F_{B} analogously), the system’s (in)stability at equilibria may be checked with the sign of the differential’s eigenvalues
an equilibrium point being a minimum (i.e. stable) if both eigenvalues are positive, a maximum (i.e. unstable) if both are negative, and a saddle point (unstable) if they have opposite signs.
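The eigenvalue classification above can be illustrated on the archetypal deterministic toggle. The sketch below uses the symmetric mutual-repression form dC_A/dt = k/(1 + C_B^n) − C_A (and likewise for B) with illustrative parameters k = 4, n = 2 chosen by us; here stability is read off the dynamical Jacobian, whose eigenvalues must both be negative at a stable equilibrium (the sign convention in the text refers instead to the Hessian of the free energy).

```python
import math

def toggle_rhs(cA, cB, k=4.0, n=2.0):
    """Deterministic part of the archetypal toggle: mutual repression."""
    return k / (1.0 + cB**n) - cA, k / (1.0 + cA**n) - cB

def jacobian_eigs(cA, cB, k=4.0, n=2.0):
    """Eigenvalues of the 2x2 Jacobian [[-1, dA_dB], [dB_dA, -1]] at (cA, cB);
    both negative means the equilibrium is dynamically stable."""
    dA_dB = -k * n * cB**(n - 1) / (1.0 + cB**n)**2
    dB_dA = -k * n * cA**(n - 1) / (1.0 + cA**n)**2
    disc = math.sqrt(dA_dB * dB_dA)  # product of off-diagonals is positive here
    return -1.0 - disc, -1.0 + disc

# A stable asymmetric equilibrium, found by iterating the nullcline maps
# from an asymmetric initial condition:
cA, cB = 4.0, 0.0
for _ in range(200):
    cA, cB = 4.0 / (1.0 + cB**2), 4.0 / (1.0 + cA**2)

# The symmetric point cA = cB = c, with c solving c*(1 + c^n) = k,
# is the saddle separating the two stable states (bisection):
lo, hi = 0.0, 4.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mid * (1.0 + mid**2) < 4.0:
        lo = mid
    else:
        hi = mid
c_sym = 0.5 * (lo + hi)
```

With these parameters the asymmetric state has both eigenvalues negative (stable), while the symmetric one has eigenvalues of opposite sign (saddle), which is precisely the bistable nullcline picture of Fig. 9.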
Condorcet theory: stochastic signal amplification and noise suppression
The Condorcet theorem was originally stated within a political context and concerns the probability that a given group of individuals arrives at a correct (binary, i.e. YES/NO) decision. The assumption underlying the simplest version of the theorem is that a group wishes to reach a decision by majority vote; one of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem states that if p > 1/2 (i.e., each voter is more likely to vote correctly), then adding more voters to the pool increases the probability that the majority decision is correct; in the thermodynamic limit, the probability that the majority votes correctly approaches 1 as the number of voters diverges. On the other hand, if p < 1/2 (i.e., each voter is more likely to vote incorrectly), then adding more voters makes things worse.
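The statement above is a direct binomial computation, which the following sketch makes explicit (the function name is ours; n is taken odd so that a strict majority always exists).

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters
    (n odd) votes correctly, each voting correctly with probability p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n // 2) + 1, n + 1))

# p > 1/2: adding voters drives the majority toward certainty;
# p < 1/2: adding voters makes the majority ever more unreliable.
```

For example, with p = 0.6 the majority of 101 voters is correct far more often than a single voter, while with p = 0.4 it is correct far less often.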
The Condorcet theorem is therefore a majority rule that can be used to infer reliable collective predictions about the better of two options and, in recent times, deep analogies have been developed between Condorcet theory in social systems and quorum sensing in biological information-processing systems: for instance, many species of bacteria use quorum sensing to coordinate gene expression according to the density of their local population^{56}, and decisions within the adaptive response of the immune system in mammals require the implementation of quorum sensing by lymphocytes^{12,57} (which collectively vote whether to attack or rest when a novel antigen is presented to them, and dialogue among themselves according to the rules of biochemical reactions). In any case, this distributed decisional capability should emerge as a natural consequence of the underlying reaction kinetics of all the agents involved, thus we should be able to understand its genesis from our general treatment.
Note first that this mechanism is also consistent with the biologically adapted^{18,34} electronic picture of signal amplification and noise suppression (where, in this analogy, the signal is the correct vote and the noise is the wrong one); actually, this is not too far from the Shannon coding theorem in cybernetics: roughly speaking, the latter states that, even if there is huge noise on a cable (i.e. in the system), it is enough that the probability of successfully sending a message through it remains strictly greater than one-half to ensure that the information crossing the system cannot get lost (with large enough trial samples, N → ∞). Indeed, in all these saturable systems (voter-like models à la Condorcet are, by definition, saturable systems as well if the vote is binary), when collective features are at work the correct output “yes” emerges neatly even at a relatively mild input, and the wrong output “no” is cleverly avoided even in the presence of significant noise, as shown in Fig. 10.
This kind of phenomenology is naturally captured within our chemical kinetics framework: in our system the stimulus (i.e., the logarithm of the ligand concentration) is provided by the field h and, according to its effective magnitude, it is expected to return a bound (“yes”) or a non-bound (“no”) state for the ligands, such that, if the stimulus is poor (i.e., chemical noise), it is suppressed in the output, while if the stimulus is relatively consistent (i.e., chemical signal), it is amplified in the output, as discussed in detail in Fig. 10. Clearly, the larger the coupling J > 0 (or, alternatively, the higher the Hill coefficient of the reaction), the stronger the resulting amplification (see also Eq. (23)). This point is further expanded hereafter. Looking at the expression for the system’s energy (Eq. (44)), a key point is that when J > 0 the contribution 2JY(1 − Y) provides a boost (i.e. a gain) for the system’s response, further stabilizing the states Y = 0 (for low levels of input, i.e. h < 0, which is interpreted as chemical noise and thus suppressed) and Y = 1 (for high levels of input, i.e. h > 0, which is interpreted as a chemical signal and thus amplified). The usefulness of this Condorcet-like mechanism at work in biochemistry becomes evident already in the paradigmatic case of hemoglobin (a cooperative protein responsible for oxygen transport to tissues): when in the lungs (rich in oxygen), hemoglobin uses cooperativity to bind as much oxygen as possible; when in the tissues (poor in oxygen), it uses cooperativity to get rid of it, thus releasing oxygen into the tissue.
Discussion
In this manuscript we deepened the translation of biochemical kinetics into a statistical mechanical scaffold begun in refs 34, 35, which allows a straightforward cybernetic interpretation of these phenomenologies. The final goal is to contribute to the quantitative understanding of the emergent computational properties possibly shown by large networks of biochemical reactions involved in cell signalling.
The route we paved is based on the one-to-one, robust and sharp, structural and behavioral analogy between the response functions of biochemical reactions (i.e. saturation curves), the response functions of mean-field models in statistical mechanics (i.e. self-consistencies), and the response functions of key amplifiers in electronics (i.e. transfer functions). In particular, the response functions of cooperative (anti-cooperative) systems match those pertaining to mean-field ferromagnets (antiferromagnets), which, in turn, overlap those characterizing an operational amplifier (flip-flop).
We decided to keep the discussion at the mean-field level and in the canonical ensemble because it is within these limits that other theories of information-processing systems (e.g., neural networks in Artificial Intelligence^{28}) have been developed in the past, and it is by comparing our results with these theories that we aim to learn how information is treated during biochemical reactions. Notice that, from a statistical mechanical perspective, neural networks are particular types of spin glasses, namely intricate realizations of broadly interacting ferromagnets and antiferromagnets, while from an electronic perspective neural networks are realized by suitably combining large numbers of amplifiers and flip-flops; thus the first steps in this structural equivalence should focus on these basic ingredients, which have indeed been the subject of the present paper.
Moreover, many of the biochemical reactions of current interest involve small numbers of agents, and this makes theories in the thermodynamic limit^{54} too coarse for tackling the proposed analogy in full generality. Therefore, in this manuscript we addressed this point by developing a proper mathematical technique able to account even for small-sized systems.
Summarizing the whole procedure, our route first requires mapping the original biochemical problem into a Hamiltonian formulation; the resulting model can then be embedded into a statistical mechanical framework. Next, the solution for this kind of system is attainable by noting that the related free energies obey Hamilton–Jacobi type equations in the space of their parameters (ultimately mapping a problem in biochemistry into a problem of analytical mechanics): relying on the complete integrability of the system of multidimensional PDEs, we solve the general scenario both in the thermodynamic limit and in the finite-size case^{63}. In particular, we observed that the multidimensional equations of state can be constructed via the hodograph equations, which in turn provide the solution to a system of hydrodynamic type^{64}. Following this route we are able to provide a theoretical description of several complex phenomena stemming from finite-size effects. This phenomenology is rather broad and includes the effective cooperativity induced by stochasticity, as well as an enhanced bistability when dealing with toggles. Crucially, our procedure naturally allows a further interpretation of the response functions of these biochemical systems in terms of cybernetics, which constitutes a novel and transparent way to analyze how information is handled during these reactions: having shown the one-to-one behavioral correspondence between saturation functions in chemical kinetics, self-consistencies in statistical mechanics and transfer functions in electronics (all identical saturable response functions from a cybernetic perspective), we related the Condorcet phenomenology to signal amplification and noise suppression and framed it within the elementary scenario of ferromagnetic gain.
We tested our theory both against classical experiments (i.e., in the large-N limit), recovering all the main equations of reaction kinetics as suitable limits (i.e. the Michaelis–Menten, Hill and Adair equations), and against novel experiments involving small system sizes, recovering the expected phenomenology^{6,16,44}.
Further developments of the theory should now proceed in three directions. First, still keeping the Maxwell–Boltzmann prescription, we can combine small (biochemical) circuits together in order to analyze information processing in larger chemical networks. Second, efforts are still needed to enlarge this scheme so that it applies to general out-of-equilibrium regimes (for instance, with time-dependent field variations). Finally, the above procedure can be further enriched by working in synergy with other well-developed (and possibly alternative) mathematical methods, especially those already thermodynamically oriented, i.e., designed to deal with the complexity of evaluating the partition function at finite size, such as the geometrically oriented ones reported in the review by Ruppeiner^{65}.
Additional Information
How to cite this article: Agliari, E. et al. Complete integrability of information processing by biochemical reactions. Sci. Rep. 6, 36314; doi: 10.1038/srep36314 (2016).
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
 1.
Hopfield, J. Kinetic proofreading: a new mechanism for reducing errors in biosynthetic processes requiring high specificity. Proc. Natl. Acad. Sci. USA 71, 4135–4139 (1974).
 2.
Chay, T. & Ho, C. Statistical mechanics applied to cooperative ligand binding to proteins. Proc. Natl. Acad. Sci. 70, 3914–3918 (1973).
 3.
Wyman, J. & Phillipson, P. A probabilistic approach to cooperativity of ligand binding by a polyvalent molecule. Proc. Natl. Acad. Sci. 71, 3431–3434 (1974).
 4.
Warren, P. & ten Wolde, P. Chemical models of genetic toggle switches. The Journal of Physical Chemistry B 109(14), 6812–6823 (2005).
 5.
Ricci, F., Vallée-Bélisle, A. & Plaxco, K. High-precision, in vitro validation of the sequestration mechanism for generating ultrasensitive dose-response curves in regulatory networks. PLoS Comp. Biol. 7(10), 1002171 (2011).
 6.
Gardner, T. S., Cantor, C. R. & Collins, J. J. Construction of a genetic toggle switch in Escherichia coli. Nature 403, 339–342 (2000).
 7.
Samoilov, M., Plyasunov, S. & Arkin, A. Stochastic amplification and signaling in enzymatic futile cycles through noise-induced bistability with oscillations. Proc. Natl. Acad. Sci. USA 102(7), 2310 (2005).
 8.
Artyomov, M., Das, J., Kardar, M. & Chakraborty, A. K. Purely stochastic binary decisions in cell signaling models without underlying deterministic bistabilities. Proc. Natl. Acad. Sci. USA 104(48), 18958 (2007).
 9.
Biancalani, T., Rogers, T. & McKane, A. Noise-induced metastability in biochemical networks. Phys. Rev. E 86, 010106 (2012).
 10.
Bialek, W. & Setayeshgar, S. Cooperativity, sensitivity and noise in biochemical signalling. Phys. Rev. Lett. 100, 258101 (2008).
 11.
Sartori, P. & Pigolotti, S. Thermodynamics of error correction. Phys. Rev. X 5, 041039 (2015).
 12.
Butler, T., Kardar, M. & Chakraborty, A. Quorum sensing allows T cells to discriminate between self and nonself. Proc. Natl. Acad. Sci. USA 110(29), 11833–11838 (2013).
 13.
Paulsson, J., Berg, O. & Ehrenberg, M. Stochastic focusing: fluctuation-enhanced sensitivity of intracellular regulation. Proc. Natl. Acad. Sci. USA 97(13), 7153 (2000).
 14.
Wang, J., Zhang, J., Yuan, Z. & Zhou, T. Noiseinduced switches in network systems of the genetic toggle switch. BMC System Biol . 1, 50 (2007).
 15.
Allen, R., Warren, P. & ten Wolde, P. Sampling rare switching events in biochemical networks. Phys. Rev. Lett. 94(1), 018104 (2005).
 16.
Warren, P. & ten Wolde, P. Enhancement of the stability of genetic switches by overlapping upstream regulatory domains. Phys. Rev. Lett. 92(12), 1281101 (2004).
 17.
Kim, K.Y. & Wang, J. Potential energy landscape and robustness of gene regulatory network: toggle switch. PLoS Comp. Biol . 3(3), 60 (2007).
 18.
Zhang, Q., Bhattacharya, S. & Endersen M. Ultrasensitive response motifs: basic amplifier in molecular signalling networks. Open, Biol . 3, 130031 (2013).
 19.
Thattai, M. & van Oudenaarden, A. Attenuation of noise in ultrasensitive signaling cascades, Biophys. J. 82, 2943 (2002).
 20.
Bradshaw M., Kubota, Y., Meyer T. & Schulman, H. An ultrasensitive ca^{2+}/calmodulindependent protein kinase iiprotein phosphatase 1 switch facilitates specificity in postsynaptic calcium signaling. Proc. Natl. Acad. Sci. 100(18), 10512–10517 (2003).
 21.
Zhang, D., Turberfield, A., Yurke, B. & Winfree, E. Engingeering entropydriven reactions and networks catalyzed by DNA. Science 318, 1121–1125 (2007).
 22.
Elowitz, M. & Leibler, S. A synthetic oscillatory network of transcriptional regulators. Nature Lett . 403, 335 (2000).
 23.
Kepler, T. B. & Elston, T. C. Stochasticity in transcriptional regulation: origins, consequences and mathematical representations. Biophys. J. 81, 3116 (2001).
 24.
Tian, T. & Burrage, K. Stochastic models for regulatory networks. Proc. Natl. Acad. Sci. USA 103(22), 8372 (2006).
 25.
TanaseNicola, S. & ten Wolde, P. R. Regulatory control and the costs and benefits of biochemical noise. PLoS Comp. Biol . 4(8), 1000125 (2008).
 26.
Thompson, C. J. Mathematical Statistical Mechanics (Princeton University Press, 1972).
 27.
Hopfield, J. & Tank, D. Computing with neural circuits: A model. Science , 233, 625–633 (1986).
 28.
Amit, D. J. Modeling brain function (Cambridge University Press, 1987).
 29.
Agliari, E., Barra, A., Galluzzi, A., Guerra F. & Moauro, F. Multitasking associative networks. Phys. Rev. Lett. 109, 268101 (2012).
 30.
Mora, T., Walczak, A., Bialek, W. & Callan, C. Maximum entropy models for antibody diversity. Proc. Natl. Acad. Sci. USA 107(12), 5405 (2010).
 31.
Agliari, E., Annibale, A., Barra, A., Coolen, A. & Tantari, D. Immune networks: Multitasking capabilities near saturation. J. Phys. A , 46, 415003 (2013).
 32.
Stanley, H. E., Buldyrev, S. V., Goldberger, A. L., Goldberger, Z. D., Havlin, S., Mantegna, R. N., Ossadnik, S. M., Peng, C.K. & Simons, M. Statistical mechanics in biology: how ubiquitous are longrange correlations? Physica A 205, 214–253 (1994).
 33.
PrugelBennett, A. & Shapiro, J. Analysis of genetic algorithms using statistical mechanics. Phys. Rev. Lett. 72(9), 1305 (1994).
 34.
Agliari, E., Barra, A., Burioni, R., Di Biasio, A. & Uguzzoni, G. Collective behaviours: from biochemical kinetics to electronic circuits. Scientific Reports 3, 3458 (2014).
 35.
Agliari, E., Altavilla, M., Barra, A., Dello Schiavo, L. & Katz, E. Notes on stochastic (bio)logic gates: computing with allosteric cooperativity. Scientific Reports 5, 9415 (2015).
 36.
House, J., Principles of chemical kinetics (Elsevier Press, 2007).
 37.
Ayers P. & Parr, R. Variational principles for describing chemical reactions: the fukui function and chemical hardness revisited. J. Amer. Chem. Soc. 122(9), 21010–22018 (2000).
 38.
Barra, A., Contucci, P., Mingione, E. & Tantari, D. Multispecies meanfield spinglasses: Rigorous results. Annales Henri Poincaré 16(3), 691 (2015).
 39.
Barra, A., Galluzzi, A., Guerra, F., Pizzoferrato, A. & Tantari, D. Mean field bipartite spin models treated with mechanical techniques. Eur. Phys. J. B 87, 74 (2014).
 40.
Barra, A., Genovese, G. & Guerra, F. Equilibrium statistical mechanics of bipartite spin systems. J. Phys. A 44, 245002 (2011).
 41.
Auffinger A. & Chen, W. Free energy and complexity of spherical bipartite models. J. Stat. Phys. 157, 40–59 (2014).
 42.
Panchenko, D. The free energy in a multispecies SherringtonKirkpatrick model. Annals Probab. 43 3494–3513 (2015).
 43.
Genovese G. & Tantari, D. Nonconvex multipartite ferromagnets. J. Stat. Phys. 163(3), 492–513 (2016).
 44.
Lipshtat, A., Loinger, A., Balaban, N. Q. & Biham, O. Genetic Toggle Switch without Cooperative Binding. Phys. Rev. Lett. 96, 188101 (2006).
 45.
Mazza G. & Benaim, M. Stochastic Dynamics for Systems Biology (Taylor & Francis Group, 2014).
 46.
Ellis, R. S. Entropy, large deviations and Statistical Mechanics (Springer, 2005).
 47.
Millman, J. & Grabel, A.Microelectronics (McGraw Hill, 1987).
 48.
Wiener, N. Cybernetics; or control and communication in the animal and the machine (John Wiley: Oxford, , 1948).
 49.
Koshland jr., D. E. The structural basis of negative cooperativity: receptors and enzymes. Curr. Opin. Struc. Biol. 6, 757–761 (1996).
 50.
De Meyts, P., Roth, J., Neville Jr, D. M., Gavin, J. R. & Lesniak, M. A. Insulin interactions with its receptors: experimental evidence for negative cooperativity. Biochem. Biophys. Res. Comm ., 55(1), 154 (1973).
 51.
Seelig, G., Soloveichik, D., Zhang, D. & Winfree, E. Enzymefree nucleic acid logic circuits. Science 314, 1585–1589 (2006).
 52.
van Kampen, N. G., Stochastic Processes in Physics and Chemistry (NorthHolland Personal Library, 2007).
 53.
Levitzki, A. & Koshland, jr, D. E. Negative cooperativity in regulatory enzymes. Proc. Natl. Acad. Sci. 62, 1121–1128 (1969).
 54.
Liphardt, J. Thermodynamic limits. Nature Physics 8, 2012 (2012).
 55.
Andrecut M. & Kauffman, S. Noise in genetic toggle switch models. J. Integr. Bioinform . 23, 3424 (2006).
 56.
Miller M. & Bassler, B. Quorum sensing in bacteria. Annual Rev. Microbiol. 55(1), 165–199 (2001).
 57.
Agliari, E., Barra, A., Del Ferraro, G., Guerra, F. & Tantari, D. Energy inselfdirected B lymphocytes: A statistical mechanics perspective. J. Theor. Biol. , 375, 21–31 (2015).
 58.
Barra, A., Di Lorenzo, A., Guerra, F. & Moro A. On quantum and relativistic mechanical analogues in meanfield spin models. Proc. Royal Soc. London A , 470, 20140589 (2014).
 59.
Arsie, A., Lorenzoni, P. & Moro, A. On integrable conservation laws. Proc. Royal Soc. London A 471, 20140124 (2014).
 60.
Barra A. & Moro, A. Exact solution of the van der Waals model in the critical region. Annals of Physics 359, 290 (2015).
 61.
Moro, A. Shock dynamics of phase diagrams. Annals of Physics , 343, 49 (2014).
 62.
De Nittis G. & Moro, A. Thermodynamic phase transitions and phase singularities. Proc. Royal Soc. London A 468, 701–719 (2012).
 63.
Courant R. & Hilbert, D. Methods of Mathematical Physics (WileyVHC, 2008).
 64.
Tsarev, S. Geometry of hamiltonian systems of hydrodynamic type. Generalized hodograph method. Izvestija AN USSR Math. 54(5), 1048–1068 (1990).
 65.
Ruppeiner, G. Riemannian geometry in thermodynamic fluctuation theory. Rev. Mod. Phys. 67(3), 605 (1995).
 66.
Solomatin, S. V., Greenfeld, M. & Hershlag, D. Implications of molecular heterogeneity for the cooperativity of biological macromolecules. Nature Structural & Molecular Biology 18(6), 732–734 (2011).
 67.
Suzuki, Y., Moriyoshi, E., Tsuchiya, D. & Jingami, H. Negative Cooperativity of Glutamate Binding in the Dimeric Metabotropic Glutamate Receptor Subtype 1*. The Journal of Biological Chemistry 279(34), 35526–35534 (2004).
 68.
Watson, L. C., Kuchenbecker, K. M., Schiller, B. J., Gross, J. D., Pufall, M. A. & Yamamoto, K. R. The glucocorticoid receptor dimer interface allosterically transmits sequencespecific DNA signals. Nature Structural & Molecular Biology 20, 876 (2013).
 69.
Garnier, A., Berredjem, Y. & Botton, B. Purification and Characterization of the NADDependent Glutamate dehydrogenase in the Ectomycorrhizal fungus Laccaria bicolor (maire) orton purification and characterization of the naddependent glutamate dehydrogenase in the ectomycorrhizal fungus Laccaria bicolor (maire) orton urification and characterization of the naddependent glutamate dehydrogenase in the ectomycorrhizal fungus Laccaria bicolor (maire) orton. Fungal Genetics and Biology 22, 168–176 (1987).
 70.
Glover, G., D’Ambrosio, D. & Jensen, R. Versatile properties of a nonsaturable, homogeneous transport system in Bacillus subtilis: Genetic, kinetic, and affinity labeling studies. Proc. Natl. Acad. Sci. USA 72, 814–818 (1975).
Acknowledgements
The authors wish to thank the London Mathematical Society for partial support of this research through the Scheme 4 Grant Ref. 41553; INdAM (Istituto Nazionale di Alta Matematica) and GNFM (Gruppo Nazionale per la Fisica Matematica) for partial financial support through the Grant "Meccanica Statistica per il Deep Learning"; and the Faculty of Engineering and Environment, Northumbria University Newcastle, for supporting open access publication. A.B. is grateful to the LIFE group for partial financial support through Programma INNOVA, Progetto MATCH. A.M. is grateful to the Department of Mathematics of Sapienza University of Rome for its kind hospitality. L.D.S. gratefully acknowledges funding through the CRC 1060 at the University of Bonn.
Author information
Affiliations
Dipartimento di Matematica, Sapienza Università di Roma, Italy
 Elena Agliari
Department of Computer Science, Sapienza Università di Roma, Italy
 Adriano Barra
Istituto Nazionale di Alta Matematica (GNFM-INdAM), Rome, Italy
 Elena Agliari
 & Adriano Barra
Institut für Angewandte Mathematik, Rheinische FriedrichWilhelmsUniversität Bonn, Germany
 Lorenzo Dello Schiavo
Department of Mathematics and Information Science, University of Northumbria Newcastle, United Kingdom
 Antonio Moro
Contributions
All authors (E.A., A.B., L.D.S. and A.M.) contributed equally to all phases of the realisation of this manuscript.
Competing interests
The authors declare no competing financial interests.
Corresponding author
Correspondence to Antonio Moro.
Rights and permissions
This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/