Introduction

Ludwig Boltzmann defined entropy as the logarithm of state multiplicity. The multiplicity of independent (but possibly interacting) systems is typically given by multinomial factors that lead to the Boltzmann–Gibbs entropy and to an exponential growth of phase space volume as a function of the degrees of freedom. In recent decades, much attention has been given to systems with long-range and coevolving interactions, sometimes referred to as complex systems1. Many complex systems do not exhibit an exponential growth of phase space2,3,4,5. For correlated systems, phase space typically grows subexponentially6,7,8,9,10,11,12,13,14; systems with superexponential phase space growth were recently identified as those capable of forming structures from their components5,15. A typical example of the latter kind are complex networks16, where complex behavior may lead to ensemble inequivalence17. The most prominent example of structure-forming systems are chemical reaction networks18,19,20. The usual approach to chemical reactions—where free particles may compose molecules—is via the grand-canonical ensemble, where particle reservoirs ensure that the number of particles is conserved on average. Much attention has been given to finite-size corrections of the chemical potential21,22 and to the nonequilibrium thermodynamics of small chemical networks23,24,25,26. However, for small closed systems, fluctuations in particle reservoirs may become nonnegligible, and predictions from the grand-canonical ensemble become inaccurate. In the context of nanotechnology and colloidal physics, the theory of self-assembly27 has gained recent interest. Examples of self-assembly include lipid bilayers and vesicles28, microtubules, molecular motors29, amphiphilic particles30, and RNA31. The thermodynamics of self-assembly has been studied both experimentally and theoretically, often for particular systems such as Janus particles32.
Theoretical and computational work has explored self-assembly under nonequilibrium conditions33,34. A review can be found in Arango-Restrepo et al.35.

Here, we present a canonical approach for closed systems in which particles interact and form structures. The main idea is not to start from a grand-canonical description of structure-forming systems but to identify, within a canonical description, the terms in the entropy that play the role of the chemical potential in large systems. A simple example of a structure-forming system, the magnetic coin model, was recently introduced in Jensen et al.15. There, each of n coins is in one of two states (head and tail); in addition, since the coins are magnetic, any two of them can form a third, bond state. The phase space of this model, W(n), grows superexponentially, \(W(n) \sim {n}^{n/2}{e}^{2\sqrt{n}} \sim {e}^{n\mathrm{log}\,n}\). We first generalize this model to arbitrary cluster sizes and to an arbitrary number of states. We then derive the entropy of the system from the corresponding log multiplicity and use it to compute thermodynamic quantities, such as the Helmholtz free energy. Compared with the Boltzmann–Gibbs entropy, an additional term appears that captures the molecule states. Using stochastic thermodynamics, we obtain the appropriate second law for structure-forming systems and derive the detailed fluctuation theorem. Under the assumption that external driving preserves microreversibility, i.e., detailed balance of transition rates in quasi-stationary states, we derive the nonequilibrium Crooks’ fluctuation theorem for structure-forming systems. It relates the probability distribution of the stochastic work done on a nonequilibrium system to thermodynamic variables, such as the partial Helmholtz free energy, the temperature, and the sizes of the initial and final cluster states. Finally, we apply our results to several physical systems: we first calculate the phase diagram for the case of patchy particles described by the Kern–Frenkel potential. Second, we discuss the fully connected Ising model where molecule formation is allowed.
We show that the usual second-order transition in the fully connected Ising model changes to first-order.

Results

Entropy of structure-forming systems

To calculate the entropy of structure-forming systems, we first define the set of possible microstates and mesostates. Let us consider a system of n particles. Each single particle can attain states from the set \({{\mathcal{X}}}^{(1)}=\{{x}_{1}^{(1)},\ldots ,{x}_{{m}_{1}}^{(1)}\}\). The superscript (1) indicates that these are single-particle states, and m1 denotes the number of such states. A typical set of states could be the spin of the particle, {↑,↓}, or a set of energy levels. With only single-particle states, the microstate of a system of n particles is a vector (X1, X2, …, Xn), where \({X}_{k}\in {{\mathcal{X}}}^{(1)}\) is the state of the kth particle. Let us now assume that any two particles can create a two-particle state. This two-particle state can be a molecule composed of two atoms, a cluster of two colloidal particles, etc. We call this state a cluster. A two-particle cluster can attain states \({{\mathcal{X}}}^{(2)}=\{{x}_{1}^{(2)},\ldots ,{x}_{{m}_{2}}^{(2)}\}\). A microstate of a system of n particles is again a vector (X1, X2, …, Xn), but now either \({X}_{k}\in {{\mathcal{X}}}^{(1)}\) or \({X}_{k}\in {{\mathcal{X}}}^{(2)}\times {{\mathbb{Z}}}_{n}^{2}\). For instance, the state of a particle k belonging to a two-particle cluster can be written as \({X}_{k}={x}_{1}^{(2)}({k}_{1},{k}_{2})\). The indices in the brackets indicate that particle k belongs to a cluster of size two in the state \({x}_{1}^{(2)}\) and that the cluster is formed by particles k1 and k2 (k1 < k2), where either k1 = k or k2 = k.

Now assume that particles can also form larger clusters up to a maximal size, m, where m is a fixed number, m ≤ n. Generally, clusters of size j have states \({{\mathcal{X}}}^{(j)}=\{{x}_{1}^{(j)},\ldots ,{x}_{{m}_{j}}^{(j)}\}\). The corresponding particle states are elements of the sets \({{\mathcal{X}}}^{(j)}\times {{\mathbb{Z}}}_{n}^{j}\), with the restriction that if the kth particle is in a state \({x}_{i}^{(j)}({k}_{1},\ldots ,{k}_{j})\), then kl < kl+1 for all l, and kl = k for exactly one l. Consider an example of four particles that are either in a free state or form a cluster of size two. The state of each particle is either x(1)—a free particle—or x(2)(i, j)—a cluster composed of particles i and j. As an example, a typical microstate is Ψ = (x(1), x(2)(2,3), x(2)(2,3), x(1)), which means that particles 1 and 4 are free and particles 2 and 3 form a cluster.

Now consider a mesoscopic scale, where the mesostate of the system is given only by the number of clusters in each state \({x}_{i}^{(j)}\). Let us denote by \({n}_{i}^{(j)}\) the number of clusters in state \({x}_{i}^{(j)}\). The mesostate is therefore characterized by a vector \({\mathbb{N}}=\left({n}_{i}^{(j)}\right)\), which corresponds to a frequency (histogram) of microstates. The normalization condition is given by the fact that the total number of particles is n, i.e., \({\sum }_{ij}j{n}_{i}^{(j)}=n\). For example, the mesostate \({{\mathbb{N}}}_{{{\Psi }}}\) corresponding to the microstate Ψ is \({{\mathbb{N}}}_{{{\Psi }}}=\left({n}^{(1)}=2,{n}^{(2)}=1\right)\), denoting that there are two free particles and one two-particle cluster.
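This bookkeeping is easy to make concrete in code. In the short Python sketch below (the pair encoding of particle states is our own, purely illustrative convention), the microstate Ψ from above is mapped to its mesostate histogram:

```python
from collections import Counter

# Illustrative encoding (ours, not from the text): each particle state is a
# pair (state label, sorted tuple of member indices). A free particle k is
# encoded as ("x(1)", (k,)), a dimer of particles k1 < k2 as ("x(2)", (k1, k2)).
psi = (("x(1)", (1,)), ("x(2)", (2, 3)), ("x(2)", (2, 3)), ("x(1)", (4,)))

# Each cluster appears once per member particle; deduplicate first, then
# count clusters by their state label to obtain the mesostate histogram N.
clusters = set(psi)
mesostate = Counter(state for state, members in clusters)
print(mesostate)  # Counter({'x(1)': 2, 'x(2)': 1})
```

The result reproduces \({{\mathbb{N}}}_{{{\Psi }}}=\left({n}^{(1)}=2,{n}^{(2)}=1\right)\): two free particles and one two-particle cluster.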

The Boltzmann entropy36 of this mesostate is given by

$$S\left({\mathbb{N}}\right)={\mathrm{log}}\,{{W}}\left({\mathbb{N}}\right),$$
(1)

where W is the multiplicity of the mesostate, which is the number of all distinct microstates corresponding to the same mesostate. To determine the number of all distinct microstates corresponding to a given mesostate, let us order the particles and number them from 1 to n. By permutation of the particles we obtain the different possible microstates. The number of all permutations is simply n!. However, some permutations correspond to the same microstate and we are overcounting. In our example with one cluster and two free particles, the permutations (4, 2, 3, 1) and (1, 3, 2, 4) correspond to the same microstate Ψ = (x(1), x(2)(2, 3), x(2)(2, 3), x(1)). However, permutation (2, 1, 3, 4) corresponds to the microstate \({{\Psi }}^{\prime} =({x}^{(2)}(1,3),{x}^{(1)},{x}^{(2)}(1,3),{x}^{(1)})\). This microstate is a distinct microstate corresponding to the same mesostate, \({{\mathbb{N}}}_{{{\Psi }}}\equiv {{\mathbb{N}}}_{{{\Psi }}^{\prime} }=\left({n}^{(1)}=2,{n}^{(2)}=1\right)\).

The number of microstates giving the same mesostate can be expressed as the product of configurations with the same state for each \({x}_{i}^{(j)}\). Let us begin with the particles that do not form clusters. The number of equivalent representations of one distinct state is \(\left({n}_{i}^{(1)}\right)!\), which is the number of permutations of all particles in the same state. For the cluster states, one can count the equivalent representations of one microstate in two steps: first, permute all clusters in the same state, which gives \(\left({n}_{i}^{(j)}\right)!\) possibilities. Then, permute the particles within each cluster, which gives j! possibilities per cluster, i.e., \({(j!)}^{{n}_{i}^{(j)}}\) combinations in total.

As an example, consider the case of four particles. First, we look at free particles that attain states \({x}_{1}^{(1)}\) or \({x}_{2}^{(1)}\). Let us consider a mesostate \({{\mathbb{N}}}_{1}=\left({n}_{1}^{(1)}=2,{n}_{2}^{(1)}=2\right)\), i.e., two particles in the first state and two particles in the second. The number of distinct microstates corresponding to the mesostate \({{\mathbb{N}}}_{1}\) is given by \(W({{\mathbb{N}}}_{1})=4!/(2!2!)=6\). All microstates that belong to the mesostate \({{\mathbb{N}}}_{1}\) are

$$\begin{array}{ll}({x}_{1}^{(1)},{x}_{1}^{(1)},{x}_{2}^{(1)},{x}_{2}^{(1)})&({x}_{1}^{(1)},{x}_{2}^{(1)},{x}_{1}^{(1)},{x}_{2}^{(1)})\\ ({x}_{1}^{(1)},{x}_{2}^{(1)},{x}_{2}^{(1)},{x}_{1}^{(1)})&({x}_{2}^{(1)},{x}_{2}^{(1)},{x}_{1}^{(1)},{x}_{1}^{(1)})\\ ({x}_{2}^{(1)},{x}_{1}^{(1)},{x}_{2}^{(1)},{x}_{1}^{(1)})&({x}_{2}^{(1)},{x}_{1}^{(1)},{x}_{1}^{(1)},{x}_{2}^{(1)})\end{array}$$

Now imagine that the four particles are either free or form two-particle clusters. The microstate of a particle is either x(1) or x(2)(i, j). Let us consider a mesostate \({{\mathbb{N}}}_{2}=\left({n}^{(1)}=0,{n}^{(2)}=2\right)\), i.e., two clusters of size two. The number of distinct microstates is just \(W({{\mathbb{N}}}_{2})=4!/(2!{(2!)}^{2})=3\). The microstates corresponding to the mesostate \({{\mathbb{N}}}_{2}\) are

$$\begin{array}{l}({x}^{(2)}(1,2),{x}^{(2)}(1,2),{x}^{(2)}(3,4),{x}^{(2)}(3,4))\\ ({x}^{(2)}(1,3),{x}^{(2)}(2,4),{x}^{(2)}(1,3),{x}^{(2)}(2,4))\\ ({x}^{(2)}(1,4),{x}^{(2)}(2,3),{x}^{(2)}(2,3),{x}^{(2)}(1,4))\end{array}$$

For example, a microstate (x(2)(2, 1), x(2)(2, 1), x(2)(4, 3), x(2)(4,3)) is the same as the first microstate because we just relabel 1↔2 and 3↔4. In summary, the multiplicity corresponding to \({x}_{i}^{(j)}\) is \(({n}_{i}^{(j)})!{(j!)}^{{n}_{i}^{(j)}}\), and we can express the total multiplicity as

$$W({\mathbb{N}})=\frac{n!}{\prod _{ij}\left(({n}_{i}^{(j)})!{(j!)}^{{n}_{i}^{(j)}}\right)}.$$
(2)
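Eq. (2) can be transcribed directly into code; the two four-particle examples above serve as sanity checks (the dictionary encoding of a mesostate is our own convention):

```python
from math import factorial

def multiplicity(n, counts):
    """Multiplicity W of a mesostate, Eq. (2).

    `counts` maps a cluster size j to the list of occupation numbers
    n_i^{(j)} over the states i of size-j clusters (our own encoding).
    """
    denom = 1
    for j, ns in counts.items():
        for nij in ns:
            denom *= factorial(nij) * factorial(j) ** nij
    return factorial(n) // denom

# Two free particles in each of two single-particle states: W = 4!/(2!2!) = 6
print(multiplicity(4, {1: [2, 2]}))  # 6
# Two two-particle clusters: W = 4!/(2!(2!)^2) = 3
print(multiplicity(4, {2: [2]}))     # 3
```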

Using Stirling’s formula \(\mathrm{log}\,n!\approx n\mathrm{log}\,n-n\), we get for the entropy

$$S({\mathbb{N}}) \approx n{\mathrm{log}}\,n-n\\ -\sum _{ij}\left({n}_{i}^{(j)}{\mathrm{log}}\,{n}_{i}^{(j)}-{n}_{i}^{(j)}+{n}_{i}^{(j)}{\mathrm{log}}\,j!\right).$$
(3)

Using the normalization condition, \(n={\sum }_{ij}j{n}_{i}^{(j)}\), and combining the first term with the remaining ones, we get the entropy per particle in terms of ratios \({\wp }_{i}^{(j)}={n}_{i}^{(j)}/n\)

$${\mathcal{S}}({\mathbb{N}})=\frac{S(\{{n}_{i}^{(j)}\})}{n}=-\sum _{ij}\left[\frac{{n}_{i}^{(j)}}{n}{\mathrm{log}}\,\left(\frac{{n}_{i}^{(j)}}{n}\right)\right.\\ +\left.\frac{{n}_{i}^{(j)}}{n}{\mathrm{log}}\,\left(\frac{j!}{{n}\,^{j-1}}\right)-\frac{{n}_{i}^{(j)}}{n}+\frac{j{n}_{i}^{(j)}}{n}\right].$$
(4)

Normalization is given by \({\sum }_{ij}\,j{\wp }_{i}^{(j)}=1\). Therefore, \({p}_{i}^{(j)}=j{\wp }_{i}^{(j)}\) can be interpreted as the probability that a particle is a part of a cluster in state \({x}_{i}^{(j)}\). On the other hand, the quantity \({\wp }_{i}^{(j)}\) is the relative number of clusters. Since \({\sum }_{ij}\frac{j{n}_{i}^{(j)}}{n}=1\), we neglect the constant without changing the thermodynamic relations.

In the remainder, we denote thermodynamic quantities per particle by calligraphic script and total quantities by normal script. We express the entropy per particle as

$${\mathcal{S}}(\wp )=-\sum _{ij}{\wp }_{i}^{(j)}\left({\mathrm{log}}\,{\wp }_{i}^{(j)}-1\right)\\ -\sum _{ij}{\wp }_{i}^{(j)}{\mathrm{log}}\,\left(\frac{j!}{{n}^{j-1}}\right),$$
(5)

or equivalently in terms of the probability distribution, \({p}_{i}^{(j)}\), as

$${\mathcal{S}}(P)=-\sum _{ij}\frac{{p}_{i}^{(j)}}{j}\left({\mathrm{log}}\,\frac{{p}_{i}^{(j)}}{j}-1\right)\\ -\sum _{ij}\frac{{p}_{i}^{(j)}}{j}{\mathrm{log}}\,\left(\frac{j!}{{n}^{j-1}}\right).$$
(6)
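As a numerical sanity check of Eqs. (2)–(6), one can compare Eq. (5) against the exact log-multiplicity. Note that Eq. (5) drops the constant \({\sum }_{ij}j{\wp }_{i}^{(j)}=1\), so it exceeds \(\mathrm{log}\,W/n\) by one; the occupation numbers below are purely illustrative:

```python
import numpy as np
from math import lgamma, factorial

def entropy_eq5(counts, n):
    """Entropy per particle from Eq. (5); `counts` maps j -> [n_i^{(j)}]."""
    s = 0.0
    for j, ns in counts.items():
        for nij in ns:
            wp = nij / n
            s -= wp * (np.log(wp) - 1.0)
            s -= wp * np.log(factorial(j) / n ** (j - 1))
    return s

def entropy_exact(counts, n):
    """log W / n from Eq. (2) via log-gamma, without Stirling's approximation."""
    logW = lgamma(n + 1)
    for j, ns in counts.items():
        for nij in ns:
            logW -= lgamma(nij + 1) + nij * np.log(factorial(j))
    return logW / n

# n = 1000 particles: 400 free particles and 300 dimers (illustrative numbers)
counts = {1: [400], 2: [300]}
# Eq. (5) omits the constant sum_ij j*wp = 1, hence the offset of one
print(entropy_eq5(counts, 1000) - entropy_exact(counts, 1000))  # ~ 1.0033
```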

Finite interaction range

Up to now, we assumed an infinite range of interaction between particles, which is unrealistic for chemical reactions, where only atoms within a short range form clusters. A simple correction is obtained by dividing the system into a fixed number of boxes: particles within the same box can form clusters, particles in different boxes cannot. We begin by calculating the multiplicity for two boxes. For simplicity, assume that they both contain n/2 particles. The multiplicity of a system with two boxes, \(\tilde{W}\left({n}_{i}^{(j)}\right)\), is given by the sum of all possible partitions of \({n}_{i}^{(j)}\) clusters with state \({x}_{i}^{(j)}\) into the first box (containing \({\,}^{1}{n}_{i}^{(j)}\) clusters) and the second box (containing \({\,}^{2}{n}_{i}^{(j)}\) clusters), such that \({n}_{i}^{(j)}={\,}^{1}{n}_{i}^{(j)}+{\,}^{2}{n}_{i}^{(j)}\). The multiplicity is therefore

$$\tilde{W}\left({n}_{i}^{(j)}\right)=\sum _{{\,}^{1}{n}_{i}^{(j)}+{\,}^{2}{n}_{i}^{(j)}={n}_{i}^{(j)}}W\left({\,}^{1}{n}_{i}^{(j)}\right)W\left({\,}^{2}{n}_{i}^{(j)}\right),$$
(7)

where W is the multiplicity in Eq. (2). The dominant contribution to the sum comes from the term where \({\,}^{1}{n}_{i}^{(j)}={\,}^{2}{n}_{i}^{(j)}={n}_{i}^{(j)}/2\), so that we can approximate the multiplicity by \(\tilde{W}({n}_{i}^{(j)})\approx W{({n}_{i}^{(j)}/2)}^{2}\). Similarly, for b boxes, we obtain the multiplicity

$$\tilde{W}({n}_{i}^{(j)})=W{({n}_{i}^{(j)}/b)}^{b}=\frac{{[(n/b)!]}^{b}}{\prod _{ij}\left({[({n}_{i}^{(j)}/b)!]}^{b}{(j!)}^{{n}_{i}^{(j)}}\right)}.$$
(8)

By defining the concentration of particles as \(c=n/b\), the entropy per particle becomes

$${\mathcal{S}}(\wp )=-\sum _{ij}{\wp }_{i}^{(j)}\left({\mathrm{log}}\,{\wp }_{i}^{(j)}-1\right)\\ -\sum _{ij}{\wp }_{i}^{(j)}{\mathrm{log}}\,\left(\frac{j!}{c^{j-1}}\right),$$
(9)

or, respectively,

$${\mathcal{S}}(P)=-\sum _{ij}\frac{{p}_{i}^{(j)}}{j}\left({\mathrm{log}}\,\frac{{p}_{i}^{(j)}}{j}-1\right)\\ -\sum _{ij}\frac{{p}_{i}^{(j)}}{j}{\mathrm{log}}\,\left(\frac{j!}{c^{j-1}}\right).$$
(10)

Note that the entropy of structure-forming systems is both additive and extensive in the sense of Lieb and Yngvason37. It is also concave, ensuring the uniqueness of the maximum entropy principle. For more details and connections to axiomatic frameworks, see Supplementary Discussion.

Equilibrium thermodynamics of structure-forming systems

We now focus on the equilibrium thermodynamics obtained, for example, by considering the maximum entropy principle. Consider the internal energy

$$U({n}_{i}^{(j)})=\sum _{ij}{n}_{i}^{(j)}{\epsilon }_{i}^{(j)}=n\sum _{ij}{\wp }_{i}^{(j)}{\epsilon }_{i}^{(j)}=n\ {\mathcal{U}}({\wp }_{i}^{(j)}).$$
(11)

Using Lagrange multipliers to maximize the functional

$${\mathcal{S}}(\wp )-\alpha \left(\sum _{ij}j{\wp }_{i}^{(j)}-1\right)-\beta \left(\sum _{ij}{\wp }_{i}^{(j)}{\epsilon }_{i}^{(j)}-{\mathcal{U}}\right),$$
(12)

leads to the following:

$$-{\mathrm{log}}\,{\hat{\wp }}_{i}^{(j)}-{\mathrm{log}}\,\left(\frac{j!}{{c}^{j-1}}\right)-\alpha j-\beta {\epsilon }_{i}^{(j)}=0,$$
(13)

and the resulting distribution is

$${\hat{\wp }}_{i}^{(j)}=\frac{{c}^{j-1}}{j!}\exp \left(-j\alpha -\beta {\epsilon }_{i}^{(j)}\right).$$
(14)

Here, we introduce the partial partition functions, \({{\mathcal{Z}}}_{j}=\frac{{c}^{j-1}}{j!}\sum _{i}{e}^{-\beta {\epsilon }_{i}^{(j)}}\), and the quantity \({{\Lambda }}={e}^{-\alpha }\). Λ is obtained from

$$\sum _{ij}j{\hat{\wp }}_{i}^{(j)}=\mathop{\sum }\limits_{j=1}^{m}j\ {{\mathcal{Z}}}_{j}\ {{{\Lambda }}}^{j}=1,$$
(15)

which is a polynomial equation of order m in Λ. The connection with thermodynamics follows through Eq. (13). By multiplying with \({\hat{\wp }}_{i}^{(j)}\) and summing over i,j, we get \({\mathcal{S}}(\wp )-\sum _{ij}{\hat{\wp }}_{i}^{(j)}-\alpha -\beta \ {\mathcal{U}}=0\). Note that \(\sum _{ij}{\hat{\wp }}_{i}^{(j)}=\sum _{ij}{\hat{n}}_{i}^{(j)}/n=M/n={\mathcal{M}}\) is the number of clusters, divided by the number of particles in the system. The number of clusters per particle is

$${\mathcal{M}}=\sum _{ij}{\hat{\wp }}_{i}^{(j)}=\sum _{j}{{\mathcal{Z}}}_{j}\ {{{\Lambda }}}^{j}.$$
(16)

The Helmholtz free energy is thus obtained as

$${\mathcal{F}}={\mathcal{U}}-\frac{1}{\beta }{\mathcal{S}}=-\frac{\alpha }{\beta }-\frac{1}{\beta }{\mathcal{M}}.$$
(17)

Finally, we can write the total partition function as

$${\mathcal{Z}}=\exp (-\beta {\mathcal{F}})=\frac{1}{{{\Lambda }}}\mathop{\prod }\limits_{j=1}^{m}\exp ({{{\Lambda }}}^{j}{{\mathcal{Z}}}_{j}).$$
(18)
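As a minimal worked example of Eqs. (13)–(18), consider the dimerization 2X ⇌ X2 with free particles at energy ϵ and dimers at energy zero (the parameter values are ours, purely illustrative). Eq. (15) then reduces to a quadratic in Λ:

```python
import numpy as np

# Dimer toy model 2X <-> X2: free particles (j=1) at energy eps, dimers (j=2)
# at energy 0; beta, eps, c are illustrative values, not taken from the text.
beta, eps, c = 1.0, 1.0, 0.5
Z1 = np.exp(-beta * eps)              # partial partition function, j = 1
Z2 = c / 2.0                          # c^(j-1)/j! * e^0 for j = 2

# Eq. (15): 1*Z1*Lam + 2*Z2*Lam^2 = 1  ->  take the positive root
Lam = (-Z1 + np.sqrt(Z1**2 + 8.0 * Z2)) / (4.0 * Z2)

wp1, wp2 = Z1 * Lam, Z2 * Lam**2      # equilibrium distribution, Eq. (14)
assert np.isclose(wp1 + 2 * wp2, 1.0) # normalization, Eq. (15)
M = wp1 + wp2                         # clusters per particle, Eq. (16)
U = wp1 * eps                         # internal energy per particle
alpha = -np.log(Lam)                  # Eqs. (14)-(15) require Lam = exp(-alpha)
F = -alpha / beta - M / beta          # Helmholtz free energy, Eq. (17)
# Consistency with Eq. (18): Z = exp(-beta F) = (1/Lam) * prod_j exp(Lam^j Z_j)
assert np.isclose(np.exp(-beta * F), (1 / Lam) * np.exp(Z1 * Lam + Z2 * Lam**2))
```

The two assertions confirm that the root of Eq. (15) normalizes the distribution and that Eqs. (17) and (18) agree.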

Comparison with the grand-canonical ensemble

To compare the presented exact approach with the grand-canonical ensemble, consider the simple chemical reaction 2X ⇌ X2. Without loss of generality, assume that free particles carry some energy, ϵ. We calculate the Helmholtz free energy for both approaches in Supplementary Information. In Fig. 1, we show the corresponding specific heat, \(c(T)=-T\frac{{\partial }^{2}{\mathcal{F}}}{\partial {T}^{2}}\). For large systems, the usual grand-canonical approach and the exact calculation with a strictly conserved number of particles converge. For small systems, however, notable differences appear. This is visible in Fig. 1: only for large n and low concentrations, \(c\), do the specific heat of the exact approach (squares) and of the grand-canonical ensemble (triangles) coincide. The inset shows the relative difference of the specific heats, cC/cGC − 1, which vanishes for large n. For large systems, the exact approach and the grand-canonical ensemble are equivalent.

Fig. 1: Specific heat, c(T), for the reaction 2X ⇌ X2 for the presented canonical approach with an exact number of particles in comparison to the grand-canonical ensemble.
figure 1

The specific heat for the canonical ensemble (C) is drawn with squares, and the specific heat for the grand-canonical ensemble (GC) with triangles; n denotes the number of particles. For small systems, the difference between the approaches becomes apparent. The inset shows the relative difference between the specific heat calculated from the exact approach and the one obtained from the grand-canonical ensemble, cC/cGC − 1. For large n, this quantity decays to zero for any temperature.

Relation to the theory of self-assembly

In many applications, the number of energetic configurations for each cluster size is so large that one is only interested in the distribution of cluster sizes. In this case, one can formulate an effective theory that aggregates the contributions from all configurations, known as the theory of self-assembly. For an overview, see Likos et al.27.

To compute the free energy in terms of the cluster-size distribution, we define the latter as

$${\hat{\wp }}^{(j)}=\sum _{i}{\hat{\wp }}_{i}^{(j)}={{{\Lambda }}}^{j}{{\mathcal{Z}}}_{j}.$$
(19)

This is the distribution obtained from the free energy of an ideal gas of clusters, as discussed in Fantoni et al.32 for the case of Janus particles and in Vissers et al.38 for the more general case of one-patch colloids. The entropy of the relative cluster size can be introduced as

$${{\mathcal{S}}}_{c}(\wp )=-\mathop{\sum }\limits_{j=1}^{m}{\wp }^{(j)}\left({\mathrm{log}}\,{\wp }^{(j)}-1\right).$$
(20)

By introducing the partial free energy as

$${{{\Phi }}}_{j}=-\frac{1}{\beta }{\mathrm{log}}\,{{\mathcal{Z}}}_{j},$$
(21)

the energy constraint takes the form of the expected free energy, averaged over cluster size, \({{\Phi }}=\mathop{\sum }\nolimits_{j = 1}^{m}{\wp }^{(j)}{{{\Phi }}}_{j}\). The cluster-size distribution is obtained by maximization of the functional

$${{\mathcal{S}}}_{c}(\wp )-{\alpha }_{c}\left(\mathop{\sum }\limits_{j=1}^{m}j{\wp }^{(j)}-1\right)-\beta \left(\mathop{\sum }\limits_{j=1}^{m}{\wp }^{(j)}{{{\Phi }}}_{j}-{{\Phi }}\right).$$
(22)

It is clear that Eq. (19) is the solution of the maximization. The free energy can now be expressed as

$${{\mathcal{F}}}_{c}={{\Phi }}-\frac{1}{\beta }{{\mathcal{S}}}_{c}=-\frac{{\alpha }_{c}}{\beta }-\frac{{\mathcal{M}}}{\beta },$$
(23)

which has the same structure as when calculated in terms of \({\wp }_{i}^{(j)}\).

Examples for thermodynamics of structure-forming systems

We now apply the results obtained in the previous section to several examples of structure-forming systems. We particularly focus on how the presence of mesoscopic clustered structures gives rise to macroscopic physical properties. In the presence of structure formation, there exists a phase transition between a fluid phase of free particles and a condensed phase containing clusters of particles. This phase transition is demonstrated in two examples.

The first example, from soft-matter self-assembly, describes the condensation of one-patch amphiphilic colloidal particles, which is relevant for applications in nanomaterials and biophysics. The second example covers the phase transition of the Curie–Weiss spin model in the situation where particles form molecules. In Supplementary Information, we discuss additional examples of a magnetic gas and a size-dependent chemical potential.

Kern–Frenkel model of patchy particles

Recently, the theory of soft-matter self-assembly has successfully predicted the creation of various structures of colloidal particles, including clusters of Janus particles32, the polymerization of colloids38, and the crystallization of multipatch colloidal particles39. Kern and Frenkel40 introduced a simple model to describe the self-assembly of amphiphilic particles with two-particle interactions. Here, rij denotes the unit vector connecting the centers of particles i and j, rij the corresponding distance, and ni and nj are unit vectors encoding the orientations of the patchy spheres. The Kern–Frenkel potential is defined as

$${U}_{ij}^{KF}=u({r}_{ij}){{\Omega }}({{\bf{r}}}_{ij},{{\bf{n}}}_{i},{{\bf{n}}}_{j}),$$
(24)

where

$$u({r}_{ij})=\left\{\begin{array}{ll}\infty ,&{r}_{ij}\le \sigma \hfill \\ -\epsilon ,&\sigma \, <\, {r}_{ij}<\sigma +{{\Delta }}\\ 0,&{r}_{ij}\, > \, \sigma +{{\Delta }}.\hfill \end{array}\right.$$

and

$${{\Omega }}({{\bf{r}}}_{ij},{{\bf{n}}}_{i},{{\bf{n}}}_{j})=\left\{\begin{array}{ll}\ 1&{\rm{if}}\left\{\begin{array}{ll}{{\bf{r}}}_{ij}\cdot {{\bf{n}}}_{i}> \cos \theta &{\rm{and}}\\ {{\bf{r}}}_{ij}\cdot {{\bf{n}}}_{j}> \cos \theta &\end{array}\right.\\ \,\,0,&{\rm{otherwise}}.\hfill \end{array}\right.$$

The characteristic quantity, \(\chi ={\sin }^{2}(\theta /2)\), is the particle coverage. In the theory of self-assembly, the cluster-size distribution is determined by the partial partition functions, Eq. (19). Due to the enormous number of possible configurations, it is impossible to calculate \({{\mathcal{Z}}}_{j}\) analytically, and simulation methods were introduced, including a grand-canonical Monte Carlo method and successive umbrella sampling; for a review, see Rovigatti et al.41. Instead of calculating the exact value of \({{\mathcal{Z}}}_{j}\), we use a stylized model based on Fantoni et al.32. There, the partial partition function is parameterized as \(\frac{\mathrm{log}\,{{\mathcal{Z}}}_{j}}{j\epsilon }=b\tanh (aj)\), where b < 0 and a > 0 are the model parameters. While for small cluster sizes the free energy per particle decreases linearly with the size, for larger clusters it saturates at b. To calculate the average cluster size, Eq. (16), one has to solve the equation for Λ, Eq. (15). In Fig. 2, we show the phase diagram of the patchy particles for b = −3, a = 25, and n = 100. The average number of clusters, M, plays the role of the order parameter. In the phase diagram, one can clearly distinguish three phases. At high temperatures, we observe the liquid phase, where most particles are not bound to others. At low temperatures, we have a condensed phase with macroscopic clusters. The two phases are separated by a coexistence phase, where both large clusters and unbound particles are present. The coexistence phase (gray region) is characterized by a bimodal distribution that can be recognized by calculating the bimodality coefficient42. The results presented in Fig. 2 qualitatively correspond to those obtained in Fantoni et al.32 for the case of Janus particles with χ = 0.5.
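A sketch of this computation is given below. The stylized form \(\mathrm{log}\,{{\mathcal{Z}}}_{j}/(j\epsilon )=b\tanh (aj)\) leaves open where the inverse temperature β, the sign convention for b, and the combinatorial prefactor \({c}^{j-1}/j!\) enter; the placement below is our assumption, so the sketch illustrates the numerical solution of Eq. (15) rather than reproducing Fig. 2:

```python
import numpy as np

# Stylized partial partition functions for patchy-particle clusters.
# ASSUMPTION (ours): log Z_j = (j-1) log c - log j! - j*beta*eps*b*tanh(a*j),
# with b < 0 so that binding lowers the cluster free energy.
def log_partial_Z(m, beta, eps, c, a=25.0, b=-3.0):
    j = np.arange(1, m + 1)
    log_jfact = np.cumsum(np.log(j))                    # log j!
    return j, (j - 1) * np.log(c) - log_jfact - j * beta * eps * b * np.tanh(a * j)

def solve_Lambda(j, logZ, lo=1e-300, hi=1e3):
    """Geometric bisection on g(L) = sum_j j Z_j L^j - 1, increasing in L."""
    def g(L):
        expo = np.minimum(logZ + j * np.log(L), 700.0)  # avoid overflow
        return np.sum(j * np.exp(expo)) - 1.0
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return np.sqrt(lo * hi)

j, logZ = log_partial_Z(m=100, beta=1.0, eps=1.0, c=0.1)
Lam = solve_Lambda(j, logZ)                # root of Eq. (15)
wp = np.exp(logZ + j * np.log(Lam))        # cluster-size distribution, Eq. (19)
assert np.isclose(np.sum(j * wp), 1.0)     # normalization
M = np.sum(wp)                             # clusters per particle, Eq. (16)
print(M, 1.0 / M)                          # 1/M is the mean cluster size
```

Geometric bisection is used because Λ can range over many orders of magnitude as the temperature and concentration are varied.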

Fig. 2: Phase diagram for the self-assembly of patchy particles for n = 100 particles.
figure 2

The average cluster size (M) is shown as a function of temperature (T) and concentration (c). The cluster size is indicated by color and ranges from M = 0 (purple) to M = 100 (red). We observe three phases: the liquid and the condensed phase are separated by a coexistence phase (gray area). Coexistence is characterized by a bimodal distribution that can be detected through a shift in the bimodality coefficient.

Curie–Weiss model with molecule formation

To discuss an example of a spin system with molecule states, consider the fully connected Ising model43,44,45,46 with a Hamiltonian that allows for possible molecule states

$$H({\sigma }_{i})=-\frac{J}{n-1}\sum _{i\ne j,\ \ \,\text{free}\,}{\sigma }_{i}{\sigma }_{j}-h\sum _{j,\ \ \,\text{free}\,}{\sigma }_{j}\ .$$
(25)

Molecule states feel neither the spin–spin interaction nor the external magnetic field, h. Therefore, the sums extend only over free particles. In a mean-field approximation, we use the magnetization, \(m=\frac{1}{n-1}{\sum }_{i\ne j}{\sigma }_{i}\), and express the Hamiltonian as HMF(σi) = −(Jm + h)∑j,freeσj. The self-consistency equation \(m=-\frac{\partial F}{\partial h}{| }_{h = 0}\) leads to an equation for m that is solved numerically (Supplementary Information); the solution is shown in Fig. 3. Contrary to the mean-field approximation of the usual fully connected Ising model (without molecule states), the phase transition is no longer second-order but becomes first-order: there exists a bifurcation region where both the m = 0 and the m > 0 solutions are stable. The second-order transition is recovered for small systems, n→0. The critical temperature is shifted toward zero for increasing n. We performed Monte Carlo simulations to check the result of the mean-field approximation; see Supplementary Information.

Fig. 3: Magnetization of the fully connected Ising model with molecule states for n = 50 and n = 200 particles, for a spin–spin coupling constant, J = 1.
figure 3

Results of the mean-field approximation (solid lines) are in good agreement with Monte Carlo simulations (symbols). Error bars show the standard deviation of the average value obtained from 1000 independent runs of the simulations (see Supplementary Information for more details). The inset shows the well-known result for the fully connected Ising model without molecule states. Without molecule formation, we observe the usual second-order transition. With molecules, the critical temperature decreases with the number of particles and the phase transition becomes first-order.

Stochastic thermodynamics of structure-forming systems

Consider an arbitrary nonequilibrium state given by \({\wp }_{i}^{(j)}\equiv {\wp }_{i}^{(j)}(t)\), and imagine that the evolution of the probability distribution is defined by a first-order Markovian linear master equation, as is usually assumed in stochastic thermodynamics47,48

$${\dot{\wp }}_{i}^{(j)}=\sum _{kl}{w}_{ik}^{jl}{\wp }_{k}^{(l)}=\sum _{kl}\left({w}_{ik}^{jl}{\wp }_{k}^{(l)}-{w}_{ki}^{lj}{\wp }_{i}^{(j)}\right).$$
(26)

\({w}_{ik}^{jl}\) are the transition rates. Note that probability normalization leads to \({\sum }_{ij}j{\dot{\wp }}_{i}^{(j)}=0\). Given that detailed balance holds, \({w}_{ik}^{jl}{\hat{\wp }}_{k}^{(l)}={w}_{ki}^{lj}{\hat{\wp }}_{i}^{(j)}\), the underlying stationary distribution, obtained from \({\dot{\wp }}_{i}^{(j)}=0\), coincides with the equilibrium distribution Eq. (14). From this we get

$$\frac{{w}_{ik}^{jl}}{{w}_{ki}^{lj}}=\frac{l!}{j!}{c}^{j-l}\exp \left[\alpha (l-j)+\beta \left({\epsilon }_{k}^{(l)}-{\epsilon }_{i}^{(j)}\right)\right].$$
(27)

The time derivative of the entropy per particle is

$$\frac{{\mathrm{d}}{\mathcal{S}}}{{\mathrm{d}}t}=-\sum _{ij}{\dot{\wp }}_{i}^{(j)}{\mathrm{log}}\,{\wp }_{i}^{(j)}-\sum _{ij}{\dot{\wp }}_{i}^{(j)}{\mathrm{log}}\,\left(\frac{j!}{{c}^{j-1}}\right).$$
(28)

Using the master Eq. (26) and some straightforward calculations, we end up with the usual second law of thermodynamics

$$\frac{{\rm{d}}{\mathcal{S}}}{{\rm{d}}t}={\dot{{\mathcal{S}}}}_{i}+\beta \dot{{\mathcal{Q}}},$$
(29)

where \(\dot{{\mathcal{Q}}}\) is the heat flow per particle and \({\dot{{\mathcal{S}}}}_{i}\) is the nonnegative entropy production per particle, see Supplementary Information.
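To make Eqs. (26)–(29) concrete, the sketch below relaxes the dimerization 2X ⇌ X2 toward equilibrium (a minimal two-state scheme of our own; the stoichiometric factor 2 in the update keeps \({\sum }_{ij}j{\wp }_{i}^{(j)}\) conserved, a constraint the rates in Eq. (26) must respect) and checks that the free energy never increases:

```python
import numpy as np

# Two states: a free particle (j=1, energy eps) and a dimer (j=2, energy 0);
# parameter values are illustrative.
beta, eps, c = 1.0, 1.0, 0.5
Z1, Z2 = np.exp(-beta * eps), c / 2.0              # partial partition functions
Lam = (-Z1 + np.sqrt(Z1**2 + 8 * Z2)) / (4 * Z2)   # positive root of Eq. (15)
wp_eq = np.array([Z1 * Lam, Z2 * Lam**2])          # equilibrium state, Eq. (14)

w21 = 1.0                                # rate free -> dimer (arbitrary scale)
w12 = w21 * wp_eq[0] / wp_eq[1]          # fixed by detailed balance

def entropy(wp):
    """Entropy per particle, Eq. (9), for the two states j = 1, 2."""
    comb = np.log(np.array([1.0, 2.0 / c]))        # log(j!/c^(j-1))
    return float(np.sum(-wp * (np.log(wp) - 1.0) - wp * comb))

wp = np.array([0.9, 0.05])               # start away from equilibrium
dt, F_prev = 1e-3, np.inf
for _ in range(20000):
    r = w21 * wp[0] - w12 * wp[1]        # net dimerization flux
    wp = wp + dt * np.array([-2.0 * r, r])   # conserves wp[0] + 2*wp[1]
    F = wp[0] * eps - entropy(wp) / beta     # free energy per particle
    assert F <= F_prev + 1e-12           # second law: F never increases
    F_prev = F
assert np.allclose(wp, wp_eq, atol=1e-6) # relaxes to Eq. (14)
```

The monotone decrease of the free energy is the deterministic counterpart of the nonnegative entropy production \({\dot{{\mathcal{S}}}}_{i}\) in Eq. (29).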

Let us now consider a stochastic trajectory, x(τ) = (i(τ), j(τ)), denoting that at time τ the particle is in state \({x}_{i(\tau )}^{(j(\tau ))}\). We introduce a time-dependent protocol, l(τ), that controls the energy spectrum of the system. The stochastic energy for trajectory x(τ) and protocol l(τ) can be expressed as \(\epsilon (\tau )\equiv {\epsilon }_{i(\tau )}^{(j(\tau ))}(l(\tau ))\). We assume microreversibility, from which it follows that detailed balance is valid even when the energy spectrum is time-dependent (due to the protocol l(τ)). We define the stochastic entropy as

$$s({\bf{x}}(\tau ))=-\left({\mathrm{log}}\,{\wp }_{i(\tau )}^{(j(\tau ))}(\tau )-1\right)-{\mathrm{log}}\,\left(\frac{j(\tau )!}{{c}^{j(\tau )-1}}\right).$$
(30)

In Supplementary Information, we show that \(\dot{s}({\bf{x}}(\tau ))={\dot{s}}_{i}({\bf{x}}(\tau ))+{\dot{s}}_{e}({\bf{x}}(\tau ))\), where \({\dot{s}}_{i}\) is the stochastic entropy production rate and \({\dot{s}}_{e}\) is the entropy flow, equal to \(\dot{q}/T\), with \(\dot{q}\) being the heat flow.

The time-reversed trajectory is \(\tilde{{\bf{x}}}(\tau )=(i(T-\tau ),j(T-\tau ))\), and the time-reversed protocol is \(\tilde{l}(\tau )=l(T-\tau )\). The log-ratio of the probability, \({\mathcal{P}}\), of a forward trajectory and the probability, \(\tilde{{\mathcal{P}}}\), of the time-reversed trajectory under the time-reversed protocol is equal to \({{\Delta }}\sigma ={{\Delta }}{s}_{i}+{\mathrm{log}}\,\frac{{j}_{0}}{{\tilde{j}}_{0}}\), where j0 = j(τ = 0) and \({\widetilde{j}}_{0}=\widetilde{j}(\tau =0)\), see Supplementary Information. Hence, \({\mathrm{log}}\,\frac{{\mathcal{P}}({\bf{x}}(\tau ))}{\tilde{{\mathcal{P}}}(\tilde{{\bf{x}}}(\tau ))}={{\Delta }}\sigma\), which leads to the fluctuation theorem49

$${\mathrm{log}}\,\frac{P({{\Delta }}\sigma )}{\tilde{P}(-{{\Delta }}\sigma )}={{\Delta }}\sigma .$$
(31)

Assuming that the initial state is an equilibrium state, introducing the stochastic free energy, f(τ) = ϵ(τ) − Ts(τ), and combining the first and the second law of thermodynamics, we get Δsi = β(w − Δf). The stochastic free energy of an equilibrium state is \(f({\hat{\wp }}_{i}^{(j)})=-j\frac{\alpha }{\beta }-\frac{1}{\beta }\), see Supplementary Information.

If we start in an equilibrium distribution with j(τ = 0) = j0 and the reverse experiment also starts in an equilibrium distribution with \(\tilde{j}(\tau =0)={\tilde{j}}_{0}\), then plugging this into Eq. (31) and a simple manipulation yield

$$\frac{{\mathcal{P}}({\bf{x}}(\tau )| {j}_{0})}{\tilde{{\mathcal{P}}}(\tilde{{\bf{x}}}(\tau )| {\tilde{j}}_{0})}=\exp \left(\beta w-\beta \left[{{{\Phi }}}_{{\tilde{j}}_{0}}(\tilde{l}(0))-{{{\Phi }}}_{{j}_{0}}(l(0))\right]\right),$$
(32)

where Φj is the partial free energy Eq. (21). Finally, by a straightforward calculation, we obtain Crooks’ fluctuation theorem49,50

$$\frac{P(w| {j}_{0})}{\tilde{P}(-w| {\tilde{j}}_{0})}=\exp (\beta (w-{{\Delta }}{{{\Phi }}}_{j}))$$
(33)

where \({{\Delta }}{{{\Phi }}}_{j}={{{\Phi }}}_{{\tilde{j}}_{0}}(\tilde{l}(0))-{{{\Phi }}}_{{j}_{0}}(l(0))\). For technical details, see Supplementary Information.

Discussion

We presented a straightforward way to establish the thermodynamics of structure-forming systems (e.g., molecules made from atoms or clusters of colloidal particles), based on the canonical ensemble with a modified entropy that is obtained by properly counting the system’s configurations. The approach is an alternative to the grand-canonical ensemble and yields identical results for large systems. However, there are significant deviations that might have important consequences for small systems, where the interaction range becomes comparable to the system size. Note that our results are valid both for large systems (in the thermodynamic limit) and for small systems at the nanoscale. We showed that fundamental relations such as the second law of thermodynamics and the fluctuation theorems remain valid for structure-forming systems. In addition, we demonstrated that the choice of a proper entropic functional has profound physical consequences: it determines, for example, the order of phase transitions in spin models.

We note that we follow a similar line of reasoning as was used for Shannon’s entropy: originally, it was derived by Gibbs in the thermodynamic limit using a frequentist approach to statistics (probabilities given by relative frequencies in a large number of repetitions). However, once the formula for the entropy had been derived, its validity was extended beyond the thermodynamic limit, which corresponds to the Bayesian approach. It has been shown, e.g., by methods of stochastic thermodynamics, that the formula for Shannon’s entropy and the laws of thermodynamics remain valid for systems of arbitrary size (with the exception of systems with quantum corrections) and arbitrarily far from equilibrium47. In this paper, we follow the same type of reasoning for the case of structure-forming systems.

Typical examples where our results apply are chemical reactions at small scales, the self-assembly of colloidal particles, active matter, and nanoparticles. The presented results might also be of direct use for chemical nanomotors51 and nonequilibrium self-assembly35. A natural question is how the framework can be extended to the well-known statistical physics of chemical reactions23,24,25,26 where systems are composed of more than one type of atom.