'Materials by design' has arguably been the dominant refrain among the materials community since the mid-twentieth century, when, for example, the first understanding of structure–function relationships in synthetic polymers began to emerge. And it has indeed become possible to design materials from first principles: to predict crystal structures and mechanical or electronic properties, direct self-assembly, engineer defects and relate atomic-scale to bulk behaviour.

At least, after a fashion. Such drawing-board methods tend to be bespoke and selective: they are tailored to particular classes of material rather than being generic, perhaps resting more on empirical training of algorithms than on a deep understanding of how composition and processing methods can guarantee particular outcomes. The challenge is an example of an inverse problem: given a specified set of outputs, can one deduce the required inputs?


Miskin et al. (Proc. Natl Acad. Sci. USA 113, 34–39; 2016) have developed a new formalism for such macro-to-micro design. They treat it as a problem in statistical physics, the aim being to calculate the microstates that give certain statistical bulk properties. Traditional optimization procedures, such as a Monte Carlo simulated-annealing search in the parameter space that characterizes a material, don't take any explicit account of the microstates themselves. Rather, one simply considers a bunch of parameters that describe the material and tries to find the parameter values that optimize some cost function relating to the target property. This can result in a lengthy random walk in parameter space: if every guess is initially as 'bad' as any other, there is nothing to guide the search.
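For comparison, here is a minimal sketch of that blind approach. The cost function, parameters and cooling schedule are arbitrary placeholders, not anything specific to Miskin et al. or to a particular material.

```python
import math
import random

def simulated_annealing(cost, params, steps=10_000, step_size=0.1, t0=1.0):
    """Blind simulated-annealing search: propose a random move in parameter
    space and keep it if the cost improves, or with Boltzmann probability
    if it does not. Nothing about the material's microstates guides the moves."""
    current = list(params)
    current_cost = cost(current)
    for k in range(steps):
        temperature = t0 * (1.0 - k / steps) + 1e-9   # simple linear cooling schedule
        trial = [p + random.gauss(0.0, step_size) for p in current]
        trial_cost = cost(trial)
        delta = trial_cost - current_cost
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            current, current_cost = trial, trial_cost
    return current, current_cost

# Purely illustrative target: minimize a quadratic 'cost' over three parameters.
best_params, best_cost = simulated_annealing(
    lambda p: sum((x - 2.0) ** 2 for x in p), params=[0.0, 0.0, 0.0])
print(best_params, best_cost)
```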

In contrast, Miskin et al. relate the values of the design parameters to the underlying configurations of the components. This provides additional knowledge that can be exploited in the search: specifically, information about how fluctuations away from ensemble averages alter the quality of the optimization. This information, available 'for free' in the fluctuations, can then be put to work to speed up convergence to the solution. To put it metaphorically, it's somewhat like a walker seeking to descend from high ground: rather than leaping blindly from spot to spot and seeing if she has got any lower, she can make little exploratory forays around each location to look for local downward gradients.
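To make the idea concrete, one standard fluctuation relation of this kind runs as follows; it is offered as an illustration of the principle rather than as the authors' exact formalism. For configurations x drawn from a Boltzmann distribution $p(x\,|\,\theta) \propto e^{-\beta E(x;\,\theta)}$, where $\theta$ is a design parameter and the figure of merit $f$ depends only on the configuration, the derivative of the ensemble average $\langle f \rangle$ with respect to $\theta$ can be read off from correlations already present in the sampled fluctuations:

$$\frac{\partial \langle f \rangle}{\partial \theta} \;=\; -\beta\left[\left\langle f\,\frac{\partial E}{\partial \theta}\right\rangle - \langle f\rangle\left\langle \frac{\partial E}{\partial \theta}\right\rangle\right] \;=\; -\beta\,\mathrm{Cov}\!\left(f,\ \frac{\partial E}{\partial \theta}\right).$$

The same samples that estimate $\langle f \rangle$ therefore also indicate which way, and how steeply, the design parameter should be adjusted — exactly the gradient-like guidance the blind search lacks.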

Miskin et al. show that their algorithm progresses towards an optimum more quickly and systematically than standard optimization techniques for two test cases: altering the coupling constants in a two-dimensional magnetic (Ising) model to maximize the magnetization, and trapping a thermal particle in a particular well of a sinusoidal energy landscape. Emboldened by that success, they set their routine more demanding tasks, to which it proved equal. One was to design the monomer interaction strengths of a hexameric polymer chain so that it would fold into an octahedral cluster. Another, directly related to a real-world materials target, was to tune the flexibility and interactions of block copolymers so that they self-assemble into defect-free striped or lamellar microphases.
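As a toy reconstruction of how fluctuation information can drive the first of those test cases (emphatically not the authors' code), the sketch below adjusts a single uniform Ising coupling J, rather than the full set of couplings tuned in the paper, by gradient ascent on the mean squared magnetization, with the gradient estimated from the covariance relation above; the lattice size, temperature and step sizes are arbitrary choices.

```python
import math
import random

def metropolis_samples(J, L=6, beta=1.0, sweeps=600, burn_in=200):
    """Sample an L x L Ising model with uniform coupling J (periodic boundaries).
    Returns (magnetization per spin, sum of nearest-neighbour bond products)
    for each retained sweep."""
    spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    samples = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * J * spins[i][j] * nn        # energy change of flipping spin (i, j)
            if dE <= 0 or random.random() < math.exp(-beta * dE):
                spins[i][j] *= -1
        if sweep >= burn_in:
            m = sum(map(sum, spins)) / (L * L)
            bonds = sum(spins[i][j] * (spins[(i + 1) % L][j] + spins[i][(j + 1) % L])
                        for i in range(L) for j in range(L))
            samples.append((m, bonds))
    return samples

def fluctuation_gradient(samples, beta=1.0):
    """Estimate d<m^2>/dJ from the sampled fluctuations: with E = -J * bonds,
    dE/dJ = -bonds, so the covariance identity gives +beta * Cov(m^2, bonds)."""
    f = [m * m for m, _ in samples]
    g = [b for _, b in samples]
    n = len(samples)
    mean_f, mean_g = sum(f) / n, sum(g) / n
    cov = sum((fi - mean_f) * (gi - mean_g) for fi, gi in zip(f, g)) / n
    return beta * cov

J = 0.1
for step in range(20):   # gradient ascent on <m^2> with respect to the coupling J
    J += 0.01 * fluctuation_gradient(metropolis_samples(J))
    print(f"step {step:2d}  J = {J:.3f}")
```

Each outer step reuses the fluctuations from a single equilibrium run to decide how to nudge the coupling, rather than guessing blindly and re-simulating to see whether the guess helped.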

Because the configurations sampled don't have to be at thermodynamic equilibrium, the method can handle dynamical processes. It can also examine the sensitivity of component configurations to, say, temperature changes or applied fields, and so could in principle be used to optimize processing methods as well as ingredients. It could make 'design' less of an art and more of a science.