Predicting environmental conditions takes up a tiny proportion of an intelligent car's processing power, but makes it much more efficient. Credit: G. HALING/SPL

The most efficient machines remember what has happened to them, and use that memory to predict what the future holds. That is the conclusion of a theoretical study1 by Susanne Still, a computer scientist at the University of Hawaii at Manoa, and her colleagues, and it should apply equally to ‘machines’ ranging from molecular enzymes to computers. The finding could help to improve scientific models such as those used to study climate change.

“The idea that predictive capacity can be quantitatively connected to thermodynamic efficiency is particularly striking,” says Christopher Jarzynski, who studies statistical mechanics at the University of Maryland in College Park.
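In the paper's framework (notation paraphrased here; take this as a summary sketch rather than a derivation), the machine's state s_t holds some amount of memory about the current environmental signal x_t, and only part of that memory remains relevant to the next signal x_{t+1}. The central bound ties the useless remainder to wasted work:

```latex
% Sketch of the central result, in paraphrased notation.
% s_t: machine state; x_t: environmental signal; I[.;.]: mutual information.
I_{\mathrm{mem}} = I[s_t ; x_t], \qquad
I_{\mathrm{pred}} = I[s_t ; x_{t+1}], \qquad
\beta \,\langle W_{\mathrm{diss}} \rangle \;\ge\; I_{\mathrm{mem}} - I_{\mathrm{pred}},
```

with β = 1/(k_B·T). In words: any memory that does not help to predict the next input must eventually be paid for as dissipated work, so a maximally efficient machine keeps only the predictive part.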

It might seem perfectly natural for a computer simulation of the weather, say, to construct a model of its environment and use it for prediction. But it seems peculiar to think of a biomolecule such as a motor protein doing the same thing.

Yet that is just what it does, say Still and her colleagues. A molecular motor works by undergoing changes in the conformation of the proteins that make it up, and “the conformation it is in now is correlated with what states the environment passed through previously”, says Gavin Crooks, a biophysicist at the Lawrence Berkeley National Laboratory in Berkeley, California, and a co-author of the study, which was published last month in Physical Review Letters.

For example, a protein might be switched into its active state by binding a metal ion, which in a sense records that such ions are in the vicinity. In this way, the state of the molecule at any instant embodies a memory of its past.
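To make that idea concrete, here is a minimal toy simulation, not the paper's model; the two-state ‘molecule’, the ion environment and all the rates are invented for illustration. The molecule's occupancy ends up statistically correlated with where the environment was several steps earlier, which is exactly the sense in which its state embodies a memory of the past.

```python
# Toy sketch (not the paper's model): a two-state 'molecule' in a fluctuating
# ion environment. All names and rates here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
T = 200_000                     # time steps
p_stay = 0.9                    # environment persistence (ion present/absent)
p_bind, p_release = 0.7, 0.5    # hypothetical binding/release probabilities

env = np.empty(T, dtype=int)    # 1 = ion in the vicinity, 0 = not
mol = np.empty(T, dtype=int)    # 1 = ion bound (active state), 0 = unbound
env[0], mol[0] = 0, 0
for t in range(1, T):
    # Environment: a 'sticky' binary Markov chain.
    env[t] = env[t - 1] if rng.random() < p_stay else 1 - env[t - 1]
    # Molecule: tends to bind when an ion is around, release when it is not.
    if env[t] == 1:
        mol[t] = 1 if (mol[t - 1] == 1 or rng.random() < p_bind) else 0
    else:
        mol[t] = 0 if (mol[t - 1] == 0 or rng.random() < p_release) else 1

def mutual_information(a, b):
    """Plug-in estimate of I(a;b) in bits for two binary sequences."""
    joint = np.histogram2d(a, b, bins=2)[0] / len(a)
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])).sum())

# The molecule's current state carries information about where the
# environment *was*: a physical memory, fading with the lag.
for lag in (1, 5, 20):
    print(f"I(mol_t ; env_(t-{lag})) = "
          f"{mutual_information(mol[lag:], env[:-lag]):.3f} bits")
```

Run it and the mutual information is clearly positive at short lags and decays as the lag grows: the molecule remembers the recent past, not the distant past.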

Forewarned and forearmed

Information that provides clues about the future state of the environment is useful, because it enables the machine to ‘prepare’ — to adapt to future circumstances, and thus to work as efficiently as possible. “My thinking is inspired by dance, and sports in general, where if I want to move more efficiently then I need to predict well,” says Still.

Alternatively, think of a vehicle fitted with a smart driver-assistance system that uses sensors to anticipate the conditions just ahead and react accordingly: for example, by registering whether the road surface is wet or dry, and so predicting how best to brake for safety and fuel efficiency.

That sort of predictive function costs only a tiny amount of processing energy compared with the total energy consumption of a car.

But for a biomolecule it can be very costly to store information, so its memory needs to be highly selective. Environments are full of random noise, and there is no gain in the machine ‘remembering’ all the details. “Some information just isn't useful for making predictions,” says Crooks.
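Continuing in the same toy spirit (again, everything here is invented for illustration), the point is easy to check numerically: a memory that tracks the environmental signal carries information about the signal's next value, while a memory of pure noise carries essentially none, so storing it would be all cost and no benefit.

```python
# Standalone check: remembering noise buys no predictive power.
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
p_stay = 0.9                              # environment persistence
env = np.empty(T, dtype=int); env[0] = 0
for t in range(1, T):
    env[t] = env[t - 1] if rng.random() < p_stay else 1 - env[t - 1]

signal = env.copy()                       # a memory that tracks the signal
noise = rng.integers(0, 2, size=T)        # a memory of random junk

def mutual_information(a, b):
    """Plug-in estimate of I(a;b) in bits for two binary sequences."""
    joint = np.histogram2d(a, b, bins=2)[0] / len(a)
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / np.outer(pa, pb)[nz])).sum())

print(f"I(signal_t ; env_(t+1)) = {mutual_information(signal[:-1], env[1:]):.3f} bits")  # > 0
print(f"I(noise_t  ; env_(t+1)) = {mutual_information(noise[:-1], env[1:]):.3f} bits")   # ~ 0
```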

Information overload

If a biomolecular machine does inadvertently store such useless information — a stray hydrogen ion stuck to part of a protein chain, altering its shape to no particular purpose, say — then this information must be erased sooner or later, because the machine’s memory is finite. (In that example, there are only so many places on a protein that a hydrogen ion can stick.)

But erasing irrelevant data costs energy: it results in heat being dissipated, which makes the machine inefficient. So there is a finely balanced trade-off between the benefits of information processing and the inefficiencies caused by poor anticipation.
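The floor on that cost is set by Landauer's principle, which says that erasing one bit of stored information must dissipate at least k_B·T·ln 2 of heat. A quick back-of-the-envelope calculation (room temperature assumed):

```python
# Landauer bound: minimum heat dissipated to erase one bit of memory.
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K (exact SI value)
T = 298.0             # room temperature in kelvin (assumed)

per_bit = k_B * T * math.log(2)
print(f"Erasing 1 bit costs at least {per_bit:.2e} J")     # ~2.9e-21 J
print(f"...which is {per_bit / (k_B * T):.2f} k_B*T")      # ln 2, ~0.69 k_B*T
```

Negligible for a car, but a molecular machine runs on an energy budget of only a few tens of k_B·T per cycle, so needless erasure is a real expense at that scale.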

Because biochemical motors and pumps have indeed evolved to be efficient, says Still, “they must therefore be doing something clever — something tied to the cognitive ability we pride ourselves with: the capacity to construct concise representations of the world we have encountered, which allow us to say something about things yet to come”.

This balance, and the search for concision, is precisely what scientific models have to negotiate. If you are trying to devise a computer model of a complex system, in principle there is no end to the information that it might incorporate. But in doing so you risk simply constructing a one-to-one map of the real world: not really a model at all, just a mass of data, much of which might be irrelevant to prediction.

Efficient models should achieve good predictive power without remembering everything. “This is the same as saying that a model should not be overly complicated — that is, Occam's razor,” says Still. She hopes that knowledge of this connection between energy dissipation, prediction and memory might help researchers to improve algorithms that minimize the complexity of their models.
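One standard way to formalize that razor, in the spirit of Still's earlier work on predictive inference (a sketch of the general information-bottleneck form, not necessarily the exact objective used in this paper), is to compress the past while preserving information about the future:

```latex
% Predictive information-bottleneck objective (sketch).
% s: the model's internal state; \lambda: trade-off parameter.
\min_{p(s \mid x_{\mathrm{past}})}
  \Big( \underbrace{I[s ; x_{\mathrm{past}}]}_{\text{model complexity}}
  \;-\; \lambda \, \underbrace{I[s ; x_{\mathrm{future}}]}_{\text{predictive power}} \Big)
```

A small first term means a concise model; a large second term means good predictions; λ dials the trade-off between the two.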