A scientific revolution, for the philosopher and historian Thomas Kuhn, who popularized the term, involves a unique kind of scientific change. Revolutionary science isn't 'tradition-preserving' work, such as research that explores more fully the implications of some established set of principles, filling in the gaps and making a successful but incomplete picture more complete. Rather, it is 'tradition-shattering' science, which may destroy as much as it builds up, at least at the outset.

Kuhn focused, of course, mostly on revolutions in the conceptual theories of science — the Copernican Revolution, and similar episodes linked to relativity and quantum theory. But a scientific revolution might emerge from practices as well as concepts, indeed from anything that transforms in a discontinuous way how science is done. What precedes any revolution is mounting stress, as problems that defy resolution grow more urgent. Today, the signs of such pre-revolutionary crisis seem most evident not so much in the concepts as in the basic practices of science, especially in publishing and funding — inherently not the most exciting things to talk about, to be sure, but possibly among the most important determinants of the future of science.

A trend of recent years is the movement to judge the value of journals, papers and individual scientists in a more 'objective' way, through quantitative measures such as citation counts and impact factors, which are supposed to eliminate the human bias associated with peer review and other forms of qualitative assessment. Writing in the journal Statistical Science, Robert Adler and colleagues cite a UK government report asserting that today's methods for judging the quality of university research will soon be phased out (available at http://go.nature.com/1iSjqL). “Metrics, rather than peer-review”, the report states, “will be the focus of the new system.”

There are many ways, of course, to tally citations, and serious scientists now debate the advantages of one measure over another. But before considering technical points it's worth questioning where this rush to quantification comes from. The trend has clearly been driven in part by the new availability of vast quantities of citation data and the ease of computing with it, but it also coincides with the spreading influence of management consultants through governments and universities, and one of their favourite doctrines — that 'you can only manage what you can measure'.
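As a concrete illustration of why the choice of tally matters, here is a small Python sketch — entirely my own, with invented numbers rather than anything drawn from the studies discussed here — comparing two of the most common measures, the total citation count and the h-index. The same publication records can come out ranked in opposite orders depending on which measure is chosen.

```python
# Toy comparison of two common citation tallies applied to the same records.
# The numbers are invented; the point is only that different metrics can rank
# the same researchers differently.

def total_citations(citations):
    """Sum of citations across all of an author's papers."""
    return sum(citations)

def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

researchers = {
    "A": [120, 3, 2, 1, 0, 0],      # one blockbuster paper, little else
    "B": [15, 14, 12, 11, 10, 9],   # steady, consistently cited output
}

for name, cites in researchers.items():
    print(name, "total =", total_citations(cites), "h-index =", h_index(cites))

# Researcher A wins on total citations (126 versus 71); researcher B wins on
# the h-index (6 versus 2). Neither number says anything about the content or
# originality of the work.
```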

The consultants' doctrine may be true in simple manufacturing processes, for example, where measurements can capture quality, production speed and costs. But when aiming to assess far more complex things, such as human ability or research value, such an attitude often gives only the illusion of improvement. Citation numbers give a very partial view of the value of any work, and a journal's impact factor reduces everything that might be said about it — its willingness to publish ideas out of the mainstream, for example — to a single number. Such an approach clearly simplifies the making and defending of decisions, but it doesn't improve their quality.

Indeed, experiments reported last year in Science found that people facing highly complex problems typically make better decisions by using their gut feelings to weigh up the influence of multiple factors, rather than by doing calculations (J. Whitson & A. Galinsky, Science 322, 115–117; 2008). Calculation is better for simple problems, but when problems become sufficiently complex — such as deciding which of several houses to buy, or who to hire for some job — a desire to calculate leads people to focus inappropriately on one or two aspects to the exclusion of others. People make better decisions if they can live with uncertainty.

This suggests that decisions about which proposals to fund or which researchers to hire would benefit from efforts to find and develop decision-makers with open minds, and from trusting their vision rather than their ability to crunch numbers.

What makes the prevailing trend towards facile counting potentially revolutionary is the reaction it is stirring up among scientists. The zoologist Peter Lawrence relates in a recent essay the (typical) plight of a young scientist who, in the first two years of his position, faced overwhelming demands to find new students and postdocs, publish new work and, of course, prepare lots of grant applications (P. Lawrence, PLoS Biology 7, e1000197; 2009). Rather than focusing on high-quality science, researchers have to concentrate on activities that count towards the measurable numbers on which their futures depend.

As with any emerging crisis, it's not easy to see a way out. Lawrence suggests that proposals should be much simpler, with a limit on the number of papers cited (to emphasize quality over quantity), and that grants should last longer. There may well be more radical ideas, but the crisis may have to grow worse before they emerge.

However, some more radical proposals are beginning to target the shortcomings of the publication process itself. For example, the ecologist Stefano Allesina has explored the idea that journals might bid on the manuscripts they want to publish (http://arxiv.org/abs/0911.0344; 2009). He envisages a preprint server to which authors could submit a paper only after they have reviewed at least three others (thereby matching the overall supply of reviews to the number of submissions). A paper, once reviewed three times, would automatically advance to another category of 'ripe' papers, for which journal editors could then bid.
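To get a feel for how such a scheme would work, here is a very rough sketch of the bookkeeping it implies. This toy simulation is my own construction, not Allesina's model; in particular, the seeding of the server with one initial paper per author and the one-action-per-round rule are assumptions made purely for illustration. Three completed reviews buy one submission, and three received reviews make a paper 'ripe' for bidding.

```python
import random

# Toy sketch (not Allesina's actual model) of the accounting his scheme implies:
# an author may submit a manuscript only after completing three reviews, and a
# manuscript that has received three reviews becomes 'ripe' and enters the pool
# on which journal editors can bid.

REVIEWS_PER_SUBMISSION = 3   # reviews an author owes before submitting a paper
REVIEWS_TO_RIPEN = 3         # reviews a paper needs before editors can bid on it

class Author:
    def __init__(self, name):
        self.name = name
        self.reviews_done = 0    # reviewing 'credit' accumulated so far

class Paper:
    def __init__(self, author):
        self.author = author
        self.reviews_received = 0

def simulate(n_authors=20, n_rounds=50, seed=1):
    random.seed(seed)
    authors = [Author(f"author-{i}") for i in range(n_authors)]
    # Assumption of this sketch: the server starts with one paper per author,
    # so there is something to review at the outset.
    under_review = [Paper(a) for a in authors]
    ripe = []

    for _ in range(n_rounds):
        for a in authors:
            if a.reviews_done >= REVIEWS_PER_SUBMISSION:
                # The author has 'paid' for a new submission with three reviews.
                a.reviews_done -= REVIEWS_PER_SUBMISSION
                under_review.append(Paper(a))
                continue
            pending = [p for p in under_review if p.author is not a]
            if pending:
                # Otherwise, review someone else's paper this round.
                p = random.choice(pending)
                p.reviews_received += 1
                a.reviews_done += 1
                if p.reviews_received >= REVIEWS_TO_RIPEN:
                    under_review.remove(p)
                    ripe.append(p)   # now available for editors to bid on

    return len(under_review), len(ripe)

if __name__ == "__main__":
    pending, ripe = simulate()
    print(f"papers still under review: {pending}; ripe papers awaiting bids: {ripe}")
```

Even this caricature makes the central accounting visible: every submission is paid for with three reviews, and every paper that reaches the ripe pool has consumed exactly three, which is where the matching of reviewing supply to demand comes from.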

Allesina's tentative conclusion — based on some (admittedly crude) agent-based simulations — is that such a system might well be better for authors, leading to faster publication and a smaller reviewing burden, but perhaps not for journals, whose editors would face more work in trawling through the pool of ripe papers to find the interesting ones. One thing this system doesn't address, of course, is the many other services editors perform beyond selecting papers, from editing for readability to creating other valuable journal content such as features and News-and-Views-type articles.

But the promise of web-based collaborative tools to enable truly radical departures from tradition seems clear. Whether anything like a true revolution in scientific practice will emerge is unknown, but the basic mechanisms science has used for so long, wedded to today's short-sighted demand for quantification, look increasingly dysfunctional. Science is suffering, and will suffer, until we find a better way.