During the twentieth century, biology — traditionally a descriptive science — became one of hypothesis-driven experimentation. Tightly coupled with this was the increasing dominance of reductionism, the idea that complex biological systems can be understood by dismantling them into their constituent pieces and studying each in isolation. Implicit here was the notion that observations should only be made to support or attack hypothesized mechanisms of action, and that simple observation — phenomenology for its own sake — is of relatively little use.

These approaches served us well over the past half-century: witness the revolutions in molecular and cellular biology, immunology, neurobiology and genetics. Our insights into pathogenetic mechanisms exceed the wildest speculations of 50 years ago.

Now the dominant position of hypothesis-driven research is under threat. Many feel that traditional conceptual tools cannot map the enormous complexity that allows single cells and complex organisms to thrive, and that recent technological innovations have created a viable alternative. My students can gather certain types of experimental data 1,000 and even 10,000 times faster than I could 40 years ago.

In cancer research, the new technologies promise to change the landscape of diagnosis, therapy and insights into disease pathogenesis. So have the old ways of doing business — of testing hypotheses — become anachronisms? I think not.

Better or just bigger?

The era of the new biology — genomics, proteomics, metabolomics — began with the sequencing of the human genome a bit more than a decade ago. Its successes are indisputable: tens of thousands of research programmes, many focused on identifying and characterizing specific genes, have benefited enormously from the creation and study of this database.

Large-scale efforts such as the Human Genome Project are portrayed as the future, and as central to the discipline of systems biology that has since emerged. Increasing proportions of national research budgets are being diverted to them. But is it worth extinguishing 20 or 30 small-scale, hypothesis-driven projects to make room for an attack at the systems-wide level?

From a cancer researcher's perspective, the successes of hypothesis-driven science are clear and undeniable. They stretch back over half a century and continue week after week, month after month, to yield new conceptual insights. By contrast, the new ways of doing biology are so untested that their long-term benefits are still hard to project. Nonetheless, it's useful to make comparisons, if only because economic necessities force them to be made.

Analysis of expression arrays, which reveal which genes are active in a tumour sample, has shown that cancers previously viewed as a single entity have different pathogenetic mechanisms and respond differently to therapy. The use of genome-wide libraries of small interfering RNAs to inhibit large cohorts of genes, and so identify those behind cancer, is a blend of the old and the new: it lacks a clear preconception of what will eventually be found, but contains clear hypotheses about the biological phenotypes that will result. This approach, still in its infancy, has been remarkably productive.
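
The array-based approach can be caricatured in a few lines of analysis code. The sketch below is a minimal illustration, assuming purely synthetic data in place of real tumour profiles: unsupervised hierarchical clustering splits samples into subgroups without any prior hypothesis about which genes matter. The sample counts and the up-regulated gene module are illustrative assumptions, not findings from any study.

```python
# A minimal sketch of hypothesis-free subtype discovery from expression data.
# All values are synthetic; real studies involve thousands of genes and samples.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)

# 20 hypothetical tumours x 50 genes; one subgroup over-expresses a gene module.
subtype_a = rng.normal(0.0, 1.0, size=(10, 50))
subtype_b = rng.normal(0.0, 1.0, size=(10, 50))
subtype_b[:, :10] += 3.0  # an assumed up-regulated module in the second subgroup

expression = np.vstack([subtype_a, subtype_b])

# Cluster samples with no preconception of which genes drive the split.
tree = linkage(expression, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
print(labels)  # the two synthetic subgroups emerge as separate clusters
```

The subgroups emerge from the data rather than from a stated mechanism; interpreting what they mean biologically still demands the older, hypothesis-driven work.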

Sequencing of entire tumour genomes (or their coding exons) has a more mixed record. These projects consume an enormous amount of resources and researchers' energy. The dividends to date have been modest: the discovery of several new oncogenes and tumour suppressor genes involved in tumour formation (for example, BRAF, IDH1/2 and translocations in prostate carcinomas), and a general measure of the degree of genetic instability of various tumour-cell genomes.

These massive data-generating projects have yet to yield a clear consensus about how many somatic mutations are required to create a human tumour, and have given us few major breakthroughs in our understanding of how individual tumours develop. The cost of each conceptual insight has been very high indeed, although this is likely to change as technology costs tumble. Meanwhile, countless smaller experimental research programmes, proven sources of conceptual innovation year after year, have struggled to survive.

High stakes

Arguably the most ambitious large-scale venture involves assembling the many interacting signalling components within individual cells into wiring diagrams. These elaborate maps, sometimes termed 'hairballs', and the computer algorithms that model signal processing could shed light on why and how individual cells respond to external signals, and could predict their future behaviour. Although aesthetically pleasing, they have so far yielded few conceptual insights into how and why cells and tissues behave the way they do. Some feel that a thorough understanding of individual signal-processing components is an essential prerequisite to predicting the behaviour of entire signalling circuits, a notion often dismissed as old-style reductionism.
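
To make the limitation concrete, the sketch below encodes a fragment of the canonical EGFR-RAS-RAF-MEK-ERK cascade as a directed graph, the data structure underlying such wiring diagrams, and asks which components lie downstream of a receptor. The edge list and the reachability query are deliberately simplistic assumptions; real 'hairball' models layer quantitative dynamics on top of graphs like this one.

```python
# A toy wiring diagram: signalling components as nodes, influences as edges.
# The edges sketch the canonical EGFR/MAPK cascade; kinetics are omitted.
import networkx as nx

wiring = nx.DiGraph()
wiring.add_edges_from([
    ("EGFR", "RAS"),
    ("RAS", "RAF"),
    ("RAF", "MEK"),
    ("MEK", "ERK"),
    ("ERK", "proliferation"),
])

# A reachability query: which components could respond to an EGFR signal?
downstream = nx.descendants(wiring, "EGFR")
print(sorted(downstream))  # ['ERK', 'MEK', 'RAF', 'RAS', 'proliferation']
```

Even this trivial query illustrates the point: the graph records what is connected, not why or how strongly, which is exactly where component-level understanding re-enters.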

The stakes here are high. The repercussions of major agencies shifting their funding allocations will be felt for a generation. Running laboratories focused on small-scale, hypothesis-driven research has become unattractive to many young people because of the enormous difficulty of procuring enough money to launch and expand such a research programme. The long-term effect will be an increasing inability of many biological disciplines to attract the brightest young people, who are, after all, the engines of scientific progress. Without them, we are lost.