Sir

An increasing problem for reviewers providing adequate reviews for science journals is not simply fraudulent data submission or manipulation (see Correspondence, Nature 439, 782–784; 2006; doi:10.1038/439782b), but the information density and sheer bulk of data that now have to be supplied as part of publishing modern biological science. This is particularly true of ‘omics’-type data sets (transcriptomics, proteomics, metabonomics and so on), which are now collected in parallel in systems-biology studies.

Many referees are experienced and learned scientists, but they are also very busy people who may well receive several papers a week to referee. Do we really have time to read the 60-plus pages of supplementary data that often accompany a major paper? Do we even have the tools and expertise needed to analyse and check the veracity of raw ‘omics’ data sets? A typical data set formatted to meet MIAME (minimum information about a microarray experiment) requirements may contain millions of discrete data points.

To check whether these data have been scaled, normalized and processed correctly (within a data set that might have taken a couple of postdocs two years to produce) is a difficult task, even if the referee has the time, the knowledge and the right software.

In the data-rich ‘omics world’ of today, the referee's task has become more complex and challenging than could have been envisaged only a few years ago.

Furthermore, there is increasing demand for integrative papers that cover many types of bioanalytical measurement and multivariate statistics at different levels of biomolecular organization. The scientific community needs to reassess the way it addresses the peer-review problem, taking into account that referees are only human and are now being asked to do a superhuman task on a near-daily basis.