Submit a biomedical-research paper to Nature or other high-profile journals, and a common recommendation comes back from referees: perform additional experiments. Although such extra work can provide important support for the results being presented, all too frequently it represents instead an entirely new phase of the project, or does nothing to extend the reach of what is reported. It is often expensive and unnecessary, and it slows the pace of research to a crawl. Among scientists in my field, there is growing concern that escalating demands by reviewers for the top journals, combined with the increasingly managerial role assigned to editors, now represent a serious flaw in the process of peer review.

Here, I offer some suggestions for change. The generalizations that follow have their pleasant exceptions, but the trend is that useful interventions by referees are becoming exactly that: exceptions.

Rather than reviewing what is in front of them, referees often design and demand experiments that would be better addressed in a follow-up paper. It is also commonplace for reviewers to suggest tests that, even if completed successfully, would not materially affect the conclusions. These are known in the trade as reviewer experiments. The demands seem to increase with the impact factor of the journal, as if referees feel the need to raise the bar on the journal's behalf.

This has a serious and pernicious impact on the careers of young scientists, because it is not unusual for a year to pass before a paper is accepted by a high-profile journal. As a result, PhD degrees are delayed, postdocs may have to wait an entire year to compete for jobs, and assistant professors can miss out on promotions.

The system also adds to tension between established, tenured lab heads charged with proper allocation of limited resources, and students and postdocs whose careers rely on papers in high-impact journals. The two sides will disagree on whether to cut their losses and consider lower-ranked journals, or to cave in to reviewers' demands.

The extra months of experiments increase costs for labs, without any obvious advantage for science. Although journals profit handily when prospective authors offer the best science possible, most do not spend money to produce it. For the publishing industry, this is an accepted business model, but it should come with greater responsibilities.

The scientific community should rethink how manuscripts are reviewed. Referees should be instructed to assess the work in front of them, not what they think should be the next phase of the project. They should provide unimpeachable arguments that, where appropriate, demonstrate the study's lack of novelty or probable impact, or that lay bare flawed logic or unwarranted conclusions. They should abandon the attitude that screams: "look, I've read it, I can be as critical as the next dude and ask for something that's not yet in the manuscript", a reflexive approach to reviewing that has unfortunately become more or less standard. Many reviewers are also, of course, authors, who will receive such unreasonable demands in their turn, so why does the practice persist? Perhaps there is a sense of 'what goes around comes around', and scientists relish the chance to inflict their experiences on others.

The problem is made more acute by the unwillingness of editors to express their opinions. Instead, they consult an increasing number of reviewers (four or five is no longer an exception) in search of a majority opinion. Rather than taking a hard look at reviews and the experiments requested by referees, editors seem to default to the position that almost every requested experiment or revision can be justified. Editors often do not (or cannot?) assess revised manuscripts, and so send them out to reviewers again, losing more time and often bringing still more demands for further experiments.

I see three steps that journals can take to improve this deteriorating situation. First, they should insist that reviewers provide a rough estimate of the anticipated extra cost (in real currency) and effort associated with the experiments they request. This is not unlike what researchers are asked to provide in grant applications. Second, journals should get academic editors with expertise in the subject to take a hard look at whether reviewers' requests will affect the authors' conclusions, and whether they can be implemented without undue delay. Third, reviewers should give a simple yes or no vote on the manuscript under scrutiny, barring fatal shortcomings in logic or execution. Once editors have decided that, in principle, the results are of interest to their publication and its readership (which is their editorial prerogative), passing a simple test of logical rigour and data quality should be enough to get a manuscript through peer review. Multiple revisions rarely affect the overall conclusions of a study, as many an editor (and author, for that matter) would agree.

These changes would save time, speed exciting science to the public eye and provide much-needed clarity to authors, with significant savings to boot. Having read some of the biographies of the founders of molecular biology, I find it hard to escape the impression that the mechanics of science once did work this way. That experiment is worth revisiting, I should think.