Quality and value: How can we research peer review?

Nature (2006) | doi:10.1038/nature05006

Improving the peer-review process relies on understanding its context and culture.

Peer review gets a bad rap these days, and there is much talk that something should be done about it. Scientific societies need to know more about the alleged problems before they try to fix the peer-review process. These problems are often cast as ethical ones: unintended bad consequences of well-meaning editorial processes.

What might an empirical investigation of these problems look like? As editor of the Journal of Empirical Research on Human Research Ethics, I frequently discuss this sort of question with people thinking about doing research in evidence-based ethical problem solving. Understanding the context and culture in which ethical problems arise typically points the way to likely solutions. Thinking about an empirical investigation of the peer-review process suggests elements of a research design.

We could begin with the usual laundry list of complaints:

  • Discouraging reviews, including reviews from insulting, unhelpful or unqualified reviewers; editors who seem to give equal weight to good, poor, ambiguous and contradictory reviews; and the rejection of papers that are quite favourably reviewed.
  • Dishonest reviews, including reviewers who express prejudices, have conflicts of interest or have not read the paper carefully or thoroughly enough.
  • Impossible requirements, when reviewers or editors are expected to detect any dishonest science, such as fabrication, falsification or plagiarism.

Do all of these complaints reflect the same problem? Would one solution cure all problems? Some scenarios taken from real cases suggest otherwise:

  • A novice submits a naive paper to the field's foremost journal; it is rejected with two reviews that feel more like a poke in the eye than useful feedback. After revisions, the manuscript is submitted to an obscure journal and accepted, conditional on the author responding to good suggestions from the reviewers.
  • A young researcher submits a complex, carefully developed paper to a leading journal. It is rejected with the suggestion that significant further research is needed before the paper is publishable. A year later, a nearly identical paper is published in another top journal by a well-known, well-connected researcher who happens to sit on the first journal's review board. In the interests of damage control, the young researcher does not complain and moves on to another research project.
  • A manuscript with flawed methodology and a largely plagiarized introduction is published; a reader identifies the problems only later.

Now, let us imagine editors' perspectives:

  • Most peer reviews can contribute to improving a paper.
  • Finding competent, open-minded, unbiased reviewers appropriate for each paper is challenging: the best reviewers are busy people. Vetting first-time reviewers occurs by trial and error. Even people well-known in their field may be poor reviewers.
  • Page limits that force a 90% rejection rate mean that some quite good papers must be rejected.
  • Some editors lack the resources to give full attention to each paper, the reviews of it and how each author is advised to respond.
  • Editing a new journal, in which the editor plays a major role in shaping submissions to fit a nascent field, is different from editing a well-established, mainstream journal.
  • A journal that investigated and responded to every semblance of scientific misbehaviour or poor methodology would lengthen its review process and be accused of overstepping its role. Some bad papers inevitably slip through the editorial process.

And reviewers' perspectives:

  • Done properly, reviewing is a lot of pro bono work for a busy professional, but it is also a responsibility to the community, a learning experience that keeps one up to date with the field and perhaps a status symbol.
  • An agreement to review often turns into a time crunch, resulting in delay or a hurried review.

No doubt there are other scenarios and perspectives that should be added to this discussion. But these few suggest some testable hypotheses, such as:

  • Most complaints concern premier journals with rejection rates of 90-95%. These journals offer only three outcomes: rejection accompanied by the reviewers' comments; a request for revision based on the reviews; or acceptance. They also have long publication lags.
  • Most authors would benefit from a better understanding of how journals operate.
  • Editors benefit from having policy statements about handling complaints.

Additional hypotheses can be generated through focus groups of self-selected critics of peer review. More important to query, however, are the managing editors who oversee the review process. In particular, they should be quizzed on how they select, use, evaluate and manage reviewers, and how they explain their decisions to authors. Other questions might explore how editors vary by background, resources and autonomy, and by the rejection rate, prestige, age and field of their journal.

Questions can be refined using focus groups of editors to help identify the key issues, appropriate ways to pose questions and how to stratify editors by type of journal. Interviews would provide insight into how editors handle their roles, the specific issues they face and how they operate within the context of their journals. After the survey is developed, piloted on a stratified sample of editors and revised as needed, it should be sent to as large a stratified sample of editors as is feasible. The reward for responding should be detailed feedback on the results of the study and recommended best practices for editors.
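
The stratified-sampling step is concrete enough to sketch. Below is a minimal illustration in Python; the roster of editors, the journal-type strata and the 10% pilot fraction are all invented for the example and are not drawn from any real study. Sampling the same fraction from each stratum keeps editors of small journal categories, such as new journals, represented in the pilot.

    import random

    # Hypothetical roster of editors, each tagged with a journal type (stratum).
    # The names, strata and group sizes are illustrative assumptions only.
    editors = (
        [{"id": f"premier-{i}", "stratum": "premier"} for i in range(40)]
        + [{"id": f"specialty-{i}", "stratum": "specialty"} for i in range(120)]
        + [{"id": f"new-{i}", "stratum": "new"} for i in range(40)]
    )

    def stratified_sample(population, stratum_key, fraction, seed=0):
        """Draw the same fraction from every stratum so that small groups
        are not swamped by large ones in the resulting sample."""
        rng = random.Random(seed)
        strata = {}
        for person in population:
            strata.setdefault(person[stratum_key], []).append(person)
        sample = []
        for members in strata.values():
            k = max(1, round(fraction * len(members)))  # at least one per stratum
            sample.extend(rng.sample(members, k))
        return sample

    # A 10% pilot sample: 4 premier, 12 specialty and 4 new-journal editors.
    pilot = stratified_sample(editors, "stratum", fraction=0.10)
    print(len(pilot), sorted({p["stratum"] for p in pilot}))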

Results of such a survey can only clarify problems, not solve them. Feedback to editors may stimulate some problem-solving. Publication of the survey results in a high-profile scientific journal is essential to stimulate broader examination of editorial policies and practices. This should be coupled with feedback to scientific societies, and organizations such as the Council of Science Editors, which sets standards for editing, and the Committee on Publication Ethics, which evaluates codes of conduct for editors and shares information on dealing with problems.

One suspects that peer review is a bit like democracy - a bad system but the best one possible. It seems to be one that takes different forms in different (scientific) cultures and can be tweaked to improve its operation. Let us hope that future research will discover and disseminate the best ways to fine-tune the system within the constraints of each type of journal.

Joan E. Sieber is professor emerita in the department of psychology, California State University, Hayward, and a senior research associate at Simmons College in Boston, Massachusetts. She has studied decision processes in many contexts including those of scientists facing ethical questions. She founded and edits the Journal of Empirical Research on Human Research Ethics (www.csueastbay.edu/JERHRE) and gives talks everywhere people are willing to hear about evidence-based ethical problem-solving.
