Stefan Füglistaler: testing the air.

You have written an interesting but provocative paper that is likely to stir up debate. Where should you try to publish it? Stefan Füglistaler, an atmospheric chemist at the ETH, the Swiss Federal Institute of Technology in Zurich, found himself asking this question last December. In the end, he chose not to submit his work to one of the field's established journals, but to an online newcomer with an unusual approach to peer review.

Füglistaler and his colleagues submitted their work — a new explanation for how large nitrogen-containing particles originate in the Arctic atmosphere — to Atmospheric Chemistry and Physics (ACP). Before sending papers out for peer review, ACP posts them online and holds a commentary session where scientists can debate the work, or simply offer helpful pointers. For new ideas such as Füglistaler's, it seems like the perfect testing ground.

ACP is not alone. A handful of other journals have launched experiments in 'open' peer review. Thanks to the Internet, the kind of debate that takes place at conferences can now be incorporated into the editorial process. For advocates of opening up peer review, it is an idea whose time has come. “If I'm right,” says Drummond Rennie, a deputy editor of The Journal of the American Medical Association (JAMA), “all journals will be doing this in the future.”

Drummond Rennie: ready for change.

The founders of ACP believe that they are providing an alternative to a flawed system. In traditional peer review, editors solicit comments on a paper from relevant experts, and use these as a basis for a decision on whether to publish. Most scientists agree that the process, if performed properly, is a good way of assessing new research. But very few would argue that it is problem-free.

Authors sometimes claim, for example, that good work is rejected because it clashes with the reviewers' own studies or opinions — or simply because the ideas expressed are too 'left field'. Reviewers can also miss technical errors. “I sometimes wonder how some papers get through,” says Thomas Koop, an executive editor of ACP, who is also at the ETH. Papers that span different disciplines cause particular problems, as individual reviewers are often only familiar with one of the fields involved.

Climate of change

These issues got atmospheric scientists talking. In June 2000, Ulrich Pöschl, an atmospheric chemist at the Technical University of Munich, approached his fellow researchers with a proposal to develop a forum where papers could be discussed before being submitted for formal peer review. Scientists in the field would have a chance to offer tips and comments, and the authors could then defend or revise their work.

Peers go public: Thomas Koop (right) and Ulrich Pöschl back the idea of open commentary.

The group decided to launch an online journal with open commentary sessions prior to formal peer review, and convinced the European Geophysical Society to host the experiment. The founders formed an editorial board, designed a website and laid out the details of how the commentary stage would work. Last September, ACP was born.

The submission process starts with a quick once-over from relevant members of ACP's 60-strong editorial board. An editor assigned to the paper decides whether technical corrections are needed. If the paper meets the basic standards, it is posted on ACP's website. Registered researchers can then post comments, to which the authors can respond.

The discussion is moderated by the assigned editor, who can edit out any personal attacks or inflammatory comments. At the end of eight weeks, the authors have the option of revising the paper or submitting it for traditional peer review. The reviewers for this latter stage are selected before the paper goes online — and they can also comment during the initial stage, albeit anonymously. If a manuscript makes it through the entire process, as 12 have done to date, it is officially accepted for publication. And, because ACP is an online journal, papers can be made available immediately.

Kaarle Hämeri, an atmospheric physicist at the Finnish Institute of Occupational Health in Helsinki, has had two papers accepted for publication by ACP. He says that ACP published his work faster than other journals he has submitted to, and he rates the quality of the journal's other papers as high. But, like other authors who have published with ACP, Hämeri found that his papers attracted only limited debate. So far, it seems, the atmospheric-science community hasn't exactly jumped at the concept of open peer review.

Web of words: online archives can allow informal peer review of papers before formal publication.

Füglistaler's paper, for example, generated just two online comments before moving on to formal review. Hämeri, whose first paper drew only three comments, two of them from the reviewers assigned to the manuscript, was disappointed by the lack of discussion.

“It will take a little time to get acquainted with the idea of interactive commenting,” says Pöschl. But he hopes to see an increase in activity now that the European Geophysical Society has started advertising ACP.

Although ACP has yet to establish itself, successful experiments in other fields suggest that the process can work. After a debate at the 1996 European Conference on Artificial Intelligence, held in Budapest, researchers in the field asked Erik Sandewall, a mathematician and computer scientist at Linköping University in Sweden, to develop a new journal that would make peer review more discursive. “Criticism at seminars is seen as valuable to scientists,” says Sandewall. “If there are going to be critical comments, it is better to capture them at an early stage.”

Six months after the meeting, Electronic Transactions on Artificial Intelligence (ETAI) started recruiting editors. Like ACP, ETAI relies on open discussion moderated by editors, followed by confidential review. But there are slight differences between the two.

Artificial intelligence (AI) is made up of a relatively small number of related fields, and ETAI is divided into sections that cater for these subdisciplines. Subscribers are alerted by e-mail when a paper in their area is posted on the website. Signed comments from researchers, as well as anonymous notes from preselected reviewers, are e-mailed to all subscribers as well as being archived in an online log linked to the paper. At the end of the discussion, authors can revise their manuscripts. The paper then goes to the reviewers, who have already asked their questions and now just stamp the paper as accepted or rejected.

Rapid results


Iliano Cervesato, a computer scientist at ITT Industries in Alexandria, Virginia, believes that ETAI gives authors better insight into their work than traditional journals do. And because it is not hampered by print production schedules, it can publish papers more quickly despite the additional open-commentary step. ETAI published a paper from Cervesato's group in just six months — a sprint compared with the one to two years that AI journals often take, he says.

AI researchers normally present fresh ideas at meetings in the form of technical reports. Although not “masterpieces of literature”, says Cervesato, these reports mark a researcher's territory during the long wait for formal publication. ETAI offers a halfway house, helping researchers get peer-reviewed and better-written versions of technical reports into the public domain quickly.

Whether or not open peer review really offers any advantages over traditional techniques should become clearer when The Medical Journal of Australia (MJA) finishes the second of its peer-review experiments. In the first study, which started in March 1996 and lasted just over a year, papers that had already been accepted for publication were posted online, together with the comments made by referees during the review process. Readers were invited to discuss the papers, and authors could revise their work in the light of these comments before publication in the print version.

Around half of the 56 papers attracted comments, and about 2% of MJA's readers took part in the debates. Seven authors revised their manuscripts as a result. The journal's editors concluded that open peer review is useful, but not a substitute for formal review.

Paper chains

The journal has since begun a larger study. Starting in October 1998, some papers have been posted on a password-protected website. The review consists of an online discussion to which authors, editors and assigned peer reviewers have access. A panel of six consultants, chosen to represent a broader range of the journal's readership than the peer reviewers provide, can also comment.

Editors decide whether to accept, reject or ask for changes after three to four weeks of discussion. Accepted papers, and the associated discussions, are then placed on an open website and readers are invited to comment. Authors can again revise their papers on the basis of these comments before publication in the print version.

The editors running the study are rating the quality and efficiency of the procedure by discussing it with authors and reviewers, and comparing the results with those from a control group of papers that have gone through traditional peer review.

The trial is currently on hold while MJA upgrades the website it uses for the debates, although Craig Bingham, the journal's communications development manager, says that he is confident the system will improve dialogue between authors and reviewers. The British Medical Journal is conducting its own study of open peer review but is reluctant to discuss the experiment as the editors want to submit the results for publication.

But not everyone is impressed with open peer review. Stevan Harnad, a cognitive scientist based at the University of Southampton, UK, has spent 25 years editing the journal Behavioral and Brain Sciences. An editor, he says, is a gatekeeper. He points to the papers that pass across his desk. “As an editor, I waste a lot of time on raw sludge,” he says. “We'd waste even more time if everyone looked at papers online without knowing which ones were good and which were bad.”

Advocates of open peer review counter that the process need not entail sacrificing quality control. Editors can still weed out technically deficient papers, they argue. And the nature of an open forum, they claim, motivates researchers to put up better papers. “There is an embarrassment factor,” says Sandewall. “People think twice about submitting garbage.”

David Poole, a computer scientist at the University of British Columbia in Vancouver, agrees. He published a paper on decision theory and logic in ETAI in 1998. In order to brave the threat of peer attack, he needed to be confident about his work. “You can't go on a fishing expedition,” he says.

Harnad accepts that open peer review might work well within small, cooperative communities such as the AI fraternity, but argues that it is difficult to implement for journals with broader readerships. As an alternative, Harnad points to online archives, where papers can be deposited before or at the same time as they are submitted to journals. Many such archives have sprung up in the past 10 years, and some scientists see communication between the researchers who use them as an informal form of peer review.

Into the vault

Some archives, such as the Electronic Colloquium on Computational Complexity, which started storing papers in this mathematical subject in 1994, involve a small amount of editorial input. The colloquium has an editorial board, but members merely check papers for technical errors and ensure that the submission is correctly categorized. Comments and corrections are kept with the original paper, but there is no final acceptance process and authors often submit their paper to a traditional journal after it has been discussed online.

Paul Ginsparg has recently moved the successful pre-print archive he set up to a new home at Cornell University.

Other archives have even less of a role for editors. The arXiv server, housed at Cornell University in Ithaca, New York, and founded 10 years ago by physicist Paul Ginsparg at the Los Alamos National Laboratory in New Mexico, is the best-known example. The archive now allows researchers to post physics, computer science and mathematics papers online before they are submitted for publication, and imposes no editorial control.

Harnad, who runs CogPrints, a comparable archive for the cognitive sciences, says that most submitted papers will eventually go through conventional peer review, and appear in a traditional journal. Most archive users seem happy that these repositories coexist alongside traditional journals, but a minority argue that the informal peer communication provided by archives will eventually replace formal publication. “By the time I go through peer review, my work is old news,” says theoretical physicist Giulio Ruffini, director of Starlab Barcelona, a private research centre that places little importance on publishing in peer-reviewed journals. “Archiving is fast and easy and puts a time stamp on what you have done so that you can later claim ownership.”

Stolen words

But in other fields, archives have not taken off. In medicine, for example, some researchers have argued that uncorrected mistakes in a manuscript could cost lives. And in molecular biology, some papers are like recipes: anyone who has read the methods section could easily repeat the work and publish it elsewhere.

The potential for plagiarism in fast-moving, highly competitive fields may also limit the adoption of open peer review. Already, journal editors deal with occasional accusations of plagiarism by reviewers (see Nature 413, 102–104; 2001). Authors also frequently name individuals whom they would like excluded as reviewers — just imagine the flood of complaints if all researchers routinely had access to molecular biology papers being discussed online prior to formal publication.

Such objections might explain why few of the world's 20,000 or so peer-reviewed scientific journals are flirting with open peer review. Those that have, such as ACP, tend to be new or less prestigious publications.

Philip Campbell, editor of Nature, says that some of the innovations are worth watching. But neither Nature nor Science has plans to implement any form of open peer review. And, despite Rennie's enthusiasm, JAMA is not planning to change the way it handles papers either.

Part of the problem, says Science's editor-in-chief, Donald Kennedy, is a lack of information about alternative methods of handling submitted papers. “What we have now is a situation with lots of models out there and no systematic effort to bring them together and look at the advantages and disadvantages of each,” he says. For smaller journals, the logistical difficulty of moderating an online commentary poses an additional problem.

Rennie — a long-standing proponent of innovation in peer review — argues that cultural inertia is also a factor. “There is an inborn conservatism of scientists, which is huge,” he says. Andrew Odlyzko, a mathematician at the University of Minnesota in Minneapolis who studies trends in online publication, agrees. “Change is usually slow, especially when it is sociological change,” he says.

Without a clear demonstration that open peer review makes for better papers, few journals seem likely to adopt new methods. But for advocates of reform, the opportunities for innovation allowed by online scientific discussion are too great to ignore. Changing peer review, says Rennie, is “one grand experiment”.

Atmospheric Chemistry and Physics → http://www.copernicus.org/EGS/acp

Electronic Transactions on Artificial Intelligence → http://www.ida.liu.se/ext/etai

The Medical Journal of Australia → http://www.mja.com.au

Electronic Colloquium on Computational Complexity → http://www.eccc.uni-trier.de/eccc

arXiv → http://www.arxiv.org

CogPrints → http://cogprints.soton.ac.uk