Two months after we started a blog that tracks scientific retractions — Retraction Watch — in 2010, one of us (A.M.) told The New York Times that we weren't sure we would have enough material to post with any regularity. That concern turned out to be unfounded — in just 16 months, we have written about some 250 retractions. Little did we know that, in scientific publishing, 2011 would become the Year of the Retraction.

Here's what grabbed everyone's attention: retractions have increased 15-fold over the past decade, while the number of papers has risen by less than 50% (see Nature 478, 26–28; 2011). It is not clear why, and it is always dangerous to draw too many conclusions from what is a relatively rare occurrence — some 300 retractions among 1.4 million papers published annually. About 90 retractions, after all, have come from one author, Joachim Boldt, a German anaesthesiologist, largely because he failed to obtain the appropriate approvals for his research.

Still, it is clear that software that detects plagiarism has played a part in the retraction spike, as has the larger number of eyeballs on papers, thanks to the Internet. It is important to point out that an increase in retractions isn't necessarily a bad thing: retractions correct the scientific record. But the greater visibility of papers and retractions today makes it clearer than ever that editors need to handle retractions more transparently, and that researchers, in turn, need to stop emphasizing the paper so much.

What is needed, instead, is a system of publication that is more meritocratic in its evaluation of performance and productivity in the sciences. It should expand the record of a scientific study beyond the individual paper, to include additional material such as worthy blog posts about the results, media coverage and the number of times that the paper has been downloaded.

The expanded model would also make it crystal clear to readers when a paper has been corrected or retracted, and why. This would start with better notices from journals to explain those changes. Take the retraction¹ of a paper earlier this year in The EMBO Journal by immunologist Silvia Bulfone-Paus. Bulfone-Paus was at the centre of a misconduct scandal at the Research Centre Borstel in Germany, where she stepped down as lab head under pressure in 2010. The Borstel board found her “ultimately responsible” for the misconduct in her lab and for failing to deal with it in a timely and open manner. (Bulfone-Paus has made few public statements about the case, but she has noted that her results were confirmed by other researchers.) In 2011, journals retracted 13 of her published articles, the stated reasons ranging from detailed explanations such as “evidence of data manipulation in Fig. 2C, 4B, and 9, a clear violation of ASM's ethical standards”², to the wholly unhelpful “This article has been withdrawn by the authors”.

Lines like the latter make us want to pull out whatever hair we have left on our heads. Journal readers should find them similarly frustrating. But we singled out this particular notice for concern not because it said too little, but because, in our view, it allowed the authors to say too much.

Too laissez-faire?

The EMBO Journal's notice¹ also included this: “The authors declare that key experiments presented in the majority of these figures were recently reproduced and that the results confirmed the experimental data and the conclusions drawn from them.”

The statement from Bulfone-Paus and her colleagues described new data and signalled to readers that they could still rely on the original paper, even though it had been retracted. Its inclusion in the notice suggested that the journal stood behind it. But when we asked the editor whether that was the case, we were told: “We did not formally investigate this case at the journal and we have not seen this data, as it does not affect the retraction.”³

We've seen a similar lack of close editorial review in correction notices, too. Two recent corrections in Nature and one in Nature Medicine, each of which can only be described as massive, describe in painful detail the errors in the original papers. In one, images were improperly labelled and cropped, and a solid page of text was needed to explain the changes and how they affect the paper; another acknowledged that images had been manipulated, which was “not acceptable”.

One of those correction notices, published on 28 September of this year, included this line: “We have also included results from a new, reproduced experiment recently performed with an additional cohort of animals that shows exactly the same results.”⁴ Including new data in a correction notice seemed unusual, so we wanted to know if that line had been subject to peer review. As we reported on our blog⁵, the journal wouldn't say, responding only that peer review is confidential, and that we should talk to the authors, who never responded to our requests for comment.

We don't mean to question the claims in these particular notices, and we appreciate the arguments for keeping the peer-review process confidential, but we believe it should be no secret whether something has been peer reviewed. Any publishing scientist would surely want journals to assure their readers that vigorous peer review is occurring. After all, peer review is not merely a deterrent, like hydrogen bombs, but an essential element of quality control for journals and for the research community writ large. If it doesn't occur, we think journals owe it to readers to say so, and to explain why. An affidavit from the authors is insufficient.

Requiring any new findings discussed in a correction or retraction notice to be peer reviewed is one step on the route to keeping the scientific record as up-to-date as possible. We believe that some editors also need to work to ensure that their retraction notices say why a paper has been withdrawn, rather than just the exasperating “this paper was retracted by the authors” that we see so often.

Editors have many reasons to pay more attention to retraction and correction notices. For one, scientists often cite papers after they've been retracted, and a clear, unambiguous note explaining why the findings are no longer valid might help to reduce that. But, more importantly, a vaguely worded note that includes further claims from researchers whose work has been seriously questioned in turn raises questions about the integrity of the journal itself, and about the overall scientific record.

Post-publication review

An even more important step for boosting the long-term credibility of the scientific record is for journals — and scientists — to embrace post-publication peer review. We saw glimmers of this new world after Science published the 'arsenic life' paper⁶. Bloggers such as biologist Rosie Redfield attacked the paper⁷, and journalist Carl Zimmer interviewed a dozen experts who had sharp criticisms⁸. But NASA, which employs the lead author, Felisa Wolfe-Simon, refused to engage in the debate until Science published a compilation of letters to the editor on the subject (see Nature 474, 19; 2011). (Wolfe-Simon, for her part, has said that the critics are misinterpreting her group's paper.) Responding to critiques in real time has not yet gained widespread acceptance, but the episode forced many scientists to sit up and take notice of how the scientific record of a paper is expanding.

True, the current system does already allow for critiques. There are letters to the editor, which are robust but also limited by space and often slow to appear. There are online comments on papers, but hardly anyone uses them. Even when scientists do comment, many journals refuse to investigate anonymous criticisms, a policy we've argued against elsewhere. (We applaud the fact that Nature does look into such critiques.) Faculty of 1000, in which experts flag important papers in their field, is another approach to post-publication peer review.

But these methods are scattered, and there is no reasonable way for scientists to have them all in hand when they're citing a paper.

These developments are why CrossMark (www.crossref.org/crossmark), soon to be launched by CrossRef — a collaborative agency formed by publishers — is so promising. The idea is for every piece of content to include a clickable logo that will let a reader know whether there have been any corrections, retractions or other revisions. It is a solution to the fact that such changes are at best difficult to find — and are sometimes not mentioned at all on 'current' versions of papers.

That is the 'Status' tab on CrossMark. But the platform will also have a 'Record' tab that gives publishers a way to take the idea even further. They will be able to include material they didn't produce, such as blog posts, media coverage, letters, additional data and metrics such as downloads.
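
To make this concrete: CrossMark had not yet launched when we wrote this, but here is a minimal sketch, in Python, of the kind of machine-readable status check it promises. The endpoint (api.crossref.org/works) and the 'updated-by' field are assumptions borrowed from CrossRef's public metadata interface, used here purely for illustration; they are not the CrossMark interface itself.

```python
# Minimal sketch of a CrossMark-style status check for a paper.
# ASSUMPTION: the endpoint and the 'updated-by' field follow CrossRef's
# public works API; they stand in for CrossMark here for illustration.
import json
import sys
import urllib.request

def check_paper_status(doi):
    """Print any corrections or retractions recorded against a DOI."""
    url = "https://api.crossref.org/works/" + doi
    with urllib.request.urlopen(url) as response:
        work = json.load(response)["message"]

    # When publishers deposit a correction or retraction notice, the
    # original paper's metadata carries an 'updated-by' entry for it.
    updates = work.get("updated-by", [])
    if not updates:
        print(doi + ": no corrections or retractions on record")
    for update in updates:
        print("{}: {} -> {}".format(doi, update.get("type", "update"),
                                    update.get("DOI")))

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: check_paper_status.py <doi>")
    check_paper_status(sys.argv[1])
```

A reader, or a reference manager, running such a check before citing a paper would never again cite a retracted study unknowingly, which is precisely the problem the 'Status' tab is meant to solve.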

This does not mean the end of journals. In fact, it could strengthen the value and extend the imprimatur of those journals that are willing to embrace these new tools, allowing them to isolate the useful notes from the cacophony of what is available, and to judge the value of a particular post-publication contribution. Readers will reward those value judgements, passing on to their colleagues papers whose additional content validates and expands on the results, and which are therefore particularly trustworthy. If journals aren't willing to start reviewing and compiling additional content related to their papers, someone else will do it.

All of this may mean fewer retractions, because editors would no longer feel limited to such a blunt instrument. We see many papers retracted now that may not need to be, because they contain nuances that editors don't know how to handle otherwise. In the new system, a fleshed-out addendum or correction could suffice, if the paper included some of the post-publication discussion. We would hope to see fewer cases of the sort that happened this autumn at the Elsevier journal Genomics: an “un-retraction” of a study. The journal resurrected a paper that it had withdrawn for authorship reasons that had nothing to do with the substance of the paper⁹. And publishers (we hope) would no longer issue retractions for their own errors, such as running the same study twice¹⁰.

Such a decline would be bad for business at Retraction Watch, but we would be happy if it meant that the scientific record had become more self-correcting.

We realize that diminishing the importance of the scientific paper will require universities and funding agencies to come up with new ways to judge researchers' productivity. This is a change that most scientists should find heartening. After all, tenure and grant decisions rely heavily on current publication metrics — a flawed system that doesn't reflect how science works. Many people, including Nature editor-in-chief Phil Campbell and Cameron Neylon, a senior scientist at the UK Science and Technology Facilities Council, have already begun thinking about how to give credit for contributions other than papers, such as depositing data, writing software or producing especially worthwhile critiques (see Nature 469, 286–287; 2011).

There are other hurdles. How should scientists treat papers that are hardly read, and so are never evaluated post-publication? Does a lack of comment mean that the findings and conclusions are extremely robust, or that no one has cared enough to check? Including readership metrics alongside comments should help here.

None of these issues, however, should stand in the way of taking the crucial steps to make the scientific record more self-correcting. And it is possible now, for the first time, because of the power of the Internet. As a blog that attracts 150,000 page views a month and has tapped into a community of scientists that wants to keep the scientific record up-to-date, Retraction Watch is evidence of how science publishing has changed. It is time to change it further.