The integrity of the scientific record has always been a top priority for the scientific community, and several recent initiatives have underlined how seriously some stakeholders take it. Science Exchange's reproducibility initiative (https://www.scienceexchange.com/reproducibility/), the website Retraction Watch (http://retractionwatch.wordpress.com/) and the Nature journals' new data-reporting standards (Nat. Med. 19, 508, 2013) are different routes towards a common goal: ensuring that anything that becomes part of the scientific literature is accurate and trustworthy.

Whistleblowers deserve credit for the current emphasis on enforcing scientific integrity. By scrutinizing published figures in great detail and disseminating their findings (often anonymously) on the internet, these individuals constantly push journals to be cautious about the papers they publish and exert growing pressure to clean up the scientific record through corrections and retractions.

However, despite all of these developments, retracting a paper is an increasingly complicated process that requires participation from the authors, their institutional office of research, the journal and, in some cases, legal teams.

In an ideal case, the author of the paper voluntarily contacts the journal to request a retraction after discovering a critical flaw in the study (such as an inability to reproduce the results) or after a reader alerts the author to a serious problem with the report. We commend the courage and honesty of these authors; unfortunately, most cases are far from this ideal and often begin with an anonymous accusation from a whistleblower.

If Nature Medicine receives a report of an apparent problem with a published paper, we bring it to the attention of the author. When presented with the irregularity, most authors quickly find an explanation, which is almost always a clerical mistake and very rarely an experimental error. For instance, a micrograph or a band in a gel may have been duplicated because it was used as a placeholder and never replaced with the correct image.

Sometimes, however, the explanation is not convincing. For example, the duplicated micrograph might be a mirror image of the original, making it less likely that the duplication was a simple mistake. In such cases, we contact the authors' institution and ask for an investigation into the integrity of all the data in the paper. Unfortunately, our experience on this front has been largely disappointing.

In scientifically emerging countries such as China, academic institutions may not have an office in charge of investigating matters of research integrity, or it may be extraordinarily difficult to identify the people with the authority to conduct such an investigation (or even to elicit a response from them). But even in the United States, where most institutions have procedures in place to deal with investigations of this sort, we have observed that people with the authority and responsibility to investigate often go to great lengths to protect their researchers and almost never conclude that a retraction might be warranted.

Let's say that a researcher under investigation claims that a set of original data cannot be found because the computer with the results has been lost. In such a case, the institution will probably accept this explanation and, at best, will ask the scientist to repeat the experiment. The scientist will then provide new data to support their conclusions, and the institution will accept these data as sufficient evidence to declare the case closed.

Reports from internal investigations are often sent to national authorities or granting agencies to establish the legitimacy of the inquiry. But even in those instances not much changes and, more often than not, the matter is closed without further discussion or investigation. In the example above, the granting agency would probably accept the missing-computer explanation and simply endorse ineffectual actions against the authors, such as remedial courses in good laboratory record keeping. These actions often amount to little more than a slap on the wrist.

It may seem puzzling that authorities often do not take a strong stance against cases of poor data integrity and fail to request that dubious studies be retracted. This timid approach can be explained (but not justified) by the general aversion of institutions to legal disputes. Retracting a paper invariably affects an author's reputation, so, unless there is incontrovertible evidence of wrongdoing, the scientist in question could claim damages and sue his or her institution. In the lost-computer example, the institution's authorities (on the basis of advice from their lawyers) may reasonably conclude that a missing computer is not incontrovertible evidence of wrongdoing and hesitate to seek a retraction.

In all fairness, academic institutions often conclude the reports from their investigations by saying that it is up to the journal to determine with the author whether the paper needs to be retracted. In other words, their efforts largely focus on establishing whether the errors in the paper were the result of scientific misconduct. If they were not, or if the evidence is inconclusive, institutions leave the question of how to fix the scientific record for the journals to answer. However, scientific journals face legal considerations similar to those of academic institutions: our own lawyers may advise us against retracting a paper if we lack sufficient evidence to substantiate the case. Moreover, an author's natural reluctance to retract a paper is markedly bolstered by a favorable outcome from the institution's investigation, which essentially clears the author of any wrongdoing. These facts might help explain why it is very uncommon to see a journal retract a study unilaterally; in some cases, the journal instead publishes an “expression of concern.”

What can the different stakeholders within the scientific community do to tackle the growing problem of 'unretractable' papers? For starters, lab heads need to remember that they are ultimately responsible for the data generated in their labs and therefore need to be vigilant. We have had cases in which the principal investigator feels cheated and places the blame entirely on the postdoc who produced the problematic data. This is an unacceptable stance from someone who accepts the responsibilities that come with being the corresponding author of a paper.

Institutions, in turn, could do more by becoming tougher with scientists who disregard the basic principles of data integrity. To reduce their fear of legal battles, institutions could contractually establish strict rules on how data need to be stored and handled before and after publication. By contractually requiring any new hire to observe a minimum set of data-integrity standards and outlining tough disciplinary measures for those who fail to meet them, institutions could cover their backs against potential lawsuits and, more importantly, would do a great service to the scientific enterprise. Although many institutions may claim that they already have such rules in place, it would be wise to revise and update them to reflect the current climate of growing mistrust of scientific publications; institutions need to realize that a slap on the wrist is no longer sufficient punishment.

Scientific journals also have work to do if they are to maintain their role as gatekeepers of the scientific record. In this regard, the Nature journals are becoming more stringent in requiring access to raw data before publication of a paper. In fact, our journals plan to make these data available to readers, and we expect this measure to increase the overall quality and integrity of the scientific record.

Journals could also become more proactive in the fight against poor data integrity, while minimizing their legal risks, by making clear to prospective authors that publication of their work is contingent on the authenticity of the data. If the integrity of the data turns out to be compromised, journals could reserve the right to retract the paper unilaterally.

It is also necessary to move away from the binary language of correction and retraction. Whenever the scientific record needs to be revised, journals should provide a more detailed explanation of what the problem was, whether there was an investigation, what its outcome was and what has been done as a result to fix the problem. The availability of this extra information would allow readers to make more informed decisions about the integrity of the study in question.

This extra information is starting to be available, and it is no longer uncommon to find very long correction or retraction notices. However, more needs to be done, as current institutional policies often dictate that, if a paper is investigated and no evidence of wrongdoing is obtained, the mere existence of the investigation must be kept confidential. Academic institutions must therefore become more open in sharing this information in the interests of transparency. The scientific community would benefit from hearing not only about those cases in which data-integrity problems were disclosed, but also about those cases in which authors have been completely cleared of wrongdoing.

The growing concern with ensuring scientific integrity demands strong action from the whole community. Only through the concerted efforts of the different stakeholders can we hope to fulfill our aspiration of keeping a trustworthy and accurate scientific record.