What is the most effective way for mentors to prevent misconduct among trainees? First, they should make sure that those trainees understand the importance of research integrity. Consistently modelling good practice beats lecturing hands down, and discussing ethical guidelines at laboratory meetings helps the team to appreciate honesty — and the grim consequences of misconduct.

But mentors should also understand the motivation behind some acts of misconduct, and the steps they can take to make sure that misguided trainees don't commit scientific fraud.


While dean of a US biomedical institution more than a decade ago (before my time at the Stowers Institute), I dealt with three cases of scientific misconduct. Each led to an admission of misconduct, sanctions against the perpetrator by the US Office of Research Integrity (ORI) and public disclosure of the person's identity. One case also led to the retraction of several publications. In none of the cases was there any wrongdoing on the part of the mentors.

In the first case, a postdoctoral fellow running 90-minute experiments found that data points tended to plateau after the first 30 minutes. Concluding that nothing of interest happened in the final hour, the postdoc started fabricating those data points. By taking this shortcut, the postdoc quickly generated data, which the mentor incorporated in a manuscript that was then submitted for publication. After belatedly examining the postdoc's lab notebook, the mentor discovered the discrepancy between data collected and data included in figures, and withdrew the manuscript — but not before it had been accepted by a journal. When confronted, the postdoc confessed, and was fired by the host institution. The case took a twist when the postdoc formally accused the mentor of encouraging misconduct by pressuring trainees to generate data. The host institution conducted an inquiry according to ORI standards and found no evidence that other trainees in the lab perceived unusual pressure to produce results.

Fortunately for the scientific reputation of the mentor, he required lab members to maintain bound notebooks that included details of all experiments and data. The case taught him always to scrutinize the relevant notebook entries before submitting a manuscript.

In the second misconduct case, a mentor had asked a postdoc to purify a protein sample so that only a single band remained in a western-blot assay. Instead, the postdoc used a physical mask so that only one band was recorded. A technician found the discarded mask and took it to the mentor, who confronted the postdoc; he admitted falsifying the results. The postdoc was fired by the host institution and sanctioned by the ORI.


What if the technician had not discovered the mask? The mentor could still have taken steps to safeguard the integrity of the work and his own reputation. Having urged the postdoc to purify the sample until a blot showed only one band, he could have sought evidence of the purification steps in the postdoc's lab notebook. And he could have asked a second member of the research team to verify that the results were reproducible.

The final case wreaked havoc on a mentor's research programme. It started when a graduate student falsified a cell-killing assay and fabricated data to support the mentor's favoured hypothesis. The fraud continued when the mentor retained the culprit as a postdoc, on the basis that no one else could “make the assay work properly”. Over the course of several years, the postdoc manufactured data for multiple publications.

After the postdoc left the lab to join a biotechnology company, the mentor assigned a new postdoc to perform the assay. When he could not get the expected results, the new postdoc personally paid the former postdoc to perform the assay for him. Sure enough, the results supported the mentor's theory. But the former postdoc would not show the new postdoc how he performed the assay.

The former postdoc then left his biotechnology job, and the mentor rehired him, assigning him responsibility for the assay. But the rehired postdoc would only perform the assay late at night, after everyone had left. Frustrated, the new postdoc hid in the lab one night and saw the culprit pipette a radioactive label directly into scintillation vials, without any attempt to recover it from experimental samples of labelled cells that had been exposed to the (hypothetical) killing agent. The new postdoc reported his observations to the mentor, who immediately informed the dean.

The dean learned that the mentor had never given blinded specimens to the culprit. To avoid having to rely entirely on the new postdoc's testimony, the university's chief academic officer advised the mentor to set up a sting operation. The mentor prepared specimens labelled as experimental that contained no radioactivity. When assayed by the rehired postdoc, these specimens yielded radioactivity that only he could have added to the scintillation vials. When first confronted, he denied everything. The ORI reviewed results of the investigation and concluded that the rehired postdoc had engaged in misconduct. Only then did he acknowledge his guilt. The mentor and co-authors from multiple institutions retracted four high-profile publications that had been based on the fabricated data.

On reflection, what could a mentor have done to prevent this debacle? Simply keeping experimental and control specimens blinded during analysis would have sufficed. Moreover, I believe that a mentor in such circumstances should hear alarm bells if only one person in a lab can get the assay to work. Whenever results depend on human manipulation or measurement, team members should verify each other's work. When a new person joins the lab, the mentor can make it clear that verification practices do not reflect mistrust. For consistency, co-workers should also repeat the mentor's measurements.

Do such practices prevent fraud? They certainly make it more difficult. Just as importantly, they protect against inadvertent error and subconscious bias. Many of us wish for data that support our theories, and trainees may anticipate outcomes that would please the mentor. The evidence suggests that very few trainees curry favour by fabricating data, but mentors should be careful not to encourage misconduct by signalling disappointment when a trainee's data confound expectations. The chances of falsification or fabrication are greatly reduced when a lab uses only blinded specimens and when other lab members are always responsible for independently verifying reproducibility.

In my experience, mentors often avoid discussing scientific misconduct with lab members, perhaps out of a misguided concern that doing so might imply mistrust. There are ways, however, to circumvent this. For example, mentors could broach the topic by first discussing the increasing incidence of retractions (up tenfold in the past decade; see Nature 478, 26–28; 2011). In that way, they can engage trainees without calling into question anyone's integrity.

Mentors should not avoid a discussion on research integrity just because of their own discomfort. The potential consequences for careers and reputations are too severe.