Take one prestigious laboratory. Add some pressing grant deadlines and a dash of apprehension about whether the applications will succeed. Throw in an overworked lab head, a gang of competitive postdocs and some shoddy record-keeping. Finally, insert a cynical scientist with a feeling that he or she is owed glory. It sounds hellish, but elements of this workplace will be familiar to many researchers. And that's worrying, as such an environment is, according to sociologists, the most fertile breeding ground for research misconduct.

Just a decade ago, such a statement would have been speculation. But sociologists are increasingly confident that they understand why scientists cheat. Studies of disgraced researchers, a series of high-profile misconduct cases, and a stream of government funding have created the discipline of research into research integrity. The results are a better understanding of those who betray science, and of the climate in which they do so. They also suggest how misconduct can be reduced, although there are good reasons to think it will never be eliminated.

To tease apart the factors behind acts such as fabricating data or unfairly appropriating ideas, sociologists say we must turn away from the media glare that surrounds extreme cases such as Woo Suk Hwang, the South Korean stem-cell scientist who faked high-profile papers on human cloning [1]. Such cases are to research integrity what serial killers are to crime prevention, says Kenneth Pimple, an ethicist at Indiana University in Bloomington. Hwang and others grab the headlines, but minor acts of misconduct are much more common, and potentially more damaging to scientific progress.

That runs against the grain of traditional thinking on misconduct, at least among scientific societies, which have often argued that cheating is due mainly to a few bad apples. But that view now looks much less tenable. When scientists funded by the US National Institutes of Health were asked in 2002 about misconduct, a third said they had committed at least one of ten serious errant acts, such as falsifying data or ignoring important aspects of the regulations regarding human subjects [2]. Young physicists also returned disturbing results when questioned by the American Physical Society in 2003: more than 10% had observed others giving less than truthful descriptions of research techniques or analyses, for example.

To understand what is driving these figures, researchers would like to study confirmed cases of misconduct. Information here is sparse, as convicted scientists do not generally rush to tell their stories.

What data there are lie mainly in the files of the US Office of Research Integrity (ORI), which oversees biomedical misconduct investigations in the United States. Mark Davis, a criminologist at Kent State University in Ohio, trawled 92 ORI cases from 2000 and earlier and revealed seven factors frequently associated with misconduct [3]. Some involve research climate, such as a lack of support from superiors or competition for promotion. Others, such as a tendency to blame the difficulty of a particular experimental task, point to ways in which individuals justify their own actions.

Pressure cooker: the competition between and within biomedical research labs can be intense.

Habit-forming

To get a finer-grained image of these factors, sociologists turn to survey data. At the University of Oklahoma in Norman, for example, researchers asked doctoral students how they would react to specific ethical dilemmas, such as a researcher taking an idea from a colleague and, without letting that colleague know, using it to apply for funding.

Many of the factors seen in Davis's analysis cropped up again, but by asking students about their past and present experiences the Oklahoma team, which presented its work at the ORI conference in Tampa, Florida, last year, added new details. In particular, its work suggests that past experience, such as graduate training, can be more important than the current climate in which people work. Day-to-day aspects matter (interpersonal conflict was associated with unethical decisions, for example), but earlier experiences, such as having worked in a lab whose head showed positive leadership, seem to carry more weight.

The need for support extends beyond the level of research groups. Brian Martinson from the HealthPartners Research Foundation in Minneapolis and his colleagues have studied misconduct using the theory of organizational justice, which states that employees are more likely to behave unethically if they believe their managers are treating them unfairly. Sure enough, Martinson's survey respondents were more likely to admit misconduct if they felt that governing structures, such as funding review bodies, had treated them badly [4]. In an as-yet unpublished role-playing exercise, Patricia Keith-Spiegel of Simmons College in Boston, Massachusetts, also found that researchers were more likely to act unethically if a negative decision by a review board was not properly explained.

With these results in place, misconduct experts can make tentative statements about how to limit problems. Robust and positive mentoring is top of the list. At the ORI, for example, director Chris Pascal says that a hands-on principal investigator (PI) who talks to junior scientists regularly and stresses the need to run experiments properly, rather than rushing out results, can make a big difference: “Get a PI like that and the risk of misconduct is much lower.” Institutional policies that insist on good record-keeping are also essential, adds Pascal.

Both pieces of advice seem straightforward, but neither is followed as widely as it should be. A 2003 ORI survey [5] concluded, for example, that one in four lab heads did not take their supervisory roles seriously enough. Many institutions also fail to enforce data-management policies, says Pascal. That is worrying, as poor record-keeping was present in almost 40% of more than 550 misconduct cases studied in a survey published this month [6].

Would these actions address all the causes of misconduct? Almost all scientists have felt pressure at some point in their careers, yet the surveys suggest that the majority do not commit even minor misdeeds. “It doesn't push everyone over the edge,” says Nicholas Steneck, a historian at the University of Michigan in Ann Arbor who has worked on misconduct policies for the ORI and other organizations. “It comes down to individuals.”

Here the research is more controversial. Alongside environmental factors, the Oklahoma group has looked for links between personality traits and ethical decisions. In each of the four areas examined, from data management to experimental practice, the team found that subjects with high ratings for narcissism returned low scores for ethical decision-making. A sense of entitlement ('I'm owed this result because of my hard work') also predisposes researchers to misconduct. So does a cynical view of others.

Could identifying fraud risks create a Minority Report-style police state?

Such results might improve understanding, but the potential for abuse worries some. Martinson says the situation reminds him of the science-fiction movie Minority Report, in which premonitions are used to apprehend individuals thought to be future criminals. Universities might, for example, choose to reduce misconduct risk by screening for traits such as narcissism in potential employees. “What you can do with this is frightening,” says Martinson. “It doesn't lead to positive social control. At the extreme it leads to a police state.”

That need not be the case, argues Stephen Murphy, a PhD student who has worked on several of the Oklahoma studies. Rather than being used to weed out individuals, personality work can inform research-integrity courses. Asked about ethical decisions, for example, most researchers declare themselves more ethical than their colleagues. Illustrating this by giving scientists a questionnaire and then sharing the results with the group is a powerful way of showing researchers that they are more flawed than they think, says Murphy. Those with narcissistic tendencies would not be treated any differently, but they might benefit more from such an exercise.

Over the edge

Yet even when knowledge of individual and environmental factors has been plugged into integrity training, there remains a bigger question about the way science is run. Many scientists work under enormous pressure, even in an era of relatively generous funding. Career paths in science exacerbate the situation. Unlike people in many other professions, scientists must constantly prove themselves by publishing papers. Success also depends not just on the view of a few immediate colleagues, but on that of the whole field. “It's about building a reputation,” says Martinson. “In science that's the coin of the realm.”

The case of obesity expert Eric Poehlman shows how these factors can push people over the edge. At his sentencing last year, when he received a year in jail after admitting to falsifying data in papers and grant applications, he said: “The structure...created pressures which I should have, but was not able to, stand up to. I saw my job and my laboratory as expendable if I were not able to produce.”

The similarities between Poehlman's testimony and that of many other fraudsters point to factors that institutions can tackle. The problem is that many of the risk factors for misconduct also seem to be what makes for good science. Most would agree that competition is needed to allocate over-subscribed research funds appropriately, as well as to push individuals to evaluate their ideas. So even with the best research environment and training, fraud is unlikely to disappear. “It's the dark side of competition,” says Martinson. While the pressure remains, so will some level of misconduct.