More than a third of US scientists, in a survey of thousands, have admitted to misbehaving in the past three years. The social scientists who carried out the study of research misconduct warn that because attention is focused on high-profile, serious cases, a broader threat from more minor deeds is being missed.

Their conclusions may hit a nerve, particularly among scientific societies in the United States. Throughout the 1990s, these groups fought to limit their government's definition of misconduct and the types of behaviour it is responsible for policing.

Brian Martinson of the HealthPartners Research Foundation in Minneapolis, Minnesota, and his colleagues mailed an anonymous survey to thousands of scientists funded by the National Institutes of Health. They asked the scientists whether they were guilty of misbehaviours ranging from falsifying data to inadequate record keeping.

Of 3,247 early- and mid-career researchers who responded, less than 1.5% admitted to falsification or plagiarism, the most serious types of misconduct listed. But 15.5% said they had changed the design, methodology or results of a study in response to pressure from a funding source; 12.5% admitted overlooking others' use of flawed data; and 7.6% said they had circumvented minor aspects of requirements regarding the use of human subjects (see page 737).

Overall, about a third admitted to at least one of the ten most serious offences on the list — a range of misbehaviours described by the authors as “striking in its breadth and prevalence”.

But Arthur Caplan, director of the Center for Bioethics at the University of Pennsylvania, Philadelphia, cautions against concluding that the structure of science is corroded. He points out that dropping an outlying data point is not the same as plagiarizing a paper.

“I don't mean to say that the problems identified don't merit deliberation and a response,” he says. “But there may be a tendency if you just read the headlines to say, ‘Oh my goodness, the ethical house of science is collapsing around us’.”

Martinson counters that, although individual cases may not be as serious as fraud, the survey reveals a threat to the integrity of science that is not captured by narrow definitions of misconduct. “The majority of misbehaviours reported to us are more corrosive than explosive,” he says. “That makes them no less damaging.”

He thinks the main cause of all the questionable behaviour is the increasing pressure that scientists are under as they compete to publish papers and win grants. “We need to think about the working conditions in science that can be addressed,” he says, suggesting better salaries and employment conditions for young scientists, and a more transparent peer-review process.

He is at pains to stress that he does not think governments should expand regulation of scientific behaviour. And when shown Martinson's study, the Federation of American Societies for Experimental Biology, based in Bethesda, Maryland, was quick to reiterate its support for the narrow definition of misconduct that was officially agreed in 2000.

“The US government adopted ‘fabrication, falsification and plagiarism’ as the defining criteria, a policy with which we concur,” says Paul Kincade, the federation's president. That means the government cannot investigate or punish any behaviours outside that definition.

In 2002, scientific societies led by the federation and the Washington-based Association of American Medical Colleges fought a government office's plan to collect data on such behaviours (see Nature 420, 739–740; 2002; doi:10.1038/420739b). The societies argued that such monitoring should be the responsibility of scientists themselves.

Martinson and his colleagues say their study is the first attempt to quantify such activities. They hope their results will persuade scientists to stop ignoring the wider range of misbehaviour.