US behavioural researchers have been handed a dubious distinction — they are more likely than their colleagues in other parts of the world to exaggerate findings, according to a study published today.

The research highlights the importance of unconscious biases that might affect research integrity, says Brian Martinson, a social scientist at the HealthPartners Institute for Education and Research in Minneapolis, Minnesota, who was not involved with the study.

“The take-home here is that the ‘bad guy/good guy’ narrative — the idea that we only need to worry about the monsters out there who are making up data — is naive,” Martinson says.

The study, published in Proceedings of the National Academy of Sciences1, was conducted by Daniele Fanelli, an evolutionary biologist at the University of Edinburgh, UK, and John Ioannidis, a physician at Stanford University in California. The pair examined 82 meta-analyses in genetics and psychiatry that collectively combined results from 1,174 individual studies. The researchers compared meta-analyses of studies based on non-behavioural parameters, such as physiological measurements, to those based on behavioural parameters, such as progression of dementia or depression.

The researchers then determined how well the strength of an observed result or effect reported in a given study agreed with that of the meta-analysis in which the study was included. They found that, worldwide, behavioural studies were more likely than non-behavioural studies to report ‘extreme effects’ — findings that deviated from the overall effects reported by the meta-analyses. And US-based behavioural researchers were more likely than behavioural researchers elsewhere to report extreme effects that deviated in favour of their starting hypotheses.

“We might call this a ‘US effect,’” Fanelli says. “Researchers in the United States tend to report, on average, slightly stronger results than researchers based elsewhere.”

This ‘US effect’ did not occur in non-behavioural research, and studies with both behavioural and non-behavioural components exhibited slightly less of the effect than purely behavioural research did. Fanelli and Ioannidis interpret this finding to mean that US researchers are more likely to report strong effects, and that this tendency shows up most in behavioural research because researchers in those fields have more flexibility in their methodological choices, which produces more diverse results.

The study looked at a larger volume of research than has been examined in previous studies on bias in behavioural research, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville. However, he and other researchers say that this study shows only a correlation, so it does not prove that being a behavioural researcher or working in the United States causes the more extreme results. Behavioural studies may report more extreme outcomes because they examine more diverse conditions, researchers argue.

“One cannot straightforwardly conclude that the predictors are causes of the outcomes,” Nosek says. “To do an experimental test, we would need random assignment to biological or behavioural research and to US or non-US locations.”

Fanelli says that the new paper shows that behavioural research outcomes are more variable than those in genetics, a field with tighter methodological standards. A key question raised by this study, Fanelli says, is why such differences lead more often towards favourable extreme results in the United States.

“Whatever methodological choices are made, those made by researchers in the United States tend to yield subtly stronger support for whatever hypothesis they test,” Fanelli says.

Fanelli and Ioannidis do not explain why that might be. They found that the ‘small-study effect’, in which overall results are biased towards positive, extreme findings because negative findings from small studies are not published, did not explain their results.

“It has to be because of methodological choices made before the study is submitted,” Fanelli says, possibly under pressure from the ‘publish or perish’ mentality that takes hold when career progress depends on high-profile publications.

Zubin Master, a bioethicist at Albany Medical College in New York, finds this explanation credible. “The current economic climate may further add to the pressure on researchers to publish in high-profile journals in order to enhance their chances of securing research funds,” he says.

But how to verify that possibility is a bigger question.

“The value of this study is not to say that this phenomenon is hugely worse in the United States, or in this field of science compared to that one,” Martinson says. “But the fact that you can show it raises the question of what it means.”