The launch of the clinicaltrials.gov registry in 2000 seems to have had a striking impact on reported trial results, according to a PLoS ONE study1 that many researchers have been talking about online in the past week.

A 1997 US law mandated the registry’s creation, requiring researchers, from 2000 onwards, to record their trial methods and outcome measures before collecting data. The study found that in a sample of 55 large trials testing heart-disease treatments, 57% of those published before 2000 reported positive effects from the treatments. But that figure plunged to just 8% in trials conducted after 2000. Study author Veronica Irvin, a health scientist at Oregon State University in Corvallis, says this suggests that registering clinical studies is leading to more rigorous research. Writing on his NeuroLogica Blog, neurologist Steven Novella of Yale University in New Haven, Connecticut, called the study “encouraging” but also “a bit frightening” because it casts doubt on previous positive results.

Irvin and her co-author Robert Kaplan, chief science officer at the Agency for Healthcare Research and Quality in Rockville, Maryland, focused on human randomized controlled trials that were funded by the US National Heart, Lung, and Blood Institute (NHLBI). The authors conclude that registration of trials seemed to be the dominant driver of the drastic change in study results. They found no evidence that the trend could be explained by shifting levels of industry sponsorship or by changes in trial methodologies.

Irvin says that by having to state their methods and measurements before starting their trial, researchers cannot then cherry-pick data to find an effect once the study is over. “It’s more difficult for investigators to selectively report some outcomes and exclude others,” she says.

Many online observers applauded the evident power of registration and transparency, including Novella, who wrote on his blog that all research involving humans should be registered before any data are collected. However, he says, this means that at least half of older, published clinical trials could be false positives. “Loose scientific methods are leading to a massive false positive bias in the literature,” he writes.

Following up on these positive-result studies would be interesting, says Brian Nosek, a psychologist at the University of Virginia in Charlottesville and the executive director of the Center for Open Science, who shared the study results on Twitter in a post that has been retweeted nearly 600 times. He said in an interview: “Have they all held up in subsequent research, or are they showing signs of low reproducibility?”

Irvin and Kaplan note that there are other possible explanations for this decreasing rate of positive results, such as improving cardiovascular health care, which could be making it harder to tease out the extra benefits of new treatments. David Gordon, a director with NHLBI’s division of cardiovascular sciences, agreed, pointing out that it may have been easier to find beneficial treatments in the 1970s when heart disease death rates were much higher. “In some ways these trials may have been the proverbial low-hanging fruit,” he says. “It has become increasingly difficult to improve upon these therapies.”

Still, of all the factors they studied, Irvin and Kaplan say that registration had the strongest effect, although it cannot erase all bias: even registered clinical studies showing positive results should be viewed with “healthy scepticism”, Irvin says. “Too often, the audience only reads the headline and the abstract.” It is only when you take a close look at the study details, such as effect sizes and response rates, that you can judge whether a result is likely to be clinically meaningful, she says.

For more, see www.nature.com/socialselection.