An automated tool has trawled through thousands of records on the world’s leading clinical-trials database to reveal which drug firms and academic institutions are failing to publish the results of their trials (see 'Unreported clinical trials').

The failure is already well documented: multiple studies have variously reported that 25–50% of clinical-trial results remain unpublished years after the trials are completed. And in September, the US Department of Health and Human Services announced tougher rules to push the researchers that it funds to publish clinical-trial designs and results.

But software, such as the tool described in a paper published online at F1000Research on 3 November, allows for a more comprehensive search than was previously possible, says the paper's corresponding author Ben Goldacre, a clinical-research fellow at the University of Oxford, UK. (The publication has yet to be peer reviewed.)

Automating the process also means that results can be updated regularly, which keeps the pressure on trial sponsors who fail to report — and enables them to take action to improve their scores.

Table: Unreported clinical trials

“If anyone wants to improve their score or improve their ranking, all they have to do is publish their results,” says Goldacre.

Computerized check

Goldacre and his Oxford-based co-author Anna Powell-Smith developed the tool to search the database for trials that were completed at least two years ago. The software then attempted to match those trials with results published in that database or in the research repository PubMed.
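The two steps described above, flagging trials completed at least two years ago and then checking each for a matched publication, can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' code: the record fields, the fictitious NCT identifiers and the exact cutoff arithmetic are assumptions made for the example.

```python
from datetime import date, timedelta

# Hypothetical registry records; field names and IDs are invented for illustration.
trials = [
    {"id": "NCT00000001", "completed": date(2013, 5, 1)},
    {"id": "NCT00000002", "completed": date(2015, 8, 1)},
    {"id": "NCT00000003", "completed": date(2012, 1, 15)},
]

# Trial IDs for which a matching result was found (in the registry or PubMed).
published_ids = {"NCT00000003"}

def unreported(trials, published_ids, today=date(2016, 11, 3)):
    """Return IDs of trials completed at least two years ago with no matched results."""
    cutoff = today - timedelta(days=2 * 365)
    due = [t for t in trials if t["completed"] <= cutoff]
    return [t["id"] for t in due if t["id"] not in published_ids]

print(unreported(trials, published_ids))  # → ['NCT00000001']
```

Note that the second trial is excluded entirely: it finished too recently to be "due", so it counts neither as reported nor as unreported.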

Of nearly 26,000 trials evaluated, 45.2% had no published results. The team also built a website that enables users to view clinical-trials sponsors in order of who is the best — or worst — at publishing their results. The lists contain a mix of academic and industry sponsors from around the world.

Automated, rather than manual, analyses are increasingly the norm for studies that scan for clinical-trial transparency, says Jennifer Miller, a medical ethicist at New York University’s Langone Medical Center. She points to the Good Pharma Scorecard, an initiative of Bioethics International, a charity that she founded.

The initiative ranks new drugs and companies on clinical-trial transparency, on the basis of automated analyses and machine learning. But it is careful to check its work manually and to confirm its findings with clinical-trial sponsors, she says. The Scorecard also searches other clinical-trial registries and research databases, including Google Scholar, which could capture a bigger pool of trials and papers than does Goldacre's tool.

Automating the search can lead to a sacrifice in precision, Goldacre acknowledges. For example, the search might miss published results if they are not tagged with the identification number assigned by the database, or if the journal in which they appear is not indexed in PubMed.

But although Goldacre says that his team did find some discrepancies in how individual studies were scored, the overall trends from his tracker are similar to those previously reported by manual surveys of smaller subsets of data. And he hopes that the ability to update results regularly will incentivize trial sponsors to improve their scores.

“This is such a serious business,” he says. “We need to maintain the pressure.”