Linking large-scale genomic information to specific diseases remains a tricky proposition. Credit: RH Ingram / www.ingrampublishing.com

Biologists combing through massive patient data sets often find potential biomarkers of certain diseases, but many of these signals turn out to be false. To weed out such spurious associations, the US National Cancer Institute (NCI) yesterday released a draft list of 29 criteria that researchers must address before receiving NCI funds to run a clinical trial, such as documenting where specimens came from and ensuring that experiments can be replicated properly.

The goal of the guidelines is to avert some of the problems that arise in developing tests on the basis of 'omics' studies: investigations of thousands of genes, proteins and other biomolecules within patient samples. Such tests could help to predict the best treatments for patients with cancer, but evaluating them can be tricky, says Lisa McShane, a biostatistician at the NCI in Rockville, Maryland, and a co-author of the guidelines.

Problems with omics predictions captured headlines after clinical trials for cancer at Duke University in Durham, North Carolina, were halted in 2010 owing to suspicions of data alteration. More than two dozen papers were retracted in connection with the scandal, and the lead investigator, Anil Potti, resigned amid an investigation of research misconduct (see 'Cancer trial errors revealed').

After the investigation, the US Institute of Medicine released a report in March calling for higher standards in developing omics tests to guide treatment decisions in clinical trials (see 'Lapses in oversight compromise omics results'). That report outlined principles for discovering, evaluating and validating omics tests, and specified that any tests used to determine a patient's medical care within a clinical trial should first be reviewed by the US Food and Drug Administration.

The NCI guidelines address problems similar to those raised in the IOM report, but they read more like a how-to manual, says McShane. Nor are they merely a checklist: because the NCI approves and funds clinical trials, the guidelines are enforceable.

Although cases of outright misconduct are rare, many papers and protocols contain serious scientific flaws, says McShane. Intuition breaks down when so many variables are considered, she says, making researchers blind to methodological flaws.

Keith Baggerly, a bioinformatician at MD Anderson Cancer Center in Houston, Texas, who, along with McShane, helped to uncover problems in the Duke trials, says that no one expected it would be so easy to find spurious patterns in genomic data.
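To see why intuition fails at that scale, consider a minimal sketch in Python (purely illustrative; the sample sizes, threshold and random data are invented, not taken from the Duke work or the NCI guidelines). When 20,000 genes are screened across 100 patients, pure noise still yields dozens of apparently 'significant' biomarkers:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_patients, n_genes = 100, 20_000
# Pure noise: expression values and outcome labels are independent.
expression = rng.normal(size=(n_patients, n_genes))
labels = rng.integers(0, 2, size=n_patients).astype(bool)

# Test every gene for a difference between the two patient groups.
t_stat, p_val = stats.ttest_ind(expression[labels], expression[~labels], axis=0)

print(f"'Significant' genes at p < 0.001: {np.sum(p_val < 0.001)}")  # ~20 by chance
print(f"Smallest p-value: {p_val.min():.2e}")  # often below 1e-5
```

Roughly one gene in a thousand clears the p < 0.001 bar by chance alone, so a naive screen of 20,000 genes hands back a pile of convincing-looking but meaningless hits.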

“Early after the genome was mapped, there was a notion we could take this wonderful data, plug it through some mathematical algorithms, and out would pop the secrets of biology. Guess what? It's not that easy,” says Larry Kessler, who studies how research can best move into clinical practice at the University of Washington in Seattle.

One of the most egregious flaws, says McShane, is 'resubstitution', in which some of the data used to develop a predictive model are also used to evaluate it. “You can see that even in top journals,” says McShane. Other problems occur when researchers fail to document thoroughly where patient specimens come from or how they have been assayed, making it hard to replicate results.
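A short sketch shows how badly resubstitution flatters a model (again with invented noise data; the scikit-learn calls stand in for whatever pipeline a study actually uses):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Noise data: 60 patients, 500 candidate biomarkers, random labels.
X = rng.normal(size=(60, 500))
y = rng.integers(0, 2, size=60)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Resubstitution: scoring on the very data used to fit the model.
print(f"Resubstitution accuracy: {model.score(X, y):.2f}")  # typically ~1.00

# Honest alternative: 5-fold cross-validation with held-out data.
cv = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"Cross-validated accuracy: {cv.mean():.2f}")  # ~0.50, i.e. chance
```

With far more candidate biomarkers than patients, a model can memorize its training data perfectly, so only its performance on data it has never seen says anything about real predictive value.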

“The guidelines are so important and necessary,” says Kessler. “A lot of experiments have not been done with the [necessary] rigour and replication.” Neither funders nor scientists get excited about replicating studies, he says, but not doing so is wasteful at best. The guidelines enforce replication at several levels: researchers must repeat an assay on the same sample to make sure that it gives consistent results, document computer code in enough detail for someone else to reproduce an analysis, and check that the same biomarkers are associated with the same conditions in independent data sets.
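The first of those checks, repeating an assay on the same samples, is often summarized with an agreement statistic such as Lin's concordance correlation coefficient. A minimal sketch follows (hypothetical samples and noise levels; the guidelines do not prescribe this particular statistic):

```python
import numpy as np

def concordance_ccc(run1, run2):
    """Lin's concordance correlation coefficient between two assay runs."""
    m1, m2 = run1.mean(), run2.mean()
    v1, v2 = run1.var(), run2.var()
    cov = np.mean((run1 - m1) * (run2 - m2))
    return 2 * cov / (v1 + v2 + (m1 - m2) ** 2)

rng = np.random.default_rng(3)
truth = rng.normal(10, 2, size=50)          # true analyte levels, 50 samples
run1 = truth + rng.normal(0, 0.3, size=50)  # first assay run, small noise
run2 = truth + rng.normal(0, 0.3, size=50)  # repeat run on the same samples

print(f"Concordance between runs: {concordance_ccc(run1, run2):.3f}")  # near 1
```

Values near 1 mean that two runs agree both in correlation and in absolute level; a plain Pearson correlation would miss a systematic offset between runs.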

Part of the problem is cultural, says David Ransohoff, a cancer epidemiologist at the University of North Carolina in Chapel Hill, who says that the NCI guidelines are a good start. There is a big difference between assessing data for clinical research and generating hypotheses in labs. “Whatever you're using, you have to give it a fair, blinded hypothesis test to validate it, and if you don't, you don't have strong enough data to be doing patient management.”

What researchers might bill as blinded is not necessarily so, says McShane. If results don't come out as expected, researchers may tweak a model or disregard outlier data and then rerun a test without realizing that those tweaks corrupt validation studies. “People don't realize how much bias that can introduce,” she says. 
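The bias McShane describes is easy to reproduce. In the sketch below (invented data and tweaks, not any real protocol), a researcher repeatedly adjusts a model and keeps the best score on a supposedly held-out test set; even on pure noise, the best of many attempts looks better than chance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Noise data again: there is no real signal to find.
X = rng.normal(size=(120, 300))
y = rng.integers(0, 2, size=120)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Tweak and rerun": try many model variants, keep the best test score.
best = 0.0
for C in [0.001, 0.01, 0.1, 1, 10, 100]:
    for n_keep in [10, 30, 100, 300]:
        # A crude tweak: keep only the first n_keep features.
        model = LogisticRegression(C=C, max_iter=1000)
        model.fit(X_train[:, :n_keep], y_train)
        best = max(best, model.score(X_test[:, :n_keep], y_test))

print(f"Best 'validated' accuracy after tweaking: {best:.2f}")  # well above chance
```

Each rerun gives the model another chance to fit the quirks of the test set, so the reported "validation" score drifts upwards even though nothing real has been found.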

McShane hopes that the guidelines will protect patients and make research more efficient. “We are seeing these guidelines not as something making people's lives difficult, but something that can spare people pain,” she says. “So they won't write up a bad protocol or hype a bad study.”