Main

If MIQE, pronounced “Mikey,” were a person, he or she would be someone with a familiar-sounding name who is held in high regard but is not close to many. MIQE urgently wants more friends.

MIQE stands for Minimum Information for Publication of Quantitative Real-Time PCR Experiments, a set of guidelines published in 2009 by scientists from 12 academic institutions and 2 companies1 and cited nearly 1,300 times since. Despite the guidelines, and despite the long tenure of quantitative polymerase chain reaction (qPCR, also known as real-time PCR) as a detection and quantification tool, the challenges of this technique seem to fall smack into many scientists' blind spot.

Changing that reality takes nagging and appeals, practical recommendations and new approaches, all in the name of achieving more reliable, reproducible results from qPCR experiments that will form the backbone of future clinical diagnostic tests. Despite the critical consequences of irreproducible results (Box 1), workshop organizers see both good and problematic behavior in labs using PCR (Box 2). Quality-control issues in qPCR are moving to the fore through activities by the MIQE authors and initiatives such as the European Union's public-private venture SPIDIA: Standardisation and improvement of generic pre-analytical tools and procedures for in vitro diagnostics. A recent SPIDIA paper assessing how more than 100 labs across Europe handle the quality of RNA extracted from blood samples revealed wide disparities in extraction procedures and processing; around 30% of the labs had at least two problematic quality-control parameters2.

Upping the standard

The reality of qPCR remains substandard in too many labs, says Stephen Bustin, a cancer researcher at Anglia Ruskin University, author of several books on qPCR and instigator of the MIQE guidelines.

One of his coauthors, Tania Nolan, who is with Sigma-Aldrich, says that a steady half of her workshop participants report that the qPCR guidelines, which list the 57 essential and 27 desirable pieces of information on sample quality, assay validation and data normalization that should accompany papers, are news to them. She and Bustin believe that scientists may also avoid quality control to save time or money.

Bustin and others also take publishers, especially those with high-impact-factor journals, to task for publishing papers that lack transparency about qPCR data. A survey of 1,600 papers published in 2010 and 2011 revealed the standard of reporting to be “terrible,” Bustin says. He is currently tallying the results of a repeat survey. Although most publications are still “as bad as ever,” practices are improving, he says. “So that is very rewarding.”

Stephen Bustin, Anglia Ruskin University. qPCR pet peeve: cutting corners and avoiding quality control. Credit: M.A. Athar

Nolan is optimistic about MIQE's prospects: papers are now emerging from experiments begun after the guidelines were published. New experimental approaches might help give the guidelines further impetus.

When qPCR approaches patients

Researchers may perform qPCR experiments in many different ways, but shortcuts can affect quality control. Credit: Bananastock

At the University of Toledo Health Science Campus, Jim Willey develops methods to analyze gene expression that are intended, one day, for patient diagnosis and treatment. His approach to improving accuracy and avoiding false negatives is to use internal standards. When added to the sample, these synthetic molecules are amplified much as the native template is but are different enough to be quantified separately3.

For those in basic research, quality might not be front and center, says Willey. “My message to people is that even during discovery, if you don't want to wind up going down a rabbit hole and with results that were not really valid to begin with, it's better to have quality control at the beginning,” he says. With this approach, his sense is that scientists can trust results indicating a potential diagnostic for patients and can also more readily detect experimental failure, for example, a lack of PCR product when an internal standard should be delivering one.
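
The arithmetic behind a competitive internal standard is simple enough to sketch. The Python fragment below is illustrative only, with invented numbers and a hypothetical native_copies helper; it assumes, as the approach requires, that the native template and the spiked standard amplify with near-identical efficiency, so the measured signal ratio preserves the initial molar ratio.

```python
# Minimal sketch of the competitive internal-standard idea (illustrative
# only): a known number of synthetic competitor molecules is spiked into
# the sample, both templates amplify with near-identical efficiency, and
# the native copy number is recovered from the ratio of the two signals.

def native_copies(native_signal: float, standard_signal: float,
                  standard_copies_spiked: float) -> float:
    """Estimate native template copies from the native:standard signal ratio.

    Assumes the native template and internal standard amplify with the
    same efficiency, so the signal ratio preserves the initial molar ratio.
    """
    return standard_copies_spiked * (native_signal / standard_signal)

# Example: 1,000 competitor copies spiked in; the native signal is 2.5x
# the standard's signal, so roughly 2,500 native copies were present.
print(native_copies(native_signal=2.5, standard_signal=1.0,
                    standard_copies_spiked=1000))  # -> 2500.0

# A failed reaction is also detectable: no signal from the internal
# standard flags amplification failure rather than true absence of target.
```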

Jim Willey, University of Toledo. qPCR pet peeve: under-recognizing the value of using internal standards. Credit: E. Crawford

Internal standards help in experiments measuring hundreds of genes, with a standard for each one. Willey and his team have developed ways to use internal standards on several qPCR platforms, such as the BioTrove OpenArray and ABI 7500, now offered by Life Technologies, and the Roche LightCycler 96, and they have an approach in the works for next-generation sequencing platforms.

Using internal-standard mixtures to provide quality control on a real-time platform is complicated. “You have to measure each target gene relative to its internal standard, and each reference gene relative to its internal standard,” he says. “But I think we've cracked that.” He follows the example of Roche and its application in which PCR of one gene, along with a two-color fluorometric readout, is used to detect the presence of an infectious agent such as HIV. Willey's team developed fluorometric methods to apply internal-standard mixtures when assaying multiple genes in the same tube, for example, multiple target genes and reference genes.

Internal standards can also help detect substances that inhibit PCR. Blood samples and tissue fixed in formalin and embedded in paraffin often contain PCR inhibitors. Heme is one culprit, Willey says, because it disrupts the enzymatic activity of the polymerase. RNA extracted from blood is often contaminated with heme, which leads to intersample variation.

Although serial dilution of the sample can detect an inhibitor, it is “a very cumbersome way to address the problem,” he says. Dilution can also cause genes with low expression to disappear.
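
One way to see why dilution flags inhibition: at 100% efficiency, each tenfold dilution should raise the quantification cycle (Cq) by about 3.32 cycles, and an inhibitor that is strongest in the least-diluted sample compresses that spacing. A rough sketch, with hypothetical Cq values and a somewhat arbitrary half-cycle tolerance:

```python
# Illustrative check for PCR inhibition in a 10-fold dilution series
# (hypothetical Cq values). At ~100% efficiency, each 10-fold dilution
# should add about 3.32 cycles; a compressed first step suggests an
# inhibitor acting at high template (and inhibitor) concentration.

EXPECTED_STEP = 3.32  # cycles per 10-fold dilution at 100% efficiency

def dilution_steps(cq_values):
    """Cq spacing between successive 10-fold dilutions (least dilute first)."""
    return [b - a for a, b in zip(cq_values, cq_values[1:])]

cq_series = [22.1, 24.6, 27.9, 31.2]  # hypothetical: neat, 1:10, 1:100, 1:1,000
for i, step in enumerate(dilution_steps(cq_series)):
    flag = "  <- possible inhibition" if step < EXPECTED_STEP - 0.5 else ""
    print(f"step {i + 1}: dCq = {step:.2f}{flag}")
# The compressed first step (2.5 cycles) hints that the undiluted sample
# is inhibited; later steps approach the expected ~3.32-cycle spacing.
```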

Willey is commercializing the internal-standards approach as the standardized nucleic acid quantification (SNAQ) method for PCR analysis. He developed the method together with Tom Morrison, who completed his PhD in Carl Wittwer's lab at the University of Utah, where the LightCycler qPCR instrument was developed and later licensed to Roche. SNAQ is licensed to Accugenomics, which focuses on cancer diagnostic tests. Willey founded the company in 2010, and Morrison is now chief scientific officer.

SNAQ arose from a clash with the realities of pathology. The quality of qPCR data is best when samples are kept fresh or frozen, but those are costly approaches. Samples are therefore more commonly fixed in formalin and embedded in paraffin, “which does horrible things to the RNA,” he says. To work with such degraded samples, he explored how to assess shorter PCR products. “We had to move the primers closer together to amplify shorter amplicons, like 60–80 base pairs instead of a couple hundred base pairs,” he says.

Separately, he is applying internal-standard mixtures to prepare PCR amplicon libraries for next-generation sequencing. The approach is geared toward lowering the cost of gene expression measurement and standardizing RNA sequencing. Both qPCR and next-generation sequencing are complex procedures, but methods development, money and willpower will help lower barriers to their use, Willey says.

Tania Nolan, Sigma-Aldrich. qPCR pet peeve: assuming that selected reference genes are rock steady without validating them. Credit: R. Taylor

In a given lab, scientists may find their quality control in PCR-based experiments “good enough,” says Willey. Researchers interviewed by Nature Methods offer recommendations to take qPCR beyond 'good enough'.

Practical tips for better qPCR quality control

Befriending RNA. Labs only “very rarely” properly assess their RNA isolation methods and the quality of RNA for the PCR workflow, says Bio-Rad's Sam Ropp from the company's division of gene expression reagents. Errors at the start are compounded by downstream errors, all of which can lead to erroneous, unrepeatable results, says his colleague Rachel Scott.

Designed assays. Sensitive, validated assays are an important element of PCR experiments. Some companies perform wet-lab validation for specific transcripts and use that as a selling point.

Integrated DNA Technologies compares its qPCR assays against unnamed competitor assays and charts such aspects as end-point signals and qPCR efficiency. Roche Applied Science offers what it calls function-tested qPCR assays for any human gene, and customers can order custom assays for any human, mouse or rat target. Thermo Fisher Scientific has assays designed for whole genomes, and Qiagen offers predesigned qPCR assays that the company says are suited for detecting targets shorter than 100 base pairs and for detecting RNA from formalin-fixed, paraffin-embedded samples.

Sam Ropp, Bio-Rad. qPCR pet peeve: researchers and vendors using the term 'validation' too loosely.

When designing and applying assays, scientists should ensure that they work within their assay's validated quantitative range and should look at factors such as efficiency, linearity and dynamic range, says Ropp.
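
Efficiency, linearity and dynamic range all fall out of a standard curve. A hedged sketch with hypothetical Cq values: regressing Cq against log10 of the input quantity gives a slope near -3.32 at 100% efficiency, the fit's R² speaks to linearity, and the span of inputs that stay on the line is the dynamic range.

```python
# Sketch: estimating amplification efficiency and linearity from a
# standard curve (hypothetical Cq data). Cq is regressed on log10(input);
# efficiency = 10**(-1/slope) - 1, with slope ~ -3.32 at 100% efficiency.
import math

def linear_fit(xs, ys):
    """Ordinary least-squares slope, intercept and R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical 10-fold dilution series: input copies and measured Cq.
inputs = [1e6, 1e5, 1e4, 1e3, 1e2]
cqs = [14.8, 18.2, 21.5, 24.9, 28.3]

slope, intercept, r2 = linear_fit([math.log10(q) for q in inputs], cqs)
efficiency = 10 ** (-1 / slope) - 1
print(f"slope = {slope:.2f}, R^2 = {r2:.4f}, efficiency = {efficiency:.1%}")
# A slope near -3.32 and R^2 > 0.99 indicate an efficient, linear assay
# across this four-log dynamic range.
```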

An assay taken from the published literature and validated for one cell line or animal model has “very little bearing on how those genes will behave in another specific sample,” says Scott.

When screening a few hundred gene targets at a time, scientists might try to save time by screening first and then validating only the ones that are different. To prevent researchers from missing discoveries, Bio-Rad markets assays for the entire human genome and has just launched the same for the mouse genome. In both cases, the company provides the data for wet-lab validation.

Beyond the slogan. Differing extraction methods, reagents, buffer composition, instruments and their varying thermal characteristics, along with many more variables, can change primer behavior, says Nolan. Some assays are stable, whereas others are sensitive to primer concentration. Even using the same primers at the same concentration, with the same template, in the same instrument, but with different reagents can lead to primers hybridizing with different efficiencies. “You could artificially think you've got more template,” she says, which is all the more reason for scientists to tell their colleagues exactly what they are using.

The phrase 'MIQE-compliant' has become a marketing tool to describe instruments and reagents. Sigma launched assays for use with SYBR Green dye to detect RNA targets, assays that are MIQE-compliant in that the customer also receives the primer sequences, says Nolan.

She advises that researchers verify an assay's effectiveness “in your hands, with your instrument, with your reagents, with your sample.” In that sense, it is “philosophically impossible to sell a MIQE-compliant anything, because that's not what MIQE says.”

Name that primer. The MIQE guidelines originally asked scientists to disclose the primer and probe sequences used in qPCR. But adherence was a challenge for scientists using commercial assays, whose vendors often do not disclose this information.

In a follow-up publication, the MIQE group softened their stance, stating that full primer and probe sequence disclosure remains “our ideal,” but it is adequate to identify the commercial assay and offer the “specific amplicon context sequence” for the qPCR assay, which is sequence information around a so-called anchor nucleotide in the sequence4.

This information can come from the vendor or be obtained by sequencing the target PCR amplicon. This approach, says Nolan, gives scientists greater visibility of their sequence region while allowing companies to keep probe sequences confidential.

Fight contamination. Running a water control sample, which contains all PCR reagents but no template, is a must, says molecular oncologist Matthias Dobbelstein at Göttingen University Medical School. In addition, he recommends an “extraction blank,” which is a sample obtained from a mock preparation of template DNA. This procedure is essentially running the extraction protocol from a true experiment but without the source of DNA, he says.

Only this approach will allow scientists to be “reasonably sure” that template DNA is not contaminating the extracted components. Given that the most frequent source of contamination is the product of previous PCR runs, it is also important to separate the lab areas for PCR setup and PCR analysis, he says.

A bundle of replicates. Having sufficient biological replicates has a major influence on data analysis quality, says Dobbelstein. Although it may be easy to run the same cDNA preparation simultaneously in three wells for qPCR, the process “provides you with a kind of accuracy that doesn't reflect reality,” he says. A much larger source of variation is that the template RNA or cDNA will be present in different amounts when an experiment runs over several days. Having at least three, or preferably even more, biological replicates is advisable, he says, and will give researchers a realistic idea of how much variation is in a given system.
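
A toy calculation, with invented Cq values, makes Dobbelstein's point about the two kinds of spread:

```python
# Illustrative contrast (hypothetical Cq values): technical triplicates of
# a single cDNA preparation typically scatter far less than independent
# biological replicates, so technical replicates alone overstate precision.
from statistics import stdev

technical = [24.10, 24.15, 24.08]      # same cDNA, three wells
biological = [23.6, 24.9, 24.1, 25.3]  # four independent preparations

print(f"technical SD:  {stdev(technical):.2f} cycles")   # ~0.04
print(f"biological SD: {stdev(biological):.2f} cycles")  # ~0.77
# The biological spread, over an order of magnitude larger here, is the
# variation that matters when judging whether an expression change is real.
```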

The new normal. In PCR, many factors influence the strength of generated signals, which is why scientists seek to normalize data against data from a gene with stable expression, a 'reference' gene. Normalization approaches include mathematical modeling methods developed at Aarhus University and the relative expression software tool (REST) from the Technical University of Munich5,6.
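
Those tools are model based, but the intuition behind reference-gene normalization is easiest to see in the long-standing comparative 2^(-ΔΔCq) method, which is not one of the cited tools but illustrates what normalization does. A minimal sketch, assuming roughly 100% efficiency for both assays and using invented Cq values:

```python
# Sketch of one common normalization scheme, the comparative 2**(-ddCq)
# method (hypothetical Cq values; assumes ~100% efficiency for both
# assays). Target expression is first normalized to a stable reference
# gene, then expressed relative to a control condition.

def relative_expression(cq_target_treated, cq_ref_treated,
                        cq_target_control, cq_ref_control):
    d_cq_treated = cq_target_treated - cq_ref_treated
    d_cq_control = cq_target_control - cq_ref_control
    return 2 ** -(d_cq_treated - d_cq_control)

# Example: the target's Cq drops by 2 cycles after treatment while the
# reference gene holds steady, implying roughly 4-fold upregulation.
fold = relative_expression(cq_target_treated=23.0, cq_ref_treated=18.0,
                           cq_target_control=25.0, cq_ref_control=18.0)
print(f"{fold:.1f}-fold")  # -> 4.0-fold
```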

One widely used algorithm for identifying suitable reference genes for a given sample and experiment is geNorm7. Developed in the lab of Jo Vandesompele at Ghent University, it is also sold as qbase+ as part of a commercial software package by the Belgian company Biogazelle. The company, founded by Vandesompele and his Ghent University colleague Jan Hellemans, has tiered pricing for academic and commercial labs.

According to the company, the tool supports data formats from numerous PCR instruments including Applied Biosystems, Bio-Rad, Fluidigm, Illumina, Qiagen, Roche, Agilent and WaferGen. Scientists performing single-cell qPCR can provide their own normalization values, and for those measuring a large number of genes, the tool can include all targets in the normalization process.

Rock-steady reference genes. Perhaps as a carryover from northern blotting, an older method for measuring gene expression, scientists tend to normalize their data to a single reference gene, says Nolan. The favorites, for which she receives weekly requests, are the genes encoding glyceraldehyde-3-phosphate dehydrogenase (GAPDH), β-actin and 18S ribosomal RNA. Often researchers do not experimentally validate these genes, so teams do not know whether their expression remains stable in a particular experiment with a given cell type or treatment. “It drives me crazy,” says Nolan.
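
Validation of the kind Nolan asks for can be approximated cheaply. Below is a simplified sketch in the spirit of geNorm's stability measure M, not the published algorithm itself: with invented Cq values and assuming ~100% efficiency (so Cq differences track log2 expression ratios), each candidate gets the average standard deviation of its pairwise Cq differences with the other candidates; lower means more stable.

```python
# Simplified sketch of a geNorm-style stability measure M (hypothetical Cq
# data; not the published geNorm algorithm). For each candidate reference
# gene, M is the average standard deviation of its pairwise Cq differences
# with every other candidate across samples; lower M = more stable.
from statistics import stdev

cq = {  # candidate reference gene -> Cq across five samples (hypothetical)
    "GAPDH": [18.0, 18.1, 19.5, 18.2, 19.1],
    "ACTB":  [17.2, 17.3, 17.1, 17.4, 17.2],
    "18S":   [9.0, 9.1, 8.9, 9.2, 9.0],
}

def stability_m(gene, table):
    others = [g for g in table if g != gene]
    sds = [stdev([a - b for a, b in zip(table[gene], table[g])])
           for g in others]
    return sum(sds) / len(sds)

for gene in cq:
    print(f"{gene}: M = {stability_m(gene, cq):.2f}")
# Here GAPDH's Cq wobbles across samples, giving it the highest (worst) M;
# a 'favorite' reference gene is not automatically a stable one.
```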

Scientists can also use reference-gene panels prepared as ready-made qPCR assays. Roche Applied Science, for example, offers configurable panels, as does Bio-Rad.

To help scientists with the appropriate selection of reference genes, Bio-Rad includes normalization tools in the software it provides to customers with its PCR instruments. Among the software tools is geNorm7. Choosing widely used tools for its commercial software “raises their visibility and accessibility,” Scott says.

On the subject of reference genes, Nolan, who used to be a rock climber, draws an analogy: “If I'm going to climb up a rock face and I want to use an anchor, I want to be absolutely sure that when I pull on my anchor, it's not going to go anywhere,” she says. “The analogy is: if your reference gene is going to move, then you don't want to use it.”