Stephen Bustin knew something was wrong as soon as he visited the laboratory. He was investigating reports that researchers using a technique called real-time quantitative PCR (qPCR) had identified measles virus in the intestinal tissue of children with developmental disorders1. If true, those results supported the theory that a commonly administered vaccine caused autism in young children. If not, the anxieties of parents and public health officials had been needlessly inflamed.

Bustin found that this laboratory was next door to a facility producing DNA plasmids—a likely source of contamination. Even worse, on at least two occasions the researchers had neglected a basic step. Because measles virus is made of RNA, it must be converted to DNA before PCR can work. The enzyme that effects this conversion, reverse transcriptase, had been left out of some protocols, but there was no change in the results. Whatever they had detected was certainly not measles virus.

That case was extreme; one of the contributing authors was barred from practicing medicine after being accused of misrepresenting data in other work. But Bustin, a professor at Barts and the London School of Medicine and Dentistry, believes that the flaws he discovered resulted not from fraud but from sloppy science. Such errors are far from isolated, he says; a surprising number of researchers neglect very basic issues when performing qPCR. As in the case he investigated, many also neglect to report experimental details.

Nonetheless, the number of users and applications of qPCR is expanding, particularly for genotyping (Box 1), and the technique has long been the gold standard for quantifying gene expression. Since the commercialization of qPCR in 1996, more than 25,000 publications have referred to qPCR data, and use of this technique is growing exponentially, according to estimates in a special issue of Methods2. A plethora of kits for preparing, amplifying and analyzing samples is available, and the instruments have become increasingly affordable.

Many people pay little attention to very basic issues when they do qPCR, says Stephen Bustin at Barts and the London School of Medicine and Dentistry.

qPCR experts say that commercialization is generally a positive force for good practice. Commercial products often include important but frequently neglected controls. Manufacturers also offer troubleshooting support and thorough training in experimental design. On the negative side, vendors may be reluctant to scare potential customers with comprehensive descriptions of sources of error and artifact, and researchers who over-rely on kits may not realize when a procedure is problematic. In short, new offerings for qPCR make it easier for researchers to get better data but may lull them into being careless.

More details needed

qPCR derives from PCR, a qualitative technique that produces more copies of DNA molecules. qPCR is an analytical technique that allows their quantification. Quantitative instruments combine fast thermocyclers with fluorescence detection systems that monitor signal from dyes and probes that detect PCR products. The more copies of a DNA template present at the beginning of an experiment, the fewer PCR cycles are needed to make enough material for detection.
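That inverse relationship between starting copy number and cycles to detection is what makes quantification possible. As a rough illustration only (the threshold, efficiency value and function name below are invented for the example, not taken from any particular instrument), the following Python sketch shows why a tenfold difference in template translates into roughly 3.3 cycles when amplification doubles the product each cycle.

```python
import math

def cycles_to_threshold(starting_copies, threshold_copies=1e10, efficiency=1.0):
    """Cycles needed for a template to reach a detection threshold,
    assuming copies grow as N0 * (1 + E)**c each cycle (E = efficiency)."""
    return math.log(threshold_copies / starting_copies, 1 + efficiency)

# Tenfold more starting template crosses the threshold ~3.3 cycles earlier
# when amplification is perfectly efficient (E = 1, i.e., doubling).
print(cycles_to_threshold(1e4))  # ~19.9 cycles
print(cycles_to_threshold(1e5))  # ~16.6 cycles
```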

Using the number of PCR cycles to determine the amount of a sequence in a sample involves many steps. For gene-expression studies, RNA is extracted, stabilized and converted to DNA. DNA primers are used to copy, or amplify, sequences of interest through successive PCR cycles until resulting amplicons can be detected reliably, either by sequence-specific fluorescent probes or by dyes that nonspecifically bind double-stranded DNA. Finally, data are normalized, or calibrated against reference DNA templates that are expected to remain constant between samples.

Each step can introduce error. Some sequences are stabilized, converted, amplified or detected more readily than others, so analyses should compare changes in gene expression between samples rather than comparing expression levels of genes directly. Researchers also need to realize that estimates of the amount of amplification will be off if copy numbers stop increasing exponentially, as happens when reagents are used up. And even if sample preparation and amplification are flawless, a bad normalization strategy will render data meaningless.
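One common way to frame such between-sample comparisons is the comparative, or delta-delta-Cq, calculation, in which the target gene is first normalized to a reference gene within each sample and fold changes are then computed between samples. The Python sketch below is a minimal version of that idea; it assumes equal amplification efficiencies for target and reference, and the Cq values are invented for illustration.

```python
def fold_change(cq_target_treated, cq_ref_treated,
                cq_target_control, cq_ref_control, efficiency=1.0):
    """Relative expression of a target gene in a treated vs. a control sample,
    normalized to a reference gene (a delta-delta-Cq style calculation).
    Assumes target and reference amplify with the same efficiency E."""
    delta_treated = cq_target_treated - cq_ref_treated
    delta_control = cq_target_control - cq_ref_control
    return (1 + efficiency) ** -(delta_treated - delta_control)

# The target crosses threshold two cycles earlier (relative to the reference)
# in the treated sample: roughly a fourfold increase when E = 1.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0
```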

Many journals insist on qPCR data to back researchers' claims but do not require even basic information about how experiments were conducted and analyzed, says Bustin. “There's an assumption that qPCR is easy, and so you don't have to give much information,” he says, “but qPCR is not simple; it's a multistep assay where each step has to be carefully done and optimized.”

After the vaccine investigation, Bustin and others published advice for conducting and reporting high-quality qPCR experiments. The minimum information for publication of quantitative real-time PCR experiments (MIQE) guidelines detail 85 parameters from experimental design to sample processing, assay validation and data analysis3. In September 2010, the editorial board of BMC Molecular Biology endorsed these guidelines4; when Bustin gave a talk at a qPCR meeting in November, about 80% of the audience had heard of them, he says. Still, a constant influx of new practitioners means a constant repetition of old mistakes. “Only about a quarter can be considered experienced users,” estimates Jo Vandesompele, a qPCR expert at Ghent University and an author of the MIQE guidelines.

As the throughput of qPCR increases, researchers rely more on robots. Credit: W. Freeman, Pennsylvania State University

Setting up for higher throughput

Complicating matters is the fact that the most experienced users are starting to do more exploratory qPCR experiments in which it is impractical to optimize individual reactions. As qPCR has matured, researchers have monitored more and more reactions, says Willard Freeman, who directs the functional genomics core facility at Pennsylvania State University. “Ten years ago we were doing qPCR in individual reaction tubes and then 96-well plates and then 384,” which is the current standard for core facilities.

Higher throughput reduces the amount and cost of reagents used per assay but also introduces new sources of technical variability, such as dispensing very low volumes, which means that the robotics and liquid-handling instruments must be extremely reliable. A high-throughput format has several advantages for experimental design: large numbers of samples or genes can be run side by side on the same plate, reducing the chance that differences identified between samples actually reflect run-to-run variability between plates, and wells can even be left empty if doing so allows for a more intuitive arrangement of samples.

Several vendors sell focused gene arrays designed to study hundreds of genes involved in particular pathways or disease processes, including apoptosis, inflammation, signal transduction and cancer. Plates can either be custom-designed or ordered from a catalog and come with aliquots of specified primers for qPCR experiments; most include wells designated for reference genes.

Some companies, such as Qiagen and Lonza, sell arrays that can be used to detect amplification through fluorescent dyes that bind double-stranded DNA. Other companies, such as Life Technologies and Roche Applied Science, sell arrays for detection of amplification with probes that emit a fluorescent signal only when the correct amplicon is detected.

qPCR arrays can be very useful, says Vandesompele, but customers need to know how they have been assessed. Some companies test reactions empirically to select the best primers and probes, thus guaranteeing some specificity and sensitivity. Even then, assays may not have been tested over the range of concentrations at which a template will be assessed. “It's crucial that the customer understands the validation,” says Vandesompele. Public databases such as the real-time PCR primer and probe database (RTPrimerDB) also contain published and experimentally validated assays.

With hundreds or even several thousands of genes being analyzed in some experiments, scientists are now using qPCR to conduct analyses conceptually similar to those based on microarrays. With the resultant larger data sets, qPCR data are being subjected to similar types of calculations, such as principal component analysis and cluster analysis. “Before it was just quantification and then [more-sophisticated forms of] relative quantification. Nowadays we are doing high-throughput expression profiling,” says Michael Pfaffl, a qPCR specialist at the Technical University of Munich who maintains the very informative website http://www.gene-quantification.de/. “We are looking at gene classes and which genes are regulated in the same pattern.”
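For readers who want a concrete sense of what such calculations look like, the sketch below runs principal component analysis and hierarchical clustering on a made-up matrix of normalized, log-scale expression values; it is a generic illustration in Python with NumPy and SciPy, not the workflow of any particular software package mentioned here.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical matrix of normalized expression values (log2 scale):
# rows are samples, columns are genes.
rng = np.random.default_rng(0)
expr = rng.normal(size=(12, 96))

# Principal component analysis via SVD of the mean-centered matrix.
centered = expr - expr.mean(axis=0)
u, s, _vt = np.linalg.svd(centered, full_matrices=False)
scores = u * s                     # sample coordinates on the components
explained = s**2 / (s**2).sum()
print(explained[:3])               # variance captured by the first three PCs

# Hierarchical (Ward) clustering of the same samples.
tree = linkage(centered, method="ward")
print(fcluster(tree, t=3, criterion="maxclust"))  # assign samples to 3 clusters
```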

Physical limitations

Though qPCR is very sensitive, studying more genes usually requires more sample. For a 384-well plate, researchers need about 5 microliters of processed sample per well. A few technologies circumvent this problem. Some protocols include a 'preamplification' step to increase the amount of template for a subsequent qPCR experiment. This even allows qPCR to be conducted on single cells, but it also increases variability. Other approaches involve working with ultralow-volume PCR and microfluidics instruments. Fluidigm, for example, sells BioMark, which can analyze 9,216 reactions at once, requiring less than 7 nanoliters of each reaction. Similarly, the BioTrove NT, now sold by Life Technologies, can follow just over 8,000 33-nanoliter reactions simultaneously. Wafergen's SmartChip Panels use about 100 nanoliters each to monitor just over 5,000 reactions.

Whereas many researchers run gels to visually check RNA quality, qPCR experiments require a more sensitive analysis, says Sridar Chittur at the State University of New York, Albany. Credit: State University of New York, Albany, press office

In general, qPCR is excellent for studying a dozen or so genes in several dozens of samples. Whereas some biomarker studies use hundreds of microarrays, microarrays are most often used in exploratory studies to look at thousands of genes in ten or so samples. A company called Nanostring has commercialized a non-qPCR product for the middle ground: as many as 800 gene targets can be assayed from a 100-ng sample, with a throughput of 72 assays per day, says company scientist Philippa Webster. Instead of amplifying DNA for detection, the technology avoids enzymatic processing and uses barcoded fluorophores unique to each transcript. Reagents are mixed with whole-cell lysates and then fed into a dedicated instrument for detection. Most customers also have access to microarrays and next-generation sequencing and are using Nanostring for validation on 100–200 mRNA targets, says Webster.

The instrument and the reagent aspects of qPCR are somewhat mature, but people are struggling with data analysis and experimental design, says Jo Vandesompele at Ghent University and Biogazelle. Credit: Foundation Fournier-Majoie For Innovation

New techniques allow qPCR to be conducted on whole-cell lysates, without the need to purify RNA. Popular kits include Cells-to-CT lysis from Life Technologies and RealtimeReady from Roche. The technology is powerful, but whenever cell lysates are used, researchers have to be particularly careful, says Greg Shipley, who directs the Quantitative Genomics Core Laboratory at the University of Texas Health Science Center (UTHealth) in Houston. “The RNA is stabilized by the lysate buffer but only for a short time at −20 °C before some degradation sets in.” Shipley has been impressed with results from a product called RNAGem from the company ZyGem. The reagent kit contains an active thermophilic enzyme that quickly degrades proteins in a whole-cell lysate, leaving the nucleic acids intact.

No matter how RNA is prepared, the reverse transcription step is notoriously variable, with some enzymes converting certain sequences more efficiently than others. Last year, the Association of Biomolecular Resource Facilities reported results of a 20-lab survey examining six different reverse transcriptases on human neuroblastoma samples, with the goal of finding the best techniques for 'priming' RNA so that it can be recognized by the enzyme. If only one gene in the sample is being interrogated, a gene-specific primer can yield good results. Otherwise, the best qPCR results came from combinations of oligo(dT)s (targeting the poly(A) tail) and random primers. This held true regardless of whether the assayed region was located near or far from the 3′ end of the molecule.

This study also revealed that RNA quality is important for good reverse transcription, a fact that researchers used to running gels for quality control may not appreciate, says Sridar Chittur, who directs the microarray core at the State University of New York, Albany and co-led the study. Quality is commonly assessed by comparing ratios of two kinds of ribosomal RNA, which is by far the most common RNA in cell and tissue lysates. Microfluidics-based electrophoresis systems such as the 2100 Bioanalyzer from Agilent Technologies and Experion from Bio-Rad measure the integrity and concentration of RNA in a sample and produce metrics to represent RNA quality. These can be useful, but researchers should remember that these assessments are heavily influenced by the predominant ribosomal RNA, says Chittur. “Just because the ribosomal RNA is intact doesn't mean that the rest of your RNA is.”

Vandesompele recently compared six RNA quality indicators5. In addition to using general RNA-quality metrics and ribosomal RNA, his team assessed the amplification of several transcribed components: a repetitive sequence known as an Alu element, a single commonly used reference gene (HPRT1), a combination metric derived from four common reference genes and a combination metric incorporating differences in HPRT1 amplified with two primer sets targeting the start and end of the gene. Though all measures are correlated, assessments based on the mRNA rather than ribosomal RNA are the best indicators of reliable amplification, says Vandesompele, even though conducting this assessment requires an amplification step. Whether RNA is sufficiently intact to use in an experiment depends on the question, he says. For profiling samples, somewhat degraded RNA can be acceptable because classification depends on many genes. Nonetheless, no amount of normalization will completely dampen noise caused by poor RNA quality.

Cycling

After RNA has been converted to cDNA, there are plenty of biases built into the qPCR itself. Depending on the primers and the target, some sequences are copied and detected more efficiently than others. Higher-throughput experiments tend to use dyes that fluoresce upon binding to any double-stranded DNA; sequence-specific probes, in contrast, are designed to fluoresce only after binding to the target sequence.

How the chemistry works in your lab, using your hands and on your instrument is what matters, not what happens in silico. –Greg Shipley

The performance of all reagents should be rigorously assessed, says Shipley. “How the chemistry works in your lab, using your hands and on your instrument is what matters, not what happens in silico.” Differences in PCR efficiency can be monitored by adding internal controls or by comparing the amplification trajectory with references. (A software program called Kineret from Labonnet Ltd can be used to correct for these kinds of variations.)
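One widely used check is a dilution-series standard curve: Cq is regressed against the log of template input, and the slope gives the amplification efficiency. The sketch below illustrates that calculation in Python with invented numbers; it is not the algorithm used by Kineret or any other specific package.

```python
import numpy as np

def efficiency_from_dilution_series(log10_input, cq_values):
    """Estimate amplification efficiency from a standard curve: fit Cq against
    log10(template input); efficiency E = 10**(-1/slope) - 1."""
    slope, _intercept = np.polyfit(log10_input, cq_values, 1)
    return 10 ** (-1.0 / slope) - 1.0

# Hypothetical tenfold dilution series; a slope near -3.32 corresponds
# to ~100% efficiency (perfect doubling each cycle).
log10_input = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
cq = np.array([17.1, 20.5, 23.8, 27.2, 30.5])
print(efficiency_from_dilution_series(log10_input, cq))  # ~0.99
```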

A pilot study can help identify how to get the best data from the fewest samples, says Ales Tichopad, a scientist at qPCR services provider TATAA Molecular Diagnostics and a co-founder of Labonnet. For example, researchers can look at gene expression in three animals, in three samples from each animal, in three aliquots treated with reverse transcriptase from each sample and in three qPCR runs from each aliquot. A statistical technique called nested analysis can reveal the scope of natural variation as well as the variation associated with each processing step, and researchers can then decide to add replicates at the level where the major error arises, says Tichopad. He has written free software called powerNest (http://www.powernest.net/), available as a module of the GenEx software, that can analyze the variation found in a pilot study and help researchers optimize the types and numbers of samples to use in a study.
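The idea behind such a nested analysis is to split the observed spread in Cq values into variance components, one for each level of the design. The toy calculation below, written in Python with simulated numbers, shows a method-of-moments decomposition for a balanced three-by-three-by-three-by-three pilot; it is meant only to convey the concept and is not the powerNest or GenEx implementation.

```python
import numpy as np

# Simulated pilot data: Cq values indexed as
# [animal, tissue sample, reverse-transcription aliquot, qPCR replicate].
rng = np.random.default_rng(1)
a, s, r, q = 3, 3, 3, 3
cq = (25
      + rng.normal(0, 1.0, (a, 1, 1, 1))    # biological (animal) variation
      + rng.normal(0, 0.5, (a, s, 1, 1))    # sampling variation
      + rng.normal(0, 0.3, (a, s, r, 1))    # reverse-transcription variation
      + rng.normal(0, 0.1, (a, s, r, q)))   # qPCR replicate variation

# Method-of-moments variance components for a balanced nested design.
var_q = cq.var(axis=3, ddof=1).mean()
var_r = cq.mean(axis=3).var(axis=2, ddof=1).mean() - var_q / q
var_s = cq.mean(axis=(2, 3)).var(axis=1, ddof=1).mean() - var_r / r - var_q / (r * q)
var_a = (cq.mean(axis=(1, 2, 3)).var(ddof=1)
         - var_s / s - var_r / (s * r) - var_q / (s * r * q))

print({"animal": var_a, "sample": var_s, "RT": var_r, "qPCR": var_q})
```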

Even then, researchers need to be careful about variations in experiments. Different reagents are designed for different instruments, and although many researchers mix and match, varying conditions require caution. “Even if you change vendors for your [fetal bovine serum, a major component of cell culture media], you could have something different going on,” warns Chittur.

Data standards

Whether looking at hundreds of genes or just a handful, reliable results depend on corrections in the 'post-data' analysis. Multiple software programs are used to correct for all these errors and provide trustworthy results, explains Vandesompele, who founded the qPCR software company Biogazelle. Several instrument makers are making it easier for users to feed data from qPCR experiments into specialized analysis software, he says. Bio-Rad instruments now come bundled with the qbasePLUS software from Biogazelle. Software company Integromics developed RealTime StatMiner in partnership with Life Technologies. Eppendorf and Exiqon both have agreements involving the GenEx software. Additionally, an XML-based real-time PCR data markup language, RDML, provides standards for exchanging qPCR data.

The most important step in the 'post-data' analysis is normalization: picking reference genes that do not vary between samples and using them to calibrate measurements of the genes of interest. Rather than looking at their own data, new practitioners often pick a reference gene previously reported in the literature or used by a colleague, but reference genes that are appropriate in one experiment may cause extreme bias in another. Work by Vandesompele has shown that a single unvalidated reference gene can skew one-quarter of results threefold and one-tenth sixfold6. In fact, some reference genes that were very 'popular' in the early days of qPCR have now been shown to vary considerably, and the results of many studies are in question. Researchers following up on genes identified in exploratory studies on microarrays or large qPCR arrays can often look for genes that do not vary in their own samples and pick several reference genes on that basis. Despite any inconvenience, every group of researchers needs to validate their own reference genes, says Shipley. “Every experiment perturbs cells in a different way. There's no way to know which genes aren't going to change. You have to do it empirically.”
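For illustration, a simple way to rank candidate reference genes is to score each one by how much its Cq differences with the other candidates vary across samples, in the spirit of a geNorm-style stability measure. The Python sketch below uses invented Cq values and assumes equal amplification efficiencies; it is not the published geNorm algorithm or the qbasePLUS implementation.

```python
import numpy as np

def stability_m(cq):
    """geNorm-style stability score for candidate reference genes.
    `cq` is a samples x genes array of Cq values; lower M = more stable.
    Each gene's M is the mean standard deviation, across samples, of its
    pairwise Cq differences with every other candidate."""
    n_genes = cq.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        others = [k for k in range(n_genes) if k != j]
        delta = cq[:, [j]] - cq[:, others]        # pairwise delta-Cq per sample
        m[j] = delta.std(axis=0, ddof=1).mean()   # average over partner genes
    return m

# Invented Cq values for 8 samples and 4 candidate reference genes;
# the last candidate is given extra sample-to-sample variation.
rng = np.random.default_rng(2)
cq = 20 + rng.normal(0, 0.2, (8, 4))
cq[:, 3] += rng.normal(0, 1.0, 8)
print(stability_m(cq))  # the varying candidate should receive the worst (highest) score
```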