
Following up on last month's focus on RNA interference (RNAi), a group of investigators has come together to articulate in a Commentary (p. 777) the caveats they have identified in large-scale RNAi screens. They propose that standard controls be adopted to minimize the risk of reporting false positives among screen results. Although the September Focus (http://www.nature.com/nmeth/focus/rnai/index.html) already included a review of best practices for enhancing the specificity of RNAi experiments, it was deliberately centered on experiments aimed at knocking down the expression of a single, specific gene of interest in mammalian cells, the chief concern of many RNAi users. But for those involved in genome-wide screens in various model organisms, like the Commentary's authors, the recommended controls may be technically difficult to implement.

And yet, these investigators agree that the proliferation of large-scale screens creates an urgent need to reach a consensus on necessary controls. Their Commentary is not meant to dictate rules but to open a community-wide discussion leading to a set of best practices that can realistically be implemented across all experimental settings. We therefore welcome your reactions (at methods@natureny.com) and hope for a constructive dialogue that we can publish for the benefit of all.

The need for such debate has been reinforced by two recent papers, one published online several weeks ago by Ma et al. (Nature; published online 10 September 2006) and one by Kulkarni et al. (this issue, p. 833). These reports show that, contrary to widely held assumptions, long dsRNAs used as silencing agents in Drosophila melanogaster cells can silence genes that are not the intended targets but share short regions of sequence identity with the target, thus causing false-positive results in genome-wide screens. In one of these studies (p. 833), researchers from the Drosophila RNAi Screening Center at Harvard Medical School used the data generated in 30 genome-wide screens performed at the facility to retrospectively assess the frequency of off-target effects. It is the availability of such large datasets and collections of RNAi reagents that made this systematic analysis possible and meaningful. The unsettling results show how important it is to keep a critical eye on one's tools and technologies, and they justify a re-evaluation effort whenever possible, especially when the understanding of the underlying biology evolves in parallel with the tool.

Sometimes the biology is quite clear, but the available data do not lend themselves well to systematic analysis; in such cases, generating an expressly designed dataset may be necessary. That is exactly what the MicroArray Quality Control consortium did (see the September issue of Nature Biotechnology and a Research Highlight in this issue, p. 772). At the initiative of scientists at the US Food and Drug Administration, this group of microarray users and manufacturers has evaluated the reliability of several microarray platforms for gene-expression profiling. Although similar multicenter studies have been published, the present one was specifically designed to determine whether microarrays are sufficiently robust to be used in clinical and regulatory contexts. This re-evaluation was a necessary step toward the eventual translation of microarray technology from a research tool into a clinical diagnostic instrument and a basis for regulatory decisions.

Yet another reason to re-evaluate a technology is that its purpose may have shifted gradually over time, as the technique gains popularity and comes to be used in different contexts. An example of this kind can also be found in this issue, in the study by Singec and colleagues, who take a critical look at the so-called 'neurosphere assay' (p. 801). Originally developed in 1992 by Brent Reynolds and Samuel Weiss as a way of isolating neural stem cells from brain tissue, the assay quickly gained popularity as a powerful method for the detection and expansion of stem cells, neural and otherwise. Slowly, interpretations of the assay started to drift: from the original intent of isolation, it came to be used, at times, as a quantitative measurement. Last year, Reynolds and Rietze cautioned that equating neurosphere frequency with stem cell frequency was misleading, as not all neurospheres are derived from stem cells (Nat. Methods 2, 333; 2005). Singec and colleagues now take this cautionary tale one step further. By demonstrating that neurospheres are not clonal entities, but in fact regularly merge with one another under the culture conditions commonly used to run the assay, they call into question any quantitative interpretation based on the number, size or composition of spheres.

In all these situations, in which re-evaluation has the potential to increase confidence in a technology or to improve its use, one should applaud the efforts of the researchers who spend time and energy ensuring that familiar tools are used at their best. A bad workman blames his tools; a good scientist constantly appraises his with a critical eye and does not hesitate to use the sharpener as often as needed.