When embarking on a new experiment, researchers are often faced with the nontrivial task of choosing which tool or procedure best suits their needs. A methodological bounty can quickly become a confusing maze of choices, and this leads to problems: either considerable time and resources are devoted to determining a suitable experimental strategy, or an improper choice produces poor or even inaccurate results that may not be spotted until the hard-won work reaches peer review.

Examination of the original reports describing different methodological choices is often of little help, because published studies commonly differ in many variables that affect performance, making it difficult to extract information about the tools or methods themselves. Critical quantitative information is also often missing, making it hard to choose systematically among the available methodological options.

Despite their importance, empirical comparisons of tools or methods under standardized conditions are not commonly published in high-visibility journals. These endeavors often begin as informal comparisons for a specific project in an individual laboratory. Because they are not intended for publication, these comparisons are not conducted with the rigor required for peer review, particularly in high-tier journals. These valuable efforts thus end up unpublished or published in venues of lower visibility.

At the other end of the spectrum, research consortia often test methods extensively to determine the workflows they will use. Sometimes these evaluations are published in high-profile or technical journals, but it is not uncommon for them to end up as unpublished white papers.

Community experiments (competitions) fit into this category as well. Competitions are particularly well suited to comparing algorithmic methods that can easily be run on identical material across platforms. Such competitions, whether large and multilaboratory or smaller and independently spearheaded, are extremely valuable efforts that provide a favorable context for rapid progress in the field.

When any of these comparisons are done rigorously, involve the most up-to-date technology and are performed by experienced researchers, they deserve publication in a high-visibility journal. To this end, Nature Methods provides the Analysis article format. 'Analysis' papers generally contain a reasonably comprehensive set of comparative data and provide substantial quantitative or qualitative information of direct practical relevance to the choice and application of published methods and tools.

Analysis articles can be critical when the validity of a technology is called into question owing to disparate results obtained by different groups applying the technology under similar conditions. One example was the early days of microarray technology, when there seemed to be a lack of concordance in the results obtained when assaying gene expression patterns of a given sample using different platforms. In 2005, we published three large comparisons, involving 15 array platforms and performed in 17 laboratories, that established conditions for comparing results between different platforms and research groups.

Nature Methods has continued to publish this type of work over the years. Analyses are also critical in fields experiencing rapid development of new tools. In these cases, head-to-head comparisons are intended to help users choose the best candidate for their experiments. Perhaps more importantly, these studies also extract general principles of tool performance; this information can in turn be of great value for the further development of new tools and can help end users understand how experimental parameters translate into differences in performance. Such studies also provide a framework for characterizing new tools or tools not included in the analysis.

Two recent examples of such initiatives could become well-thumbed copies on many desks. In our December 2011 issue, Zhuang and colleagues systematically and empirically compared 26 different fluorescent dyes used for super-resolution imaging (Nat. Methods 8, 1027; 2011), and in this issue, Deisseroth and colleagues compare 14 depolarizing and 9 hyperpolarizing microbial opsins for the modulation of neuronal activity in optogenetic experiments (p. 159).

Although these Analyses will not answer every question about the performance of these tools in every possible circumstance, they provide important quantitative and qualitative information about the tools themselves and about how their properties influence experimental outcomes.

We hope that you will find these Analyses useful, and we welcome future endeavors of this kind.