Sir,

In the 19 February 2004 issue of Nature, ten items (one Brief Communication, one Article and eight Letters to Nature) contained figures with error bars, but only three had figure legends stating what the error bars were: in one case, 80% confidence intervals; in another, standard deviations; and in the third, the standard error of the mean. The articles with incomplete legends spanned both the biological and physical sciences and many different disciplines, so these omissions clearly cannot be dismissed as isolated examples.

Error bars allow the reader to determine how much the data varied, and hence to estimate whether the experiments gave reproducible results, whether the results were significantly different from the controls and, sometimes, whether the data were obtained in an unbiased manner.
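The distinction matters in practice. As a minimal sketch (using hypothetical sample values, not data from any of the papers discussed), the following Python snippet computes, for a single sample, the three kinds of error bar mentioned above; the resulting bar heights differ severalfold, which is precisely why a legend must say which one is plotted:

```python
# Minimal sketch: the same sample gives very different error-bar heights
# depending on whether one plots the standard deviation (SD), the standard
# error of the mean (SEM) or a 95% confidence interval (CI).
# The sample values below are hypothetical.
import math
import statistics

sample = [9.8, 10.4, 9.6, 10.1, 10.7, 9.9, 10.2, 10.3]
n = len(sample)

mean = statistics.mean(sample)
sd = statistics.stdev(sample)   # sample standard deviation (n - 1 denominator)
sem = sd / math.sqrt(n)         # standard error of the mean
ci95 = 1.96 * sem               # approximate 95% CI half-width (normal
                                # approximation; for n - 1 = 7 degrees of
                                # freedom the t-multiplier would be ~2.36)

print(f"mean          = {mean:.2f}")
print(f"bar as SD     = +/- {sd:.2f}")
print(f"bar as SEM    = +/- {sem:.2f}")
print(f"bar as 95% CI = +/- {ci95:.2f}")
```

Without the legend, a reader cannot tell whether a bar spans roughly the full scatter of the data (the SD) or a range nearly three times narrower (the SEM, for a sample of this size), and so cannot judge reproducibility or significance.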

How did these omissions occur? If authors take the trouble to include error bars in their figures, why do they so often forget to state in the legends what the bars represent? How can reviewers be confident that the conclusions are correct if they are not told about the errors in the data? And why do reviewers not request a description of the error bars when they review the papers?

When properly described, error bars can be very revealing. In their analysis of the experiments and methods used by Jacques Benveniste to study homeopathy, John Maddox and colleagues illustrated how much information can be gained if one knows how to interpret errors correctly (Nature 334, 287–290; 1988; doi:10.1038/334287a0).

By not ensuring that every paper with error bars states what they represent, Nature publishes material that its readers cannot properly assess.