In the fragmented media marketplace, it is a brave publisher that dismisses the professional competence of most of its readers. So sensitive subscribers might want to avoid page 150 of this week’s Nature.

The criticism in question appears in a News Feature on the thorny issue of statistics. When it comes to statistical analysis of experimental data, the piece says, most scientists would look at a P value of 0.01 and “say that there was just a 1% chance” of the result being a false alarm. “But they would be wrong.” In other words, most researchers do not understand the basis for a term many use every day. Worse, scientists misuse it. In doing so, they help to bury scientific truth beneath an avalanche of false findings that fail to survive replication.

As the News Feature explains, rather than being convenient shorthand for significance, the P value is a specific measure: how likely results at least as extreme as those observed would be if the effect were not real. It says nothing about the probability that the effect is real in the first place. You knew that already, right? Of course: just as the roads are filled with bad drivers, yet no-one will admit to driving badly themselves, so bad statistics are a well-known problem in science, but one that usually undermines someone else’s findings.
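
To see why the naive reading fails, consider a rough calculation of our own (the specific numbers below are illustrative assumptions, not figures from the News Feature): the chance that a ‘significant’ result is a false alarm depends on how plausible the effect was to begin with, not on the P value alone.

```python
# A minimal sketch of why P = 0.01 is not "a 1% chance of a false alarm".
# Assumed, purely for illustration: 10% of tested hypotheses describe real
# effects, the significance threshold is P < 0.01, and statistical power is 0.8.

prior_real = 0.10   # assumed fraction of tested effects that are real
alpha = 0.01        # false-positive rate when there is no effect
power = 0.80        # assumed probability of detecting a real effect

n = 1000                          # imagine 1,000 hypothesis tests
real = n * prior_real             # 100 real effects
null = n - real                   # 900 null effects

true_positives = real * power     # 80 real effects declared significant
false_positives = null * alpha    # 9 null effects declared significant

false_alarm_share = false_positives / (true_positives + false_positives)
print(f"Share of 'significant' results that are false alarms: {false_alarm_share:.0%}")
# ~10% under these assumptions: an order of magnitude above the naive 1% reading.
```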

The first step towards solving a problem is to acknowledge it. In this spirit, Nature urges all scientists to read the News Feature and its summary of the problems of the P value, if only to refresh their memories.

The second step is more difficult, because it involves finding a solution. Too many researchers have an incomplete or outdated sense of what is necessary in statistics; this is a broader problem than misuse of the P value. Among the most common fundamental mistakes in research papers submitted to Nature, for instance, is the failure to understand the statistical difference between technical replicates and independent experiments.
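
As an illustration of that particular pitfall, here is a small simulation of our own (a hypothetical sketch, not an analysis of any submitted paper): when repeated measurements of the same biological sample are treated as if they were independent experiments, the false-positive rate balloons well past its nominal level.

```python
# Hypothetical simulation: two groups that truly do not differ, with three
# independent biological samples per group and three technical replicates per
# sample. Treating the nine replicate measurements as nine independent
# observations inflates significance; averaging replicates per sample does not.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trials = 2000
false_pos_wrong = false_pos_right = 0

for _ in range(trials):
    # Three independent samples per group, each with its own biological offset
    group_a = rng.normal(0, 1, size=3)
    group_b = rng.normal(0, 1, size=3)
    # Three technical replicates per sample add only small measurement noise
    reps_a = (group_a[:, None] + rng.normal(0, 0.1, size=(3, 3))).ravel()
    reps_b = (group_b[:, None] + rng.normal(0, 0.1, size=(3, 3))).ravel()

    # Wrong: t-test on nine "replicates" per group as if independent
    if stats.ttest_ind(reps_a, reps_b).pvalue < 0.05:
        false_pos_wrong += 1
    # Right: average the replicates, then test the three independent samples
    means_a = reps_a.reshape(3, 3).mean(axis=1)
    means_b = reps_b.reshape(3, 3).mean(axis=1)
    if stats.ttest_ind(means_a, means_b).pvalue < 0.05:
        false_pos_right += 1

print(f"False-positive rate, replicates treated as independent: {false_pos_wrong / trials:.0%}")
print(f"False-positive rate, independent samples only:          {false_pos_right / trials:.0%}")
# The first rate runs well above the nominal 5%; the second stays close to it.
```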

Statistics can be a difficult discipline to master, particularly because there has been a historical failure to properly teach the design of experiments and the statistics that are relevant to basic research. Attitudes are also part of the problem. Too often, statistics is seen as a service to call on where necessary (and usually too late) when, in fact, statisticians should be involved in the early stages of experiment design, as well as in teaching. Department heads, lab chiefs and senior scientists need to upgrade a good working knowledge of statistics from the ‘desirable’ column in job specifications to ‘essential’. But that, in turn, requires universities and funders to recognize the importance of statistics and provide for it.

Nature is trying to do its bit and to acknowledge its own shortcomings. Better use of statistics is a central plank of a reproducibility initiative that aims to boost the reliability of the research that we publish (see Nature 496, 398; 2013). We are actively recruiting statisticians to help to evaluate some papers in parallel with standard peer review, and we can always do with more help. (It has been hard to find people with the right expertise, so do please get in touch.) Our sister journal Nature Methods has published a series of well-received columns, Points of Significance, on statistics and how to use them.

Some researchers already do better than others. In the big-data era, statistics has changed from a way to assess science to a way of doing science — and some fields have embraced this. From genomics to astronomy, important discoveries emerge from a mass of information only when they are viewed through the correct statistical prism. Collaboration between astronomers and statisticians has spawned the discipline of astrostatistics. (This union is particularly apposite, because it mirrors the nineteenth-century development of statistical techniques such as least squares regression to solve problems in celestial mechanics.)

Among themselves, statisticians sometimes view their contribution to research in terms of a paraphrase of chemical giant BASF’s classic advertising tag line: “We don’t make the products. We make them better.” In doing so, they sell themselves short. Good statistics can no longer be seen as something that makes science better — it is a fundamental requirement, and one that can only grow in importance as funding cuts bite and competition for resources intensifies.

Most scientists use statistics. Most scientists think they do it pretty well. Are most scientists mistaken about that? In the News Feature, Nature says so. Go on, prove us wrong.