Science: Branch of knowledge or study dealing with a body of facts or truths systematically arranged. So says the dictionary. But, as most scientists appreciate, the fruits of what is called science are occasionally anything but. Most of the time, when attention focuses on divergence from this gold (and linguistic) standard of science, it is fraud and fabrication, corruptions of the facts and truths themselves, that are in the spotlight. These remain important problems, but this week Nature highlights another, more endemic, failure: the increasing number of cases in which, although the facts and truths have been established, scientists fail to make sure that they are systematically arranged. Put simply, too many careless mistakes are creeping into scientific papers, in our pages and elsewhere.

A Comment article on page 531 exposes one possible impact of such carelessness. Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.

The finding resonates with a growing sense of unease among specialist editors on this journal, and not just in the field of oncology. Across the life sciences, handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.

The evidence is largely anecdotal. So here are the anecdotes: unrelated data panels; missing references; incorrect controls; undeclared cosmetic adjustments to figures; duplications; reserve figures and dummy text included; inaccurate and incomplete methods; and improper use of statistics — the failure to understand the difference between technical replicates and independent experiments, for example.
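
To make the last of these failures concrete: technical replicates are repeated measurements of the same sample, so they capture only measurement noise, not biological variation, and treating them as independent experiments inflates the apparent sample size and shrinks p-values. The short Python sketch below is a minimal, hypothetical illustration (the numbers are invented, not drawn from any published study): it simulates two conditions measured in three independent experiments, each in triplicate, and compares a naive t-test on all nine values per condition with the appropriate test on the three per-experiment means.

    # Hypothetical illustration: technical replicates are not independent samples.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, n_replicates = 3, 3

    # Each independent experiment has its own true level (biological variation)...
    true_a = rng.normal(10.0, 1.0, n_experiments)
    true_b = rng.normal(11.0, 1.0, n_experiments)
    # ...and each is measured in triplicate (technical noise only).
    a = np.concatenate([rng.normal(m, 0.1, n_replicates) for m in true_a])
    b = np.concatenate([rng.normal(m, 0.1, n_replicates) for m in true_b])

    # Improper: all nine values per condition treated as independent (n = 9).
    _, p_naive = stats.ttest_ind(a, b)

    # Proper: average the replicates, then test the per-experiment means (n = 3).
    _, p_means = stats.ttest_ind(a.reshape(n_experiments, n_replicates).mean(axis=1),
                                 b.reshape(n_experiments, n_replicates).mean(axis=1))

    print(f"p-value, replicates treated as independent: {p_naive:.4f}")
    print(f"p-value, per-experiment means:              {p_means:.4f}")

Because the naive test understates the variability between experiments, it will typically report a far smaller p-value than the data warrant, which is exactly the kind of error that makes a preclinical finding look more solid than it is.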

In most cases, the original data can be produced, the mistakes corrected, and the findings of the corrected paper still stand. Even so, too little attention is being paid and too many corrections are required. Such carelessness reflects unacceptable shoddiness in laboratories, and it risks damaging trust in the science that they, and others, produce.

The situation throws up many questions. Here are three of them. Who is responsible? Why is it happening? How can it be stopped?

The principal investigators (PIs) of any lab from which the work originates, especially if their names are on the paper, have an absolute and unavoidable responsibility to ensure the quality of the data from their labs, even if the main work is done by experienced postdocs. Officially, postdocs and graduate students are still in training, and it is the PI's job to make sure they are properly trained — in statistics and appropriate image editing, for a start. It is unacceptable for lab heads — who are happy to take the credit for good work — to look at raw data for the first time only when problems in published studies are reported.

In private, scientists who run labs in even the most prestigious universities admit that they have little time to supervise and train all their students. Institutions such as the European Molecular Biology Laboratory in Heidelberg, Germany, cap the size of their labs for this reason. Funding agencies should require grant applicants to state the size of their labs and to show that supervision will be adequate. As is the case in commercial companies, larger labs should introduce formal training and a management hierarchy, with more experienced postdocs and research associates required to sign off data and experiments if PIs cannot do so themselves.

What can journal editors and referees do? Sloppiness is sometimes caught, but so much must be taken on trust. Journals should certainly offer online commenting, so that alert readers can point out errors. Where comments or corrections appear in other journals, these should be linked from the original paper — as the Comment authors recommend.

There should also be increased scope to publish the fuller results of an experiment, and subsequent replication attempts, whether they corroborate or contradict the original findings. There is an opportunity here for 'minimum threshold' journals, such as PLoS ONE and Scientific Reports. Editors and referees cannot be expected to divine when only positive data have been included and inconvenient results left out, but journals should encourage online presentation of the complete picture. And scientists should offer it. The complete picture is, after all, what this science of ours strives to provide.