To err is human. But to catch that error — does that take a computer? That’s a question that psychologists have been wrestling with in recent months, as automated software has been checking their published findings on a huge scale.

The automated review of basic features of scientific papers marks a new front in the battle for research reproducibility, and one that has split the community. The divide must be bridged before it becomes too wide, and that will require criticism to be both offered and received in the true spirit of academic enquiry.

As we report in a Toolbox article, psychologists discovered starting in August that someone — or something — was commenting on the quality of thousands of their published papers. The comments were left on PubPeer — a website for post-publication review that often hosts anonymous allegations of image manipulation. Such allegations can lead to retractions and even, according to at least one lawsuit, to a job offer being rescinded.

In the case of the large-scale comments, the posts had been generated by an algorithm that pointed to potential errors in reported P values, measures of statistical significance that are too often used to decide whether results are worth publishing. The program, called statcheck, posted analyses of more than 50,000 papers. It sometimes erroneously tagged correct results as potential errors, and it identified many errors that were real but trivial. It also found instances in which P values that had been reported as reaching a threshold for statistical significance were actually just shy of it. Although a few authors have posted explanations and corrected results on PubPeer, none of the posts has, to Nature’s knowledge, resulted in a formal correction or retraction.
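statcheck itself is an R package, and its real implementation is more involved, but the core consistency check is simple enough to sketch. The short Python fragment below is an illustrative approximation under assumptions of ours: the function name, the rounding tolerance and the example values are invented for demonstration and are not statcheck’s actual code or API. It recomputes a two-tailed P value from a reported t statistic and flags both plain mismatches and ‘decision errors’, in which the reported and recomputed values fall on opposite sides of the significance threshold.

```python
# Illustrative sketch of a statcheck-style consistency test. statcheck is an
# R package; this Python version, its names and its tolerance are assumptions
# made for demonstration only.
from scipy import stats

ALPHA = 0.05          # conventional significance threshold
TOLERANCE = 0.0005    # allowance for rounding in the reported value (assumed)

def check_t_test(t_stat: float, df: int, reported_p: float) -> str:
    """Recompute a two-tailed P value for t(df) = t_stat and classify the report."""
    recomputed_p = 2 * stats.t.sf(abs(t_stat), df)
    if abs(recomputed_p - reported_p) <= TOLERANCE:
        return f"consistent (recomputed p = {recomputed_p:.4f})"
    # A 'decision error': reported and recomputed values sit on opposite
    # sides of the significance threshold.
    if (reported_p < ALPHA) != (recomputed_p < ALPHA):
        return f"possible decision error (recomputed p = {recomputed_p:.4f})"
    return f"possible reporting error (recomputed p = {recomputed_p:.4f})"

# A result reported as "t(28) = 2.02, p = .04" recomputes to p of about .053:
# just shy of the threshold, despite being reported as significant.
print(check_t_test(t_stat=2.02, df=28, reported_p=0.04))
```

In practice, statcheck parses statistics reported in standard APA format out of entire papers and covers several test types; a per-result check like the one above is only the kernel of that process.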

Some researchers were confused and upset by the mass fact-check; leaders in the psychological community warned that such projects unduly threaten the reputations of individual researchers and even the field. A former head of the Association for Psychological Science in Washington DC wrote a column decrying the use of “uncurated” social media for personal attacks and harassment. A controversial early draft accused research critics of “methodological terrorism”; it was later revised. Another group of researchers launched a petition that called for discussions to stay polite, but also argued that “the freedom to express legitimate criticism must take priority and be protected”.

To be sure, the automated statcheck comments lacked some useful context, and the algorithm is far from perfect. But much of the negative reaction has less to do with the ins and outs of a simple computer program than with the importance that people place on scientific papers. These are the currency of funding, tenure and prestige, so any challenge comes across as a threat to careers and reputations.

The implicit assumption that academic papers must adhere to an impossible standard of perfection does science a grave disservice. As Nature has pointed out before, the scientific paper is a marker on the way to scientific progress, not itself a destination. Scrutiny of papers is therefore to be welcomed, if only to check that the signposts are pointing in the right direction. New knowledge arrives constantly to correct and displace the old. It is a messy process, full of acrimonious discussions and painful realizations, but a necessary one. Errors must be rooted out.

The appropriate reaction depends on the nature of the error. Insightful reasoning can lead to incorrect conclusions that still advance science. A 1996 study of a meteorite found in the Allan Hills region of Antarctica argued that elongated nanometre-scale blobs in the rock were the fossils of alien bacteria. Subsequent abiotic explanations felled each argument in turn, but the study breathed life into the field of astrobiology.

Carelessness and avoidable errors will not have such positive effects. Revelations of typos and biased reasoning should make authors uncomfortable. Before submitting their work, they should take responsibility for re-examining manuscripts, both for simple slips and for the limits of their conclusions, and should invite colleagues to do the same. (Peer review improves the scientific literature both by giving papers more credibility and by forcing authors to do just this.)

Even so, errors will make their way into the literature. Anyone who finds flaws should seek corrections with diplomacy and humility. A gloating sense of ‘gotcha’ does not make for constructive criticism; some ill-considered phrases have caused lasting damage. But many scientists use their blogs for credible, restrained and nuanced criticism, often engaging the authors whose work is criticized.

The sharing and discussion of scientific work have changed drastically in a world of blogs, online repositories and Twitter. The fact remains, however, that self-correction is at the heart of science. Critics — curated or not — should be courteous, but criticism itself must be embraced.