In 2023, Google awarded a total of US$10 million to researchers who found vulnerabilities in its products. Why? Because allowing errors to go undetected could be much costlier. Data breaches could lead to refund claims, reduced customer trust or legal liability.

It’s not just private technology companies that invest in such ‘bug bounty’ programmes. Between 2016 and 2021, the US Department of Defense awarded more than US$650,000 to people who found weaknesses in its networks.

Just as many industries devote hefty funding to incentivizing people to find and report bugs and glitches, so the science community should reward the detection and correction of errors in the scientific literature. In research, too, the costs of undetected errors are staggering.

That’s why I have joined with meta-scientist Ian Hussey at the University of Bern and psychologist Ruben Arslan at Leipzig University in Germany to pilot a bug-bounty programme for science, funded by the University of Bern. Our project, Estimating the Reliability and Robustness of Research (ERROR), pays specialists to check highly cited published papers, starting with the social and behavioural sciences (see go.nature.com/4bmlvkj). Our reviewers are paid a base rate of up to 1,000 Swiss francs (around US$1,100) for each paper they check, and a bonus for any errors they find. The bigger the error, the greater the reward — up to a maximum of 2,500 francs.

Authors who let us scrutinize their papers are compensated, too: 250 francs to cover the work needed to prepare files or answer reviewer queries, and a bonus of 250 francs if no errors (or only minor ones) are found in their work.

ERROR launched in February and will run for at least four years. So far, we have sent out almost 60 invitations, and 13 sets of authors have agreed to have their papers assessed. One review has been completed, revealing minor errors.

I hope that the project will demonstrate the value of systematic processes to detect errors in published research. I am convinced that such systems are needed, because current checks are insufficient.

Unpaid peer reviewers are overburdened and have little incentive to painstakingly examine survey responses, comb through lists of DNA sequences or cell lines, or go through computer code line by line. Mistakes frequently slip through. And researchers have little to gain personally from sifting through published papers looking for errors. There is no financial compensation for highlighting mistakes, and doing so can see people marked out as troublemakers.

Yet failing to tackle this issue comes at a huge cost. Imagine a single PhD student building their work on an erroneous finding. In Switzerland, their cumulative salary alone will run to six figures. Flawed research that is translated into health care, policymaking or engineering can harm people. And there are opportunity costs: for every grant awarded to a project unknowingly building on errors, another project is not pursued.

Like technology companies, stakeholders in science must realize that making error detection and correction part of the scientific landscape is a sound investment.

Funders, for instance, have a vested interest in ensuring that the money that they distribute as grants is not wasted. Publishers stand to improve their reputations by ensuring that some of their resources are spent on quality management. And, by supporting these endeavours, scientific associations could help to foster a culture in which acknowledgement of errors is considered normal — or even commendable — and not a mark of shame.

I know that ERROR is a bold experiment. Some researchers might have qualms. I’ve been asked whether reviewers might exaggerate the gravity of errors in pursuit of a large bug bounty, or attempt to smear a colleague they dislike. It’s possible, but hyperbole would be a gamble, given that all reviewer reports are published on our website and are not anonymized. And we guard against exaggeration. A ‘recommender’ from among ERROR’s staff and advisory board members — none of whom receive a bounty — acts as an intermediary, weighing up reviewer findings and author responses before deciding on the payout.

Another fair criticism is that ERROR’s paper selection will be biased. The ERROR team picks papers that are highly cited and checks them only if the authors agree to it. Authors who suspect their work might not withstand scrutiny could be less likely to opt in. But selecting papers at random would introduce a different bias, because we would be able to assess only those for which some minimal amount of data and code was freely available. And we’d spend precious resources checking some low-impact papers that only a few people build research on.

My goal is not to prove that a bug-bounty programme is the best mechanism for correcting errors, or that it is applicable to all science. Rather, I want to start a conversation about the need for dedicated investment in error detection and correction. There are alternatives to bug bounties — for instance, making error detection its own viable career path and hiring full-time scientific staff to check each institute’s papers. Of course, care would be needed to ensure that such schemes benefited researchers around the world equally.

Scholars can’t expect errors to go away by themselves. Science can be self-correcting — but only if we invest in making it so.