
Researchers in Nantes, France, working on a COVID-19 vaccine in 2021. The use of preprints to disseminate research findings saw a major uptick during the pandemic. Credit: Loic Venance/AFP/Getty

The COVID-19 pandemic saw an explosion in the publication of preprint articles, many by authors who had never produced one before. Now it seems that a high proportion of these scientists are likely to continue the practice.

A survey published in PeerJ1 questioned researchers who had posted preprints relating to COVID-19 or the virus SARS-CoV-2 in 2020, across four preprint servers: arXiv, bioRxiv, medRxiv and ChemRxiv. Of the 673 people who completed the survey, just under 58% had posted their preprints on the biomedical server medRxiv; around 18% on arXiv, which focuses on mathematics and physical sciences; 14% on the life-sciences server bioRxiv; and 7% on ChemRxiv, a chemistry repository.

For two-thirds of respondents, this was the first time they had published a preprint. Almost 80% of these said they intended to post preprints of at least some of their papers going forward.

One of the most intriguing findings is the number of respondents who received feedback on their preprints, says study co-author Narmin Rzayeva, a scientometrics researcher at Leiden University in the Netherlands. Fifty-three per cent received comments from peers, more than half of which were delivered privately through closed channels such as e-mail or meetings. Around 20% of respondents received comments on the preprint platforms, which are publicly accessible.

“We expected much lower numbers,” Rzayeva says, because preprint papers don’t typically receive much feedback.

Previous work2 found that by the end of December 2021, just 8% of preprints posted on medRxiv since it launched in mid-2019 had received comments online. But that study considered only publicly posted comments.

The impact of feedback

Preprint feedback is having an effect, albeit unevenly. Of all survey respondents, just 1.9% reported making major changes to the results section of their preprints as a result of feedback. By contrast, 10.1% made such changes in response to peer review conducted as part of conventional journal publication. Rzayeva suspects that this is partly because authors feel obliged to make changes after receiving feedback from journal peer reviewers.

Of the survey respondents who reported receiving feedback on their preprints, 21.2% said they had made substantial changes to their discussion and conclusions sections. “I find it pretty exciting and encouraging that authors are making the amount of changes to their preprints that they do in response to preprint commentary,” says Jessica Polka, executive director of ASAPbio, a non-profit organization in San Francisco, California, that promotes innovation in the life sciences.

Polka notes that preprint feedback tends not to be as thorough as a review commissioned by a journal. An analysis of comments left on bioRxiv preprints posted between May 2015 and September 2019 found that only around 12% of non-author comments resembled those from conventional peer review3.

Polka encourages researchers to strike up discussions over preprints. “By conducting peer review in the open, you integrate many more perspectives than you would by doing it behind closed doors,” she says.

The preprint experience seems to have been positive for the survey respondents, 87% of whom said they had later submitted their paper to a peer-reviewed journal. Preprints shouldn’t replace journal articles, Rzayeva says, but should complement them and become an integral part of the publishing system.

Taking AI into account

Rzayeva acknowledges that the survey covered only four servers, which accounted for around 55% of all COVID-19 preprints published in 2020. As with most surveys, there was also a risk of self-selection bias, meaning that researchers with certain views could be over-represented among respondents.

Anita Bandrowski, an information scientist at the University of California, San Diego, says the survey is important, but notes that it did not consider artificial intelligence (AI) tools that are giving automated feedback on preprints. Bandrowski was part of a group of biologists and software specialists who developed a set of automated tools that measure the rigour and reproducibility of COVID-19 preprints and post the results on the social-media platform X.

Similar tools could become common as researchers consider ways to assess the rapidly growing number of preprints, and it will be important to find ways to track the results, says Bandrowski. She predicts that there will be “much more adoption of preprints in the future among biologists” as a result of researchers dipping their toes in during the pandemic.

Polka agrees. “The pandemic gave us a window into what is possible with preprints. It’s just a matter of tweaking policies in order to make use of that potential.”