I started my career as an infectious-disease epidemiologist at the Singapore Ministry of Health. Soon I was fighting the first coronavirus epidemic, that of severe acute respiratory syndrome (SARS). That was in 2003, a world before biomedical preprints. In the middle of the current pandemic, I am thinking about preprints and research integrity. My professional expertise has come full circle.
I co-designed the Netherlands’ National Survey on Research Integrity (NSRI), one of the world’s largest on the topic. In July, we reported in a preprint that about 8% of researchers admitted committing misconduct, a higher figure than was found by previous studies. About one in two researchers admitted to frequently engaging in questionable research practices, including underplaying a study’s flaws and limitations. These findings have implications for the avalanche of preprints being deposited in public repositories.
The case for releasing preprints is clear: results from scientific studies are made more quickly and more broadly available. Overall, greater sharing and transparency boost trustworthiness and collaboration. But efforts to promote preprints without simultaneously implementing firm measures to ensure that the research is of high quality put the cart before the horse.
Yes, there have been high-profile retractions of COVID-19 papers published in peer-reviewed journals. Perhaps the two most infamous involved Surgisphere, a company based in Chicago, Illinois, which reported results on a database of people with COVID-19 that might not have existed, and the report of hydroxychloroquine as a miracle cure for COVID-19. Nevertheless, our survey of 7,000 researchers found that the perception of how effectively reviewers would detect misconduct was strongly associated with a lowered likelihood that respondents would commit it. Because scholarly review can deter misconduct, such a mechanism is needed for preprints as well.
Preprint sharing of the SARS-CoV-2 genomic sequence and data on clinical management helped the world to respond quickly in the early days of the pandemic. However, there were also questionable preprints. Much confusion was caused by misguided speculation that SARS-CoV-2 had been manufactured, assumptions about the efficacy of drugs such as hydroxychloroquine and ivermectin, and doubtful reports about the fatality rate of COVID-19 and the effectiveness of face coverings.
Dozens to hundreds of preprints on COVID-19 appear every week on the medRxiv server alone. Their importance for how we researchers share our work with one another and with the public is clear. But how can we minimize the transfer of low-quality information to the public sphere? Here are half a dozen suggestions.
The first is to gauge how well the research community and the public understand the limitations of preprints. To do this, we need to openly discuss their potential downsides and the amount of sloppy science that exists in our research communities. However, my NSRI experience makes me worry that those in positions to make real change within research institutions are reluctant to examine or discuss the less-rosy side of research quality. Two-thirds of universities in the Netherlands declined to support the NSRI, vaguely questioning its methodology or arguing that it focused too much on bad practices. Many of us think this reluctance is the most important finding of our survey.
The second suggestion is to increase accountability and transparency. Who is currently responsible for the quality of the research reported in preprints? Certainly, individual researchers and their institutions. But what about those who operate the preprint servers? Most servers screen superficially for preposterous claims or conspiracy theories, and advise readers that findings should be considered preliminary. The medRxiv server takes further screening steps for specific topics that might have an adverse public-health impact. Many journals require that papers adhere to reporting guidelines, according to study type. But what sorts of guidelines and retraction mechanisms exist to safeguard against poor-quality research in preprints? The Automated Screening Working Group and related initiatives aim to rapidly assess whether preprints report data sharing, ethics approval, study limitations and other relevant factors. Such initiatives need increased uptake and publicity.
Third, explicitly promote norms of good science. Often referred to as the Mertonian norms, after the sociologist Robert Merton, these state that science belongs to everyone, that claims should be assessed solely on their validity, and that research should be conducted to expand knowledge or to benefit humanity, rather than for personal gain. Crucially, they also state that scientific claims must be subjected to independent critical scrutiny before acceptance. Our survey and others' work imply a conflict between how researchers think research should be done and what they actually do. Researchers say they subscribe to these norms, but rate their peers as poor at adhering to them. That means that efforts to promote a responsible research culture are key to boosting quality overall. Research institutions should explicitly revive Mertonian norms, not just as static codes of conduct, but as institutional core values, for example by establishing and incentivizing excellent mentors, supervisors and role models in academia. This emphasis is already being put into practice. For example, grant applications to the UK biomedical funder Wellcome explicitly ask about leadership and citizenship activities, as do several institutions' promotion and tenure committees.
Fourth, expand avenues for safe, respectful scholarly critique, including ways to reward and recognize it. During this pandemic, rigorous, high-quality review and debate have occurred on social media, for both preprints and journal publications — although sometimes not before errant ideas distorted public debate and policy. Mechanisms already exist that enable researchers to collect their formal peer-review contributions on platforms such as Publons and PubPeer. These resources should be extended to scholarly critique of preprints. Assessment criteria for researchers must explicitly encourage, recognize and reward such critique as a cornerstone of good open-science practice. We also need norms that protect against bullying and personal attacks in the name of scholarly critique.
Fifth, promote responsible scientific communication. If preprints are to strengthen science, their proliferation must be accompanied by a rise in the number of researchers skilled in how to communicate responsibly, respectfully and critically, not just with other scientists but also with journalists, politicians and the general public.
Finally, advocacy of preprints should be accompanied by efforts to teach readers outside science the basic skills needed to read research critically. As the pandemic unfolded, institutions such as McGill University in Montreal, Canada, launched courses to help journalists assess public-health and clinical studies, as did professional science and journalism societies. Research shows that critical-evaluation skills and knowledge of the scientific process help counter mistrust in science. Preprints need to be incorporated into efforts to build those skills, and thereby help to minimize mistrust among the public, including politicians, who are eager to have a single, fixed answer to a research question.
To sum up, all proponents of open science have two responsibilities: to promote practices such as preprint publication, and to prevent those practices from doing harm.