The World Health Organization (WHO) declared coronavirus disease 2019 (COVID-19) a public health emergency of international concern (PHEIC) on 30 January 2020 (ref. 1). Since then, research on COVID-19 has been conducted at an extraordinary pace. The volume of research being published (ref. 2) led us to analyse the extent to which the publication process is being expedited, a process that has typically taken a median of around 100 days from submission to acceptance (ref. 3). We conducted a series of PubMed searches to measure the number of records and their time to acceptance during the first 12 weeks after COVID-19 was declared a PHEIC.

We first searched PubMed using the term “COVID-19” as the query, relying on PubMed’s automatic term mapping feature to translate this into a broader search with additional terms (Supplementary Table 1). We restricted this search to records published between 30 January 2020 and 23 April 2020. We also conducted two disease-specific comparator searches: one for Ebola (another PHEIC) and one for cardiovascular disease (CVD), the leading cause of years of life lost in 2017 (ref. 4). The automatic term mapping feature was also used to create these search strings. For the Ebola search, we analysed records published in the 12 weeks after the Ebola outbreak was declared a PHEIC, from 8 August 2014 to 27 October 2014. For the CVD search, we used the same date window as the COVID-19 search but in 2019, to obtain data from a period not skewed by COVID-19. We subsequently conducted a fourth comparator search to identify all of the records published during the same period in 2019 by the same journals in which the COVID-19 articles were published (see Supplementary File for search string). Finally, in a fifth search, we identified all records published in the same journals in the same period in 2020 and performed two analyses of these data: one excluding COVID-19 records and the other including them. COVID-19 articles in this last search were identified by matching all of the PubMed IDs in the search against those in the LitCovid database. We used LitCovid because it is curated by the National Library of Medicine, identifies the largest number of COVID-19 articles and minimises the bias inherent in conducting another PubMed search (ref. 2).
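
As an illustration only (our report does not prescribe specific tooling), the record-retrieval step could be scripted against the NCBI E-utilities, here via Biopython’s Bio.Entrez module; the email address is a placeholder, and the query terms rely on the same automatic term mapping described above:

```python
from Bio import Entrez

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact address

def search_pubmed(term, mindate, maxdate):
    """Return the set of PubMed IDs for `term` published between two dates.

    Dates use PubMed's YYYY/MM/DD format; datetype="pdat" restricts the
    search by publication date, mirroring the date windows described above.
    PubMed expands `term` server-side via automatic term mapping.
    """
    handle = Entrez.esearch(
        db="pubmed",
        term=term,
        datetype="pdat",
        mindate=mindate,
        maxdate=maxdate,
        retmax=10000,  # esearch caps retmax at 10,000; larger sets need usehistory="y"
    )
    record = Entrez.read(handle)
    handle.close()
    return set(record["IdList"])

# For example, the first two searches described above:
covid_ids = search_pubmed("COVID-19", "2020/01/30", "2020/04/23")
ebola_ids = search_pubmed("Ebola", "2014/08/08", "2014/10/27")
```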

For all records that were identified, we extracted the publication type as per PubMed categorization (journal article, editorial, letter or other publication) and, when available, the date of receipt by the journal and the date of acceptance for publication. Our analyses focus on journal articles, which presumably report original, peer-reviewed research. For the records identified in each search, we calculated the median time from receipt to acceptance, as well as the interquartile range (IQR) and the range. Records with a receipt-to-acceptance time of longer than 3 years were excluded from these calculations. We further examined the distribution of journal articles at three pre-specified cut-offs after submission: one week (7 days), one month (30 days) and 100 days.
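
A minimal sketch of these summary statistics, assuming a list of per-article receipt-to-acceptance intervals in days (the function and variable names here are illustrative, not the published analysis code):

```python
import numpy as np

def summarise_acceptance_times(days):
    """Summarise receipt-to-acceptance intervals (in days) for one search.

    Records with an interval longer than 3 years are excluded, as in the
    analysis described above.
    """
    d = np.asarray([x for x in days if x <= 3 * 365])
    q1, median, q3 = np.percentile(d, [25, 50, 75])
    summary = {
        "n": d.size,
        "median_days": median,
        "iqr_days": (q1, q3),
        "range_days": (d.min(), d.max()),
    }
    # Share of articles accepted by each pre-specified cut-off.
    for cutoff in (7, 30, 100):
        summary[f"within_{cutoff}_days"] = (d <= cutoff).mean()
    return summary
```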

Missing dates were unevenly distributed across journals, which suggests that the data were not missing at random. Although this may affect comparisons between the COVID-19 corpus and the Ebola and CVD articles (as the three groups involved different journals), it does not affect the comparison between COVID-19 articles and other articles published in the same journals during the same period in 2019 and 2020.
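
One way to check this, sketched here under the assumption that the extracted metadata sit in a pandas DataFrame with illustrative `journal`, `received` and `accepted` columns, is to compare the share of missing dates across journals:

```python
import pandas as pd

def missingness_by_journal(df: pd.DataFrame) -> pd.Series:
    """Fraction of records per journal lacking a received or accepted date.

    A wide spread across journals, as described above, indicates that the
    dates are not missing at random.
    """
    missing = df["received"].isna() | df["accepted"].isna()
    return missing.groupby(df["journal"]).mean().sort_values(ascending=False)
```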

In Fig. 1, the COVID-19 journal article data are represented in the dark grey histogram, the Ebola data in yellow, the CVD data in blue, the same-journal comparison for 2019 in medium grey and the same-journal comparison for 2020 in light grey. Only the first 150 days (5 months) of data are presented, to better visualize the results. The COVID-19 histogram is clearly skewed, with the majority of articles accepted within a week of submission and the frequency falling off rapidly thereafter. The CVD histogram has a flatter, more uniform distribution, and the same-journal comparison histograms are bimodal, with an early peak resembling that of the COVID-19 histogram and a second, flatter peak similar in shape to the CVD curve.

Fig. 1: Time from submission to acceptance for journal articles with received and accepted dates.

a, COVID-19 articles. b, Ebola articles. c, Cardiovascular disease articles. d, Articles published in the same journals in 2019 in which COVID-19 articles were published. e, Articles published in the same journals in 2020 in which COVID-19 articles were published, excluding COVID-19 articles. f, Articles published in the same journals in 2020 in which COVID-19 articles were published, including COVID-19 articles. In all panels, the purple line represents 7 days, the blue line 30 days and the red line 100 days from time of article receipt.
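
For illustration, a single panel in the style of Fig. 1 could be reproduced with matplotlib roughly as follows (bin width, colours and labels are our assumptions, not the published plotting code):

```python
import matplotlib.pyplot as plt

def plot_acceptance_histogram(days, title, colour="dimgrey"):
    """Histogram of submission-to-acceptance times for one group of articles,
    truncated at 150 days, with reference lines at 7, 30 and 100 days."""
    fig, ax = plt.subplots()
    ax.hist([d for d in days if d <= 150], bins=range(0, 151, 5), color=colour)
    for cutoff, line_colour in ((7, "purple"), (30, "blue"), (100, "red")):
        ax.axvline(cutoff, color=line_colour, linestyle="--", label=f"{cutoff} days")
    ax.set_xlabel("Days from receipt to acceptance")
    ax.set_ylabel("Number of journal articles")
    ax.set_title(title)
    ax.legend()
    return fig
```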

Table 1 summarizes descriptive statistics for the six groups of journal articles that we compared. On average, 367 COVID-19 journal articles were published per week, and the median time from submission to acceptance for COVID-19 journal articles was just 6 days. By comparison, four Ebola journal articles were published per week, and the median time to acceptance was more than twice as long, at 15 days. Median acceptance time was many times longer for CVD journal articles, at 102 days, with only 374 journal articles (3%) accepted within one week of submission, compared with 1,250 COVID-19 journal articles (59%).

Table 1 Descriptive statistics for the six groups of journal articles compared in our analyses

Processing times for the same journals in which COVID-19 journal articles appeared were substantially slower overall during the same period in 2019: the median time to acceptance for journal articles stood at 93 days. In absolute terms, however, a substantial number of articles in these journals were accepted within 7 days (N = 1,386), suggesting that some of these journals already undertake fast-track publication outside of the context of a public health emergency. For the non-COVID-19 content published in the same journals in 2020, the median time to acceptance for journal articles was 84 days, with 3% (N = 2,113) accepted within the first 7 days.

The full 2020 same-journal search including the COVID-19 journal articles found that the median time to acceptance had decreased to 82 days, with 5% of articles (N = 3,138) accepted within the first 7 days, compared with 2% during the same period in 2019. This, along with Fig. 1f, suggests that the total number of journal articles accepted within 7 days has increased during the pandemic, with COVID-19-related articles accounting for 33% of them. Our finding of a shorter time to acceptance is in line with a related smaller-scale analysis reported in a non-peer-reviewed preprint (ref. 5).
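
The 33% share follows from a simple intersection of PMID sets, sketched below with illustrative names:

```python
def covid_share(fast_pmids: set, litcovid_pmids: set) -> float:
    """Share of fast-accepted articles that are COVID-19-related.

    `fast_pmids`: PMIDs of same-journal 2020 articles accepted within 7 days.
    `litcovid_pmids`: PMIDs listed in the LitCovid database.
    """
    return len(fast_pmids & litcovid_pmids) / len(fast_pmids)
```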

In recent years, it has become common for journals to offer a fast-track publication option outside of the context of PHEICs, and speed does not necessarily entail poorer quality. Our analyses cannot ascertain the details of the editorial process for any of the articles, COVID-19 or otherwise, including whether these articles were processed through a dedicated fast-track system; were transferred, along with their peer reviews, from other journals; or did not undergo peer review at all. However, it is important to ask whether the scale of expedited publishing activities can be rapidly expanded without weakening the peer-review process. The magnitude and impact of the COVID-19 pandemic certainly warrant efforts to accelerate the publication of research findings, particularly those that might immediately contribute to efforts to reduce transmission, morbidity and mortality. Nonetheless, the remarkable speed and rate of publication of COVID-19 research raise concerns about the quality of the evidence base and about the risk of misinformation being spread with harmful consequences (ref. 6). For example, a recent living systematic review of 66 prediction models for diagnosing COVID-19, reported in 51 manuscripts (45 of which were non-peer-reviewed preprints), found that the models were poorly reported and all had a high risk of bias, with likely overly optimistic performance (ref. 7). Reviewers and editors must remain vigilant to prevent such manuscripts from becoming flawed published evidence, which has the potential to unfavourably influence scientific and public discourse, resulting in confusion, poor policy decisions and public mistrust in science.

With many unanswered questions about COVID-19 having important implications for how healthcare providers, governments and societies respond to this crisis, the rapid pursuit of new evidence will likely continue for the foreseeable future. Therefore, urgent measures are required to safeguard the integrity of the scientific evidence base. Regarding the specific issue of a large volume of research articles moving quickly through the peer-review process, we make the following recommendations:

  • Establish new standards for journal editors to maintain quality during large-scale public health emergencies. These standards should incorporate checklists such as STROBE, CONSORT and others, which have been adopted by many journals but whose implementation could be enforced more diligently (ref. 8). The European Association of Science Editors, for example, has released a formal statement to this effect on publishing COVID-19 results (ref. 9).

  • Require peer reviewers to be adequately trained before undertaking reviews. Research has shown that new peer reviewers who receive training and are equipped with supporting tools review articles better than a control group (ref. 10). Training may be particularly important when inexperienced reviewers are asked to peer review within very short time frames.

  • Prioritise the funding, maintenance and accessibility of literature databases and search engines, which are essential resources for enabling researchers, policy-makers and other stakeholders to utilise the COVID-19 evidence that is being generated.

  • Maintain and invest in the further development of curated databases such as LitCovid (US National Library of Medicine; ref. 2), the Novel Coronavirus Research Compendium (Johns Hopkins Bloomberg School of Public Health; ref. 11), Publons (Clarivate Analytics; ref. 12) and the WHO COVID-19 Global Literature on Coronavirus Disease Database (ref. 13). In light of the risk of bias inherent in curation, it is vital to ensure that there are multiple independent quality-assured systems that are funded on an ongoing basis. This may require an element of public funding rather than relying solely or largely on the private sector.

  • Commission and publish more scoping, narrative and systematic reviews to evaluate and synthesize COVID-19-related evidence. These efforts should be prioritised by funders and recognised as a vital strategic investment in the global COVID-19 response. They should also be coordinated to avoid duplication of reviews. Good examples of efforts to synthesize the literature include the current COVID-19 Cochrane Library (ref. 14) and the COVID-END database maintained by McMaster University (ref. 15).

Rapid publication ensures that new evidence is shared in a timely manner, which is particularly important during a fast-moving health crisis such as the COVID-19 pandemic. Nonetheless, there are steps that the scientific community can and should take to prevent the accelerated pace of COVID-19 publishing from weakening the evidence base. From the inception of scientific publishing, the nature of this endeavour has been to evolve in response to new needs, ideas and opportunities. Today, the challenge of how to effectively disseminate a large volume of research in the context of a global health emergency should be recognised as a call for innovative thinking and for the implementation of solutions that will ensure continued trust in the scientific publishing process. Lessons learned from this undertaking can enrich scientific publishing more broadly in the years to come.