Peer review is the defining feature of scholarly communication. In a 2018 survey of more than 11,000 researchers, 98% said that they considered peer review important or extremely important for ensuring the quality and integrity of scholarly communication [1]. Indeed, now that the Internet and social media have assumed journals’ original role of dissemination, a journal’s main function is curation.
Both the public and the scientific community trust peer review to uphold shared values of rigour, ethics, originality and analysis by improving publications and filtering out weak or errant ones. Scholarly communities rely on peer review to establish common knowledge and credit.
Despite decades of calls for study, research on peer review is scarce [2]. Current investigations are fragmented, with few connections and limited knowledge-sharing, as manifested by how sparsely these researchers cite each other’s papers [3]. The most rigorous work is generally restricted to one or a few journals per study, often in the same field. There is a lack of systematic research on how journals manage the process (such as selecting, instructing and rewarding reviewers, managing conflicting reviews, or publishing reviewers’ reports); on how to define the quality and utility of individual reviews; and on how to assess peer review (such as who participates, how and why). Nor is there a way to compare the reactions of authors and reviewers at different journals or in different disciplines.
The topic is under-studied partly because it is difficult to research. Access to data about the review process is hard won. It often depends on personal connections with journals, and is generally limited to such a small number of titles that generalizations are hard to make. Few dedicated grants are available. Yet greater transparency and study could determine which models and practices of peer review best promote research integrity and reliability [4].
Here we describe a pilot project to encourage broad, systematic study of peer review and what we hope this can accomplish.
Probing peer review
Tantalizing insights are possible when researchers can access journal data about peer review. For instance, sociologist Misha Teplitskiy at the University of Michigan in Ann Arbor and his colleagues were confidentially supplied with reviewers’ identities for 7,982 neuroscience manuscripts submitted to PLoS ONE, and created co-publishing networks of reviewers, authors and editors. They found that reviewers tended to favour authors connected to them through co-authorship and professional networks [5].
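The kind of network analysis described above can be illustrated with a minimal sketch: build a co-authorship graph from publication records, then check whether a reviewer is connected to a manuscript's authors. All identifiers and records here are hypothetical; the published study used far richer network measures.

```python
from collections import defaultdict
from itertools import combinations

def build_coauthor_graph(papers):
    """Build an undirected co-authorship graph from lists of author IDs."""
    graph = defaultdict(set)
    for authors in papers:
        for a, b in combinations(authors, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def reviewer_is_connected(graph, reviewer, authors):
    """True if the reviewer has co-published with any manuscript author."""
    return any(a in graph.get(reviewer, set()) for a in authors)

# Hypothetical publication records, each a list of author IDs.
papers = [["A1", "A2"], ["A2", "R9"], ["A3", "A4"]]
g = build_coauthor_graph(papers)
print(reviewer_is_connected(g, "R9", ["A1", "A2"]))  # True: R9 co-published with A2
```

A real analysis would also weight ties, include editor relationships and compare review outcomes across connected and unconnected reviewer–author pairs.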
Data from the journal Functional Ecology indicated quantitatively that the use of author-suggested reviewers can bias editorial decisions [6]. An analysis of the writing styles and recommendations in reviewer reports from five Elsevier journals suggests that open peer review favours more objective and constructive remarks [7]. However, only 8.1% of those referees agreed to reveal their identity, and this was mostly when their recommendations were positive [7]. And analysts at Elsevier last year identified reviewers and an editor who seem to have unethically used peer review to boost citations of their own work (see Nature http://doi.org/gf7zjm; 2019).
Cross-disciplinary teams doing both qualitative and quantitative research will be essential for understanding which review models (single blind, double blind, published and unpublished reviews, confidential and disclosed reviewers) work best under which circumstances and why. Studies that probe reviewer behaviour under different peer-review models or before and after a change in process could assess, for instance, whether double-blind review (often used in the humanities and social sciences) is just convention or is a useful way to avoid favouring senior scientists. Such knowledge could stop journals implementing one-size-fits-all approaches when they are inappropriate, and might suggest how to harmonize peer-review processes to benefit authors and referees.
There are many questions about the quality of journals that access to reviewer reports cannot address. In some cases, journals must take the initiative to perform internal experiments. In 1999, to see whether publishing reviewers’ names would affect the quality of reviews, the British Medical Journal ran a randomized trial. In 2014, Elsevier ran a trial on five journals that shifted from closed to open peer review. Nature Research journals are also studying the effects of publishing review reports. In other cases, relevant data can be found in the manuscripts themselves — for example, whether key experimental details of animal and human studies are included and comply with those registered before studies began.
However, a systematic study of peer review could address crucial questions, such as when and where it has the most value — in screening out weak manuscripts, improving mediocre ones or adding essential caveats or context. It could help to reveal when and why editorial decisions are made on the basis of quality versus authors’ reputations. That could, in turn, help the development of tools for evaluating quality, rigour and integrity. Authors, editors and referees could use these in assessing individual manuscripts. Publishers, scientific associations and other organizations could use such tools to improve their reviewing processes. This will be most effective if evidence comes from a broad variety of sources.
Collect and collaborate
Accumulating the sort of data we envisage might seem like a pipe dream. The peer-review process varies greatly across publishers, and there are even idiosyncratic differences at the same journal. Yet data sharing and collaboration now occur across disparate domains outside scholarly publishing. Pharmaceutical companies and other research institutions pool clinical data through platforms such as Vivli and YODA, and drug-discovery data through Open Targets. Some 50 European firms involved in transportation, logistics and information technology (IT) are sharing data with an eye to, for instance, helping passengers to move more swiftly through airports or transporting goods more efficiently. The Global Alliance for Genomics and Health, a non-profit organization based in Toronto, Canada, has created frameworks and standards for responsible, voluntary sharing of genomic data.
Many digital innovations in scholarly publishing can be applied to the study of peer review. ORCID (which supplies unique identifiers for individual researchers) and Crossref (which can, for instance, link citations, data sets and individual publications) can help to disambiguate reviewers, authors and scholarly products, and so enable more-rigorous analyses. The Manuscript Exchange Common Approach, a framework for transferring manuscripts and reviews between different publishers and preprints, shows the feasibility of creating databases that can pull from different peer-review management systems. These include ScholarOne Manuscripts — used by the publishers Wiley and Sage, among others — and Editorial Manager, which is used by Springer Nature, Elsevier and many others.
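Disambiguation with persistent identifiers of the kind ORCID supplies can be sketched as follows: records sharing an ORCID iD are merged, with a normalized name used only as a fallback. The records, field names and journals are hypothetical (the iD shown is ORCID's published example identifier).

```python
from collections import defaultdict

def normalize_name(name):
    """Crude fallback key when no persistent identifier is available."""
    return " ".join(name.lower().split())

def disambiguate(records):
    """Cluster reviewer records by ORCID iD, falling back to a name key."""
    merged = defaultdict(list)
    for rec in records:
        key = rec.get("orcid") or normalize_name(rec["name"])
        merged[key].append(rec)
    return merged

# Hypothetical reviewer records from two journals' systems.
records = [
    {"name": "J. Smith", "orcid": "0000-0002-1825-0097", "journal": "X"},
    {"name": "Jane Smith", "orcid": "0000-0002-1825-0097", "journal": "Y"},
    {"name": "A. N. Other", "orcid": None, "journal": "X"},
]
clusters = disambiguate(records)
print(len(clusters))  # 2: the two Smith records share an iD
```

Name-based fallbacks are notoriously unreliable, which is precisely why identifier coverage matters for rigorous cross-journal analysis.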
Big obstacles remain. Publishers, both private and non-profit, consider it risky to show their workings, not least because of concerns over confidentiality. And there is no infrastructure for sharing even data that would breach no confidentiality rule, such as the number of reviewers per manuscript or the rate at which review requests are accepted.
Strategies for sharing
In 2014, a group of science and technology scholars, publishing professionals and funders — including many of the authors of this Comment — formed a collaboration under a European Union project called PEERE, funded by COST (the European Cooperation in Science and Technology). We wanted to enable broader research that involved many journals from many disciplines. In 2017, we released a PEERE protocol [8] for sharing data on the peer-review process (see go.nature.com/2vbkc7m). It considers ethics, responsible management, data protection and privacy, and complies with the current EU legal framework, including the General Data Protection Regulation (see ‘Peer-review data’).
Subsequently, we piloted a series of data-sharing initiatives that we scaled up to cover more than 150 journals (we have not made the list available because the journals are anonymized). The publishers include Elsevier, the Royal Society, Springer Nature and Wiley. PEERE computer scientists worked with technical staff from ScholarOne and Editorial Manager to design a joint metadata set and to store data ready to be used for research.
We have developed ways to anonymize factors such as the publisher and journal name. Author, referee and editor names are replaced with consistent pseudonymous identifiers, so that the same person can be tracked throughout a sample without being identified. The text of reports is recoded, using a hidden key, into machine-readable symbols that remain amenable to natural-language processing and other analytics.
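One standard way to implement consistent pseudonyms of this kind is keyed hashing: the same name always maps to the same identifier within a sample, but the mapping cannot be reversed without the key. This sketch, using only Python's standard library, is an illustration of the general technique, not the PEERE implementation; the key and field values are hypothetical.

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-the-data-steward"  # hypothetical; never shared with analysts

def pseudonymize(real_id, key=SECRET_KEY):
    """Map a real identifier to a stable pseudonym, irreversible without the key."""
    return hmac.new(key, real_id.encode(), hashlib.sha256).hexdigest()[:12]

# The same reviewer receives the same pseudonym throughout a sample...
assert pseudonymize("reviewer@example.org") == pseudonymize("reviewer@example.org")
# ...while different people map to different pseudonyms.
assert pseudonymize("reviewer@example.org") != pseudonymize("editor@example.org")

def recode_text(text, vocab):
    """Replace each word with a numeric token; the vocabulary acts as the hidden key."""
    return [vocab.setdefault(word, len(vocab)) for word in text.lower().split()]

vocab = {}
print(recode_text("the method is sound", vocab))  # [0, 1, 2, 3]
```

Token sequences like these preserve word frequencies, repetition and structure, so analysts can study review language without ever seeing the underlying prose.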
This provides the technical groundwork for a broad data-sharing infrastructure that can enable systematic research on peer review. Next, we need to develop a blueprint for data sharing and commission a proof-of-concept infrastructure that complies with general data-protection regulations and other principles of ethical management. We have worked out ways to make data systems interoperable, to ensure that people using these data cannot access proprietary information, and to ensure that users follow ethical procedures. We hope to discuss broader plans when scholars come together at the PEERE conference in Valencia, Spain, in mid-March.
Making all of this happen will require accountability and reliable funds. Public agencies and independent foundations have already started to recognize that research to improve research is an investment, supporting initiatives such as the Open Science Framework, the Research on Research Institute and others. Small and under-resourced publishers — which in some disciplines publish the leading journals — might still rely mainly on manual systems and could need help to participate or to overhaul their systems.
As custodians of the scholarly record, publishers, independent journals and learned societies should participate in developing the blueprint and provide support with in-kind investments (such as the time of their IT staff). They should also work to remove obstacles to data sharing, such as the culture of secrecy and bug-ridden, patchworked content-management systems. Public bodies should support the establishment of an independent, representative governance group for the data-sharing infrastructure.
In the ideal situation, researchers would need only to sign an ethics agreement, file a request detailing the type of data needed, and either run their data-analysis scripts on our infrastructure or obtain material for qualitative research. Our system is designed to prevent researchers from pulling out data from a single journal or identifying individual researchers.
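A guard of the sort described, rejecting requests that could isolate a single journal or identify individuals, can be sketched as a minimal policy check. The threshold and field names are hypothetical; a production system would layer this with auditing and formal disclosure-control methods.

```python
MIN_JOURNALS = 5  # hypothetical minimum to prevent single-journal extraction

def query_allowed(requested_journal_ids, identifying_fields_requested):
    """Reject requests that could isolate one journal or expose identities."""
    if identifying_fields_requested:
        return False  # no query may return name, e-mail or other direct identifiers
    return len(set(requested_journal_ids)) >= MIN_JOURNALS

# A request scoped to one journal is refused; a broad, de-identified one passes.
print(query_allowed(["j1"], identifying_fields_requested=False))            # False
print(query_allowed(["j1", "j2", "j3", "j4", "j5"], False))                 # True
```

Running analysts' scripts inside the infrastructure, rather than exporting raw data, makes such checks enforceable rather than merely contractual.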
This infrastructure needs to be seen as a community public good — not as a resource that entrenches private interests. Of course, the infrastructure must take into account each stakeholder’s needs for responsible data management, confidentiality and accountability. Once in place, access for research studies could be greatly expanded: academics would not need to negotiate individual agreements on data sharing or forge direct connections with journal editors and publishers.
Publishers, too, could rely on shared protocols to anonymize and manage data, reducing costs and reputational risk. Indeed, an agreement to share data on peer review could become a clear marker of legitimacy in a world increasingly plagued by predatory journals. It is not hard to imagine work that would support standards for data and journal management, or for training, certifying and crediting reviewers and editors. Although there is much to be done, supporting research on peer review promises to create better processes for authors, reviewers and editors. More importantly, it will boost the reliability, rigour and relevance of the scientific literature. Everyone will benefit from this.