COMMENT

What’s next for Registered Reports?

Reviewing and accepting study plans before results are known can counter perverse incentives. Chris Chambers sets out three ways to improve the approach.
Chris Chambers is a professor of cognitive neuroscience at Cardiff University, UK.

What part of a research study — hypotheses, methods, results, or discussion — should remain beyond a scientist’s control? The answer, of course, is the results: the part that matters most for publishing in prestigious journals and advancing careers. This paradox means that the careful scepticism required to avoid massaging data or skewing analysis is pitted against the drive to identify eye-catching outcomes. Unbiased, negative and complicated findings lose out to cherry-picked highlights that can bring prominent articles, grant funding, promotion and esteem.

The ‘results paradox’ is a chief cause of unreliable science. Negative, or null, results go unpublished, leading other researchers into unwittingly redundant studies. Ambiguous or otherwise ‘unattractive’ results are airbrushed (consciously or not) into publishable false positives, spurring follow-up research and theories that are bound to collapse.

Clearly, we need to change how we evaluate and publish research. For the past six years, I have championed Registered Reports (RRs), a type of research article that is radically different from conventional papers. The 30 or so journals that were early adopters have together published some 200 RRs, and more than 200 journals are now accepting submissions in this format (see ‘Rapid rise’). When it launched in 2017, Nature Human Behaviour became the first of the Nature journals to join this group. In July, it published its first two such reports1. With RRs on the rise, now is a good time to take stock of their potential and limitations.

[Chart 'Rapid rise': growth in the number of journals offering Registered Reports. Source: C. Chambers]

How do they work?

The Registered Report format splits conventional peer review in half. First, authors write an explanation of how they will probe an important question. This ‘Stage 1’ manuscript includes an overview of the background literature, preliminary work, theory, hypotheses and proposed methods, including the study procedures and analysis plan. Before researchers do the studies, peer reviewers assess the value and validity of the research question, the rationale of the hypotheses and the rigour of the proposed methods. They might reject the Stage 1 manuscript, accept it or accept it pending revisions to the study design and rationale. This ‘in-principle acceptance’ means that the research will be published whatever the outcome, as long as the authors adhere closely to their protocol and interpret the results according to the evidence.

After the Stage 1 manuscript is accepted, the authors formally preregister it in a recognized repository such as the Open Science Framework, either publicly or under a temporary embargo. They then collect and analyse data and submit a completed ‘Stage 2’ manuscript that includes results and a discussion. They are free to conduct further exploratory analyses, provided these are clearly identified as post hoc — having been done after planned analyses were completed. The Stage 2 submission is sent back to the original reviewers, who cannot question the study rationale or design now that the results are known. Whether the results are judged by reviewers to be new, groundbreaking or exciting is irrelevant to acceptance. At the journal Cortex, where I serve as an editor, the acceptance rate for Stage 1 RRs that enter in-depth review is about 90%: more than double that of conventional articles. The publication rate at Stage 2 is currently 100%, with no withdrawals by authors.

This assured acceptance means that authors are free to present results as they are, without having to shoehorn them into a clean, compelling narrative. And the outcome is striking. An analysis this year2 suggests that RRs are more likely to report null findings than are conventional articles: 66% of RRs for replication studies did not support initial hypotheses; for RRs of novel studies, the figure was 55%. Estimates for conventional papers range from 5 to 20%2. It is possible that researchers opt for this format when they think that null findings are likely. Nonetheless, these disparities suggest that RRs are a powerful way to counter publication bias (see ‘A brief history of Registered Reports’). And the research community cares: preliminary evidence finds that RRs are cited at levels that are comparable to or slightly higher than those for conventional articles3.

A brief history of Registered Reports

The potential of protocol registration to prevent publication bias and increase rigour has been recognized for decades in clinical-trials research. A format similar to Registered Reports (RRs) was also piloted at the now-defunct European Journal for Parapsychology in the 1970s to help ensure publication of negative results.

In 1997, The Lancet launched an article type similar to Stage 1 of RRs, which reviewed protocols of proposed research. Almost 150 were published before the article type was discontinued in 2015, ostensibly because other outlets served the same purpose6.

I began lobbying the editorial board of Cortex to consider RRs almost as soon as I joined as an editor. It gave the green light in November 2012, and by March 2013 it had adopted the full RR format. At around the same time, a separate group launched a variant focusing on replications at Perspectives on Psychological Science7. The same year, psychologists Brian Nosek and Daniël Lakens announced that a special issue of Social Psychology would use the format to publish replications of important results8.

From 2014, more journals in neuroscience and psychology began adopting and publishing RRs, and the format has now expanded across the life and social sciences. No specialized physical-science journals yet offer them.

Some multidisciplinary journals — including Royal Society Open Science — have launched the format across all subjects in science, technology, engineering and mathematics. I hope RRs will become an option in all mainstream life- and social-science journals within ten years.

One of the most striking characteristics of RRs is that reviewers can help authors to improve the protocol or rationale while it is still possible to make changes. I have overseen numerous cases in which reviewers have intervened to prevent a serious flaw in a study design — adding crucial controls, ensuring the sample size is sufficient or explaining why the hypotheses or planned statistical analyses cannot really answer the research question. Even when a proposed design is sound, the review process often adds clarity and focus. In my experience, the reviewers find the process rewarding. One comment from a reviewer is typical of the informal feedback I receive: “If the authors can incorporate many of the suggestions from all of us reviewers, they will have a far better study than what they originally planned, which is really valuable and exciting.”

Real and imagined concerns

As RRs have grown, I have come to spend as much time advocating for, optimizing and getting feedback on the format as I do on my own research. I chair the Registered Reports committee supported by the Center for Open Science, and serve as a Registered Reports editor at BMJ Open Science, Collabra: Psychology, the European Journal of Neuroscience, NeuroImage, PLoS Biology and Royal Society Open Science.

I am often asked whether all research publications should be RRs. No! Work that is purely exploratory and not driven by a hypothesis is usually not suitable for the format. For example, an RR might be a poor fit for the discovery of a new disease mechanism or potential drug molecule without a clear set of predictions. Often, the same goes for work to develop new experimental methods. RRs are not designed to supplant publications that announce this kind of research; they are intended only to strengthen the rigour and transparency of studies that test hypotheses.

Another common question is whether RRs are suitable for sequential experiments in which the results of one study determine the design of the next. In principle, yes: many journals now offer ‘incremental registrations’ in which authors can re-enter Stage 1 review after the results are in, and then add protocols for one or more further studies.

In practice, authors rarely take up this option, probably because of the time associated with multiple rounds of Stage 1 review. More common is for authors to perform a series of pilot experiments and report these in the Stage 1 manuscript. These can then be used to design one or more extra experiments to ‘seal the deal’. The final article describes all of the experiments and is badged as an RR. Another option is for authors to preregister multiple experiments at the beginning, as in one recent study. Over eight experiments, it asked whether light in the range typically used in optogenetics studies can influence neuronal physiology in mice4.

There are also times when hypothesis-driven research itself is not suitable for RRs. Studies seeking to capture the effects of unpredictable events (such as solar flares, flash floods, mass violence or stroke-induced brain injury) must start collecting data as soon as is feasible. They cannot wait two to four months for a Stage 1 manuscript to complete peer review. (Ideally, researchers would still take a few minutes to self-register their protocol in a recognized repository.) Similarly, undergraduate students who must finish a summer project in a short time might not be able to wait for reviewer feedback, although some teaching programmes have had success by dividing up research-project design and execution in creative ways (see, for example, K. Button Nature 561, 287; 2018).

By contrast, RRs have distinct advantages for longer-term students. The in-principle acceptance at Stage 1 allows them to list a publication much sooner than they could for a conventional manuscript, and with more certainty. There is emerging evidence that RRs are popular with early-career researchers. For example, at Cortex, 78% of RR first authors (n = 82) are PhD students or postdocs, compared with 67% in a control sample (n = 57) of conventional articles.

Although RRs require researchers to wait for review before starting experiments, I suspect that the time to publication declines overall. A conventional article might be rejected on the basis of results or because of methodological problems that can no longer be fixed, leaving authors to submit their work to journal after journal, or to perform extra experiments. Over the past six years, dozens of authors have told me — and written publicly — that they appreciate the more-predictable timeline of RRs (see, for example, go.nature.com/2kwnjuj).

Decreased flexibility is an oft-expressed concern over the format. One early critic said it would “put science in chains”. The fear is that peer-reviewed preregistration dampens the creativity and serendipity that could come from free-wheeling data exploration. But preregistration imposes no such limit: it merely requires that exploratory analyses are labelled transparently as post hoc and do not dominate conclusions.

Exploration is alive and well. Stage 2 submissions almost always include further analyses. The difference is that researchers cannot fool themselves or their readers by presenting only the most interesting analyses or implying that these were intended from the outset. RRs are a plan, not a prison.

A related misgiving is that researchers will find themselves locked into a suboptimal protocol once experiments begin. In my experience, the opposite is more likely: reviewers can prevent researchers from running less-informative experiments. And reviewers of Stage 2 manuscripts generally understand reasonable changes. It is not flexibility that is lost, but the ability to airbrush both reasonable and questionable changes out of the picture.

Moving forward

RRs are not a panacea — the format needs constant refinement. It currently sits rather awkwardly between the old world of scientific publishing and the new. Innovations over the next few years should make this format even more powerful, and stimulate wider reforms.

Transparency. When RRs first launched, some journals published Stage 2 manuscripts but not those for Stage 1, making it impossible for readers to see whether the completed protocol matched the planned one. In 2018, the Center for Open Science launched a simple tool that places submitted Stage 1 manuscripts in a public registry (see go.nature.com/2kb5s7v). This is now used by many journals, including Cortex and Animal Behavior and Cognition. The publisher Wiley has opted to publish accepted protocols. And venues such as F1000Research offer the option to post Stage 1 articles before peer review, with reviews and revisions made public as they become available. A badging system shows that the Stage 2 article adhered to the criteria and can be labelled as an RR.

Standardization. Improving the standardization of submitted protocols promises to improve computational reproducibility. Currently, submitted manuscripts are often prepared in word-processing software and contain insufficient methodological detail or linking between predictions and analyses. The next generation of RRs — ‘Registered Reports 2.0’ — is likely to be template-based and could integrate tools such as Code Ocean (see https://codeocean.com/researchers). This would ensure that analyses are immutable within a stable, self-contained software environment. With standardized metadata and badging, RRs will become useful for systematic reviews and meta-analyses.
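As a purely illustrative sketch of what such a template might capture, the short Python snippet below shows one hypothetical way to express machine-readable Stage 1 metadata linking a prediction to its planned analysis and sampling plan. The field names and structure are invented for illustration; they are not an existing standard or any journal's actual template.

```python
# Purely illustrative sketch: this schema is hypothetical, not an existing
# Registered Reports standard or any journal's actual Stage 1 template.
import json

stage1_protocol = {
    "title": "Does condition A improve recall relative to condition B?",
    "hypotheses": [
        {
            "id": "H1",
            "prediction": "Mean recall is higher in condition A than in condition B.",
            "planned_analysis": "Two-tailed paired t-test on mean recall scores.",
            "sampling_plan": "n = 60 participants, fixed by an a priori power analysis.",
        }
    ],
    # Exploratory analyses remain allowed, but must be labelled post hoc at Stage 2.
    "exploratory_analyses": "permitted, reported separately and labelled post hoc",
}

# Serialising the plan as structured metadata would let a Stage 2 manuscript,
# or a containerized analysis on a platform such as Code Ocean, be checked
# against the registered predictions and analyses.
print(json.dumps(stage1_protocol, indent=2, ensure_ascii=False))
```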

Efficiency. The review process can be extended even further back in the research life cycle. Under the emerging RR grant model, reviewers award funding and signal in-principle acceptance of a research publication simultaneously or in rapid succession. The Children’s Tumor Foundation and PLoS ONE have pioneered such a partnership (see go.nature.com/2kpjzat), as have Cancer Research UK and the journal Nicotine & Tobacco Research5. More are in the works.

The lesson of RRs speaks to all areas of science reform. Instead of forcing quality to compete with success, partner them up. Instead of pitting what is best for the individual against what is best for all, create a model that benefits everyone — the scientist, their community and the taxpayer — and the rest will come naturally.

Nature 573, 187-189 (2019)

doi: 10.1038/d41586-019-02674-6

References

1. Nature Hum. Behav. 3, 763 (2019).

2. Allen, C. & Mehler, D. M. A. PLoS Biol. 17, e3000246 (2019).

3. Hummer, L. T., Singleton Thorn, F., Nosek, B. A. & Errington, T. M. Preprint at OSF Preprints https://doi.org/10.31219/osf.io/5y8w7 (2017).

4. Ouares, K. A., Beurrier, C., Canepari, M., Laverne, G. & Kuczewski, N. Eur. J. Neurosci. 49, 6–26 (2019).

5. Munafò, M. R. Nicotine Tob. Res. 19, 773 (2017).

6. The Editors of The Lancet. Lancet 386, 2456–2457 (2015–16).

7. Simons, D. J., Holcombe, A. O. & Spellman, B. A. Persp. Psychol. Sci. 9, 552–555 (2014).

8. Nosek, B. A. & Lakens, D. Soc. Psychol. 44, 59–60 (2013).

