Should researchers have the freedom to perform research that is a waste of time? Currently, the answer is a resounding ‘yes’. Or at least, no one stops to ask whether there are obvious methodological and statistical flaws in a proposed study that will make it useless from the get-go: a sample size that’s simply too small to test a hypothesis, for example.

In my role as chair of the central ethical review board at Eindhoven University of Technology in the Netherlands, I’ve lost count of the number of times that a board member has remarked that, although we’re not supposed to comment on non-ethical issues, the way a study has been designed means it won’t yield any informative data. And yet we routinely wait until peer review — after the study has been done — to identify flaws that can’t then be corrected.

In my own department at Eindhoven, we’ve been trialling a different approach. Five years ago, we instituted a local review board that also evaluates proposed methods. Although some colleagues found this extra hurdle frustrating at first, the improvements in study quality have led them to accept it. It’s time to make dedicated methodological review boards a standard feature at universities and other research institutions, as institutional review boards are.

Methods are already reviewed in some settings: certain medical trials, animal studies, grant applications and some institutes around the world. For example, in stage-one peer review of a registered report, or in peer review of a clinical-trial protocol, reviewers comment on the study design before data collection begins. If a study has already passed such hurdles, a methodological review board need not assess it again. That said, there are signs that the existing system needs tightening up. The journal Trials has used dedicated clinical-trial protocol reviewers since September 2019, for example: it has found some items of methodological information missing in up to 56% of protocols (R. Qureshi et al. Trials 23, 359; 2022). Normal peer review had not flagged these omissions.

To be clear, I do not propose that reviewers debate matters such as frequentist versus Bayesian philosophies of statistics. Instead, the focus should be on basic design flaws that cannot be corrected after data collection, with the goal of ensuring that the data can inform the statistical hypothesis being tested. For one thing, reviewers could check that researchers will collect sufficient data and be able to make the causal inferences they desire (by ensuring that the sample is representative of the target population, for example). Reviewers could deter researchers from performing too many exploratory analyses while reporting only some of them, and help to plan studies that will yield informative results even if the hypothesized effect is absent. They should also check that researchers follow disciplinary reporting guidelines where available.
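To make the first of these checks concrete, here is a minimal sketch of the kind of a-priori power analysis a reviewer might ask to see, written in Python with the statsmodels library. The effect size, significance level and desired power below are illustrative assumptions that the researcher, not the reviewer, would be expected to justify.

```python
# A-priori power analysis: the kind of sample-size justification a
# methodological reviewer might request before data collection.
# The values below (effect size, alpha, power) are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

effect_size = 0.4   # assumed smallest effect size of interest (Cohen's d)
alpha = 0.05        # conventional significance threshold
power = 0.90        # desired probability of detecting the effect if it exists

n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=alpha, power=power,
                                   alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")
```

A reviewer reading this would not quibble with the arithmetic; they would ask where the assumed effect size comes from and whether the planned sample is feasible.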

Critics might worry that methodological reviewers will abuse their power and prohibit certain contentious methods or manipulations — for example, the implicit association test. (Many critics say the test does not measure implicit associations at all.) But the methodological review I propose is not about whether measures and manipulations are valid. Discussions about possible confounding variables and bad measures are too complex and, in my view, are best resolved in the literature.

The most contentious issue in methodological review is that boards have the power, in principle, to bar a proposed study from proceeding. Since we introduced methodological reviews as part of the ethics review process in our department, this has never happened. The methods can usually be adjusted to fit the question, or the question rephrased to fit the methods. Over time, we have asked colleagues to provide more detail about their study designs and analysis methods, which has led to more clearly specified statistical hypotheses. People have increasingly used sequential analyses (an efficient way to collect data, in which interim analyses at predetermined sample sizes determine whether enough evidence has been gathered to stop), because writing a sample-size justification makes them realize how uncertain they are about the sample size they need.
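To see why those interim analyses must follow predetermined rules, the simulation below, a sketch using only numpy and scipy, compares naive repeated testing at alpha = 0.05 with a Pocock-style corrected bound under a true null effect. The sample sizes, the number of looks and the bound itself are illustrative assumptions, not a description of our board's requirements.

```python
# Simulation sketch: why interim analyses need adjusted thresholds.
# Two looks at the data (n = 50 and n = 100 per group) when there is no
# true effect. Peeking at an unadjusted alpha of 0.05 inflates the
# false-positive rate; a Pocock-style bound (about 0.0294 per look for
# two looks) keeps it near the nominal 5%. All numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, looks = 20_000, (50, 100)
naive, pocock = 0, 0
POCOCK_BOUND = 0.0294  # two-look Pocock threshold per interim test

for _ in range(n_sims):
    a = rng.normal(size=looks[-1])  # group A, no true effect
    b = rng.normal(size=looks[-1])  # group B, no true effect
    pvals = [stats.ttest_ind(a[:n], b[:n]).pvalue for n in looks]
    naive += any(p < 0.05 for p in pvals)           # unadjusted peeking
    pocock += any(p < POCOCK_BOUND for p in pvals)  # adjusted bound

print(f"False-positive rate, naive peeking: {naive / n_sims:.3f}")   # ~0.08
print(f"False-positive rate, Pocock bound:  {pocock / n_sims:.3f}")  # ~0.05
```

The point of asking for such a plan in advance is simple: the stopping rule changes the error rates, so it has to be specified before the first interim look, not after.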

There are other advantages, too. Introducing methodological review boards might create dedicated career paths for specialists in research methods — individuals who add great value to the research enterprise, but whose contributions rarely get them tenure in the current system.

As part of the roll-out, studies should test whether methodological review actually improves the quality of research. Proposals could be randomly assigned either to receive methodological review or to a control condition (for example, one in which the review goes ahead but the results are kept private). If the review does not sufficiently improve studies at a research institution, it should be abandoned.

It takes time for researchers to learn how to provide all the required information, so methodological review boards should be phased in gradually. Reviewers might initially offer suggestions without requiring that they be incorporated into the final study design. Alternatively, the first stage could focus on studies that collect data from vulnerable populations, that require substantial resources or that might have immediate societal impact. And reviewers should actively help researchers to improve their study proposals.

The goal is not to gatekeep, but to improve. As researchers learn better practices, all will benefit from the more informative studies they design.