Scientists like to complain about peer review. No researcher wants to be told that their work is flawed, unworthy or just plain wrong. But in recent months, I have received reviews of my own submitted papers suggesting that the reviewers simply did not read the manuscripts properly.

This is not nitpicking over matters of opinion or interpretation. In one instance, a reviewer complimented the double-blind, placebo-controlled design of our study and made methodological comments on that basis. Yet the study was not placebo controlled: participants were randomly assigned to one of three active treatments. That is a serious mistake, and it undermines the supposed internal quality control of the peer-review system.

Conversations with colleagues reveal similar concerns about peer-review quality, and suggest that the scale of the problem has increased over the past few years. These are anecdotal reports, but they do raise a serious question: as the number of academic papers and scientific journals published continues to grow, can the peer-review system cope?

The migration of scholarly journals from print to digital increases the burden on reviewers. Online publications have no page budgets or print costs, and so can publish as much as they like. Once, this process was managed by editors who would decide whether to send a paper out for review or simply to reject it. That system had its own disadvantages, but it seemed to keep the total number of papers requiring review at a manageable level. The default option for many online journals seems to be to send all submissions out for review.

The rise of the open-access (OA) movement compounds this effect. The business case for online OA journals, in which authors pay publication fees, works best at high volume. And for many of these journals, submitted work is published as long as it is methodologically sound. It does not have to demonstrate, for example, the novelty or societal relevance that some traditional journals demand.

The OA publisher Frontiers, for example, focuses on “certifying the accuracy and validity of articles, not on evaluating their significance”.


I think that some reviewers take the removal of the need for significance as a signal that they need to read and evaluate only the methods and statistics sections of a paper under review, and can pay less attention to its rationale and wider context. One positive consequence is that papers that are important but of limited general interest, such as ‘null’ findings and failed replications, can get published. But given the ‘publish or perish’ nature of modern research, if scientists can publish more papers, they will. In this way, OA and other online journals both meet and create the demand for a massive rise in academic output. The OA journal PLoS ONE, for example, has published more than 105,000 papers since 2006, and Frontiers more than 20,000 since 2007. If at least two reviewers saw each manuscript, that amounts to more than 250,000 reviews for those two publishers alone.

If the number of journals and manuscripts grows faster than the number of scientists, the pressure on peer reviewers has to increase. Is that happening? It is hard to find reliable data. The annual number of articles indexed in the publisher Elsevier’s Scopus database increased from around 1.2 million in 2000 to roughly 2.7 million in 2013. That is an increase of 125%, but some of this rise is simply due to articles from more journals being included in the later count. Available figures suggest that the number of scientists is growing more slowly: by 2.8% per year in the European Union (between 2006 and 2011) and just 1.5% per year in the United States, although the faster rates of change in countries such as China are harder to track. A 2014 survey of 3,000 scientists by Elsevier found that only 29% complained that pressure is increasing on reviewers, but that figure is 10% higher than in 2009.

One result of increased pressure is that papers are assigned to reviewers who are not experts in the area. They might have the technical ability to evaluate the methods and results sections, as these OA journals require, but lack the expertise to evaluate a full paper, including its introduction and discussion. This matters. Reviewers should verify that authors are citing the right literature to support their rationale. Citing obsolete studies sets science back, because invalid conclusions might be kept alive.

To protect quality reviewing, we should consider a hybrid model. I suggest a two-tier system in which some papers are not reviewed before publication at all, and are instead subject to post-publication peer review. Some manuscripts, such as null findings, methodological studies or straight replications of previous experiments, are of interest mainly to scientists. There is great value in publishing these papers, but perhaps not in sending them all out for review. This would free up peer reviewers to focus on papers with more direct societal impact, for which the question of whether to publish at all is more pressing. Pre-publication review is more important there, because it protects the lay audience from exposure to ‘miracle cures’ and wild claims.

In my view, we must treat the massive expansion of online publications (most of which are OA journals) as a disruptive technology, one that is leaving reviewers overworked and fatigued. Quality will suffer, across the board, unless something is done.