Authors submitting their work to a journal expect a fair assessment of the manuscript. Yet fairness can be in the eye of the beholder — the author, a peer reviewer or the journal editor — particularly when the evaluation involves somewhat subjective measures, such as the scientific advance, the wider relevance of the research topic and findings, and the overall quality of the evidence. In fact, an author may offer substantially different evaluations for work similar to their own when wearing the hat of a reviewer or donning the editorial helmet. Perceptions of fairness by authors, reviewers and editors are therefore not necessarily aligned, yet the roles are complementary. Authors seek to publish well, fast and without much hassle (and, when feasible, cheaply); reviewers and editors share these aims, yet not at the expense of a higher purpose — contributing to curating a slice of the literature and to raising the standing of their journals for rigour and excellence1.

Credit: “Piled Higher and Deeper” by Jorge Cham www.phdcomics.com

The main outcome of an editor’s initial appraisal of a manuscript is a recommendation or decision about whether to consider it for peer review. This assessment is primarily editorial — ultimately, it is an evaluation of the suitability of the manuscript according to the journal’s manuscript-selection criteria2 — although technical considerations (such as the extent of support for the claims and the quality of the datasets) can also be relevant. Reviewers may also provide such evaluations, and may readily identify relevant precedent work and the implications of the findings for their area of expertise, yet their main task is to assess the technical soundness of the work and the clarity and completeness of its reporting. In this context, fairness implies the consistent application of the same predetermined criteria to all manuscripts, regardless of author names and affiliations. This does not mean that biases in manuscript selection are always unintended: deliberate preferences can be beneficial; for example, a journal may wish to promote specific research topics or types of work by way of special issues, or to solicit papers from early-career researchers.

Naturally, technical criticisms are less prone to subjectivity than assessments of quality or of fit to a journal. Still, prejudices can creep in as unnecessarily burdensome technical requests, as misjudgements of previous work or of the authors’ interpretations of their findings, or as unjustified trust, insufficient vetting or prior beliefs3,4. As for editorial evaluations — the most common source of disagreement (pictured) — editors have the prerogative to set the assessment criteria and quality thresholds for their journals, and to balance their own judgement with any recommendations from the experts they have recruited. Yet, beyond reasonable differences in opinion, the most pernicious forms of disagreement stem from unintentional biases. How can these biases be restrained?

Awareness of partiality and prejudice is essential. The most likely sources of unintended biases are inadequate technical knowledge, insufficient editorial experience, over-reliance on intuition, time or productivity constraints, unfamiliarity with (or closeness to) specific subject areas or scientists, and unconscious preconceptions, be they about certain techniques, theories or topics, or about the standing of particular institutions or investigators.

An editor who has insufficiently learned about a topic may rely on flawed cues, such as the perceived prominence of the authors (judged by previous publications, by seniority5 or by the authors’ past success in publishing in the journal6), the amount of data included in the manuscript, the clarity of the language7, or the craft and detail in the figures. They may also take the authors’ claims in the cover letter8 or manuscript at face value, or misjudge their noteworthiness or implications. And, much like doctors relying on their clinical eye, editors who have handled heaps of manuscripts or who have interacted with countless scientists may unduly trust their gut, especially when they find themselves with a large load of old manuscripts; they may also have favourite topics, find certain types of manuscript boring and be set in their ways. In addition, while novice editors may be readily swayed by the opinion of the most negative reviewer, old-timers may be tempted to rely too heavily on a small pool of trusted reviewers.

Editors should also be aware of biases arising from the routine of their job: a string of rejected manuscripts may predispose a busy editor, particularly when mentally tired, to see the next manuscript with dismissive eyes (or to give it ‘the benefit of the doubt’ if they feel they have recently been overly harsh, or if they need more manuscripts at the peer-review stage); similarly, high workloads may prompt an editor to save time by temporarily rejecting more manuscripts (finding suitable reviewers can often be onerous). That the assessment process demands critical thinking does not necessarily protect editors from unfairness in decision-making: even a careful and caring editor can easily convince themselves of the absence of merit in a manuscript, typically by giving undue weight to its weaknesses — real or apparent (that ‘the authors could have provided more evidence’ is a truism) — and by ignoring or de-emphasizing the contextual scientific challenges or implications; it can take domain knowledge and editorial or reviewing experience to see promising light in seemingly preliminary work.

How can we, editors and reviewers, mitigate the influence of all these types of bias? Curiosity, open-mindedness and feedback can go a long way. We can harness curiosity to acquire the relevant scientific knowledge and to recognize our preconceptions. And we should abstain from assessing aspects of a piece of work that are beyond our expertise or that raise competing interests; an editor or a reviewer cannot fairly evaluate a manuscript if they do not sufficiently understand its main points or cannot place them in a fair scientific context. We should also be open to alternative arguments, and check the correctness of our assertions, opinions and intuition against the literature or by consulting knowledgeable colleagues or experts. Moreover, we should deliberately assess our past judgement (remembering that citations to published work are an imperfect proxy for its impact9), and routinely seek feedback. For feedback to be effective, we ought to be transparent about our decision-making when possible (for example, by agreeing to have our review reports and editorial decision letters published10). And we should recognize that, when dutiful care in fighting bias slips because time is pressing or resources are thin, the pursuit of fairness is curbed as well.