We all need information filters. In scholarly communication, journals, scientific forums, search engines, citation indexes, social-media channels and a scientist’s network of colleagues function as lenses for sifting through the findings of research. Some lenses provide a large field of view, others fail to focus, and many have aberrations at the edges. And, as with the photographic kind, one lens is rarely sufficient, and the most useful lenses deliver a high signal relative to the background noise.

What makes an information filter useful for the pursuit of research? Usability, consistency and reliability should be its top qualities. As with successful brands, the value of a filter service should be easy to grasp (even when difficult to articulate), its output should consistently improve, and its service should be reliable. Maintaining these qualities demands persistent effort from the service’s managers, and makes satisfying every user nearly impossible. Content filters annoy authors who want in but are filtered out, and can irritate readers who rely on them to avoid wasting time and effort. Owning or running an information filter therefore involves a balancing act that comes with powers and responsibilities. Influence over an audience can be misused to manipulate it, and the responsibility to preserve what is true comes with the duty to correct what is not. Yet the dividing lines between quality and fairness, and between carefulness and deviousness, are often contextual and fuzzy. For social-media services, content moderation is, in fact, a most difficult job.

Curation in information filters is therefore essential. Scientific research is ever more complex and multidisciplinary, and the availability, dissemination and accessibility of research outputs are only increasing. Indeed, the Internet allows any research output to be published somewhere, social media spreads information more easily and quickly, and open-access initiatives and policies1 are making content increasingly accessible. Hence, good judgement is a necessity. And so are checks and balances to maintain excellence and equity, to keep power in check, and to limit misinformation and its effects on public discourse.

Yet author-pays publishing models, as well as increasing competition for academic reputation, are shifting power from readers to authors. As a case in point, eLife recently announced2 that, from 31 January 2023, the journal will publish all research that it peer-reviews as ‘reviewed preprints’3. It will let authors decide whether to revise their manuscript in light of the reviewers’ comments, and whether to make the latest version of the reviewed preprint the version of record (alternatively, authors can submit the reviewed preprint to another journal). eLife will also publish the reviewer reports and an assessment4 of each reviewed preprint that grades the significance of the findings as ‘landmark’, ‘fundamental’, ‘important’, ‘valuable’ or ‘useful’, and the strength of the supporting evidence as ‘exceptional’, ‘compelling’, ‘convincing’, ‘solid’, ‘incomplete’ or ‘inadequate’. eLife is thus determined to become a grading filter for readers, albeit one that is complex and imprecise5, if judged by these assessment keywords. Still, experimentation in scholarly publishing should be applauded6.

By disentangling manuscript selection from peer review (as also enabled by Review Commons, an initiative of the European Molecular Biology Organization and ASAPbio, since December 2019), eLife will stop filtering manuscripts after peer review, and hence will implicitly be asking its readers to further evaluate the reviewed preprints that the journal will publish. After all, consistency and reliability in the quality of manuscript selection before peer review are hard to maintain, and removing the reviewers’ sway over the evolution and fate of a manuscript may dampen their willingness and dedication to review the work. Equity will also be difficult to uphold, particularly in the face of a financial exchange (the article-processing charge) based solely on an initial filtering decision (whether to peer review the manuscript) by a practising scientist embedded in the same academic system of incentives and rewards. The trust and standing of eLife as a brand, and hence its quality as a filter, may come to be graded in proportion to the amount of ‘incomplete’ and ‘inadequate’ research on the journal’s website.

As consumers, evaluators and producers of research, researchers are compelled to pursue excellence and rigour. However, how these are perceived, and the degree to which they are pursued, are influenced by the incentives and constraints that researchers face when acting as authors, reviewers or readers. Without checks by peers, authors can remain blind to biases that may affect the rigour of their research outputs. When researchers take the role of reviewers, their views and requests are balanced by those of other experts, and are overseen by editors. As readers, they demand and benefit from their peers’ expertise and carefulness. Peer review thus serves as a filter for quality and soundness, and provides a healthy balance of scepticism and implicit trust7. Yet for peer review to work well, it requires suitable incentives and constraints, and judicious management by editors. That is what the system of scholarly journals provides.