It is the best of times and the worst of times in the world of scientific publishing. The explosion in the number of scientific papers being published, and in the number of journals in existence, is a positive sign of the overall healthy state of research. However, the cost of this growth, both financially and in terms of the increasingly onerous burden on referees, has led to a crisis that threatens the sustainability of scientific publishing as we know it1. This situation is made worse by the fragmentation of single coherent bodies of research into as many publications as possible: the practice of scientific salami slicing.

Peer review is a cornerstone of the scientific method. Although by no means perfect, it is the best means we have of ensuring real and steady scientific progress. For this reason, participation in the peer-review process is part of the high calling to which scientists (for the most part) gladly answer. But although most scientists are not paid for their contribution to this process, that does not mean it is free: time spent reviewing is time not spent on teaching or research. Time spent on papers that make little contribution to new scientific understanding is therefore time wasted.

Most journals have strict editorial policies against duplicate publication. Enforcing such policies is straightforward where there is substantial duplication of data, if not text, between manuscripts. The dividing line between papers that make a meaningful contribution to understanding in a given field and those that merely repackage data, however, is rarely well defined, and drawing it requires that editors and referees be in possession of all the facts. This is why Nature Materials explicitly requires that all authors provide details and preprints of any papers that are under consideration, in press or recently published elsewhere and that could have any relevance to their submitted work. Ultimately, though, this policy relies on the integrity of authors.

When authors fail to disclose all relevant work, they deny referees and editors the opportunity to assess the true extent of the submitted work's contribution to the broader body of research. Just as seriously, failure to cite previous work properly not only misrepresents the field but also risks omitting important pieces of information and potential insight. In the fast-moving world of research, it is often difficult to be completely on top of all the relevant recent publications of other groups. But when authors omit references to their own work, there is no excuse.

No one would deny that the desire to publish new results rapidly is legitimate. The urge to do so solely to increase the number of one's publications, however, is not. Much of the problem arises not from an inherent desire among researchers to maximize their publication count, but from the conditions set by the funding and appointment bodies that determine what gets funded and who gets tenure. In the 'publish or perish' climate that has evolved over recent decades, overemphasis on the size of an individual's (and, increasingly, an entire research group's) publication record as a means of quantifying research output inevitably rewards quantity over quality. Moreover, it abdicates responsibility for such assessment to the journals in which researchers publish, a responsibility that is neither appropriate nor desired.

Of course, the volume of a researcher's or research group's publication output is rarely the only criterion on which their scientific accomplishment is judged. However, it would be naive to claim that researchers, particularly those just starting out in their scientific careers, are not acutely aware of the demand to publish or be damned. And although it is right that researchers be accountable to the bodies that fund them, reliance on simple formulae for quantifying output threatens to override the need for scientists to follow up initial breakthroughs with larger, more comprehensive and rigorous studies. Perhaps the most direct means of improving the situation lies with the next generation: mentors should set the correct example for junior scientists and instil a sense of what constitutes best practice during graduate training.

The challenge, then, is not only to establish more sophisticated means of assessing the worth of a researcher's scientific contribution, but also for the bodies making such assessments to make it plain that it is scientific rigour, and not merely numerical output, that leads to success.