Sir

We have developed an electronic systematic search tool to estimate the number of duplicate publications in the 70 ophthalmological journals listed by Medline. Our results show that there are a considerable number of duplicate publications. If this holds true for other disciplines, it is bad news for research.

For our survey, we matched the title and author(s) of each of the 22,433 articles published in the 70 journals between 1997 and 2000 using a duplicate-detection algorithm1, and found that 13,967 pairs of articles gave a matching score of 0.6 or more. Of these, we manually reviewed a random sample of 2,210 pairs. We found 60 genuinely 'duplicate' publications and estimate that 1.39% of the analysed articles are redundant. Because the selection process was very restrictive, because detecting all duplicate publications is impractical, and because the estimated number of duplicates increases at lower matching scores (Fig. 1), we regard this estimate as the tip of an iceberg.
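The published algorithm (ref. 1) is not reproduced here; as a minimal sketch only, the kind of title-and-author matching described above could be approximated with a simple string-similarity ratio standing in for the actual scoring function (the function name, weighting, and threshold handling below are illustrative assumptions, not the authors' method):

```python
from difflib import SequenceMatcher


def match_score(title_a: str, authors_a: str, title_b: str, authors_b: str) -> float:
    """Hypothetical matching score in [0, 1], where 1 = total overlap.

    Averages a title-similarity ratio and an author-list-similarity
    ratio. The real algorithm (ref. 1) may weight, tokenize and
    normalize these components quite differently.
    """
    title_sim = SequenceMatcher(None, title_a.lower(), title_b.lower()).ratio()
    author_sim = SequenceMatcher(None, authors_a.lower(), authors_b.lower()).ratio()
    return (title_sim + author_sim) / 2


# Pairs scoring at or above the 0.6 threshold would be flagged for manual review.
score = match_score(
    "Redundant publication in ophthalmology", "Smith J, Jones K",
    "Redundant publications in ophthalmology", "Smith J, Jones K",
)
print(score >= 0.6)  # prints True: this near-identical pair is flagged
```

In practice such a score would be computed for every pair of articles in the corpus, with only the high-scoring pairs passed on for manual inspection.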

Figure 1

Estimated number of redundant publications for matching scores of 0.6 or more, where 1 = total overlap.

Of the 70 journals, 32 were victims of duplicate publication — 27 journals published the first paper and 26 the duplicate, on average 6.4 months later (standard deviation 4.7, range 0–21.3 months). We found no statistically significant difference between the average impact factor of the journal that published the first article (1.13) and that of the journal that published the duplicate (1.42) (Wilcoxon signed-rank test, P > 0.1). The duplicate publications we identified were by 210 authors, suggesting by extrapolation that a total of 1,092 authors could have been involved in redundant publication during the period we analysed. The scientific conclusions of the original and of the duplicate(s) were identical in 88.3% of cases; we found slight changes in 6.7%, and major changes (different results despite identical samples, or omission of patients) in 5% of cases.

Duplicate publications are unethical. They waste the time of unpaid, busy peer reviewers and of editors; further inflate the already over-extensive scientific literature; waste valuable production resources and journal pages; lead to flawed meta-analyses; exaggerate the significance of a particular set of findings; distort the academic reward system and breach copyright laws; and bring into question the integrity of medical research. Republication of data yields no benefit other than to the authors.

It is important that journal editors can trust their authors. Although many duplicate publications are caught by careful peer reviewers or editors, such vigilance cannot provide complete protection. Scientific journals can combat redundant publication in various ways2, but in practice the penalties for duplicate publication are minimal3.

Proper deterrents are needed: for example, better education on publication guidelines, the introduction of registers for planned and ongoing clinical trials, and a change in assessment criteria from quantity to quality when papers are submitted for posts or grants. As long as publications remain the central requirement for academic advancement, a reasonable solution seems unlikely. Nevertheless, it is imperative that the problem of redundant publications be addressed, for it is the responsibility of all those who care about objective research and evidence-based medicine.