Sir,

On 22 November, the Higher Education Funding Council for England announced that the assessment and funding of science-based disciplines will in future be “based on citation rates per paper, aggregated for each subject group at each institution” (http://www.hefce.ac.uk/Pubs/HEFCE/2007/07_34/).
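Read literally (the announcement gives no exact definition, so the notation here is our own assumption), such an indicator for subject group s at institution i would presumably be a mean citation count:

$$C_{i,s} = \frac{1}{|P_{i,s}|} \sum_{p \in P_{i,s}} c_p,$$

where $P_{i,s}$ is the set of papers attributed to that subject group at that institution and $c_p$ is the number of citations received by paper $p$.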

Changes in performance indicators strongly influence individual and institutional behaviour, and 'citation game-playing' will no doubt become a staple of coffee-room conversation. What is less clear is how authors' citation practices may influence such bibliometric indicators.

Citation practices are known to be imperfect. Documented problems include excessive citation of an author's own work; citation of papers that offer inappropriate or ambiguous support, sometimes by authors who have not read them; 'citation coalitions' within research networks; failure to cite intellectual precursors or work reporting conflicting conclusions; and geographical and language biases. Moreover, the growing number of many-authored papers makes it impossible to define a clean-cut general metric in which one author is associated with one paper, as the example below illustrates.
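As a hypothetical illustration of the attribution problem (notation ours): for an author $a$ with paper set $P_a$, 'whole counting' credits $\sum_{p \in P_a} c_p$ citations, whereas fractional counting credits $\sum_{p \in P_a} c_p / n_p$, where $n_p$ is the number of co-authors of paper $p$. A paper with 100 citations and 50 authors thus contributes 100 citations to each author under one convention and only 2 under the other, and neither choice is canonical.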

Taken together, these factors introduce a problematic degree of error into the proposed bibliometric system of assessment, and they place added responsibility on journal editors and reviewers as arbiters of appropriate author conduct.

Unfortunately, there are no simple solutions. Identifying poor citation practice is not currently emphasized in peer review, so journals could perhaps adopt random citation audits, or periodically ask authors for evidence that their citations are appropriate. In reality, time constraints and the sheer volume of submissions to many journals mean that such measures are unlikely to be implemented soon.

Until referencing practices improve, we would argue that using citation rates to assess performance is fundamentally flawed.