Sir,

Assessing the quality of candidates for academic positions has been the subject of controversy. Problems include nepotism and inbreeding, lack of impartiality among committee members — as M. Soler noted in Correspondence (Nature 411, 132; 2001; doi:10.1038/35075637) — and the increasing use of impact factors as indicators of research performance (see, as one of many examples, Nature 415, 726–729; 2002).

We suggest, as an alternative, that peer review could be a fairer method of research evaluation when scientists are being assessed for new jobs. To ensure the expertise and impartiality of peers, appointment committees should include internationally recognized scientists who are not affiliated with the institution where the position is offered.

Operational costs could be kept low by use of e-mail, the Internet, videoconferencing and so on. Such cost-effective implementation would be of particular benefit to academic institutions in developing countries, which could draw on a broader pool of specialists to form virtual appointment committees.

Rigorous criteria, relevant to the position, could be laid down before the job is advertised. These criteria might include originality of ideas, diversity of approaches, appropriateness of methods, statistical design and analyses, and so on. Candidates could then be assessed on the kind of quantitative scale that many journals use.

To that end, a comprehensive account of the candidate's research contributions, together with relevant published papers, could be presented with both the journals' and the candidate's identities concealed, so that appointment committees can focus on scientific quality irrespective of journals' impact factors and potential personal biases. Moreover, the identity of the committee should be made public, to safeguard the impartiality of the selection process and the confidence of candidates.

Peer review has undeniably contributed to the advancement of science by providing a reliable system of quality control validated by experts. Some of the criticisms of its use (such as plagiarism and competition) are unlikely to apply to the selection of new academic staff, as it is usually a candidate's past achievements — rather than ideas, methods or data — that are under assessment.

No system is foolproof, and other criticisms of peer review may apply to individual assessment — for example, sexism, competing interests and lack of support for controversial ideas. However, these can be minimized by a policy of blind review, by having candidates specify potential conflicts of interest beforehand, and by including referees with different perspectives.

In summary, we believe that electronic information technologies can enhance the quality of recruitment systems in academic institutions by enabling the widespread use of peer review. This would be an improvement on current methods such as the impact factor, which is not a reliable measure of scientific excellence.