In March this year the Labour government announced the demise of the British research assessment exercise (RAE). First undertaken in 1986, the RAE attempts to assess the quality of UK research and to distribute government research funds on the basis of the results. This large and deeply unpopular survey is virtually unique to the UK, with only Hong Kong coming close. Most countries decide the distribution of public funds through negotiations with academics or by using a simple formula that takes into account staff and student numbers. After the last scheduled RAE in 2008, the exercise will be replaced with a 'metrics-based' system, bringing the UK into line with other countries, but whether this will silence critics of the RAE is questionable.

The RAE of 2008 will see the work of every active researcher over the past 6 years judged by some 900 academics (including international experts) sitting on 67 subject panels. Each department will be assessed on three criteria: research output, research environment and 'esteem indicators'. Research output requires that each submitted researcher put forward four papers for assessment. The research environment includes factors such as the number of students, research income and the department's future plans. Esteem indicators involve aspects such as editorial work, keynote speeches at conferences and international collaborations. On the basis of the survey, each department is awarded a score (unclassified to 4*) that determines how much government funding it is allocated. For 2008, US$2.6 billion is at stake.

Why is the RAE so unpopular? As was the intention of the British government, the RAE has made funding selective by concentrating research money on a university elite. RAE critics argue that the creation of centers of research excellence undermines other departments and institutions. Departments that score badly in the RAE may find it difficult to attract good academics. Furthermore, the eligibility of researchers for other research funding or studentships is often restricted to institutions with the highest scores. According to a 2005 report by the Higher Education Funding Council for England, students are more attracted to departments with higher RAE ranks, and departmental closures are more likely for departments with a low RAE score. RAE advocates would argue, however, that in these times of economic challenge, concentrating research money on fewer universities is necessary to protect the strength of academic research in the UK. After all, institutions that score highly receive more money, which probably increases the output of high-quality research.

The RAE is also criticized for wasting time and resources. The competition for funding is so intense that for some years preceding the exercise, the RAE can become an omnipresent shadow that consumes academics' time that would be better spent on research and teaching. Inevitably, the shadow can become confused with the objective and can itself drive even short-term decision making. The promotion of safe, mainstream research that will deliver solid publications in a short time frame, rather than longer-term, high-risk research, is blamed on the RAE, as is the short-sighted ploy of some universities to 'second-guess' the outcome of the RAE by spending large amounts of money wooing or retaining 'star researchers' to bolster RAE scores. RAE supporters would assert, however, that by concentrating top researchers in state-of-the-art facilities, such tactics foster productive collaborations and amplify quality research.

The RAE in 2008 will place a greater emphasis on research output than previous assessments, with 75% of the total score derived from publication outputs. It is unlikely that the RAE panels have the time to read all submitted publications, which raises the issue of how research output is judged. Some fear that journal impact factor is unconsciously used, driving the need to publish in high-tier journals. Although the target of submitting only four research papers published between 2001 and 2007 is not that stringent, junior academics who were establishing their first research teams in 2001 may find themselves approaching the RAE in 2008 with only three good papers. Too long in the job to receive the special dispensation allowing very junior staff to submit just two papers, such academics may attempt to publish less-developed pieces of work quickly to qualify for the RAE. Perhaps a better judge of research output would be citation frequency over a prolonged period of time.

Despite the many criticisms of the RAE, statistics published by the Higher Education Funding Council for England (http://www.hefce.ac.uk/pubs/hefce/2000/00_37.htm) seem to suggest that an assessment of research quality is beneficial; overall, the UK ranks first in the world for papers and citations per dollar expended and fourth for papers per academic. After the lackluster 1970s and early 1980s, the introduction of the RAE reinvigorated British science, helping to expose the idle and reward the industrious. It represents one of the few ways researchers can demonstrate their productivity and in theory encourages research, grant applications and the organization of conferences. Clearly the RAE has some faults, but whether this system should be replaced with one based on 'metrics' (general block grants awarded on the basis of external research grants won) is debatable. A report by the Higher Education Policy Institute suggests that the metrics system could have many negative consequences, not least making the employment of young staff without a research record less desirable.

The government should take heed that getting the new system wrong would be disastrous. Some form of assessment for research quality is desirable, but as David Melville of Kent University so eloquently said in The Times Higher Education Supplement, such a system must “take into account the needs of all researchers; fund good research where it is carried out; respond to early-career researchers; provide stability yet be responsive to changes on a reasonable time-scale; and look primarily at what is proposed in research rather than at history.”