
Excellence is used to rank research and universities, but it is a hard term to define. Credit: Oxford University Images/SPL

Excellence is everywhere in science. Or that seems to be the plan: to make excellence ubiquitous in research. This month, the University of the West Indies in Kingston, Jamaica, became the latest academic institution to encourage its scientists to excel, setting up a Regional Centre for Research Excellence in the Caribbean.

To be good is no longer enough — excellence, by definition, must go beyond that.

And for those who achieve it — from individual researchers and laboratories, to universities, regions and even entire countries — grants, students and political patronage follow. Britain’s largest biomedical-research funder, the Wellcome Trust in London, runs a grant scheme aimed at “Sustaining Excellence”, and the United Kingdom funds universities according to a mammoth Research Excellence Framework (REF) assessment every few years.

What does excellence mean? How is it measured? When do we know that we have reached the required standard? These are difficult questions, but if the excellence agenda is to be taken seriously, they must be asked — even if they cannot be adequately answered.

A paper in Science and Public Policy makes the latest attempt to ask — and indeed answer — these questions (F. Ferretti et al. Sci. Publ. Pol. http://doi.org/ckpg; 2018). The authors interviewed a dozen experts — from policy wonks to researchers — about excellence, and quickly reached two points of consensus.

First, the idea of excellence as a measure of research quality makes many people uncomfortable. And second, these people — despite their discomfort — cannot suggest anything better, given that science and scientists must meet political demands of accountability and assessment.

These arguments will be familiar to those who follow the debate, but the conclusions of the study are still striking. The authors suggest that “the making of current indicators for research policy in the EU may be in need of serious review”. This is especially noteworthy because it is those very authors who devised the policy indicators — based, of course, on excellence.

The majority of the authors work at the European Commission’s Joint Research Centre (JRC) in Ispra, Italy, which in 2013 took the excellence agenda to its logical conclusion and set up a way to assess the scientific performance of nations. Policymakers in Europe now use this metric — the Research Excellence in Science & Technology indicator — to rank the performance of the member states, and so to set priorities and distribute funds.

Critics of the concept of research excellence (and there are many) will welcome the suggestion from the JRC excellence architects in the new paper that the system is flawed. But the scientific community should remember the second point of consensus identified in the study: if not excellence, then what?

Many scientists would like to see excellence metrics — indeed, all metrics — scrapped. Leave the job of directing research, they say, to researchers. Others suggest that the excellence effort should be rebranded to reflect its most important features — such as “soundness” and “capacity” (S. Moore et al. Palgrave Commun. http://doi.org/ckph; 2017).

Abandoning metrics is neither realistic nor desirable: applied properly, metrics can be a useful guide for policymakers and a way for the public to trace the billions of tax dollars funnelled into research every year. (This is especially the case in countries susceptible to cronyism and nepotism.) And changing the language is politically unwise. Semantics matter — and excellence, to an extent, is what politicians and policymakers expect from scientists.

But it is true that excellence can be defined in many ways. And this is where reforms should focus. Nature, for example, intends to promote the health of research groups this year and, with that, the responsibilities of principal investigators and other group leaders to promote reproducibility. Can a university that does not offer adequate training to people in these positions truly be considered excellent?

Meanwhile, some funders are starting to place more importance on the societal impact and relevance of research. Britain’s REF exercise, for example, deserves credit for including such impacts in its assessment. And in recent years, the handling of issues such as equity and social justice has come under welcome scrutiny.

Perhaps most important, in both defining and applying excellence, is transparency. Local definitions can create problems. Young scientists trained at universities that downplay the need for high-impact papers, for example, can find themselves at a disadvantage when applying for jobs at places that attach greater value to them.

Excellence depends on context. But scientists, funders and officials can do more to discuss and agree on some suitable basic principles. A news story last week, for example, revealed that more than three-quarters of research organizations in the United Kingdom have no policy for preventing the misuse of metrics in hiring decisions. Many of these universities consider themselves excellent. Others will disagree.