It is almost impossible to work in research without hearing the word excellence. Universities use it in their mission statements and funding agencies name programmes after it. The word has of course made its way into a numbing array of institution titles, such as Germany's Clusters of Excellence and the Australian Research Council (ARC) Centres of Excellence.
A recent paper by a group of open access researchers and advocates has taken a sharp look at the science world's pervasive use of the word. They go so far as to call it a fetish and conclude that it's having negative consequences for research. “Excellence is not excellent, it is a pernicious and dangerous rhetoric that undermines the very foundations of good research and scholarship,” write Samuel Moore, Cameron Neylon, Martin Paul Eve, Daniel O'Donnell and Damian Pattinson.
How can a positive term be considered so damaging? For one thing, the term is not well-defined. Excellence could mean working with great teams, achieving the highest standards or producing research that has an immediate real-world impact, such as saving lives.
Even national agencies struggle to define the word. The ARC runs the Excellence in Research Australia (ERA) exercise to benchmark the country's universities. Research quality is rated on a 1-to-5 scale from “well below world standard” to “well above world standard”. Excellence is thus defined in terms of what others are doing.
If excellence in research exists, it should be possible to see it in data. For example, the highest-ranked grant applications at the US National Institutes of Health (NIH) or the ARC should yield the most productive projects. But when researchers examined grants funded by the NIH, they found only a weak association between how expert reviewers ranked a grant and the eventual outcome of the research. The imprecise definition of excellence has diminished its utility. For instance, although negative and positive trials are equally valuable to science, a positive trial is more valuable to a researcher's career because it is easier to publish in a top journal, a frequently used metric of excellence. When we reward excellence based on journal impact, we are, in part, rewarding luck.
The authors of the excellence paper suggest it would be better to focus on good research practice. For instance, a project would be judged on whether the researchers sought to answer a worthwhile question, planned and executed the study to defined standards, and wrote up the results clearly and honestly. In this kind of system, excellence would be defined chiefly by how results were obtained, rather than by what was actually found.
As more researchers compete for limited funding, some scientists are driven to spin their results to appear more positive. Rewarding research based on competence would take the heat out of a hyper-competitive system.
But, celebrating competence over excellence is a hard political sell. Funding agencies and universities want to celebrate 'excellent' research that changes lives, and this is welcome. These examples spark the public imagination and provide political capital for science.
Inside the research world, instead of just focusing on positive, if somewhat fortunate, discoveries, we must also recognize that science is a methodical process that sometimes discovers an important failure.
Box 1: NATURE INDEX
The Nature Index database tracks the affiliations of high-quality natural science articles, and charts publication productivity for institutions and countries. Article count (AC) is the total number of affiliated articles. Weighted fractional count (WFC) accounts for the relative contribution of each author to an article, and adjusts for the abundance of astronomy and astrophysics papers.
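The arithmetic behind these counts can be illustrated with a minimal sketch. This is not Nature Index's actual implementation: the data layout, the field label, and the 0.2 down-weighting factor for astronomy articles are all assumptions chosen for illustration.

```python
# Illustrative fractional-count arithmetic (assumed data layout and weights,
# not Nature Index's real code).

def fractional_count(articles, institution):
    """Each article carries one unit of credit, split equally among its
    authors; sum the shares belonging to `institution`."""
    total = 0.0
    for article in articles:
        authors = article["authors"]  # list of (name, affiliation) pairs
        share = sum(1 for _, aff in authors if aff == institution)
        total += share / len(authors)
    return total


def weighted_fractional_count(articles, institution, astro_weight=0.2):
    """Like fractional_count, but down-weights astronomy/astrophysics
    articles. The 0.2 factor here is an assumed illustrative value."""
    total = 0.0
    for article in articles:
        authors = article["authors"]
        share = sum(1 for _, aff in authors if aff == institution)
        weight = astro_weight if article.get("field") == "astronomy" else 1.0
        total += weight * share / len(authors)
    return total
```

For example, an institution with one of two authors on a physics paper and the sole author of an astronomy paper would receive a fractional count of 1.5, but a weighted fractional count of only 0.5 + 0.2 × 1.0 = 0.7 under the assumed weight.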