Comment: It's time to break ranks on university assessment

Ranking systems for universities need to be re-evaluated.

  • Adrian Barnett

League tables may be the standard for sport, but universities and hospitals need a more nuanced approach. Credit: Andy Baker/Alamy Stock Photo

4 November 2016


“The table doesn’t lie” is a football cliché.

After every team has played every other team in the competition, both at home and away, the teams at the top have won the most points, and deserve to be there, while the bottom teams have no excuses.

Thanks to their effectiveness in sport, league tables are now used to rank hospitals, schools and universities. For hospitals, league tables are designed to identify poor performers and can have a punitive effect.

University rankings, meanwhile, are supposed to help prospective students, while researchers and philanthropists use them to identify the “Barcelona” (arguably the world’s best football team of the past decade) of academic institutions.

But the data used to rank hospitals or universities are messier than the balanced data of football. To judge hospitals fairly, we’d need to compare how each dealt with exactly the same patients, creating a league table based on a set of cloned patients that every hospital had to manage.

In real life, hospitals in sleepy coastal towns with lots of tourists and retirees see quite different patient groups to busy hospitals in cities. Imperfect data means there will never be a perfect metric and we should treat all tables with caution.

So are league tables ever useful outside sport?

First, we should stop calling them ‘league tables’ or ‘rankings’. It is an imprudent assumption that the 5th-placed university is truly better than the 6th, or that a rise of five places in a table of more than 100 competitors means an institution has really improved.

A cautionary tale from sport: statisticians recently ranked Test batsmen throughout history and found one stand-alone player, Don Bradman. Behind him were more than 20 incredible cricketers, all deserving of second place. Even with years of stats and data, it proved impossible to separate them.

In the case of universities, it would be better to call performance rankings ‘league groupings’, which use statistics to group similar performers. An elite group could be followed by a collection of rising stars. When a rising star moves into the top tier that suggests an improvement worthy of note – and possibly a university press release – more so than when an institution jumps one place up the ladder.
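The grouping idea above can be sketched in a few lines of Python. The scores and the gap threshold here are invented for illustration, not drawn from any real ranking: a new tier starts only where the drop between adjacent scores is large, so near-ties land in the same group rather than being forced into 5th versus 6th place.

```python
def league_groupings(scores, gap_threshold):
    """Group (name, score) pairs into tiers: a new tier starts only
    when the drop to the next score exceeds gap_threshold."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    groups = [[ranked[0]]]
    for prev, curr in zip(ranked, ranked[1:]):
        if prev[1] - curr[1] > gap_threshold:
            groups.append([curr])   # big gap: start a new tier
        else:
            groups[-1].append(curr)  # near-tie: same tier
    return groups

# Hypothetical scores: note the near-ties around 78-80.
scores = {"A": 95.0, "B": 80.0, "C": 79.5, "D": 78.8, "E": 60.0}
for tier, members in enumerate(league_groupings(scores, 5.0), start=1):
    print(tier, [name for name, _ in members])
# Tier 1: A alone; tier 2: B, C and D together; tier 3: E.
```

A real grouping would need a principled threshold (for example, from the uncertainty in the scores), but the point stands: B, C and D are statistically indistinguishable and share a tier.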

Another important consideration is the effort and money spent creating rankings. The UK’s Research Excellence Framework (REF), which assesses and ranks universities, cost almost £250 million in 2014 and required detailed submissions involving the time of many researchers and staff across the country. Australia’s equivalent, Excellence in Research for Australia (ERA), has similar drawbacks. On the plus side, these exercises generate higher-quality data for comparing universities than the more readily available, but potentially misleading, data based on publication rates, citations and journal impact factors.

The advantage of rankings based on routine data, such as the Nature Index, which counts published papers and collaborations in a select group of high-quality journals, is they require little effort or money from universities.

A middle ground between imperfect league tables and the expensive, time-consuming research assessment exercises could be using experts to measure performance. A group of academics who were asked to predict the outcome of the REF, and paid according to the accuracy of their predictions, did a pretty good job. Perhaps we could use a similar system in Australia, which would require less time and money than the ERA, then spend the savings on actual research.

A meta-ranking of all the competing rankings might also be worthwhile. This would statistically combine the results from multiple league tables to give a meta-table (a technique widely used in meta-research to combine results from research studies). Or perhaps a league table of the league tables so we know which ranking is best? This would be quite literally turning the tables.
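A simple version of that meta-table can be sketched as a Borda-style aggregation: average each institution’s position across the competing tables and re-rank by the mean. The tables below are invented for illustration and assume every table covers the same institutions; real tables with missing entries would need more care.

```python
from collections import defaultdict

def meta_rank(tables):
    """tables: list of league tables, each ordered best-first.
    Returns institutions ordered by their mean rank across tables."""
    ranks = defaultdict(list)
    for table in tables:
        for position, name in enumerate(table, start=1):
            ranks[name].append(position)
    mean = {name: sum(r) / len(r) for name, r in ranks.items()}
    return sorted(mean, key=mean.get)

# Three hypothetical tables that mostly, but not entirely, agree.
tables = [
    ["A", "B", "C", "D"],  # e.g. a citation-based table
    ["B", "A", "C", "D"],  # e.g. a reputation survey
    ["A", "C", "B", "D"],  # e.g. publication counts
]
print(meta_rank(tables))  # ['A', 'B', 'C', 'D']
```

Averaging ranks is only one of many aggregation rules; weighting tables by quality, or propagating each table’s uncertainty, would be the meta-research way to do it properly.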

Dr Adrian Barnett is a statistician who works in meta-research at the Queensland University of Technology, Brisbane. He tweets @aidybarnett