French president Nicolas Sarkozy seems obsessed with the poor showing of his country's universities in international rankings — to the point where he has ordered France's science and higher-education ministry to set “the objective of having two French establishments in the top 20, and 10 in the top 100”. Sarkozy is not alone: the drive to improve university ratings has come to influence policy-making and funding decisions around the world — despite the ranking systems' well-known shortcomings.

There are a number of such systems, of which the most prominent are the one launched in 2003 by Shanghai Jiao Tong University in a bid to compare Chinese universities with their counterparts elsewhere, and the one launched as a commercial publishing exercise in 2004 by Times Higher Education magazine in London. These rankings are generally based on composite scores that aggregate weighted indicators, such as a university's research publication output and its reputation. As many critics have pointed out, however, such schemes tend to focus too heavily on research, and pay insufficient attention to other key factors, such as other forms of scholarship and how well a university teaches its students to think critically and to innovate. The schemes also tend to over-reward institutions with large programmes in biomedicine, a field in which papers have high citation rates, while penalizing those that focus on engineering or the social sciences. It's also questionable whether the university is even the appropriate unit of assessment: an individual department or laboratory is arguably more relevant when it comes to research.
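To make the objection concrete, the sketch below (in Python, using purely hypothetical indicator names, scores and weights that are not drawn from any actual ranking scheme) shows how a composite league-table score is typically assembled: each indicator is scaled to a common range and combined in a weighted sum, so the final ordering depends as much on the weights chosen as on the underlying data.

```python
# Illustrative sketch only: hypothetical indicators and weights, not the
# methodology of the Shanghai, Times Higher Education or any other ranking.

# Hypothetical per-university indicator scores, each already scaled to 0-100.
universities = {
    "University A": {"publications": 92, "citations": 88, "reputation": 95, "teaching": 70},
    "University B": {"publications": 75, "citations": 80, "reputation": 60, "teaching": 90},
}

# Hypothetical weights; note how heavily the research-related indicators dominate.
weights = {"publications": 0.35, "citations": 0.35, "reputation": 0.20, "teaching": 0.10}

def composite_score(indicators, weights):
    """Weighted sum of indicator scores -- the single number a league table reports."""
    return sum(weights[name] * value for name, value in indicators.items())

# Rank the universities by their composite score, highest first.
for name, indicators in sorted(universities.items(),
                               key=lambda item: composite_score(item[1], weights),
                               reverse=True):
    print(f"{name}: {composite_score(indicators, weights):.1f}")
```

Shifting even a modest amount of weight from the research indicators to teaching can reverse the ordering, which is precisely why a single aggregate number rewards some institutional profiles over others.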

Nonetheless, universities that do well in the rankings are all too happy to trumpet that fact rather than ask critical questions, and thus lend the rankings inflated credibility. Policy-makers — and journalists — also tend to take the rankings at face value. This encourages a soccer-league mentality of dubious relevance.

Fortunately, a new generation of ranking systems has begun to address some of these issues (see page 16). These systems make an effort to be more multidimensional, comparing universities less on single, aggregate numbers and more on specific aspects such as research, teaching, and regional and industrial engagement. They have also moved towards comparing like institutions with like, instead of lumping massively funded universities such as Harvard into the same list as smaller institutions that may be excellent in their own ways. And, perhaps most importantly, they have begun a long-overdue shift from publishing simple tables to publishing the databases that underpin them, so that users can run online queries to compare institutions by the criteria that matter to them.

Indeed, whatever the rankings' problems, they have made apparent the need for databases of solid information on universities as a tool for transparency and accountability. Governments and institutions can help here by improving and expanding the data that are available. They could also help by redoubling their efforts to come up with still better ways to measure the core functions of universities, including their contributions to the economy and society, and by proposing their own rankings — as the European Commission is now doing.

Universities must also be vigilant in not allowing rankings to affect their policy-making excessively, a risk cited in a 2008 report by the Higher Education Funding Council for England on the impact of rankings (http://go.nature.com/Ssi6Rr). Like them or not, rankings are here to stay. The challenge for academia is to prevent their abuse, explain their limitations and support efforts to provide a more holistic view of the university enterprise. As the Organisation for Economic Co-operation and Development (OECD) recently noted (http://go.nature.com/Lld7d7), in a swipe at rankings, higher education cannot “be reduced to a handful of criteria which leaves out more than it includes”.