Changing the way we measure and reward research could enrich academia and improve outcomes for society, says Alan Finkel.
Australian science suffers from a fundamental misalignment. Publicly funded researchers at universities face considerable pressure to generate academic papers. Taxpayers, however, would prefer to see more significant commercial and social benefits from their research investment.
At the national level, the Excellence in Research for Australia (ERA) process, through which the federal government evaluates universities, is driven by assessment of the quality of research publications. In the science, technology, engineering and mathematics (STEM) disciplines this assessment is based on citations: the number of other papers in which a published paper is cited, used as a measure of its influence. In the humanities and social sciences (HASS), the assessment is based on academic peers reading selected papers to determine quality. The ERA does credit other accomplishments [1], including fellowships of learned academies, patents and registered designs, plant-breeders' rights and research-commercialization income, but there is little evidence that assessments have given much weight to such achievements.
Because faculty members, departments and universities want to be judged as being world class or better in the ERA assessment, they pursue research for academic publications that are likely to be well cited — almost to the exclusion of other activities. And since evaluation programmes such as the ERA affect funding and student demand, they drive academic behaviour.
Individual researchers realize that the path to promotion is paved with academic papers, and so rarely spend time working on anything else. For example, academics have little incentive to join in industry programmes such as the successful Cooperative Research Centres (CRCs) if participation limits their ability to publish extensively. Universities and research institutes considering the appointment of an academic who has spent years in industry often worry that the applicant's grant-winning ability might be compromised by his or her time away from academia. Similar issues dog researchers within government research institutes such as the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
These problems thwart engagement with industry, depriving researchers of useful commercial skills. In Australia, more than twice as many PhDs are employed in universities as in industry, whereas in Germany the ratio is the other way round [2]. As a result, Australia was ranked last in a recent Organisation for Economic Co-operation and Development (OECD) assessment of university–industry collaboration [3]. Without such collaborations, industry does not fully benefit from the fundamental research undertaken in academic settings, and academic researchers are not as aware as they should be of market and societal needs and trends.
One way to encourage academics to collaborate more with industry would be to award 'citation equivalents' to various activities that advance the practical impact of science through means other than peer-reviewed publication of academic papers. In STEM disciplines, citation equivalents could be calculated for issued patents, commercial contracts and licence fees. More broadly, citation equivalents could be awarded for activities including writing books, opinion pieces and government submissions, PhD student supervision, and development of new approaches to teaching practices or novel training courses.
Citation equivalents, once earned, could be counted in the same way as normal citations, even contributing to higher-order measures such as the H-index (a measure of a researcher's impact and productivity) or institutional-level evaluations such as the ERA. Each contributing activity would count as equivalent to a paper with an agreed number of citations. For example, an Australian patent might be rated as equivalent to a paper with a small number of citations, say five. A triadic patent (which covers the United States, Europe and Japan), on the other hand, might be rated as equivalent to a paper with 50 citations; patents taken up and used would be rated more highly than ones that lie dormant.
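To make the arithmetic concrete, here is a minimal sketch of how citation equivalents might fold into an H-index calculation. The equivalence values for the two patent types (5 and 50 citations) come from the examples above; the mapping's category names, the sample researcher's record, and the function names are all illustrative assumptions, not part of any proposed standard.

```python
# Illustrative citation-equivalent values; only the two patent figures
# are drawn from the article's examples, the rest of the scheme is assumed.
CITATION_EQUIVALENTS = {
    "australian_patent": 5,   # equivalent to a paper with ~5 citations
    "triadic_patent": 50,     # covers the US, Europe and Japan
}

def h_index(citation_counts):
    """Largest h such that at least h items have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def combined_h_index(paper_citations, other_outputs):
    """Treat each non-paper output as a paper with an agreed citation count."""
    equivalents = [CITATION_EQUIVALENTS[kind] for kind in other_outputs]
    return h_index(paper_citations + equivalents)

# A hypothetical researcher with four papers and two patents:
papers = [12, 7, 3, 1]
outputs = ["australian_patent", "triadic_patent"]
print(h_index(papers))                    # papers only: 3
print(combined_h_index(papers, outputs))  # with patents counted: 4
```

The point of the sketch is that the machinery already used for citation metrics carries over unchanged: once an activity is assigned an agreed citation count, it simply joins the researcher's list of outputs.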
A system such as this, aiming to provide impact measures for individual effort, would be cheaper and faster than the labour-intensive methods that are needed to gauge institution-level impact. Case studies and expert evaluation panels take a long-term view, in some cases considering outcomes a decade or more after publication. The proposed citation equivalents, by contrast, would measure near-term achievements as soon as the impact activity is definitive, such as when a patent is issued.
Because traditional citations are global in extent, citation equivalents could be considered for adoption not only in Australia, but worldwide. To be successful, citation equivalents would have to be embraced by research institutes, universities, national granting agencies and, ideally, international evaluation programmes and databases such as Scopus, Google Scholar and Web of Science.
Citation equivalents could be tested on a small scale and rolled out as experience is gained. With a little funding and some determination we could broaden the existing publications-focused metrics to achieve a better balance — acknowledging the best basic research while also promoting the STEM research that delivers the greatest impact for the society that is paying for it.