Science benefits society in myriad ways — but how to identify and encourage work with high impact is an obsession of funding agencies the world over. Last month, the United Kingdom brought new data to bear on the problem: almost 7,000 case studies chronicling the economic, cultural and social benefits of the nation’s scholarship, which were solicited as part of a unique assessment exercise. As policy-makers pore over the documents, Nature has commissioned its own analysis, revealing how researchers described the worth of their work to their paymasters, and hinting at buzzwords, including ‘million’ and ‘market’, that garnered high marks.
Many funding bodies ask academics to plan for the broader impacts of their work when they apply for grants. But the United Kingdom wanted to reward impact that had already been achieved, says Steven Hill, head of research policy at the Higher Education Funding Council for England (HEFCE). The country already has an audit culture: it grades the quality of university research every few years, and hands out £2 billion (US$3 billion) annually on the basis of that assessment. For the 2014 audit, known as the Research Excellence Framework, or REF, HEFCE tweaked the rules. It added a requirement that universities send in case studies detailing their work’s wider impact during 2008–13, and announced that 20% of an institution’s final grade would be based on these contributions (see Nature http://doi.org/zx8; 2014).
Meeting that challenge was a massive effort that sometimes involved hiring specialist writers and consultants. University College London alone wrote 300 case studies that took around 15 person-years of work, and hired four full-time staff members to help, says David Price, the university’s vice-provost for research.
The results have impressed. “Every government wants to know the societal impact of its research,” says Diana Hicks, who studies science and technology policy at the Georgia Institute of Technology in Atlanta. “The difficulty is how to do that broadly when you only have isolated case studies. Britain has cracked that problem and produced a wonderful data source.”
The case-study narratives demonstrate “extraordinary breadth and depth”, says Jonathan Grant, a public-policy researcher at King’s College London. They range from chemists who used nanoparticles to prevent bacteria damaging the wood of a sunken sixteenth-century warship to economists who tested the effects of cash transfers to poor households in Mexico and Colombia.
To draw further insights, Nature asked Paul Ginsparg, a physicist at Cornell University in Ithaca, New York, who has experience in text-mining, to run a statistical analysis of the language used in the case studies.
A straightforward word count revealed, unsurprisingly, that the terms ‘research’ and ‘impact’ were the most common, with 200,000 and 135,000 appearances respectively, once words such as ‘the’ and ‘and’ were removed. ‘Development’, ‘policy’ and ‘health’ also topped the list. Notably, the documents name-check more than 190 countries, suggesting that the research has huge geographical reach.
Ginsparg also looked for statistically significant correlations between the use of certain words and the scores awarded. He found that across the disciplines, texts dense in words such as ‘million’, ‘market’, ‘government’, ‘major’ and ‘global’ tended to be given high scores by the judges, who were told to mark on the basis of ‘significance’ and ‘reach’ — whereas over-use of terms such as ‘conference’, ‘university’, ‘academic’ and ‘project’ correlated with lower grades (see ‘Power words’).
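The details of Ginsparg's analysis were not published, but the two steps described above — tallying word frequencies with common stopwords removed, then correlating each word's per-document frequency with the score awarded — can be sketched in a few lines. The stopword list, the toy documents and the function names below are illustrative assumptions, not the actual method or data:

```python
from collections import Counter
import math

# Illustrative stopword list; a real analysis would use a much longer one.
STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "that"}

def word_counts(texts):
    """Tally word frequencies across all case-study texts, skipping stopwords."""
    counts = Counter()
    for text in texts:
        for word in text.lower().split():
            word = word.strip(".,;:'\"()")
            if word and word not in STOPWORDS:
                counts[word] += 1
    return counts

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def word_score_correlation(texts, scores, word):
    """Correlate the per-document frequency of `word` with each document's score."""
    freqs = [text.lower().split().count(word) for text in texts]
    return correlation(freqs, scores)

# Hypothetical mini-corpus: two high-scoring and two low-scoring case studies.
texts = [
    "million market global growth",
    "million market impact",
    "conference project academic",
    "conference university project",
]
scores = [4, 4, 1, 1]

print(word_score_correlation(texts, scores, "million"))     # positive
print(word_score_correlation(texts, scores, "conference"))  # negative
```

A positive correlation for a word like ‘million’ and a negative one for ‘conference’ would mirror the pattern Ginsparg reported, though, as noted above, such correlations say nothing about causation.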
Although the correlations do not indicate causation, they might hint at judges’ preference for narratives of economic impact in particular, speculates Gemma Derrick, a researcher at Brunel University London who is examining how the studies were collected and assessed.
“I was sceptical about the ‘impact’ process, but now I think it’s a good thing,” says Price, who says it has revealed persuasive stories that the university can present to funders, industry partners, governments and alumni.
Some UK academics question whether the impact component of the research assessment will make a significant difference to how regional funders distribute their cash — and, if not, whether it was worth adding. The formula that will link performance on the assessment to funding allocation will not be released until March, but it is already clear that universities that have traditionally excelled in the audit of academic output — Oxford, Cambridge and Imperial College London — also score highly on impact.
Internationally, some researchers criticize the idea of identifying research impact using case studies, rather than by tracking more quantifiable economic measures. “I am baffled why a scientific community would go through such a burdensome and artisanal system,” says Julia Lane, an economist at the American Institutes for Research in Washington DC and former director of a US government programme called STAR METRICS, which monitors the economic benefits of money spent on research, including job creation, patents and spin-out companies. On 27 January, a network of European researchers — mainly economists — met in Brussels for the first formal meeting of an effort to trace how science funding in Europe leads to wealth and employment across society. The effort is strongly influenced by STAR METRICS.
Whether anyone will repeat the United Kingdom’s impact assessment remains an open question. “We know lots of other countries are interested in learning from our experience,” says Hill. Across the world, most countries that have introduced nationwide assessments of research quality, such as Australia and Italy, do not measure impact. Yet governments in both Sweden and the Czech Republic are currently considering an exercise similar to the REF.
Back in the United Kingdom, researchers are already preparing for the next performance audit, in 2020, with mixed feelings. “We will all be encouraged now to do more research that could form a case study — whether you think this is a good thing or not depends on your subject area,” says Dorothy Bishop, a neuropsychologist at the University of Oxford. “I have a concern that I may be stuck spending more time evaluating the impact of what I do and this will take me away from actually doing it.”
- See Editorial page 137