People embark on a career in science for many reasons. Some want to improve the world, others to understand how it works. But how many foresee that their work will help to resurrect a sixteenth-century English warship?

In the language of twenty-first-century science, such research has a new label: impact. Hundreds of thousands of people, after all, have queued to see the Tudor timbers of the partially restored Mary Rose, salvaged from the sea floor, and now on display in a museum in Portsmouth, UK.

They do so thanks to the efforts of physicists, who tested radar imaging on the wreck site; marine biologists, who spotted borer worms still living in the timber; and chemists, who created nanoparticles to prevent the waterlogged wood from being damaged by bacterial action. Once artefacts had been brought up from the wreck, materials scientists examined the corrosion on Tudor cannon balls; biomechanics experts analysed the arm bones of Tudor archers; and archaeologists inspected skulls to reconstruct the faces of the Mary Rose’s crew. And all of this work was paid for — at least partially — by the British taxpayer, as part of UK investment in publicly funded science.

If scientists were once coy about the good work that they do, they cannot now afford to be. In fact, the British system now demands that they boast of the impact their research has on society. For the first time, the mammoth multi-year assessment of UK university research, used to help rank institutions and allocate grants, included judgements of such impact. This is a good thing.

The case studies and reports from this Research Excellence Framework assessment have now been published, providing a compendium of some 7,000 stories of good done, lives saved and ancient warships fixed up. As we discuss on page 150, scholars of research impact are rubbing their hands together at the thought of analysing the stories. Preliminary text-mining suggests that across many disciplines, studies strewn with words justifying the significance or reach of the work — such as ‘million’, ‘major’ and ‘global’ — tended to score more highly than narratives that over-used words such as ‘research’, ‘university’ and ‘impact’.
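The flavour of such an analysis is easy to sketch. The toy narratives, grades and word lists below are invented for illustration, not drawn from the real REF corpus; the sketch simply pools word frequencies from high-scoring and low-scoring case studies and compares them:

```python
from collections import Counter
import re

# Invented stand-ins for REF impact case studies: (narrative, grade).
# The real corpus and scores are not reproduced here.
case_studies = [
    ("Our findings reached a global audience of several million people, "
     "driving major changes in clinical practice.", 4),  # high grade
    ("The university conducted research on impact and published the "
     "research at a university conference.", 2),         # low grade
]

def word_counts(text):
    """Lower-case a narrative and count its words."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

# Pool word counts for high-scoring (grade >= 3) and low-scoring narratives.
high, low = Counter(), Counter()
for text, grade in case_studies:
    (high if grade >= 3 else low).update(word_counts(text))

# Words such as 'million', 'major' and 'global' should surface in the
# high-scoring pool; 'research', 'university' and 'impact' in the low one.
for word in ("million", "major", "global", "research", "university", "impact"):
    print(f"{word:10s} high={high[word]} low={low[word]}")
```

On a real corpus, the same comparison would of course need proper statistics rather than raw counts, but the principle is the one the text-mining results suggest.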

Conventional measurements of research impact beyond academia seek hard data, not stories. They typically revolve around econometric models that try to capture the financial return of investing in science, or count small slices of quantifiable business activity, such as patents or spin-out companies. To be sure, there are plenty of those examples in the case studies. But taken as a whole, the narratives remind us of the many broader ways in which taxpayer-funded research ‘pays back’ on its investment — and that hard metrics are not the only way to capture this.

Indeed, one benefit of the focus on broad impact is that individuals and institutions whose work makes a positive difference to people’s lives, society and the economy earn recognition, and with it motivation, even if they are not producing profound scientific insights.

There are practical difficulties in running such an assessment, especially for the first time. Some researchers say that, although they are pleased to see the results, the exercise was not worth the burden it placed on academics’ time and university budgets.

And it is true that, although a large set of good-news stories makes a valuable collection to dip into for advocacy purposes, the narratives from this particular exercise do not give a comprehensive view. Universities had to submit only a few of their best examples (and, according to the data, many may have minimized the number of staff members whose work was submitted, so as to cut down on the number of case studies they had to provide). Another problem is how to grade case studies fairly when many different universities might each claim an influence on a final product (for example, a drug brought from bench to bedside).

These are teething troubles. The decision of the UK funders to grade the case studies, and to use the scores to help decide the destination of £2 billion (US$3 billion) in annual performance-linked funding, meant that universities across the country took the exercise seriously. The result is a reminder of the many ways in which publicly funded research benefits society in the United Kingdom and beyond.

It demonstrates one other important point. Although the ‘impact agenda’ may focus minds and give universities and funders another way to make science tangible and measurable, the UK exercise shows that academics were delivering impact long before it became a buzzword. The impact claimed is recent, within the past five years or so, but the research on which it is based is often up to 20 years old.

The focus on impact is a new thing, in other words — but the creation of impact is not. The more visible those impacts become, the better for all concerned.