When Julia Lane began working in scientific-funding policy, she was taken aback by how unscientific the discipline was compared with the rigorous processes she was used to in labour economics. “It was a relatively weak and marginalized field,” says Lane, an economist at New York University.


In 2005, John Marburger, science adviser to then-President George W. Bush, felt much the same. He called on researchers and policymakers to focus on the “science of science policy”, an empirical assessment of outcomes and returns from funding agencies such as the National Institutes of Health (NIH) and National Science Foundation (NSF). “When the Congressional Budget Office does simulations of the effects of investment in areas like tax or education policy, they have models and processes,” says Lane. “But he said that when it comes to science, essentially all we say is 'send more money'.”

Around the same time, the UK government also began to explore how to significantly increase the economic impact of the country's research and development (R&D) investments. According to Lane, such efforts have historically been a low priority, because R&D accounts for only a small percentage of the economy — typically less than 3% of the gross domestic product (GDP), mostly from the private sector. However, public funding of basic research still represents a considerable sum.

In 2013, the United States spent more than US$40 billion on research at university- or government-run laboratories. Finding out what comes of this expenditure is crucial for economic reasons, but it also has a moral dimension. “We can't sit in an ivory tower and expect the taxpayer to pay our salaries and not ask any questions,” says Ben Martin, who specializes in science and technology policy at the University of Sussex, near Brighton, UK. Over the past 10–15 years, economists and policy experts have been trying to build smarter tools to answer such questions about how public research investments pay off — a process that has entailed an examination of what precisely it means to get a return on R&D.

Number crunch

The earliest efforts approached this question purely in economic terms. Martin and his colleague Ammon Salter, now at the University of Bath, UK, reviewed1 studies on the benefits of publicly funded basic research — including pioneering work by the US economist Edwin Mansfield, who surveyed businesses to learn what proportion of their products arose from this type of research and determined a 28% rate of return. However, they found that these studies generally took an overly simple approach to tackling a complex question. “We concluded that there are too many conceptual, methodological and empirical problems with these kinds of efforts,” says Martin.
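Mansfield's logic can be made concrete with a toy calculation (a hedged sketch in Python; every figure below is invented for illustration, and only the survey-attribution logic reflects the approach described above):

```python
# Toy illustration of a Mansfield-style calculation (all numbers hypothetical).
annual_public_research_spend = 10.0e9   # hypothetical national spend on basic research, $/year
surveyed_product_sales = 200.0e9        # hypothetical sales of recently introduced products, $/year
share_traced_to_public_research = 0.10  # survey estimate: fraction of products that firms say
                                        # could not have arisen without recent academic research
profit_margin = 0.15                    # hypothetical social surplus per dollar of those sales

attributable_surplus = surveyed_product_sales * share_traced_to_public_research * profit_margin
rate_of_return = attributable_surplus / annual_public_research_spend
print(f"Implied annual rate of return: {rate_of_return:.0%}")  # 30% with these toy numbers
```

The real studies had to grapple with time lags and attribution on top of this arithmetic, which is where Martin and Salter identified the conceptual and methodological problems.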

Economic analysis is complicated by numerous intermediate indicators of performance (number of patents licensed, for example), as well as more direct impacts such as the number of products sold. The true impact emerges from a combination of these factors. “The temptation to come up with a number for an impressive-looking economic return can be strong,” says Adam Jaffe, director of Motu Economic and Public Policy Research in Wellington, New Zealand, “but I'd argue that you should look at a range of different indicators, including qualitative information.”

The most comprehensive studies tend to be technology- or field-specific. In 2008, the research institute RAND Europe teamed up with academics to analyse the impact of UK research grants for cardiovascular disease and stroke2. They used a strategy called the payback framework, which combines surveys and data analysis to assess the impact of research across many domains, rather than just basic economic gain. “You might prove that a method of developing stents for heart disease has generated jobs in industry, new skills, new research areas, benefits for patients who receive stents, and economic benefits in terms of helping these patients to return to work,” explains Steven Wooding, a researcher at RAND. “Then, at the other end, you can figure out what each one is worth.” They concluded that every £1 (US$1.43) invested in cardiovascular-disease research between 1975 and 1992 generated £1.39 of return in economic and health terms. However, this method is labour intensive and designed for biomedical research.
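The arithmetic behind such a headline figure is simple once each benefit stream has been valued; a minimal sketch, with hypothetical numbers chosen only to reproduce the £1.39 ratio, might look like this:

```python
# Hedged sketch: aggregating valued benefit streams per pound invested,
# in the spirit of the payback framework (all numbers are hypothetical).
investment_gbp = 1_000_000.0  # total research spend being assessed

# Benefits already converted to monetary terms by separate valuation steps
# (e.g. health gains priced per quality-adjusted life year, industry spillovers
# estimated from surveys) -- the labour-intensive part the framework handles.
valued_benefits_gbp = {
    "health gains": 800_000.0,
    "economic spillovers": 400_000.0,
    "patients returning to work": 190_000.0,
}

return_per_pound = sum(valued_benefits_gbp.values()) / investment_gbp
print(f"Return per £1 invested: £{return_per_pound:.2f}")  # £1.39 with these toy figures
```

The hard, labour-intensive work is not this final sum but the surveys and case studies needed to value each stream in the first place.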

Patents based on academic research can provide a useful general indicator of commercial interest in a particular invention. But this is not always straightforward to interpret because not all patents become products. Furthermore, the public-sector origins of private-sector patents are not always obvious. A team led by Danielle Li at Harvard Business School in Boston, Massachusetts, has attempted to clarify these links by forging connections between NIH grants, the papers that they generate and patents citing those papers3. “She's used that to see, for example, whether NIH funding in a given therapeutic area advances the treatment options in that area,” says Jaffe. “That's getting a little closer to real impact.”
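Conceptually, the linkage is a chain of joins from grants to the papers they support and on to the patents that cite those papers; a toy sketch (with invented identifiers, not real NIH or patent data) could look like this:

```python
# Toy sketch of chaining grants -> papers -> patents via citations.
# Identifiers and tables are invented for illustration only.
grant_to_papers = {
    "R01-0001": ["paperA", "paperB"],
    "R01-0002": ["paperC"],
}
paper_cited_by_patents = {
    "paperA": ["US-111"],
    "paperC": ["US-222", "US-333"],
}

# For each grant, collect the patents that cite any of its papers.
grant_to_patents = {
    grant: sorted({pat for paper in papers for pat in paper_cited_by_patents.get(paper, [])})
    for grant, papers in grant_to_papers.items()
}
print(grant_to_patents)
# {'R01-0001': ['US-111'], 'R01-0002': ['US-222', 'US-333']}
```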

Such analyses depend on well-organized data. In 2009, the UK Medical Research Council (MRC) began using software called Researchfish to collect relevant information on the productivity of its researchers, including articles, patents and spin-off companies that arise from a grant. This programme has since expanded to encompass all of the UK Research Councils as well as other funding agencies; Ian Viney, director of strategic evaluation and impact at the MRC, anticipates that more than 40,000 UK researchers will file these reports in 2016.


In the United States, the Institute for Research on Innovation and Science (IRIS) relies on a more automated approach, drawing data directly from participating research universities. IRIS is a descendant of a federal programme created by Lane and colleagues at the NIH and NSF to track research jobs created by President Barack Obama's 2009 economic stimulus, which included $52 billion for R&D. According to executive director Jason Owen-Smith, a sociologist at the University of Michigan in Ann Arbor, IRIS has already partnered with 24 universities, representing $15 billion of R&D funding. “Our goal is to involve every institution that gets at least $100 million of federal R&D, as well as flagship state and land-grant universities,” he says — a scope that would cover more than 90% of all federally funded R&D.

The premise of the US assessment efforts is that scientists themselves, rather than their publications or patents, are the main vehicles by which research fuels economic growth. Owen-Smith says that, in his experience, university technology-transfer offices generally believe that “disembodied inventions aren't particularly valuable”, and that for real economic pay-off “you have to have a member of the original research team involved in the commercialization.” IRIS data allow observational studies that can directly test this people-centric model by tracking how scientific training affects career trajectories and returns to industry. Preliminary IRIS data indicate, for example, that a science doctorate improves a person's chances of entering a high-tech industry, which in turn is associated with higher wages and greater productivity.
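A hedged sketch of the kind of record linkage this implies, joining grant payroll records to later employment records, might look like the following (all fields and values are invented):

```python
# Hypothetical sketch of people-centric tracking: linking people paid on research
# grants to the sector they later work in (all data invented for illustration).
payroll = [
    {"person": "p1", "degree": "PhD", "grant": "G1"},
    {"person": "p2", "degree": "MSc", "grant": "G1"},
    {"person": "p3", "degree": "PhD", "grant": "G2"},
    {"person": "p4", "degree": "PhD", "grant": "G2"},
]
employment = {"p1": "high-tech", "p2": "services", "p3": "high-tech", "p4": "education"}

phds = [p for p in payroll if p["degree"] == "PhD"]
share_high_tech = sum(employment[p["person"]] == "high-tech" for p in phds) / len(phds)
print(f"Share of grant-funded PhDs later working in high-tech industry: {share_high_tech:.0%}")
```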

Beyond profit and loss

Disentangling causation from correlation remains difficult. “You can look at the impact on particular researchers who were funded compared to those who weren't,” says Jaffe, “but that's not quite the same as asking how a world that has a 'war on cancer' differs from one that doesn't.” Large-scale data collection programmes such as IRIS and Researchfish could clarify this by examining the changes associated with an influx of targeted spending such as the NIH Precision Medicine Initiative.

Julia Lane (centre) explains data-collection tool IRIS. Credit: Tessa Shaw/Julia Lane

The long time lag between inception and commercialization can also be a major confounder. “People tend to use at least 20-year time windows,” says Robert Tijssen, chair of science and innovation studies at Leiden University in the Netherlands. “You can't expect any economic impact in the narrow sense from a research programme within two or three years — that's only the case for exceptional research breakthroughs.” Wooding and colleagues have noted that many independent analyses have described a consistent gap of 17 years from initial publication to economic impact across biomedical fields, whether that impact represented formal adoption of a medical intervention or marketing of a new drug4, although the nature of these lags remains poorly defined.

Money isn't everything. Many research outcomes benefit the economy more indirectly, through factors such as environmental sustainability or improved quality of life. The United Kingdom has taken the lead in comprehensively measuring this diversity of benefits with its Research Excellence Framework (REF). REF, which helps to determine the allocation of funding to individual universities, relies on peer-reviewed case studies submitted by each institution that offer insight both into research 'quality' (in terms of outputs such as published papers) and into impact on areas that range from the economy and health to public policy and culture. For example, the impact of medical research might be measured on the basis of evidence of public debate or changes in clinical or public-health guidelines. Viney notes that the first iteration of REF, completed in 2014, reflected a huge variety of impacts: “There's hardly any walk of life or part of society that research doesn't have some bearing upon.”

But REF is labour-intensive, and Martin is concerned that future iterations may become even more time-consuming and expensive. “There is probably an optimum point beyond which the costs become greater than the benefits, and we're not very good at working out what that optimum point is,” he says. Nevertheless, the concept of impact assessment is being emulated in other countries, including the Netherlands, Norway and Australia (see page S22). Meanwhile, researchers developing IRIS and Researchfish are exploring strategies to track these impacts in a more automated and structured way — for example, by tracking citations in government policy statements.

Empirical evidence

The surge in interest could transform research assessment into a thriving, evidence-based subfield of economics. With hard numbers to hand, research funders and university administrators could gain tools for decisions that were once largely guided by dogma or instinct, such as determining the most effective ways to inject funding into new fields. Metrics could also help policymakers to identify the optimal share of GDP that a nation should spend on R&D.

The extent to which policymakers will respond to such a multidimensional view of socio-economic impact will vary. For some governments, demands for a sound-bite-friendly number that reflects simple return on investment may prevail. In 2012, Jaffe was part of a working group for the US National Academy of Sciences that looked at the various ways in which scientific impact can be measured, only to find that politicians were mostly interested in lists of economic winners and losers. “They wanted us to tell them, in effect, whether the rate of return in energy research is higher or lower than in biomedical research so we can figure out where to redirect money, and I think that's a fundamentally misdirected question,” he says.

The economic assessment of science is an inevitability, says Owen-Smith. But if academics take the lead, they can strive to ensure that the assessment is fair, intellectually rigorous and a mechanism to grow, rather than constrain, the scientific endeavour. “We know as little about what our key social and economic needs will be 30 years from now as we might have known about the Internet in 1974,” he says. “We should be managing our publicly funded R&D system as a capacity and infrastructure for our society to hedge against an uncertain future.”