The US Congress and European funding bodies increasingly require science agencies and universities to document the potential impact of research on economic activity. But science agencies, whose job it is to identify and fund the best research, are not the right institutions to unpack the links between research and innovation. Their often well-meaning attempts to count what can be counted — largely, publications or patent activity — have created perverse incentives for researchers and are not credible. More emphasis on publications means that early-career researchers have become replaceable (and often unemployable) cogs in a paper-production machine, while the amount of unread and irreproducible research and patents has exploded.

Better incentives, and better science, can be established through thoughtful measurement. Countries should think before measuring by drawing on the social and economic sciences and applying standard approaches to evaluation: building testable hypotheses based on a theory of change, identifying and measuring inputs and outputs, establishing appropriate comparison groups1, and collecting data to estimate the empirical relationships. Biologists, engineers and physicists might be good at decoding the human genome, expanding our understanding of materials science, and building better models of the origins of the Universe, but they lack the statistical and analytical expertise to evaluate innovation. Although there are enormous hurdles to overcome, more carefully considered approaches will make results more credible and lead to better incentives.
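To make the logic concrete, here is a minimal sketch in Python of the comparison-group step, using entirely hypothetical outcome data and effect sizes. It illustrates the estimation idea only; it is not a prescription for how any agency should run its evaluations.

```python
# Minimal sketch of the evaluation logic described above: compare an outcome
# for a funded (treated) group with a comparison group, before and after
# funding. All data, group sizes and effect sizes here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

n = 500  # hypothetical number of research teams per group
# The outcome could be any measured output (e.g. trainee placements or earnings).
pre_funded      = rng.normal(10.0, 2.0, n)
post_funded     = rng.normal(12.5, 2.0, n)   # assumed effect of funding
pre_comparison  = rng.normal(10.0, 2.0, n)
post_comparison = rng.normal(10.5, 2.0, n)   # shared time trend only

# Difference-in-differences: the change in the funded group minus the change
# in the comparison group, which nets out the shared trend.
did = ((post_funded.mean() - pre_funded.mean())
       - (post_comparison.mean() - pre_comparison.mean()))
print(f"Estimated funding effect (difference-in-differences): {did:.2f}")
```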

The resulting measurement would move the focus away from counting documents and towards tracing what scientists do and how their work translates into economic activity. The measure would focus on the ways in which funding steers scientists into particular research fields, and on how those scientists then transfer ideas. It would use automated approaches to collect data on both funded and non-funded fields of research, rather than relying on manual, burdensome and unreliable self-reports.
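As a toy illustration of what 'automated' could mean here, the sketch below tags a hypothetical award abstract with research fields using invented keyword lists; a real pipeline would rely on far richer text-mining of grant and administrative records.

```python
# Toy sketch of automated field tagging of award abstracts, in place of
# self-reports. The field names, keyword lists and example abstract are
# invented for illustration only.
FIELD_KEYWORDS = {
    "genomics":          {"genome", "rna", "sequencing"},
    "materials science": {"alloy", "polymer", "crystal"},
    "astrophysics":      {"galaxy", "cosmology", "supernova"},
}

def tag_fields(abstract: str) -> list[str]:
    """Return every research field whose keywords appear in the abstract."""
    words = set(abstract.lower().split())
    return [field for field, keys in FIELD_KEYWORDS.items() if words & keys]

print(tag_fields("We propose RNA sequencing of the tumour genome"))
# -> ['genomics']
```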

Let's consider how this approach of thinking, then measuring, might work in the real world to inform links between research and economic growth, and to improve incentives. Take the current imperative from both the US Congress and the Higher Education Funding Council for England (HEFCE) that the “impact” of grants be measured. A thinking-first strategy would suggest that grants should be seen as a set of investments that constitute a portfolio, rather than a set of unrelated projects. Evaluating every grant's success in isolation would be replaced by a risk-balanced portfolio approach. As such, some grants would surely fail. The results of these failures would be published and valued. The incentives would change from rewarding the publication of positive (and sometimes irreproducible) results to encouraging the publication of failures, and science would gain from the identification of 'dry' research holes2. As US inventor Thomas Edison liked to say, he didn't fail; he just found 10,000 ways that didn't work.
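A back-of-the-envelope sketch, with payoff and cost figures that are purely assumptions, shows why the portfolio view changes the verdict: most individual grants 'fail', yet the portfolio as a whole pays for itself.

```python
# Illustrative arithmetic for a risk-balanced portfolio of grants.
# All numbers are assumptions, not estimates from real grant data.
import random

random.seed(1)

def grant_payoff() -> float:
    """One hypothetical grant: most 'fail', a few yield large returns."""
    return 20.0 if random.random() < 0.1 else 0.0  # assumed 10% chance of a big payoff

portfolio = [grant_payoff() for _ in range(200)]  # a funder's hypothetical portfolio
cost_per_grant = 1.0

failures = sum(1 for payoff in portfolio if payoff == 0.0)
net_return = sum(portfolio) - cost_per_grant * len(portfolio)

print(f"{failures}/{len(portfolio)} grants 'failed', "
      f"yet the portfolio's net return is {net_return:.1f}")
```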

The intense focus on publications as a way to measure scientific output has led to three suboptimal outcomes. First, researchers hoard knowledge in order to be the first to publish new findings. Second, institutional structures incentivize lower-risk, incremental research2. And third, too many graduate students are produced who are then put into the academic holding tank of postdoctoral fellowships.

But the best way to transmit knowledge is through people3. Science would move forward more effectively by tracing the activities of people rather than publications, particularly if the focus is on regional economic development4. Treating the placement and earnings of graduate students and postdoctoral fellows as key outputs of investment, and their education as crucial for the adoption of new ideas, would result in their training being treated as valuable in its own right.

An excellent example of this type of investment is Cofactor Genomics, which was founded by graduate scientists working on the Human Genome Project at Washington University in St. Louis, Missouri. Rather than pursue an academic career, they used their expertise to create a company that uses genomics to develop RNA-based disease diagnostics and hired people they had met through grant-funded research. They saw that the technology had great commercial potential, which would have been difficult to pursue in an academic environment. The correct measure of this project's success was not the number of published articles it spawned, but the strength and vibrancy of the networks of human connections that it helped to create.

Establishing institutes is standard practice in many scientific domains — examples include the US National Center for Atmospheric Research and CERN, the European particle-physics laboratory, in physical sciences, and the Poverty Action Lab at the Massachusetts Institute of Technology in Cambridge in social science. To the credit of the US academic community, a cooperative of universities has established the Institute for Research on Innovation and Science (IRIS) at the University of Michigan in Ann Arbor. A partnership between IRIS and the US Census Bureau is, for the first time, building links between funding, the scientists it supports and subsequent entrepreneurship. Teams of scientists from 11 universities are beginning to develop the thoughtful approach to measurement that is urgently required.

Alas, similar institutes have not been established in Europe, Australia or New Zealand, despite researchers putting the building blocks together. Given that changing incentives is imperative for any country aiming to foster economy-driving innovation, I hope that this gap is quickly closed.