The value that Australia places on publication quality over quantity has elevated it into the top echelon of science. Can it now improve its flagging track record in commercialization?
It is often said that when it comes to research excellence, Australia punches well above its weight. Despite a population of only 23 million, the country ranked 12th in the global Nature Index (see go.nature.com/1dbcsr), which tracks the contributions of countries and institutions to high-quality scientific journals. This impressive performance can be partly attributed to a research-output measure introduced in 2010 to encourage quality over quantity. The Excellence in Research for Australia (ERA) metric looks at the breadth of research from universities and evaluates the quality against international standards. “The ERA exercise, focusing on quality of the outputs at universities, has been very beneficial to the university system in Australia,” says Aidan Byrne, chief executive of the Australian Research Council in Canberra, which administers the framework. “It has been a focus that all of the universities in Australia positively responded to, and it added to the strength of the Australian university system.”
But in the shadow of Australia's research performance lurks the country's poor track record for translating that research into economic impact. The 2015 Global Innovation Index (S. Dutta et al. (eds) The Global Innovation Index 2015: Effective Innovation Policies for Development; Cornell University, INSEAD & WIPO, 2015) ranked Australia 72 out of 141 countries for innovation efficiency. The country places respectably high — number 17 globally — on the overall innovation-index ranking, which takes into account factors such as regulatory environment, investment, education and general infrastructure.
“Our challenge now is to look at what is the next step,” says Byrne, namely tailoring university research “to the benefit of the Australian commercial business and community more broadly”.
After more than a decade of discussion, analysis, pilot studies and initiatives thwarted by political change, Australia is embarking on a mission to bring its reputation for research commercialization into line with its track record for research quality. Buoyed by support from the federal government's National Innovation and Science Agenda, announced in late 2015, and recommendations from last year's review of research policy and funding, a multi-institution committee is developing a system to measure the amount of research engagement, interaction, knowledge transfer and collaboration between universities and potential public- and private-sector users of the research.
This process, called Research Engagement for Australia (REA), began in 2014, when the Australian Academy of Technology and Engineering (ATSE) began exploring ways to measure research engagement. The project's steering committee reviewed many options, but concluded that calculating the amount of money that the research attracted from the end user was the most suitable, says ATSE president and chair of the REA steering committee Peter Gray. “Dollars are auditable and they are a true measure of collaboration,” he says. “It's a good independent measure of the degree of commitment by the end user to the collaborative research programme.”
The measured income includes money from certain competitive grants, government contracts, industry contracts, funding from philanthropic groups, and money earned from participating in collaborative endeavours, such as one of the government's Cooperative Research Centres.
Income seems well suited as a measure of commercialization. But if income is the numerator, what is the denominator? The committee initially considered three metrics against which the figure could be compared: full-time-equivalent staff hours in that field, total national activity in that field, and the university's total operating income.
Gray says that the committee is leaning towards using just the latter two, acknowledging that full-time equivalent hours “are a pretty rubbery number”. The advantage of measuring engagement in dollars is that all the necessary data are already collected and reported through the Higher Education Research Data Collection and the ERA programme.
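In ratio form, the proposal amounts to dividing end-user research income by one of the remaining candidate denominators. The sketch below illustrates the arithmetic with invented figures; the function name and all dollar amounts are hypothetical, chosen only to show how the two denominators would yield different engagement shares for the same income.

```python
# Illustrative sketch of the proposed REA ratio, using invented figures.
# Numerator: research income attracted from end users (industry contracts,
# government contracts, philanthropic funding, collaborative schemes).
# Candidate denominators still under consideration: total national activity
# in the field, or the university's total operating income.

def engagement_ratio(end_user_income: float, denominator: float) -> float:
    """Return end-user research income as a share of the chosen denominator."""
    return end_user_income / denominator

# Hypothetical figures in AUD millions, purely for illustration.
end_user_income = 12.0           # income attracted from end users
national_field_activity = 480.0  # total national activity in the field
operating_income = 300.0         # the university's total operating income

share_of_national = engagement_ratio(end_user_income, national_field_activity)
share_of_operating = engagement_ratio(end_user_income, operating_income)

print(f"Share of national field activity: {share_of_national:.1%}")  # 2.5%
print(f"Share of operating income: {share_of_operating:.1%}")        # 4.0%
```

The same income figure produces different pictures depending on the denominator, which is why the choice between the two remaining candidates matters for how universities compare.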
To look at the resources that such a programme might demand, the ATSE ran a pilot of the metric with universities in Queensland and South Australia. Gray says that because the new programme requires only a handful more details than are routinely collected, it imposes little additional burden on the universities.
Beyond the numbers
Not everyone is satisfied that income alone is enough to demonstrate a university's research impact in a particular field. John Dewar, vice-chancellor at La Trobe University in Melbourne, and chair of the six-institution consortium Innovative Research Universities, says that there is a need for qualitative as well as quantitative assessment that will show not only income but also impact. The approach proposed by the ATSE measures the amount of money that research attracts from industry. “But we don't think that's the link of the chain that we need to improve,” says Dewar. “We think it's the second link — taking ideas and innovations and making something useful.” The consortium is therefore arguing for the inclusion of panel-based assessments of the value of university research for end users. “We don't see any alternative to some form of qualitative data where you talk to your industry partner and ask what impact has this had on your business or your sector of the economy,” Dewar says. The consortium has suggested adopting a case-study-based method such as that used in the UK's Research Excellence Framework.
But this idea elicits a nervous reaction from some. In 2014, the assessment of 6,975 impact case studies at 154 UK universities cost £246 million (US$347 million). “The case-study approach is very, very expensive and time-consuming, and it's also very difficult to track outcomes back to either an individual or institution,” says Margaret Sheil, provost at the University of Melbourne and a member of the REA steering committee.
The REA's originators are listening to both sides. Gray says that the ATSE is keen to avoid the case-study approach, but is open to a provision for qualitative data, particularly if an institution wants to highlight an especially fruitful engagement outcome. “From the pilot study, we thought we probably should give people the opportunity, if they've had a big success, to write a little vignette about why they have been successful,” Gray says.
Another concern about focusing almost exclusively on income is how this will work for the humanities, arts and social sciences, for which engagement and impact might not be as easy to quantify as they are in science, technology, engineering and maths. “When you look at just about any indicator, there are very strong discipline variations,” says Byrne. The Australian Research Council has been tasked with the development and piloting of the assessment in consultation with the research sector, and Byrne says that it intends to take an approach similar to that taken for the development of the ERA metric. The goal, he says, is “to get a sense of what are the most significant drivers for your discipline that will tell you something about engagement and impact”.
The objective of the REA programme is for universities to value achievements in the commercialization of research excellence alongside publication successes. This is a similar goal to that of the ERA metric, which encouraged academics to focus on publication quality as well as quantity by providing a regular, nationwide 'stock take' of universities' research strengths and weaknesses.
Evidence of the success of the ERA can be seen in the consistent improvement of Australia's research rankings since the framework's introduction. Some policymakers think that the REA can do the same for the other half of the innovation equation. The metric's impact will lie not in its influence on funding but, as with the ERA, in the message it sends to the academic sector about what the government values.
“What a measure like REA is designed to do is counterbalance at the institutional level; to say that we also need to ensure that we've got engagement happening,” says Sheil. Although the intention is to change the institutional mindset, the hope is that this signal will be heard at all levels of academia, particularly among younger academics and students, who might be more willing to invest their time and energy in a commercial endeavour and risk a gap in their publishing record. A measure like the REA could encourage universities to support a broader range of career trajectories, including commercialization and industrial collaboration, for their staff.
But Australia still faces the challenge of a relatively risk-averse and conservative commercial ecosystem, which lacks the kind of deep pockets found in other parts of the world, Sheil says. “We don't have a Silicon Valley where you can be an academic, go and try your spin-off, then come back to your university; we don't have the venture capital that's attracted by that,” she says. Financial capital in Australia is tied up in property, mining and retirement funds, and the country has relatively few private investors. But if the REA programme can impel universities and academics to improve their engagement with industry, and translate research into commercial success despite these constraints, it could establish a model for many other countries that face similar challenges.
These are early days for the REA, but the momentum is strong. In its National Innovation and Science Agenda, the government singled out the need for a measure such as the REA to be part of a national assessment of university research performance.
Although the aim is for the framework to readjust the historical focus on publication record, there is a risk that too much emphasis on research commercialization could jeopardize funding for fundamental research. But those involved in its development are determined not to compromise Australia's track record in basic research, stressing that this will require a deft balancing act.
Perhaps a light touch will be enough. “Universities are very good at responding to even the smallest signal from government,” Dewar says. “The signal being sent is of modest changes, but over time that could have a quite significant ripple effect across the sector.”