Jane Harding is deputy vice-chancellor for research and professor of neonatology at the University of Auckland, which is New Zealand's best-funded university under the Performance-Based Research Fund. She discusses the country's approach to assessing science and measuring impact, and describes why she prefers a model that grades the individual, not the research group.
What do researchers in New Zealand think about the Performance-Based Research Fund (PBRF)?
Most researchers see the PBRF as an inevitable chore. Still, although they find it time-consuming and distracting to prepare a portfolio, they see advantages in having regular external validation of research performance — in our case, every six years.
Although many researchers believe that the PBRF is a rather imperfect measure of research quality, it is, by and large, a useful thing. It would be difficult to find an alternative way to distribute the money to the most research-intensive groups. And there is evidence that the PBRF has improved the quality of research.
What is your involvement with the PBRF?
I am responsible for running the whole process within the University of Auckland. The early stages are mostly about preparation and education — making sure that everybody knows what they have to do. Then we get into the phase of assembling individual portfolios. We run an internal review of the draft portfolios before they get polished for final submission, and when the assessment is completed, we manage the results and make sure that they get to the right people.
It is a very big, complicated and continuous process involving a lot of human resources work: we have close to 2,000 portfolios to submit by a specific date. At its peak it can occupy about half of my time for two to three weeks.
This seems like a significant investment for the university.
Yes — but it is also very important to us. The PBRF contributes about 8% of our total budget, which is a significant chunk of funding. After the last PBRF round, universities in New Zealand made a rough estimate of what the process costs them; our best estimate was less than 3% of the total PBRF income over the six-year period, which is not a huge overhead.
What is the grading experience like for researchers?
From an institutional point of view, the distribution of individual grades — A, B, C or R (for research inactive) — has little overall effect on us in terms of dollars. If one of our researchers is awarded a B instead of an A, that is usually balanced somewhere else in the institution by somebody getting an A instead of a B. Such variations don't make much of a difference to the profile of the institution, but they make a huge difference to the individuals. Getting an assessment of a B when you thought you might have been eligible for an A is a huge disappointment. People will inevitably interpret the grades as defining something about themselves; you can't stop them from taking it personally.
Have any categories of researchers been disadvantaged by the assessment?
The assessment is experience-dependent, which makes it difficult for a junior researcher to get an A grade. A brand-new postdoctoral researcher is not going to have a strong research portfolio. The PBRF has a category for new and emerging researchers, in which the threshold for getting a C is much lower, and this does mitigate some of the disparity.
You could even argue that researchers with less experience benefit more from the PBRF because there is assistance at an institutional level: the PBRF provides institutions with funds to specifically support supervision for research degrees. In our university, the PBRF has contributed to an increase in support for less-experienced researchers.
Does the PBRF have a bias against specific research fields or types of output?
That is what everybody worries about, but I don't think it is a reality. You have to trust that the reviewers can assess the different kinds of research in an appropriate way. A few concerns have been raised about the criteria for 'world-class' research, which could disadvantage disciplines focused on indigenous research in local communities. But any bias would result from a misconceived equating of world-class research with international research. You can do world-class research on New Zealand topics.
There is also a minor concern that some types of research, for example commercially relevant work with private companies, might be discouraged because they do not necessarily result in outputs that can go into a portfolio.
But overall, I don't think that the PBRF has changed the nature of scientific inquiry. It does create pressure to produce outputs, which means that people who would like to sit and spend 15 years writing a book are not going to do well in the PBRF, but they won't do well in any environment that is focused on research quality.
Overall, more than 10% of funding to universities in New Zealand comes from the PBRF. This is much lower than the 25% allocated by a similar programme in the United Kingdom, but higher than the 2% allocated in Norway. Do you think that enough money is distributed through the PBRF?
The amount distributed through the PBRF — NZ$262.5 million (US$224.2 million) in 2013 — is enough to provide a significant incentive but not enough to cover the costs of the research that it is designed to support or to ensure the highest quality research. However, the international comparison comes down to how other components of the system are funded. Universities in New Zealand are seriously underfunded by any measure. We have one of the lowest funding rates per student compared to other OECD [Organisation for Economic Co-operation and Development] countries. And our expenditure on research and development as a proportion of GDP is about half the OECD average.
As a result of this shortfall, researchers in New Zealand spend a substantial amount of time seeking funding. We also face difficulties in recruiting and retaining good researchers. Even when a world-leading professor is interested in coming to New Zealand and accepts our salary levels, he or she is often discouraged by the lack of research funding and might choose not to come here. That is a major disadvantage. More funding through the PBRF would help to support more research, but would not make up for the serious underfunding across the sector.
How does New Zealand's approach compare with other peer-review-based models such as in the United Kingdom?
The difference in New Zealand is that the assessment is done at the level of the individual as opposed to the research group. Arguably, an individual-based system could lead to selfish behaviour because there is no direct incentive to work collaboratively and to support a team or more junior researchers.
In New Zealand, this is counterbalanced by requiring that portfolios include not only research outputs, but also evidence for the section called Contribution to the Research Environment — a category that covers activities such as engaging in peer review activities, leading collaborative groups, supervising postgraduate students and mentoring early career researchers. It would be very difficult to apply the individual portfolio model to a larger system because of the scale of the assessment.
I prefer the individual model. The UK process requires gathering all of the individual material and then assembling that into an aggregate submission, so it seems to be a lot of additional work for not a lot of additional gain. My colleagues in the UK talk about their universities employing people full-time just to write the submissions.
And how does it compare with indicator-based models like those in Denmark and Australia?
Any peer-review process is vulnerable to the vagaries of individuals. Assessments can vary based on who is on the panel, how well they know the subject, their own personal prejudices, as well as many other unquantifiable factors such as how well the portfolios have been written. The system is also expensive, because each individual portfolio in New Zealand needs to be prepared and assessed.
A metrics-based system is much cheaper and simpler to run and would produce almost the same outcomes if one were simply talking about allocation of the money and alignment of institutional goals to research quality objectives. But New Zealand's approach of submitting individual portfolios brings the incentives back to each individual staff member in a much more direct way than does submission of metrics at the institutional level. There is also the issue that the metrics themselves, rather than research quality, can become the target — and the lack of peer review means that different disciplines might be treated differently.
Should the PBRF include measurements of impact, similar to the United Kingdom's new Research Excellence Framework?
It is challenging to get a single system to measure two different things. If you consider research quality and research impact to be separate things, then you need separate processes. It is difficult to measure impact — and expensive in terms of the effort required to assemble the evidence.
Introducing new measurements would be useful if they created incentives for academics to increase the impact of their research, but impact is so closely related to research quality that I am unconvinced as yet that a separate assessment is worth the enormous cost. We will learn from the UK's attempt to assess impact on a national scale.