ERC president Jean-Pierre Bourguignon expects a new evaluation process to showcase the value of basic research. Credit: Jakob Polacsek/World Economic Forum

Last month, neuroscientist Ileana Hanganu-Opatz began a risky project with a risqué name: Psychocell. With a grant of €2 million (US$2.2 million), she is studying whether a single type of neuron causes a miswiring in the developing brain that has been linked to psychiatric disease. But it may turn out that no ‘psychocell’ exists, or that her mouse models are unsuitable.

Supporting such blue-skies research is the mission of her funder, the prestigious European Research Council (ERC), which launched in 2007 to raise the quality of European science. “No one but the ERC would have funded such a high-risk project,” says Hanganu-Opatz, from the University of Hamburg, Germany.

Now, the council, which sits within the European Union’s Framework funding programmes and has a €1.7-billion budget this year, has embarked on an unusual exercise: retrospectively evaluating the success of the projects it funds. Most funding agencies, by contrast, assume that the assessment they use to select projects in the first place is sufficient.

“Virtually no basic research funding agency tries retrospectively to analyse its own performance and impacts,” says Erik Arnold, chair of Technopolis, a European research and innovation consultancy headquartered in Brighton, UK. “It would be nice if the ERC effort would inspire others to do so.”

On 26 July, at the EuroScience Open Forum in Manchester, UK, ERC president Jean-Pierre Bourguignon announced the results of a pilot investigation of 199 completed projects, almost three-quarters of which were deemed to have resulted in a scientific breakthrough or major advance (see ‘To science and beyond’).

“We push both scientists and grant-application reviewers to take a certain risk, so it is important to know that they are actually taking risks — and that we are selecting the right projects,” says Bourguignon.

The ERC now plans to evaluate a selection of completed projects each year and to keep refining its methodology. Bourguignon hopes that this will help the council during discussions with politicians. “We want evidence-based arguments to show that bottom-up, curiosity-driven research is valuable to society,” he says. The council will have to lobby to keep its generous funding in the next Framework programme, due to begin in 2021.

The pilot evaluation rated projects that were among the first to be funded by the council, mostly in 2007 and 2008. It assigned eight projects each to 25 three-person expert groups.

The ERC gave the experts a bibliometric analysis of the publications from each project, but asked them to use their professional judgement to form an overall view of each one.

They found that 43 had led to a scientific breakthrough, 99 had generated a major advance — and only 7 had had no appreciable scientific output. That indicates an appropriate level of risk and ambition, says Bourguignon.

The evaluators also judged that almost 10% of projects had already had a large impact on the economy, policymaking or other aspects of society, and that around one-quarter were likely to do so in the future.

“It’s a delight to see a qualitative approach,” says science-policy specialist Ben Martin at the University of Sussex in Brighton. “Bibliometrics are misleading in isolation — but too often used this way.” Bourguignon says that many of the evaluators, who remain anonymous, struggled with the unfamiliar task of subjectively declaring research a “scientific advance”.

The study has limitations. Two experts in each group had served on ERC grant-awarding panels. None of the projects that they had helped to select was included in the analysis, but the process could still appear less than objective, says Arnold. Martin says the terms used to categorize the projects may be interpreted differently across disciplines. “As a social scientist, I can tell you that we don’t describe our work in terms of ‘breakthroughs’.”

Bourguignon agrees that the small study was not optimally designed. But the ERC has since solicited independent comments on the methodology, and an ongoing evaluation of a further 250 projects has been fine-tuned to let evaluators across disciplines report consistently.

In the pilot review, evaluators also stressed the ERC’s impact on individual careers, something that Hanganu-Opatz has experienced at first hand. Her university gave her tenure on 18 July, and three other universities made her offers. “The visibility you get when you win an ERC grant is embarrassing,” she says.