WORLD VIEW

Let’s move beyond the rhetoric: it’s time to change how we judge research

Five years ago, the Declaration on Research Assessment was a rallying point. It must now become a tool for fair evaluation, urges Stephen Curry.

Declarations are bound to fall short. The 242-year-old United States Declaration of Independence holds it self-evident that “all men [sic] are created equal”, but equality remains a far-off dream for many Americans.

The San Francisco Declaration on Research Assessment (DORA) is much younger, but similarly idealistic. Conceived by a group of journal editors and publishers at a meeting of the American Society for Cell Biology (ASCB) in December 2012, it proclaims a pressing need to improve how scientific research is evaluated, and asks scientists, funders, institutions and publishers to forswear using journal impact factors (JIFs) to judge individual researchers.

DORA’s aim is a world in which the content of a research paper matters more than the impact factor of the journal in which it appears. Thousands of individuals and hundreds of research organizations now agree and have signed up. Momentum is building, particularly in the United Kingdom, where the number of university signatories has trebled in the past two years. This week, all seven UK research councils announced their support.

Impact factors were never meant to be a metric for individual papers, let alone individual people. Each is an average of the skewed distribution of citations accumulated by papers in a given journal over two years. Not only do these averages hide huge variations between papers in the same journal, but citations are imperfect measures of quality and influence. High-impact-factor journals may publish a lot of top-notch science, but we should not outsource the evaluation of individual researchers and their outputs to seductive journal metrics.

Most agree that yoking career rewards to JIFs is distorting science. Yet the practice seems impossible to root out. In China, for example, many universities pay impact-factor-related bonuses, inspired by unwritten norms of the West. Scientists in parts of Eastern Europe cling to impact factors as a crude bulwark against cronyism. More worryingly, processes for JIF-free assessment have yet to gain credibility even at some institutions that have signed DORA. Stories percolate of research managers demanding high impact factors. Job and grant applicants feel that they can’t compete unless they publish in prominent journals. All are fearful of shrugging off the familiar harness.

So, DORA’s job now is to accelerate the change it called for. I feel the need for change whenever I meet postdocs. Their curiosity about the world and determination to improve it burn bright. But their desire to pursue the most fascinating and most impactful questions is subverted by our systems of evaluation. As they apply for their first permanent positions, they are already calculating how to manoeuvre within the JIF-dependent managerialism of modern science.

There have been many calls for something better, including the Leiden Manifesto and the UK report ‘The Metric Tide’, both released in 2015. Like DORA, these have changed the tenor of discussions around researcher assessment and paved the way for change.

It is time to shift from making declarations to finding solutions. With the support of the ASCB, Cancer Research UK, the European Molecular Biology Organization, the biomedical funder the Wellcome Trust and the publishers the Company of Biologists, eLife, F1000, Hindawi and PLOS, DORA has hired a full-time community manager and revamped its steering committee, which I head. We are committed to getting on with the job.

Our goal is to discover and disseminate examples of good practice, and to boost the profile of assessment reform. We will do that at conferences and in online discussions; we will also establish regional nodes across the world, run by volunteers who will work to identify and address local issues.

This week, for example, DORA is participating in a workshop at which the Forum for Responsible Metrics — an expert group established following the release of ‘The Metric Tide’ — will present results of the first UK-wide survey of research assessment. This will bring broader exposure to what universities are thinking and doing, and put the spotlight on instances of good and bad practice.

We have to get beyond complaining, to find robust, efficient and bias-free assessment methods. Right now, there are few compelling options. I favour concise one- or two-page ‘bio-sketches’, similar to those rolled out in 2016 by the University Medical Centre Utrecht in the Netherlands. These let researchers summarize their most important research contributions, plus mentoring, societal engagement and other valuable activities. This approach could have flaws. Perhaps it gives too much leeway for ‘spin’. But, as scientists, surely we can agree that it’s worth doing the experiment to properly evaluate evaluation.

This is hard stuff: we need frank discussions that grind through details, with researchers themselves, to find out what works and to forestall problems. We need to be mindful of the damage wrought to the careers of women and minorities by bias in peer review and in subjective evaluations. And we need to join in with parallel moves towards open research, data and code sharing, and the proper recognition of scientific reproducibility.

Declarations such as DORA are important; credible alternatives to the status quo are more so. True success will mean every institution, everywhere in the world, bragging about the quality of its research-assessment procedures, rather than the size of its impact factors.

Nature 554, 147 (2018)

doi: 10.1038/d41586-018-01642-w

Competing Interests

I am the unpaid chair of the steering group of DORA, which is the subject of this piece.