News

Five better ways to assess science

Hong Kong Principles seek to replace ‘publish or perish’ culture.

5 August 2020

Benjamin Plackett

Credit: sorbetto/Getty

The practice of valuing quantity above quality in research needs to change if trust in science is to be maintained.

That is the conclusion of a paper introducing a new set of guidelines for academic assessment called the Hong Kong Principles.

Named after a 2019 conference in the city where they were first discussed, the guidelines are designed to incentivise more rigorous science.

The five Hong Kong Principles:

  • Assess responsible research practices
  • Value complete reporting
  • Reward the practice of open science (open research)
  • Acknowledge a broad range of research activities
  • Recognise other essential tasks, such as peer review and mentoring

“I’ve spent much of my career doing systematic reviews,” says one of the authors, David Moher, a professor of epidemiology and public health at the University of Ottawa in Canada.

“I was coming across a lot of science that I couldn’t use in my reviews because the studies were badly reported or badly conducted, but people were still advancing their careers off the back of it.”

This experience led him to propose the principles.

“Every metric can be gamed,” says Ivan Oransky, co-founder of Retraction Watch, a blog that has created a public database to catalogue retracted research papers.

“When you don’t take a holistic view of accomplishments, then you’ll always have this problem.”

The guidelines have been published in PLOS Biology.

Bad incentives lead to bad habits

Poor practices are a natural consequence of the rush to publish or perish, says Moher.

A 2018 analysis of Scopus data identified more than 9,000 scientists who published, on average, more than one paper every five days.

While there was no evidence that these scientists were doing anything inappropriate, the finding did suggest that some fields or research teams employ definitions of authorship that result in ‘hyperprolific’ authors.

“We’re seeing an inflation of the number of authors per paper,” says John Ioannidis, professor of epidemiology and population health at Stanford University, who carried out the study.

“Collaboration is important, but it’s possible that some of this is due to gaming of the system where they haven’t really contributed very much to the paper.”

While authorship credit is being stretched in this way, the outcomes of medical trials are being changed on the fly.

A 2013 review of medical trials found that between 40% and 62% of studies changed their stated primary outcome during the course of the trial, a practice that gives rise to outcome reporting bias.

Under the Hong Kong Principles, greater transparency around data and research practices should contribute to more robust results.

The principles recommend that funders and universities assess whether researchers publish all of their data, rather than just selected charts and graphs, so that other researchers can verify that the results can truly be reproduced.

The principles make recommendations for how scientists interact with their communities, for example, whether medical trials ask patients for their opinions.

They also make recommendations on how engaged researchers should be with peripheral activities, such as peer review.

Outside academia, these sorts of extra-curricular activities are highly valued in job interviews, says Anne-Marie Coriat, head of research landscape at the Wellcome Trust and a co-author of the Hong Kong Principles.

They allow an applicant to show that they are committed to their field, says Coriat, who would like to see the research world similarly value them.

Taking DORA forward

In 2012, the San Francisco Declaration on Research Assessment (DORA) called out the need to improve research appraisal methods and recommended that journal-based metrics such as impact factors not be used as a proxy for quality.

“We complement what DORA does,” says Moher. “But with a greater focus on implementing our recommendations.”

Coriat, meanwhile, doesn’t expect the old ways to vanish any time soon.

“It’s not going to be solved overnight,” she concedes. But as awareness of the issue grows, she expects more institutions to change their policies – when funders speak, universities listen.

However, Ioannidis warns that no system is impossible to subvert.

“I can think of ways that each of these features could be gamed. If you say you want more people to review rather than just write papers, then you can easily foresee a situation where you have zillions of poorly reviewed articles,” he says.

“I’m very supportive of these recommendations, but we need to keep an eye on them and gather empirical evidence of what happens when institutions do adopt these changes.”

Wider promotions criteria

Macquarie University in Australia is one such institution. It changed its policy on academic promotions four years ago. Candidates must now demonstrate that they can score points across a variety of areas, including teaching, interdisciplinary collaboration, engaging with the media and mentorship, alongside traditional research.

More Macquarie University academics now apply for promotions as a result, says Lesley Hughes, a pro vice-chancellor at the university.

“Applications from women have increased by 87% and from men by 49%. I thought it was a blip, but that increase has been maintained.”

Anecdotally, Hughes says that committee roles are more easily filled and the faculty in general seem to be pleased with the new system.

“We all know of people who quite selfishly only go after publishing as many papers as possible, and they tend to progress and do better,” she says.

“But it’s now clear that to get into the professoriate, you have to do more than just focus on your own research.”