News

When is a scientific collaboration unfair?

A study enlists the h-index to try to find out why some research partnerships fizzle.

  • Gemma Conroy

10 July 2020

Credit: rudall30/Getty Images

Openness and expertise are key to successful collaborations, but it takes fairness to make them last.

A study of more than 170,000 co-author pairs defined fair collaborations as those in which each author’s h-index increased at the same rate. It found that these partnerships were more likely to be long-lasting than those in which only one author’s h-index increased.

The h-index, though controversial, is one of the most widely used metrics in academia for measuring a researcher’s productivity and impact: a researcher has an h-index of h if h of their papers have each been cited at least h times.
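For readers unfamiliar with the metric, here is a minimal sketch, in Python, of how an h-index can be computed from a list of per-paper citation counts (an illustrative helper, not code from the study):

    def h_index(citations):
        """Return the largest h such that the author has h papers
        with at least h citations each."""
        # Rank papers from most-cited to least-cited.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:
                h = rank  # the rank-th paper still has at least rank citations
            else:
                break
        return h

    # Example: three of these five papers have at least three citations each,
    # so the h-index is 3.
    print(h_index([10, 8, 3, 1, 0]))  # -> 3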

Understanding how fairness influences the longevity of collaborative work can help scientists choose their research partners more strategically, says the study’s lead author, Wei Wang, a computational social scientist at the University of Macau in China.

“There are many factors that may influence the collaboration between two scholars,” says Wang.

“We wanted to figure out which of these factors led to sustainable collaborations so that scholars can improve their strategy.”

How to measure fairness

Wang and his team analysed 172,680 co-author pairs who had published joint papers in journals of the American Physical Society, such as Physical Review Letters and Physical Review D.

To gauge fairness, the researchers developed a variation of the h-index that measures the number of citations each author received 10 years after their first three collaborations.
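The paper’s exact formulation is not reproduced in the article, but the comparison it describes can be sketched roughly as follows, assuming each author’s h-index gain over that 10-year window has already been computed (the category labels and thresholds below are illustrative assumptions, not the study’s code):

    def classify_collaboration(gain_a, gain_b):
        """Roughly label a two-author collaboration by each author's
        h-index gain over the study's window. Categories are a sketch;
        the paper's own definitions may differ."""
        if gain_a > 0 and gain_b > 0 and gain_a == gain_b:
            return "fair"       # both h-indices increase at the same rate
        if (gain_a > 0) != (gain_b > 0):
            return "one-sided"  # only one author's h-index increases
        return "other"          # neither gains, or both gain unequally

    # Illustrative use: one balanced pair, one lopsided pair.
    print(classify_collaboration(3, 3))  # -> 'fair'
    print(classify_collaboration(5, 0))  # -> 'one-sided'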

The longest-lasting collaborations were found to be those that were the most mutually beneficial for the authors’ h-index.

Such “fair” partnerships accounted for almost two-thirds of collaborations lasting eight years or longer, while roughly 60% of partnerships lasting for three years or less boosted only one author’s h-index.

“If two scholars cannot gain equal benefit (h-index increase) their collaborations will not last for a long time,” the authors write. “A good beginning is half the battle.”

The findings have been published in the Journal of Informetrics.

More than metrics

Alexander Petersen, a computational social scientist at the University of California, Merced, who was not involved in the study, says there are more accurate ways to measure fairness that don’t rely on h-index scores.

He says the long lag between when articles are published and when they are reflected in a researcher’s h-index can make the metric misleading for assessing how fair a collaboration is.

Metrics such as the h-index also do not take into account key factors such as differences in age, discipline, and expertise.

Instead, the focus should be on measures that reflect what each collaborator puts into the partnership, rather than what they get out of it, says Petersen.

For example, measuring how many projects researchers are juggling or how many collaborations they terminated in order to establish a new one could provide a clearer indication of whether each partner is putting in the same amount of effort, he says.

“Balance is the more relevant path to understanding fair partnerships,” says Petersen. “Not only does everybody make an effort, they also make sacrifices and compromises for the sake of the objective.”

While Wang agrees that there are many factors that play into long-term research partnerships, he says that a benefit of his approach is that it provides an objective way of understanding why some collaborations last longer than others.

“Tracing the influences of balance will be a promising future research direction on exploring collaboration sustainability if they can be quantified, not just qualified,” says Wang.