News

Better measures needed for research cooperation

Researchers wrestle with a measure of collaboration increasingly used to assess the impact of their work. 

  • Anthea Lacchia

Credit: PhotoDisc/ Getty Images


23 June 2017

ANALYSIS


Tracking the names and affiliations of co-authors on a research paper is the most widely used way of measuring how scientists work together, and it is the approach taken by the Nature Index. But researchers warn against the pitfalls of using co-authorship as a proxy for research cooperation.

To assess the impact of work they fund, agencies increasingly consider scientific collaboration. Many measure this by recording the number of authors and their affiliations on a published research paper. The assumption is that two or more authors who publish together are collaborating.

The Nature Index, for example, tracks the affiliations of authors on papers published in 68 high-quality journals. From these data, it derives a measure of collaboration (the collaboration score) between pairs of institutions and countries.
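The index's precise formula is not given here, but the general idea of turning affiliation data into a pairwise collaboration measure can be sketched. The snippet below is a minimal illustration in Python, assuming invented institution names and a made-up pairwise_collaboration helper that simply counts the papers on which each pair of institutions appears together.

```python
from itertools import combinations
from collections import Counter

# Toy records: each paper lists the institutions its authors are affiliated with.
# (Invented data; real indexes work from curated affiliation metadata.)
papers = [
    {"id": "p1", "institutions": {"Univ A", "Univ B"}},
    {"id": "p2", "institutions": {"Univ A", "Univ B", "Inst C"}},
    {"id": "p3", "institutions": {"Univ A"}},
]

def pairwise_collaboration(papers):
    """Count how many papers each pair of institutions has co-authored.

    A crude proxy: every paper whose author list spans two institutions
    adds one to that pair's tally, regardless of how many authors each
    institution contributed.
    """
    scores = Counter()
    for paper in papers:
        for pair in combinations(sorted(paper["institutions"]), 2):
            scores[pair] += 1
    return scores

print(pairwise_collaboration(papers))
# e.g. Counter({('Univ A', 'Univ B'): 2, ('Inst C', 'Univ A'): 1, ('Inst C', 'Univ B'): 1})
```

A production measure would more likely weight each pair by the institutions' shares of authorship rather than counting whole papers; the sketch deliberately ignores that refinement.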

Giovanni Abramo and Andrea D’Angelo, based at the National Research Council of Italy’s Laboratory for Studies in Research Evaluation, routinely use co-authorship to study patterns of research collaborations and productivity in the Italian academic system. “It’s a reproducible, simple method and can be used for large-scale studies,” they say.

But many researchers question whether co-authorship reflects real cooperation, let alone real impact.

The problem, says Andrew Plume, director for market intelligence at the scientific publishing company Elsevier, is that co-authorships capture only one of many outcomes of research collaboration: a publication in a peer-reviewed journal. “Other types of research collaboration, such as those which lead to new applications in industry, may not be published in the literature and so remain invisible to this approach.”

Co-authorship also fails to account for the wider network of people with whom researchers interact in producing a scientific paper, says sociologist Hajdeja Iglič from the University of Ljubljana in Slovenia. Researchers often exchange ideas in ways that do not result in co-authorship, such as at informal conference dinners or through written requests to access lab materials.

Furthermore, co-authorships do not reflect the time or effort researchers spend collaborating, or the nature of their partnerships, says Iglič.

Barry Bozeman, who studies public management and research policy at Arizona State University, has exposed some bad practices in naming co-authors of a paper.

In a book that will be published this year, Strength in Numbers: The New Science of Team Science, Bozeman reports on a survey of 640 academic researchers at universities in the United States: 7% of respondents revealed that a co-author on their most recently published paper did not deserve co-authorship on the basis of their contribution, and 1% said that someone deserved co-author credit but did not receive it.

Surveys and interviews can fill the gaps left by co-authorship data, providing information on how much time researchers spend collaborating and the reasons behind their partnerships. But these methods are time-consuming and mostly yield qualitative data, says John Hogenesch, a pharmacologist at the University of Cincinnati.

With a shortage of options, Hogenesch is exploring other ways of using co-authorship data to measure research collaborations. In 2010, he developed a method that uses co-authored papers and grant applications to assess the strength of networks between research centres, as opposed to individual researchers.

In 2014, in a paper in JAMA Neurology, he and his co-authors further applied the method to 27 Alzheimer’s disease centres in the United States. They found an increase in the frequency of collaborations both within and between the centres in the two decades since their establishment.
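The method itself is not detailed here; as a rough illustration only, the sketch below uses invented records and a hypothetical collaboration_trend helper to show how co-authorships might be tallied per year as within-centre (two or more authors at the same centre) or between-centre (authors at different centres), the kind of trend the study reports.

```python
from collections import Counter, defaultdict

# Invented records: each paper carries its publication year and, per author,
# the research centre that author is affiliated with.
papers = [
    {"year": 1995, "author_centres": ["Centre 1", "Centre 1"]},
    {"year": 1995, "author_centres": ["Centre 1", "Centre 2"]},
    {"year": 2010, "author_centres": ["Centre 1", "Centre 2", "Centre 2"]},
]

def collaboration_trend(papers):
    """Per year, count papers showing a within-centre collaboration
    (two or more authors at the same centre) and papers showing a
    between-centre collaboration (authors at two or more centres)."""
    trend = defaultdict(lambda: {"within": 0, "between": 0})
    for paper in papers:
        counts = Counter(paper["author_centres"])
        if any(n >= 2 for n in counts.values()):
            trend[paper["year"]]["within"] += 1
        if len(counts) >= 2:
            trend[paper["year"]]["between"] += 1
    return dict(trend)

print(collaboration_trend(papers))
# e.g. {1995: {'within': 1, 'between': 1}, 2010: {'within': 1, 'between': 1}}
```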

“There’s no perfect metric,” concedes Hogenesch, who likens the use of co-authorship networks as a measure of collaboration to Winston Churchill’s definition of democracy: “it's the worst form of government, except for all the others.”