The focus on impact of published research has created new opportunities for misconduct and fraudsters, says Mario Biagioli.
When scientists misbehave, the culture of ‘publish or perish’ is often blamed. Some researchers cut corners, massage data and images or invent results to secure academic papers and the rewards that come with them. This is rightly regarded as misconduct. But there is a new class of bad behaviour — one that is driven by a related but different pressure: ‘impact or perish’.
It is no longer enough for scientists to publish their work. The work must be seen to have an influential shelf life. This drive for impact places the academic paper at the centre of a web of metrics — typically, where it is published and how many times it is cited — and a good score on these metrics becomes a goal that scientists and publishers are willing to cheat for.
Collectively, these new practices don’t seek to produce articles that are based on fraudulent evidence or claims. Rather, they use fraudulent means to secure their publication, enhance their impact and inflate the importance of those who write them. They are on the march — and scientists no longer have to look far to find them. News about research now includes regular reports of authors who supply fake e-mail addresses for suggested peer reviewers. They then use those addresses to submit reports that are supportive enough to ensure that the paper is published. ‘Review and citation’ rings go a step further, trading favourable fake reviews for citations to the reviewer’s work. Others hack publisher databases to garner more invitations to review papers, and with them more chances to insert citations to their own work.
All metrics of scientific evaluation are bound to be abused. Goodhart’s law (named after the British economist who may have been the first to announce it) states that when a feature of the economy is picked as an indicator of the economy, then it inexorably ceases to function as that indicator because people start to game it.
What we see today, however, is not just the gaming of science metrics, but the emergence of a new kind of metrics-enabled fraud, which we can call post-production misconduct. It seems to be as widespread as other forms of misconduct: at least 300 papers have already been retracted because their peer review had been tampered with.
A curious feature of this kind of misconduct is that the work itself — the science reported in the paper — is usually not in question. Those responsible for this kind of post-production misconduct seek to extract value not from the article itself, but from its citations. From their point of view, it does not matter whether the article is ever read by a scientist, only that its citations will be harvested by bots.
This means that unlike data fraud and other forms of conventional misconduct, post-production misconduct does not necessarily pollute the scientific record with false results. But it does erode the credibility of the publication system. And it is more common in emerging countries, perhaps because universities there place the most emphasis on metrics to quickly become globally visible.
How can it be tackled? Post-production misconduct is less likely to be the work of lone individuals — say, a hyper-productive protégé operating under the protection of an established mentor who is unwilling to ask too many questions — and increasingly emerges from collaborations. As such, its traces are usually beyond the reach of peer review, which is itself often a target of these fraudulent schemes.
The exposure of citation and peer-review rings has generally been down to data analysis — of the wording of reviews, review turnaround times, citation patterns and the mutual relationships between authors and reviewers across different publications. Much of this information can be mined only by teams of investigators poring over journal databases. But publishers consider this type of information proprietary, so when irregularities are found and journals retract articles, they typically offer little detail. After all, these investigations expose weaknesses in the publishers’ own systems and services. (That’s why the new breed of grass-roots watchdogs such as Retraction Watch and PubPeer is so important.)
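The kind of pattern such teams look for can be illustrated with a minimal sketch, not drawn from the article itself: given records of which author cites which, flag pairs of authors whose citations to one another are frequent and reciprocal. The data and the threshold below are entirely hypothetical, and real investigations combine many more signals, such as review wording and turnaround times.

```python
from collections import Counter
from itertools import combinations

# Hypothetical citation records: (citing_author, cited_author).
# Real analyses would mine these from journal databases.
citations = [
    ("alice", "bob"), ("bob", "alice"), ("alice", "bob"),
    ("bob", "alice"), ("carol", "dave"), ("dave", "erin"),
]

counts = Counter(citations)
authors = {a for pair in citations for a in pair}

# Flag author pairs whose mutual citations are both frequent and
# balanced -- a crude proxy for a reciprocal citation ring.
MIN_MUTUAL = 2  # hypothetical threshold
for a, b in combinations(sorted(authors), 2):
    ab, ba = counts[(a, b)], counts[(b, a)]
    if min(ab, ba) >= MIN_MUTUAL:
        print(f"possible ring: {a} <-> {b} ({ab} and {ba} mutual citations)")
```

Even a crude test like this can only surface candidates for human scrutiny; reciprocal citation is sometimes perfectly legitimate, as between close collaborators in the same subfield.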
Given the increasing awareness of post-production misconduct — and how it undermines the assessment of publicly funded research — funders, policymakers and the science community should ask publishers to make available more of the information needed to investigate it.
The community must realize that academic misconduct is no longer just about seeking attention, as it was for earlier fraudsters such as the unknown hoaxer who planted a mixture of bones in a British gravel pit at Piltdown, or Paul Kammerer, who is blamed for inking features onto the feet of midwife toads to support Lamarckian evolution. Many academic fraudsters aren’t aiming for a string of high-profile publications. That’s too risky. They want to produce — by plagiarism and by rigging the peer-review system — publications that are near invisible, but that can give them the kind of curriculum vitae that matches the performance metrics used by their academic institutions. They aim high, but not too high.
And so do their institutions — typically not the world’s leading universities, but those that are trying to break into the top rank. These are the institutions that use academic metrics most enthusiastically, and so end up encouraging post-production misconduct. The audit culture of universities — their love affair with metrics, impact factors, citation statistics and rankings — does not just incentivize this new form of bad behaviour. It enables it.
Biagioli, M. Watch out for cheats in citation game. Nature 535, 201 (2016). https://doi.org/10.1038/535201a