This January, China was reported as overtaking the United States to become the largest producer of scientific papers. There is one major caveat, however, which consoles those who worry about China’s rise and worries those who cheer for it: a lot of those Chinese publications are of poor quality.
Over the past few years, China has taken steps to show that it is serious about fixing this problem. Officials are censuring individual scientists to deter them from fraudulent activity and are upping the pressure on the universities that might try to protect them. In May, China set its sights on a more ambitious target — predatory journals, those that put no effort into vetting papers and exist only to collect money that scientists pay to get their research published. Officials announced punishments for scientists who publish work in journals that the government feels are not good enough. The Chinese government has not yet announced which journals it intends to blacklist. But institutions such as universities and hospitals are already establishing their own lists of journals to avoid — to the vexation of some researchers.
China is not the first to make such an effort. Hunting down poor-quality and shady journals has become a mission for some librarians and governments. Most famously, Jeffrey Beall, a librarian at the University of Colorado Denver, started a list in 2008 of journals he said were dubious, which grew to more than 1,000 titles.
But creating such lists is not easy. Most scientists and scientific policymakers would agree that it’s good to condemn predatory journals. But they can be difficult to distinguish from journals that operate in good faith yet have published some poor-quality or fraudulent research — because of shortcuts in editorial decision-making due to lack of resources, because scientists deceived them, because of lapses in judgement, or because people just make mistakes.
Listing such journals would risk denigrating some good research. That’s why, although many researchers supported Beall, others criticized his list for a lack of clear standards. The list was taken down in January 2017, but there have been new incarnations.
Some say a better approach is to produce lists of approved journals. That does solve some problems. For example, in logistical terms, it is easier to maintain: instead of trying to track down every newly emerging predatory journal, the burden is on the journals to prove themselves. It does not generate the same stigma as bans, and thus allows reputations to be redeemed. The journal Tumor Biology was behind one of the most egregious research scandals to hit China — the retraction of 107 papers by Chinese authors in 2017. But it now has a new publisher and, since January 2018, a new editor-in-chief who is hoping for “a new chapter” for the journal. In August 2018, Tumor Biology was listed in the Directory of Open Access Journals, a vote of confidence from a website that catalogues high-quality publications. But Tumor Biology still appears on the emerging Chinese lists.
Whether it is fair to continue snubbing such a journal brings up a central question about the grading process: are the criteria for listing clear, transparent and consistently applied? This is the only way that the system can be fair to all parties — scientists who want to publish good papers, journals that want to communicate solid science and governments that want to ensure their funding is being spent wisely.
At present, the criteria for a journal to appear on a Chinese blacklist are not clear. This understandably leads some researchers to wonder why their research should be devalued just because it was published in the same journal as some poor-quality work. Word of mouth is even creating informal bans that could undermine a researcher’s genuine achievements.
Establishing and agreeing on such criteria is not a simple task. The analytics company Cabells in Beaumont, Texas, maintains a blacklist and identifies warning signs of a suspect publication, such as fictional or dead editors and poor spelling. Criteria to appear on an approved list might be more practical: as a minimum, journals should state their profit or non-profit status clearly, list editors who are aware that they are editors, use basic technology to detect plagiarism, and carry out due diligence to ensure that, if reviewers suggested by the author are used, they exist, are competent in the field, and are the ones actually being contacted.
Publishers have an obligation to maintain standards so that scientists and governments can rely on them in evaluating research and achievements. But to do so, they need feedback when those who depend on them believe they are falling short.
Nature 562, 308 (2018)