A leading index of open-access journals is set to shrink by more than one-quarter after delisting around 3,300 titles as part of an effort to exclude questionable and inactive publishers.

The Directory of Open Access Journals (DOAJ), which at the beginning of the year listed more than 11,000 open-access academic journals, announced two years ago that it would be tightening its standards for inclusion. It asked all journals in its index to provide more details about their operations so that it could ensure they adhere to basic publishing standards.


The crackdown came after the index was criticized for including ‘predatory publishers’ — journals that profess to publish articles openly, often after charging fees, but that are either outright scams or do not provide expected services such as a minimal standard of peer review or archiving.

The DOAJ is now striking off journals that did not submit the necessary paperwork in time, says Lars Bjørnshauge, the directory’s managing director. He says that he is “absolutely sure” that the majority of the journals that did not reapply are not publications with poor ethics; rather, he thinks, they are small outfits that are unfamiliar with providing the information required for reapplication. Some 6,700 journals have reapplied, and Bjørnshauge thinks that most of them will pass.

Many of the journals that failed to reapply claimed to be based in the United States. But after investigating publishers’ claims about where they are based, the DOAJ suspects that some in fact operate from other countries, says Dominic Mitchell, community manager for the directory.

Improving trust


The tightened policy is a “huge improvement”, says Walt Crawford, a retired library-systems analyst in Livermore, California, who is currently analysing DOAJ-listed journals. “I believe DOAJ will be a reasonably trustworthy source of journals that at least intend to do honest, serious open access.” If anything, he says, he worries that legitimate journals may be omitted.

But Jeffrey Beall, a librarian at the University of Colorado, Denver, who assembled a blacklist of open-access journals that he terms “potential, possible or probable” predatory publishers, fears that the index still contains weak publications. A problem with ‘whitelists’ such as the DOAJ’s, he says, is that they rely on data supplied by publishers, which might exaggerate or misstate information to make their journals look more attractive.

Bjørnshauge agrees that there are publishers that cheat authors; the DOAJ has supported an educational campaign and plans to hire almost a dozen ‘ambassadors’ to identify questionable publishers and to promote good publishing practices across the developing world, he says. Assessing journals on a case-by-case basis should be straightforward, says Crawford. “My guess is that most scholars can figure out whether a journal is an appropriate outlet with 10–15 minutes’ work.”

The number of open-access journals is soaring: the DOAJ gets about 80 applications every week, Bjørnshauge says, and over the past two years it has rejected about 5,400 applications, most of them from new applicants. A lack of information about licensing rights, publication permissions and editorial transparency is a common reason for rejecting publications, he says.

Whitelists such as the DOAJ and blacklists such as Beall’s are not the only tools for making sense of open-access journals. Some crowdsourced efforts — such as Journalysis.org — ask academics to review journals. The most recent of these, a website called QOAM (Quality Open Access Market), asks volunteers with academic credentials to evaluate a journal’s policies and to describe their experience if they have published with it — and then sorts journal titles into one of four categories, including “threat to authors”.

“We go beyond what the DOAJ does to allow the stakeholders to rank the journals,” says Jelte Wicherts, who studies publication bias and data sharing at Tilburg University in the Netherlands and who helped to develop the QOAM scoring system. He thinks that there are reliable proxies — notably the transparency of a peer-review system — that can be used to gauge the quality of a journal (ref. 1).

Such crowdsourced systems can fall prey to extreme minority views, and to being gamed by pro-journal reviewers, unless many disinterested authors can be persuaded to take part, points out Bjørnshauge. Beall, too, is sceptical that a crowdsourced effort can take off. But it should be given a chance to succeed, he says. “It’s great that there’s experimentation going on.”