
What if scientific journals were like hotels, restaurants and holiday operators — easy to compare online and reviewed by those who use them? That thought occurred to conservation biologist Neal Haddaway two years ago: frustrated by a bad experience publishing his work with a journal he prefers not to name, he decided to launch Journalysis.org, a journal-review site that he likens to TripAdvisor. “I wanted to basically reward the journals that were doing a good job and, within reason, name and shame the ones that weren't doing so well,” he says.

Haddaway, now a project manager at the Mistra Council for Evidence-based Environmental Management in Stockholm, was not alone in his thinking. His site is one of a handful of comparison websites that have sprung up in the past few years. Those developing these tools say that, although the practice of rating journals online has been slow to take hold, the sites help authors to become discriminating consumers of publishing services, choosing the journals that suit them and dodging questionable operators.

Journal-comparison tools allow authors to search or filter journals by various dimensions of performance, from prestige to publishing speed. Many of these tools are free to use, created by consultancy firms that make their money from related services for researchers, such as English-language editing and advice on publishing. Among those websites are Journal Selector, created by the London-based firm Cofactor, collating several hundred journals; JournalGuide, from Research Square in Durham, North Carolina, covering more than 46,000 journals; and the Edanz Journal Selector, a site designed by the Edanz Group in Fukuoka, Japan, compiling some 28,000 titles.

These comparison websites might seem unnecessary: researchers tend to identify the best destinations for their work by checking where studies they admire have been published, or by asking colleagues for advice. But Keith Collier, chief operating officer at Research Square, says that his company sees a market in researchers who may be unfamiliar with English-language journals, especially researchers based outside the United States and Western Europe.

Even Western researchers might feel overwhelmed by the rapid growth in the scientific literature, finding it hard to keep track of the number of journals sprouting up. Online comparison tools could help them to select the best journal for interdisciplinary work — or steer them away from predatory publishers that take researchers' money, but offer little in return (see 'The right one for me').

Most sites provide an indicator of prestige, such as a score that denotes how many citations on average an article in that journal accrues. But there is much more to picking journals than this one measure, says James Maclaurin, a philosopher at the University of Otago in Dunedin, New Zealand. He has developed a mobile app called HelpMePublish (available on Apple's iOS operating system, with an Android version in development), which indexes more than 6,000 journals.

The consumer psyche

Some researchers want only open-access journals; others care more about the acceptance rate, fees and time to publication. For Maclaurin, a killer detail is whether a journal allows 'double-blind' peer review, in which the authors' and peer reviewers' identities are withheld from each other to prevent prejudice.

Often, sites simply aggregate data from individual journals' webpages. But the relevant information is not always available: Maclaurin randomly selected 300 journals from his database and found that nearly half of their websites made no mention of peer-review policies, and only one specified the journal's acceptance rate. To gather information for his database, he surveys journal editors, which gives him access to details that are not available online.

Maclaurin's app allows people to search for free, but charges for access to certain data such as a journal's acceptance rate: individual subscriptions are US$4.99 a year, and institutional ones $1,250. He says that the app has been downloaded by “thousands of people” from 28 countries. JournalGuide is seeing similar success, currently attracting 13,500 users per month, according to product-management director Laura Stemmle.

The emergence of such tools reflects a shift in the dynamic between publisher and researcher, argues Peter Binfield, co-founder and publisher of the open-access journal PeerJ. As long as an outlet is of reasonable quality, he says, researchers are starting to recognize that the content of their article matters more than esteem garnered from the reputation of the journal. “It's not where you publish; it's what you publish,” he says. As such, authors are shifting towards a more transactional, consumer-like attitude to publishing, Binfield thinks — they are looking for the best deals on fees and time to publication (even though many still also hanker after prestige).

If that is true, consumer-oriented researchers might relish the chance to read reviews and leave ratings of their own — “a Yelp restaurant review for journals”, as Binfield puts it. HelpMePublish restricts users to numerical ratings — on a scale of 1 to 5 — on topics such as refereeing practice and communication. But some websites go further: Journalysis and SciRev, both of which are free, provide space for free-form comments. Created by economists Jeroen Smits of Radboud University in Nijmegen, the Netherlands, and Janine Huisman of the Centre for International Development Issues Nijmegen, SciRev boasts some 14,000 journals in its database and has received more than 1,000 user reviews in its first year of operation.

In theory, a lot of bad reviews might push publishers to change their procedures. But scientists have been slow to embrace the feature, and for reasons that are unclear, some journals have attracted far more reviews on SciRev than others. For example, the Open Access Macedonian Journal of Medical Sciences has 38 reviews (all accompanied by positive ratings), yet Science has 6 and Nature only 2, none of which includes ratings.

JournalGuide initially accepted user reviews, but dropped them because of a poor response rate. Journalysis is faring little better; few researchers have left ratings. “That's where we're all falling down really — users aren't submitting enough data,” Haddaway says. That is despite the fact that review sites allow users to remain anonymous: typically, these tools require only registration with a validated academic e-mail address.

“I think there is just a reluctance to say something about a journal you may need to go back to and submit something to later on,” says Collier. If that is true, perhaps it is not surprising that review features have yet to achieve critical mass. Even so, the lack of engagement so far has not shaken Haddaway's conviction that the tools are needed. “I think there needs to be more transparency,” he says.