Nature | News Feature

Publishing: The peer-review scam

When a handful of authors were caught reviewing their own papers, it exposed weaknesses in modern publishing systems. Editors are trying to plug the holes.

Illustration by Dale Edwin Murray

Most journal editors know how much effort it takes to persuade busy researchers to review a paper. That is why the editor of The Journal of Enzyme Inhibition and Medicinal Chemistry was puzzled by the reviews for manuscripts by one author — Hyung-In Moon, a medicinal-plant researcher then at Dongguk University in Gyeongju, South Korea.

The reviews themselves were not remarkable: mostly favourable, with some suggestions about how to improve the papers. What was unusual was how quickly they were completed — often within 24 hours. The turnaround was a little too fast, and Claudiu Supuran, the journal's editor-in-chief, started to become suspicious.

In 2012, he confronted Moon, who readily admitted that the reviews had come in so quickly because he had written many of them himself. The deception had not been hard to set up. Supuran's journal and several others published by Informa Healthcare in London invite authors to suggest potential reviewers for their papers. So Moon provided names, sometimes of real scientists and sometimes pseudonyms, often with bogus e-mail addresses that would go directly to him or his colleagues. His confession led to the retraction of 28 papers by several Informa journals, and the resignation of an editor.

Moon's was not an isolated case. In the past 2 years, journals have been forced to retract more than 110 papers in at least 6 instances of peer-review rigging. What all these cases had in common was that researchers exploited vulnerabilities in the publishers' computerized systems to dupe editors into accepting manuscripts, often by doing their own reviews. The cases involved publishing behemoths Elsevier, Springer, Taylor & Francis, SAGE and Wiley, as well as Informa, and they exploited security flaws that — in at least one of the systems — could make researchers vulnerable to even more serious identity theft. “For a piece of software that's used by hundreds of thousands of academics worldwide, it really is appalling,” says Mark Dingemanse, a linguist at the Max Planck Institute for Psycholinguistics in Nijmegen, the Netherlands, who has used some of these programs to publish and review papers.

But even the most secure software could be compromised. That is why some observers argue for changes to the way that editors assign papers to reviewers, particularly to end the use of reviewers suggested by a manuscript's authors. Even Moon, who accepts the sole blame for nominating himself and his friends to review his papers, argues that editors should police the system against people like him. “Of course authors will ask for their friends,” he said in August 2012, “but editors are supposed to check they are not from the same institution or co-authors on previous papers.”

Peer-review ring

Moon's case is by no means the most spectacular instance of peer-review rigging in recent years. That honour goes to a case that came to light in May 2013, when Ali Nayfeh, then editor-in-chief of the Journal of Vibration and Control, received some troubling news. An author who had submitted a paper to the journal told Nayfeh that he had received e-mails about it from two people claiming to be reviewers. Reviewers do not normally have direct contact with authors, and — strangely — the e-mails came from generic-looking Gmail accounts rather than from the professional institutional accounts that many academics use (see 'Red flags in review').

Red flags in review

Signs that an author might be trying to game the system

A handful of researchers have exploited loopholes in peer-review systems to ensure that they review their own papers. Here are a few signs that should raise suspicions.

  • The author asks to exclude some reviewers, then provides a list of almost every scientist in the field.
  • The author recommends reviewers who are strangely difficult to find online.
  • The author provides Gmail, Yahoo or other free e-mail addresses to contact suggested reviewers, rather than e-mail addresses from an academic institution.
  • Within hours of being requested, the reviews come back. They are glowing.
  • Even reviewer number three likes the paper.

Nayfeh alerted SAGE, the company in Thousand Oaks, California, that publishes the journal. The editors there e-mailed both the Gmail addresses provided by the tipster, and the institutional addresses of the authors whose names had been used, asking for proof of identity and a list of their publications. One scientist responded — to say that not only had he not sent the e-mail, but he did not even work in the field.

This sparked a 14-month investigation that came to involve about 20 people from SAGE's editorial, legal and production departments. It showed that the Gmail addresses were each linked to accounts with Thomson Reuters' ScholarOne, a publication-management system used by SAGE and several other publishers, including Informa. Editors were able to track every paper that the person or people behind these accounts had allegedly written or reviewed, says SAGE spokesperson Camille Gamboa. They also checked the wording of reviews, the details of author-nominated reviewers, reference lists and the turnaround time for reviews (in some cases, only a few minutes). This helped the investigators to ferret out further suspicious-looking accounts; they eventually found 130.

As they worked through the list, SAGE investigators realized that authors were both reviewing and citing each other at an anomalous rate. Eventually, 60 articles were found to have evidence of peer-review tampering, involvement in the citation ring or both. “Due to the serious nature of the findings, we wanted to ensure we had researched all avenues as carefully as possible before contacting any of the authors and reviewers,” says Gamboa.

When the dust had settled, it turned out that there was one author in the centre of the ring: Peter Chen, an engineer then at the National Pingtung University of Education (NPUE) in Taiwan, who was a co-author on practically all of the papers in question. After “a series of unsatisfactory responses” from Chen, says Gamboa, SAGE contacted the NPUE, which joined the investigation into Chen's work. Chen resigned from his post in February 2014.

In May, Nayfeh resigned over the scandal at his journal, and SAGE contacted the authors of all 60 affected articles to let them know that the papers would be retracted. Chen could not be reached for comment for this story, but Taiwan's state-run news agency said in July that he had issued a statement taking sole responsibility for the peer-review and citation ring, and admitting to the “indiscreet practice” of adding Taiwan's education minister as a co-author on five of the papers without his knowledge. That minister, Chiang Wei-ling, denies any involvement, but nevertheless resigned “to uphold his own reputation and avoid unnecessary disturbance of the work of the education ministry”, according to a public statement.

The collateral damage did not stop there. A couple of authors have asked SAGE to reconsider and reinstate their papers, Gamboa says, but the publisher's decision is final — even if the authors in question knew nothing of Chen or the peer-review ring.

Password loophole

Moon and Chen both exploited a feature of ScholarOne's automated processes. When a reviewer is invited to read a paper, he or she is sent an e-mail with login information. If that communication goes to a fake e-mail account, the recipient can sign into the system under whatever name was initially submitted, with no additional identity verification. Jasper Simons, vice-president of product and market strategy for Thomson Reuters in Charlottesville, Virginia, says that ScholarOne is a respected peer-review system and that it is the responsibility of journals and their editorial teams to invite properly qualified reviewers for their papers.

Nature Publishing Group (NPG) owns a few journals that use ScholarOne, but Nature itself and Nature-branded journals use different software, developed by eJournalPress of Rockville, Maryland. Véronique Kiermer, Nature's executive editor and director of author and reviewer services for NPG in New York City, says that NPG does not seem to have been the victim of any such peer-review-rigging schemes.

Free Podcast

An interview with Ivan Oransky from Retraction Watch.

But ScholarOne is not the only publishing system with vulnerabilities. Editorial Manager, built by Aries Systems in North Andover, Massachusetts, is used by many societies and publishers, including Springer and PLOS. The American Association for the Advancement of Science in Washington DC uses a system developed in-house for its journals Science, Science Translational Medicine and Science Signaling, but its open-access offering, Science Advances, uses Editorial Manager. Elsevier, based in Amsterdam, uses a branded version of the same product, called the Elsevier Editorial System.

Editorial Manager's main issue is the way it manages passwords. When users forget their password, the system sends it to them by e-mail, in plain text. For PLOS ONE, it actually sends out a password, without prompting, whenever it asks a user to sign in, for example to review a new manuscript. Most modern web services, such as Google, store passwords only as one-way hashes, which cannot be turned back into the original password even if they are intercepted. That is why such services require users to reset a forgotten password rather than recover it, often coupled with checking identity in other ways.
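The safer pattern that the large web services use is straightforward to sketch. The code below is an illustrative example only (it is not drawn from any of the systems named in this article): the server keeps a random salt and a one-way hash, so it can verify a password but can never e-mail it back.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, one-way hash; the plain password is never stored."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the attempt and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)
```

Because only the (salt, digest) pair is kept, a system built this way could not e-mail a password out even if it wanted to; a reset link with a short-lived token is the only recovery path.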

Security loopholes can do more than compromise peer review. Because people often use the same or similar passwords for many of their online activities — including banking and shopping — e-mailing out the password presents an opportunity for hackers to do more than damage the research record. Dingemanse, who has published in a number of journals that use Editorial Manager, including PLOS ONE, says: “It's quite amazing that they haven't got around to implementing a safe system.” Neither Aries nor PLOS ONE responded to several requests for comment.

Safety measures

Lax password protection has resulted in breaches. In 2012, the Elsevier journal Optics & Laser Technology retracted 11 papers after an unknown party gained access to an editor's account and assigned papers to fake reviewer accounts. The authors of the retracted papers were not implicated in the hack, and were offered the chance to resubmit.

Elsevier has since taken steps to prevent reviewer fraud, including implementing a pilot programme to consolidate accounts across 100 of its journals. The rationale is that reducing the number of accounts in its system might help to reveal those that are fraudulent, says Tom Reller, a spokesperson for Elsevier. If it is successful, consolidation will roll out to all journals in early 2015. Furthermore, passwords are no longer included in most e-mails from the editorial system. And to verify reviewers' identities, the system now integrates the Open Researcher and Contributor ID (ORCID) at various points. ORCID identifiers, unique numbers assigned to individual researchers, are designed to track researchers through all of their publications, even if they move institutions.

ScholarOne also allows ORCID integration, but it is up to each journal to decide how to use it. Gamboa says that not enough scientists have adopted the system to make it possible to require an ORCID for each reviewer. And there is another problem: “Unfortunately, like any online verification system, ORCID is also open to the risk of unethical manipulation,” says Gamboa — for example, through hacking.
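One check that any submission system can run locally: an ORCID identifier carries a built-in check digit, computed with the ISO 7064 MOD 11-2 algorithm, so malformed or mistyped iDs can be rejected before any lookup. A minimal sketch follows; note that this only catches typos and crude fabrications, not the kind of deliberate manipulation Gamboa describes.

```python
def orcid_checksum_ok(orcid):
    """Check the ISO 7064 MOD 11-2 check digit of an ORCID iD,
    e.g. '0000-0002-1825-0097'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16:
        return False
    total = 0
    for ch in digits[:-1]:
        if not ch.isdigit():
            return False
        total = (total + int(ch)) * 2
    check = (12 - total % 11) % 11  # a result of 10 is written as 'X'
    expected = "X" if check == 10 else str(check)
    return digits[-1] == expected
```

A passing checksum only proves the string is a well-formed iD; confirming that it actually belongs to the person claiming it still requires ORCID's own authenticated sign-in.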

That is a common refrain. “As you make the system more technical and more automated, there are more ways to game it,” says Bruce Schneier, a computer-security expert at Harvard Law School's Berkman Center for Internet and Society in Cambridge, Massachusetts. “There are almost never technical solutions to social problems.”

It ultimately falls to editors and publishers to be on the alert, particularly when contacting potential reviewers. Carefully checking e-mail addresses is one way to ferret out fakes: a non-institutional e-mail address such as a free account from Gmail is a red flag, say sources. But at the same time, it could also be a perfectly legitimate address.

Jigisha Patel, associate editorial director of BioMed Central in London, says that it is definitely possible to catch cheaters by being on the alert for dubious e-mail addresses. “We've had some cases where we've caught them tweaking the e-mail addresses to try to steal someone's identity,” she says. But such screening is imperfect. In September, the publisher retracted a paper in BMC Systems Biology, stating that it believed that “the peer-review process was compromised and inappropriately influenced by the authors”.

Some scientists and publishers say that journals should not allow authors to recommend reviewers in the first place. John Loadsman, an editor of Anaesthesia and Intensive Care, which is published by the Australian Society of Anaesthetists in Sydney, calls the practice “bizarre” and “completely nuts”, and says that his journal does not permit it.

It is unclear exactly what proportion of journals allows the practice, but as fields become more specialized it provides an easy way for busy editors to find relevant expertise. Jennifer Nyborg, a biochemist at Colorado State University in Fort Collins, says that most of the journals to which she submits articles request at least five potential reviewers.

For most of the 60 articles retracted by SAGE, the original peer review had used only author-nominated reviewers. Despite this experience, the Journal of Vibration and Control still allows authors to suggest peer reviewers (and provide their contact e-mails) when they submit a manuscript — although more safeguards are now in place, says Gamboa.

The Committee on Publication Ethics (COPE), which serves as a kind of moral compass for scientific publishing (but has no authority to enforce its advice), has no guidance on the practice, but urges journals to vet reviewers adequately. Good practice is always to check the names, addresses and e-mail contacts of reviewers, says Natalie Ridgeway, operations manager for COPE in London. “Editors should never use only the preferred reviewer.”

NPG journals do allow authors to suggest independent reviewers. “But these suggestions are not necessarily followed,” says Kiermer. “The editors select reviewers and the selection includes checking for the absence of conflict of interests.” On the flip side, authors can ask an editor to exclude reviewers who they believe to have unmanageable conflicts, such as competing research. The publisher usually honours such requests, as long as authors do not ask to exclude more than three people or labs, Kiermer says.

Sometimes, recommending reviewers can backfire. Robert Lindsay, one of two editors-in-chief of the Springer-published journal Osteoporosis International, says that his publication allows authors to recommend up to two reviewers — but that he often uses this information to rule those reviewers out. This is based on past experience, in which he has seen authors recommend their own contacts, or worse: “We have had family members, folks in the same department, postgraduate students being supervised by an author,” he says. The journal generally uses suggested reviewers — who have passed screening — only if it runs into trouble finding other scientists to perform the task.

But screening can be difficult. Usually, editors in the United States and Europe know the scientific community in those regions well enough to catch potential conflicts of interest between authors and reviewers. But Lindsay says that Western editors can find this harder with authors from Asia — “where often none of us knows the suggested reviewers”. In these cases, the journal insists on at least one independent reviewer, identified and invited by the editors.

In what Lindsay calls the worst case that he has seen, an author suggested a reviewer who shared her first name but not her surname. Some investigation revealed that the surname was the author's maiden name — she was recommending that she review her own paper. “I don't think she is going to submit anything to us again,” says Lindsay.

Journal name: Nature
Volume: 515
Pages: 480–482
DOI: 10.1038/515480a

Author information

Affiliations

  1. Cat Ferguson, Adam Marcus and Ivan Oransky are the staff writer and two co-founders, respectively, of Retraction Watch in New York City.

Comments

  1. Christopher Mebane
    As an editor, reviewer, and author who's quite familiar with the ScholarOne publishing system, I put the fault for this fiasco squarely on both the handling editors and editors-in-chief (EICs) at the journals, rather than on the security of the ScholarOne software. The fact that the articles with the scam reviewers were all accepted with ONLY the author-suggested reviewers is egregious, and the Editor-in-Chief was at fault as well for not overseeing the handling editors. I may grumble at ScholarOne for its clunkiness, but it sees and tracks all, which makes it easy for the Editor-in-Chief to see all as well. For the EICs not to at least do some spot checking is inexcusable. Making ScholarOne, Editorial Manager and the like highly “secure” with complex passwords that have to be changed every couple of months, maddening captcha quizzes, and other IT tricks may just increase the hassle factor for users while fixing what's not all that badly broken. The system depends on volunteer peer reviewers and editors, and real peer reviewers (the scammers in this article excepted) don't need more computer hoops to jump through. However, the issue of suggested and excluded reviewers is complicated. Our journal (a society-based journal in the environmental science field) requires suggested reviewers from authors, as does almost every other journal in my field. The stated rationale is that it demonstrates awareness of other topical researchers, and in my experience, that's how most authors treat it. They recommend others who have published on the same topic, and typically both author and reviewers are known within the subject area. When the editor is from Europe or North America and the authors and suggested reviewers are all from unfamiliar institutions in China, with non-institutional e-mails and ambiguous names that confound Google Scholar, a bit more effort is required.
    A few times I've written to the author and advised them that while I would like to include a suggested well-qualified reviewer, I have had difficulty identifying their suggested reviewers, and asked for the titles of recent relevant publications and other details on what qualifies the suggested reviewer. As our journal has gained editors and editorial board members from China, this has been less of an issue. I and most of my peers will use one suggested (and vetted) reviewer, and would never consider relying solely on suggested reviewers. The scam Ferguson and others wrote about seems a red herring. More at issue is the risk of subtle bias from recommending reviewers likely to be highly qualified, but also positively biased toward the research. As an author, I am not about to suggest an assassin. There's quite a bit of research that supports what seems self-evident: suggested reviewers tend to be less negative than editor-selected reviewers. For example, search for “Suggesting or Excluding Reviewers Can Help Get Your Paper Published” (behind Science's paywall, but plenty of postings elsewhere), or Sara Schroter (JAMA) http://dx.doi.org/10.1001/jama.295.3.314. Ferguson and others name an EIC who admitted to a practice I've suspected exists: asking for suggested reviewers and then using this information to rule those reviewers out. In my view this EIC's behavior is just as unethical as suggesting softball reviewers. If an EIC doesn't want authors to suggest reviewers, change the instructions to not ask for them, rather than asking for them for dishonest purposes. Ferguson's article, Bohannon's sting, and other news certainly show what we all know: there are some sketchy goings-on in scholarly publishing. Yet, at least in my fishbowl, the vast majority of authors, reviewers and editors trudge along and try to make it work.
  2. Dr. Walter Coffey
    WITHOUT PREJUDICE IN THE PUBLIC INTEREST AS A MATTER OF PUBLIC POLICY "Retraction Watch" is a great idea and fine company. The authors of this article (apparently the same who manage "Retraction Watch") should, however, retract their own statement regarding "free" email accounts. A group of colleagues of mine all definitely prefer the use of "free" email accounts over "institutional" email accounts for a wide variety of reasons: storage capability, confidentiality, privacy, independence, work product & content ownership, etc. You guys got it way wrong when you decided to pick on "Gmail" or "Yahoo". Moreover, we would trust an author with a "free" email account over one from a fourth or fifth rate University "institutional" email account any day of the week. Not that I would ever put a great deal of credence into the "Times Higher Education" rankings. The fact that Mr. Phil Baty and his "rankers" are based in the UK automatically makes us skeptical. Naturally, the UK bias is most noticeable with regard to its overly zealous protection of some undeserving Australian "Universities". Sure, the University of Melbourne and ANU are well placed (but I wouldn't rank either of them higher than Northwestern); it's absurd, however, to place either QUT or LaTrobe better than the first 300...somewhere there might just be an institutional email address for you. Who made the United Kingdom's "T.H.E." the "Global Authority on Higher Education" anyway? None of their institutions ever put a man on the moon. http://www.timeshighereducation.co.uk/world-university-rankings/ The Brits still insist the "Sun never sets on their Empire"...but it did long ago. The USA should respond with its own World University Rankings. It's nice to see that "Retraction Watch" is based in the USA anyway...where real science & medicine holds "Centre Court". Nevertheless, we'd place much more credence into the Leiden Ranking system over the UK's highly biased "T.H.E." 
Here is what Paul Wouters, Professor of Scientometrics says on behalf of the Leiden Methodology: "The US are still the dominant scientific world power, but new centres of science are emerging. MIT is the university which has the highest citation impact of its publications in the world. Princeton and Harvard take positions two and three. These are some of the findings of the new Leiden Ranking 2011 – 2012 which has been published on the website: www.leidenranking.com. The top fifty list consists of 42 US based universities, 2 Swiss (Lausanne at 12 and ETH Zurich at 18), 1 Israeli (Weizmann Institute of Science), 4 British (Cambridge at 31, London School of Hygiene & Tropical Medicine at 33, Oxford at 36 and Durham at 42), and one Danish university (Technical University of Denmark). Aggregated to country level, the US has 64 universities in the top 100 list, the UK 12, and the Netherlands 7. The latter is remarkable given its small size." "The Leiden Ranking 2011-2012 is based on an advanced methodology which compensates for distorting effects due to the size of the university, the differences in citation characteristics between scientific fields, differences between English and non-English publications, and distorting effects of extremely high cited publications. Publications authored by researchers at different universities are attributed to the universities as fractions. This prevents distortion of the ranking by counting these publications multiple times (for each co-authoring university). This distorting effect is often overseen in other global university rankings, which leads to a relative advantage of clinical research and some physics fields in these rankings. This makes clear how sensitive global rankings are to the nitty-gritty of the calculations." "The Leiden Ranking is based on data of the Web of Science. Data on the arts and humanities are not included since these fields are not well represented in the Web of Science. 
The Leiden Ranking exclusively measures the citation impact of research of the 500 largest universities in the world. This prevents an arbitrary combination of performance in education, valorization and research, a disadvantage of many global university rankings." In summary, we'd rank a "free" email account much, much higher than a so called "Institutional" one from the likes of either QUT or LaTrobe.... Amen and Yahoo for Gmail!
  3. Peter Gerard Beninger
    This rather wordy article can be boiled down to the following single failure: due diligence was not performed by the handling editors. Of a list of six potential reviewers, I never select more than one or, very rarely, two that are on the authors' 'recommended' list, and only after checking that they have indeed recently contributed to the field (I mean, just how difficult is it to type this into Google Scholar?), and even then I always put them at the bottom of the list. I do the rest of the work myself - checking who does what, how long ago, how pertinent it is to the ms to be reviewed, and whether they have also published with the authors. Editors who simply copy and paste reviewer suggestions from the authors' recommendations should be replaced and disgraced.
  4. James A.
    Quality peer review is obviously non-existent in some journals. For example, DOZENS of papers published in Modern Physics Letters B, with many reputable scientists on the editorial board (http://www.worldscientific.com/worldscinet/mplb), were accepted within 0-3 days, right after submission. Just a few examples: Xin Huang and Shuai Dong, Mod. Phys. Lett. B 28, 1430010 (2014) [25 pages], Received: 20 July 2014, Accepted: 20 July 2014; Ikuo Ichinose and Tetsuo Matsui, Mod. Phys. Lett. B 28, 1430012 (2014) [33 pages], Received: 28 July 2014, Accepted: 29 July 2014; Xiaoshan Xu and Wenbin Wang, Mod. Phys. Lett. B 28, 1430008 (2014) [27 pages], Received: 16 July 2014, Accepted: 17 July 2014. At the same time, according to the MPLB peer-review policy (http://www.worldscientific.com/page/authors/peer-review-policy), “Papers will be refereed by at least 2 experts as suggested by the editorial board.” Is one day enough for two experts to review a 33-page paper? Can a 27-page paper be reviewed within 24 hours? Just another case for a massive retraction...
  5. Peter Gerard Beninger
    This publisher obviously did have a good reputation, otherwise it would not have Nobel winners' papers. However, from its web site, it appears to be OA, dominated by Asian editors/offices, and very aggressive. There is definitely spillover of the mentality and practices of Beall's list predatory publishers going on in the publishing world, creating what I believe is an even more dangerous 'grey' publishing sphere, where questionable practices are mixed with respectable ones, such that a journal cannot be classed as predatory, even though it may often act like one. All of this is debasing the currency of contemporary science.
  6. R. Valentin Florian
    I found the article quite confusing. It blames computer software for not catching these scams, but what is to blame are vulnerabilities in the publisher's process, which were independent of the software used by the publisher to manage the review process. These two vulnerabilities were that the publisher (through the editors) asked authors to suggest reviewers, and that the publisher did not properly check the credentials of reviewers or the association of the e-mail addresses used by the system with the actual people selected as reviewers or with their publications. The focus on fake identities overlooks a proper discussion of review and citation rings that can also be composed of real people who unethically agree to support one another. I wrote a blog post discussing these issues in more detail at http://www.epistemio.com/blog/confusing-nature-article-on-peer-review/.
  7. Bastille Tai
    Other cases about plagiarism in Taiwan, see Taiwan Plagiarism at http://taiwanplagiarism1.blogspot.tw/2014/12/personal-ties-reciprocity-competitive.html http://taiwanplagiarism1.blogspot.tw/2014/12/personal-ties-reciprocity-competitive_5.html http://taiwanplagiarism1.blogspot.tw/2014/12/emotionallabor-of-tour-leaders.html
  8. Georges T
    There is a ridiculous statement in this article: "a non-institutional e-mail address such as a free account from Gmail is a red flag, say sources." Are you serious about this? Why do you use Google, then, if you do not trust Gmail? What does an e-mail address change in an evaluation process intended to evaluate a text, not a name? So, if someone's address is in one quarter, would he or she be more 'respectable' than another whose address is in a different quarter? What a ridiculous argument. E-mail addresses, whatever their extensions, are equivalent tools for corresponding with one another, and all do exactly the same job. So, I do not see what such an argument means! With smartphones, there is much more versatility with Gmail than with "professional" e-mail; Gmail is more polyvalent than most professional e-mail systems. Editors and reviewers have to judge the articles in their hands, not addresses, institutions, countries, or author names; otherwise the evaluation process is obviously biased. It is the text and methodology that should be evaluated, nothing else, and reviewer selection should be the job of editors, not authors.
  9. Joshua Fletcher
    I believe that you've missed the point. Authors are allowed (or even encouraged) to suggest reviewers for their manuscripts. What is being stated is that an author that gives a list of reviewers using only personal free accounts like gmail should be red flagged. While institutional email may be inferior to gmail or other free online email services, at the very least scientists should be reachable at an institutional email address, in general.
  10. Dr Walter Coffey
    WITHOUT PREJUDICE IN THE PUBLIC INTEREST AS A MATTER OF PUBLIC POLICY

    5 January 2015

    Dear Mr. Joshua Fletcher,

    With all due respect, you've missed the point yourself. You are advocating for the predominant usage of "Institutional" email accounts as a form of filtering the credible from the alleged "not-so-credible". Don't you see how biased of a statement that is? It's like saying somebody who uses the "US Postal Service" is more credible than the person who chooses to send mail via "Fed Ex". You might as well label everyone using "Gmail" or "Yahoo" as "Communists". Don't you see that you are advocating for the prejudicial negative certitude of "Email Profiling"? In fact, you invoked the phrase "Should Be Red Flagged". How particularly "stigmatizing" can you get?

    Soon you and your "Elitist Free Email Hate-Mongerers" will have persuaded all Human Resource Managers (HRM) around the academic globe to view anyone who communicates to them with a "Free Email" account as not worthy. And we all know how powerful HR Managers are…those who think they have access to everyone's most private documentation. No sir, you are patently wrong. You're obviously too young to ever have known about McCarthyism.

    That's right, Mr. Fletcher, please tell me where the "Slippery Slope" which you advocate for ends? Are you suggesting, Mr. Fletcher, that you know of absolutely no bad or fraudulent science transmitted via institutional "Upper Crust Email Addresses"? If you don't, then I've got important news for you: most of the institutional addresses out there are not from MIT or Harvard. There's a good deal of "academic" and "scientific" and "sociological" fraud out there in cyberspace that originates among Universities that are not ranked in the top 75 globally. In other words, there are so many, many, many more Universities out there that are NOT in the top 75 than there are those WITHIN the top 75.

    There's a tremendous amount of third, fourth, and fifth tier "Universities" out there, Mr. Fletcher, which may potentially provide "cover" for bad or fraudulent research. Let's face it, many of these lesser known Universities are desperately vying for recognition. There are many new "Open Access Journals" out there that are unethically manipulating the "Impact Factor" just so they can appear "credible"…

    Then there's the subject of "Conflicts of Interest". I know of many "Researchers" from fifth tier "Universities" who have intentionally founded "Open Access Journals" in order to promulgate their bogus "Research"…"Outcomes"…"Findings"…etc. Frankly, I'd rank much higher someone with a Gmail or Yahoo Email account than someone with an "Institutional" Email account from a third, fourth or fifth tier ranked "University".

    So before you go stigmatizing others, please consider if you'd want others to stigmatize you on something just as baseless. Incidentally, what accounts for your passionate bias? How does one reconcile the myriad of people out there who'd rather preferentially use their Yahoo or Gmail accounts over their "Institutional" accounts? Your dogmatic "Email McCarthyism" propaganda is not appreciated and so, so very passe.

    Let's start the New Year out right and eradicate this nonsensical criticism about the use of Google or Yahoo email addresses. Shall we?

    Respectfully Submitted

    P.S. For the record, I do not own any shares in either Google or Yahoo.
  11. Avatar for David Osterbur
    David Osterbur
    There are knowledge management systems now in place, such as ORCID (http://orcid.org/), that should make it very easy to verify whether a reviewer is legitimate. Everyone should sign up for ORCID as it provides a lot of benefit to the scientific community for author disambiguation, for keeping track of your own articles, for making it easy for others to see what you have published and for knowledge management, allowing people to see who is practicing in what area of science.
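    [Editor's note: as a purely technical aside, submission software can at least sanity-check an ORCID iD offline before looking it up, because every iD ends in an ISO 7064 MOD 11-2 check digit. A minimal sketch in Python; the function name is our own, and the iD shown is ORCID's published example/test record:]

    ```python
    def orcid_checksum_valid(orcid: str) -> bool:
        """Check an ORCID iD's ISO 7064 MOD 11-2 check digit.

        Accepts the usual hyphenated form, e.g. "0000-0002-1825-0097".
        This only validates the format; it does not prove the record exists.
        """
        digits = orcid.replace("-", "")
        if len(digits) != 16 or not digits[:-1].isdigit():
            return False
        total = 0
        for ch in digits[:-1]:          # first 15 characters are base digits
            total = (total + int(ch)) * 2
        result = (12 - total % 11) % 11
        check = "X" if result == 10 else str(result)
        return digits[-1] == check

    # ORCID's documented example iD passes the check:
    print(orcid_checksum_valid("0000-0002-1825-0097"))  # True
    ```

    A faked or mistyped iD with a wrong final digit fails immediately, which makes it a cheap first filter before querying the ORCID registry itself.
    
    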
  12. Avatar for Georges T
    Georges T
    On the contrary, this is a biased system in my view. Editors and reviewers would only favour famous authors and/or authors with track records in top-tier journals! To be avoided.
  13. Avatar for Wim Crusio
    Wim Crusio
    Everybody can have an ORCID; no need for top-tier journal publications. ORCID is a great tool here. I have been using a Yahoo email address for a decade now (my institutional email address has changed 5 times or so in that period, without me having moved anywhere), and ORCID confirms I'm legit, despite the Yahoo address...
  14. Avatar for Robert Cluley
    Robert Cluley
    I ran a survey in management studies concerning personal influence within peer review. It had some fairly stark findings. Unfortunately, no journal would publish it. It is online here: https://nottingham.academia.edu/RobertCluley.
  15. Avatar for Peter Gerard Beninger
    Peter Gerard Beninger
    I took a look at your paper. I thought it was very interesting, perhaps you should submit it to a psychology journal. However, you do confuse effect size with statistical significance, and you really must straighten this out.
  16. Avatar for Mehmet Emir Yalvac
    Mehmet Emir Yalvac
    What about publications that have been seen by neither the reviewers nor the editors of the journal? There are journals whose editorial offices act as editor-in-chief, master reviewers and manuscript-processing officers all in one. They just ask you to pay, and publication is guaranteed.
  17. Avatar for Steven Haussmann
    Steven Haussmann
    The fundamental lack of security seen in these systems is troubling: plain-text passwords are downright embarrassing, and naively trusting the identity of clients is a terrible practice. How old is this software? It seems due for an overhaul, if not a complete replacement.
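    [Editor's note: the standard alternative to storing plain-text passwords has been available in every mainstream language for decades — store a per-user random salt plus a slow key-derivation hash, never the password itself. A minimal sketch using only Python's standard library; the iteration count is illustrative, not a security recommendation:]

    ```python
    import hashlib
    import hmac
    import os

    def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
        """Derive a salted PBKDF2-HMAC-SHA256 digest; store (salt, digest), never the password."""
        if salt is None:
            salt = os.urandom(16)  # fresh random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        """Recompute the digest with the stored salt and compare in constant time."""
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return hmac.compare_digest(candidate, digest)
    ```

    With this scheme a database breach exposes only salts and digests, not reusable credentials — which is exactly what a plain-text store gives away.
    
    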
  18. Avatar for Vijay Shankar
    Vijay Shankar
    Who would you suggest as a potential reviewer? Albert Einstein? It would likely be someone you are familiar with: an old friend, a past collaborator, or at least someone 'nice' to you. This doesn't apply only to early-career researchers. Don't we see papers by the 'big names' in the field getting accepted and published within a week? If Mr. Very Famous submits a paper, how would Mr. Famous reject it? That's how things work in most journals. The peer-review process is painfully slow and often biased (especially if you're from a lesser-known group). And then there is the East-versus-West bias, declaring that groups from the U.S. and Europe are more reliable than those from other countries. Why don't journals have their own set of subject-matter experts? Why so much hesitation to adopt a double-blind review system? Is it because even the papers by the 'big names' would eventually get rejected?
  19. Avatar for Georges T
    Georges T
    A good point! Upstream of the process, a blind submission too would be a good option: http://link.springer.com/article/10.1007%2Fs11948-014-9547-7
  20. Avatar for Wim Crusio
    Wim Crusio
    Blind review only works for authors that are completely unknown or in a very large field. In smaller fields, it's often not too difficult to figure out who the authors are...
  21. Avatar for Harish Gadhavi
    Harish Gadhavi
    Just to add to your point: in the case of location-based studies, which are quite common in atmospheric sciences, it is nearly impossible to keep the authors' names secret.
  22. Avatar for Denys Wheatley
    Denys Wheatley
    The use of nominated referees ought to be discouraged – indeed, the practice should be abolished, as it causes unnecessary and avoidable problems. Editors and publishers that allow it (few would condone it) ought to follow John Loadsman's lead, with which I fully agree. As a very last resort, I have in the past considered checking out a nominated reviewer, but finally decided that it was too risky. [Anyway, a paper that fails to be reviewed within a reasonable time by carefully selected independent referees is usually not worth publishing.] For this reason, I operate a strict triage stage, as many other editors must do. Editors who send out all submitted manuscripts for review can simply expect trouble, and must take full responsibility for the consequences. After triage, both my approved and borderline papers are sent to editorial board members for their concurrence or disagreement on whether they should go out to review. Each member can also write a full report on the paper, but will otherwise suggest suitable, very reliable independent reviewers. This makes cases of self-reviewing and fake reviewing highly improbable. Denys Wheatley, Editor-in-Chief, Cancer Cell International, and Oncology News
  23. Avatar for Wim Crusio
    Wim Crusio
    In my experience, it is mixed. I have gotten very good and critical reviews from suggested reviewers (some even recommending rejection...). Others just send a one-liner: "Great article, accept it!". Guess how much attention I pay to something like that... Having said this, I never take ONLY recommended reviewers, that is asking for trouble.
  24. Avatar for Richard Gordon
    Richard Gordon
    The journal Nature leads in undercutting an accountable peer review system: "All submitted manuscripts are read by the editorial staff. To save time for authors and peer-reviewers, only those papers that seem most likely to meet our editorial criteria are sent for formal review. Those papers judged by the editors to be of insufficient general interest or otherwise inappropriate are rejected promptly without external review (although these decisions may be based on informal advice from specialists in the field)."
  25. Avatar for Wim Crusio
    Wim Crusio
    If you want reliable science, you don't go to Nature, Science, Cell, etc, but to a scientific journal, not these glossies...
  26. Avatar for Guest
    Guest
    The application of science is clearly required. We need a database of peer reviewers and their reports, matched to the follow-up outcomes of the papers in question as measured by citation records and the like. This database would then be the raw material for research projects to document the properties of the system, rather than pallid accusations of bias, cronyism, sexism, etc. Admittedly it does less to shine light on the work rejected by the process, but some ingenuity could be applied.
  27. Avatar for Weishi Laura Meng
    Weishi Laura Meng
    It is no news that the peer review system is broken, whether in grant-proposal evaluation or in refereeing journal submissions. Plagued by cronyism, lobbying, the old-boys' network, editor courtship and other venal practices, peer review is failing. What this article reports is the childish modus operandi of people duping amateurish journals. Rather than focusing on the big picture and taking the high ground, this article only fuels hysteria, which is what Cat Ferguson, Adam Marcus and Ivan Oransky do in their blog Retraction Watch.
  28. Avatar for Kenneth Cohen
    Kenneth Cohen
    The points are well taken, and account for another level of problems in the peer-review process beyond the far-too-common statistical and methodological errors. I agree that "journals should not allow authors to recommend reviewers in the first place." In all the years I have written peer-reviewed articles, I have NEVER been asked to recommend a reviewer for one of my own pieces, and would have been outraged if asked. Perhaps this is because I write for CAM (Complementary and Alternative Medicine) and Integrative Medicine peer-reviewed journals and medical school textbooks (in which the editors commonly act as the peer-review team). I am also on several peer-review committees. Here is some information that may surprise readers: because CAM journals are under such close scrutiny by those who have not yet passed from "hardening of the paradigms," our peer-review process and rejection rate are often more stringent than those of standard scientific journals.
  29. Avatar for Harish Gadhavi
    Harish Gadhavi
    As pointed out by one of the commentators, this article only fuels hysteria without addressing the real problem. The suggestion that a free email address such as Gmail or Yahoo is a potential indicator of a rigged or fake reviewer is not well thought out. Many institutional email servers, including those in developed countries, are run by small teams and are far from reliable, which is why many researchers prefer to use one of the reputable free email services. This problem is far more severe in economically disadvantaged countries. Also, retired professors and scientists and post-docs constitute a big chunk of the people who do reviewing work, but their access to an institutional email address is not the same because of changing or leaving jobs. For them, it is advantageous to use a free email address. An email address should not be the criterion for checking the genuineness of a reviewer.
  30. Avatar for Wim Crusio
    Wim Crusio
    And institutional email addresses, even in developed countries like France, never allow attachments of the size allowed by Yahoo and such...
  31. Avatar for Charlie Niwrad
    Charlie Niwrad
    Review of grants and papers should be open, accountable and paid, because it is a necessary step in a research process dealing with funds running into the millions. The idea, which has changed little from the small groups of research 'peers' of 100 years ago, is in its current form simply not fit for competitive, market-oriented science. The potential for mismanagement is simply too high. I agree that scientists are very honest in general, but nobody would steer big money on the basis of honesty alone.
  32. Avatar for Lachlan Coin
    Lachlan Coin
    Academic Karma (http://academickarma.org) has built a registry of reviewers with verified ORCIDs, as well as tools for editors to find and invite reviewers from this registry. It's a fairly new initiative, so we would love to get more reviewers signed up and more editors using our tools to source reviewers.
  33. Avatar for Charlie Niwrad
    Charlie Niwrad
    Evaluating researchers as reviewers, much like evaluating researchers by their original research, is a good idea! Otherwise editors are a bit in the dark about how good a review is.
  34. Avatar for Tim Peterson
    Tim Peterson
    Onarbor, https://onarbor.com, solves many of these peer review issues. It's publishing, reviewing, and funding. Think Kickstarter merged with Stackoverflow. I'm one of its creators so would be grateful if you'd be interested in talking more. tim@onarbor.com.
  35. Avatar for Charlie Niwrad
    Charlie Niwrad
    Good idea, will look for it more.
  36. Avatar for Edward Ciaccio
    Edward Ciaccio
    The authors point out a pervasive problem. The use of suggested referees should be limited. Better to google for academic referees with expertise on the topic of the paper, and to use the journal editorial board as a backup. As editor of Computers in Biology and Medicine (Elsevier), I want to get the best papers possible published in the journal. Many a time I have also had to reject papers from my editorial board, associate editors, and colleagues that did not meet the high bar. And maybe I have lost a few friends in the process.
  37. Avatar for Andrew Preston
    Andrew Preston
    That's a good editorial process. We try to help by allowing reviewers to build officially verified records of their past reviews on Publons.com. Editors can then check these profiles before assigning reviews. The goal is to benefit all stakeholders, but we do see instances where editors decline to take part.
  38. Avatar for Weishi Laura Meng
    Weishi Laura Meng
    The modus operandi described in this piece is a copycat of a previous scam by a young Korean scientist, already described by Retraction Watch. While serious, this scam is far more childish than the far more pervasive and venal old-boys' network that has plagued peer review for decades. Rather than resorting to such childish manoeuvres, more astute scientists simply invest plenty of time lobbying, courting editors and befriending potential reviewers before submitting their manuscripts for publication. I think this piece by Ferguson, Marcus and Oransky does little beyond fuelling the hysteria over post-publication peer review that they themselves seem to have generated.
  39. Avatar for A nonymous
    A nonymous
    +1 Even if referees were selected randomly, they'd still be influenced one way (competitors of the authors) or another (friends of the authors). Serious journals should rather hire full-time referees who have no stakes (direct or indirect) in a publication, who understand statistics and are able to review raw data as well. But of course that won't happen since publishers are for-profit entities and don't want to pay referees with expert knowledge...
  40. Avatar for Paul Vincelli
    Paul Vincelli
    So glad to see that integrity of the publication process will prevail. The peer-review system certainly is imperfect, but it is sooooo much better than the free-for-all of self-publishing on the internet. We are not living in the Age of Information; rather, it is the Age of Misinformation.
