Steven Abman (SA): We are pleased to welcome you to another session of our webinar series on “Challenges in Pediatric Academic Medicine,” which is jointly sponsored by the American Pediatric Society (APS) and the Society for Pediatric Research (SPR). The purpose of this “virtual chat” series is to provide a forum that brings together diverse members of our pediatric academic community at many different stages of their careers, including students and residents and fellows and faculty, as well as chairs, deans, and senior leadership. The goal of this series is to address critical topics and challenges that we face in academic medicine, especially as related to child health.

Past sessions have included such topics as navigating career transitions, disparities in health care and outcomes, mentorship and mentee-ship as a two-way street, and many other topics. I am especially excited about today’s session, which is entitled “Academic Skills: Publications.” Certainly, the hallmark of a successful career is directly reflected by one’s publication track record, which serves as a key metric for having impact in one’s field. Consequently, understanding the peer review process, multiple factors related to publication integrity, ethical issues and related concerns, and how to best communicate our science are vital for career advancement as well as advancing our understanding of diverse clinical problems. As such, we are grateful to have three outstanding editors of pediatric journals as panelists for today’s discussion.

First, I am delighted to introduce Dr. Bill Balistreri, who is the Dorothy M.M. Kersten Professor of Pediatrics, Division of Pediatric Gastroenterology, Hepatology, and Nutrition at Cincinnati Children’s Hospital and the University of Cincinnati College of Medicine. Bill has been an incredibly successful academician with an outstanding publication and mentorship record, especially as related to liver disease. He has served as Director of the Pediatric Liver Care Center, Liver Transplantation Program, and Fellowship in Transplant Hepatology. Over the past 25 years, Dr. Balistreri has served as Editor of the Journal of Pediatrics, and under his stewardship, this journal has blossomed into becoming one of the leading publications in pediatrics.

Our second panelist is Dr. Kurt Albertine. He is the Professor and the Edward B. Clark Endowed Chair IV in Pediatrics as well as an Adjunct Professor in Internal Medicine and Neurobiology and Anatomy at the University of Utah School of Medicine. In addition to his internationally renowned research on the developing lung, Kurt is also known as the “master mentor.” He has developed and led numerous workshops and seminars on mentorship, research skills, career development, and many aspects of how to best train our next generation of scientists. He has directly mentored an amazing number of undergraduate and graduate students, fellows, and faculty, with an outstanding track record of developing successful trainees. In addition, he has extensive experience in grantsmanship and a substantial publication record. Over the past 16 years, Kurt has been the Editor in Chief of the Anatomical Record.

Our third panelist is Dr. Cynthia Bearer, who is the William and Lois Briggs Professor of Pediatrics and Chief of Neonatology in the Department of Pediatrics at Case Western Reserve University and the Rainbow Babies and Children’s Hospital. As a scientist and neonatologist, Dr. Bearer has been a successful investigator with an extensive CV listing numerous publications as well as an extraordinary track record of grant support. Cynthia has had many leadership roles throughout academic medicine as well as an outstanding record of mentorship. Dr. Bearer is currently the Editor in Chief of Pediatric Research, which has become a leading academic journal in pediatric medicine under her leadership.

In addition to our outstanding panel, I am pleased to introduce our moderator, Dr. Stephanie Davis. Dr. Davis is the President of the SPR and the Chair of Pediatrics at the University of North Carolina.

Stephanie Davis (SD): Welcome to this “virtual chat.” We will begin with Dr. Bill Balistreri.

William F. Balistreri (WFB): I want to thank Stephanie and Steve for inviting me to participate and to interact with Cynthia and Kurt in this important discussion.

In my introductory comments, I’ve chosen to discuss a timely and overriding issue, maintaining trust in science, and of course, in scientific publications. Our goal must be to maintain, and in some cases, unfortunately, restore the validity of the scholarly record. In that context, I would like to discuss the ethics of scientific publications.

Dr. Summerskill said several years ago, “…the role of ethics in research extends through the moral obligation to conduct and to report that research in an honest, transparent, and timely fashion” (Summerskill et al. Lancet 373, 992 (2009)). That’s the key. Scientific publications are the coin of academics, used for academic promotion and funding. That’s the good news, but it can bring about the “publish or perish” phenomenon. That fierce competition, an increasing necessity to publish, may lead authors to engage in some questionable behavior. Authorship is clearly a valuable commodity, but as with all commodities, it’s bought, sold, traded, and stolen.

I want to discuss the commonly encountered ethical transgressions that we as Editors see in published biomedical research and also discuss the broad impact of unprofessional behavior and fraudulent science. Then, finally, what can be done?

The common transgressions are conflicts of interest or failure to disclose, plagiarism, duplicate or selective publication, authorship disputes, ghostwriting, and outright fraud—including fabrication and falsification. Why is misconduct perpetrated? I already mentioned one of the issues, the academic pressure to “publish or perish.” Other factors are a degree of naivete, the desire to distribute the findings among a broad audience, or a lack of accountability. The authors do not understand the harm or may not understand the “rules.” That last one is on us—as Editors we must ensure that the rules are clear. Our guidelines should not be vague or overwhelming.

One of the more common issues that we must deal with is searching for plagiarism. That’s defined as the appropriation of another person’s ideas, processes, results, or words, without giving appropriate credit. We now have computational text-searching algorithms, electronic indices, and full-text manuscripts, which makes it easier to detect plagiarism or unethical publications.

Authorship is also a common concern expressed to us. There are clear recommendations and criteria for authorship. The website that I would urge you to keep on your list is that of the International Committee of Medical Journal Editors (ICMJE). This establishes clear guidelines and is our “rule book.” Briefly, an author must have contributed substantially to the conception or design of the work, or to the acquisition, analysis, or interpretation of the data. In addition, they must draft or revise the manuscript, they must approve the final version of the manuscript, and they must agree to be accountable for all aspects of the work.

Ghost authorship and/or industry sponsorship is also an issue we have to assess. Ghost writers are individuals who participated in the writing but are not named nor acknowledged. This can bring about potential problems, unknown biases, and conflicts of interest. Often, there is a reason why they are not disclosed.

The impact of failure to disclose is far reaching. “…as Editors, we recognize that the publication of clinical research findings in peer reviewed journals is the ultimate basis for most treatment decisions and guidelines.” The question is—what can we do to ensure trust in scientific publications, specifically, the scientific publications in our journals? Number one is individual responsibility and awareness—know the rules. Within the institutional environment, create a culture of compliance with training and accountability, model ethical behavior, provide tools to know what’s right, and know the rules. Of course, establish a culture of zero tolerance—“if you see it, say it.”

One final comment. The world of open access is increasing, and that brings about several benefits—readily available manuscripts, no paywalls, and access to articles immediately. However, I will mention the potential downside. If authors pay to publish, even in legitimate open access journals, there are often conflicts between scientific research productivity, personal academic promotion, and the financial interests of the publishers. Obviously, if a business model links revenue to volume, the publisher has a financial incentive to publish more articles.

Thus, an unforeseen consequence of the open access movement has been what are called “predatory journals”. These are journals that take advantage of the open access model. They bypass the traditional peer review process, but they do charge an article processing fee. So, if you pay, they will publish anything. This practice clearly can damage scholarly publishing. It is dishonest and lacks transparency. It threatens the quality of medical research by irresponsible publishing, leading to rising retraction rates, irreproducible results, and a flood of inconsequential publications that distract readers from more meaningful scholarship. The bottom line is what I tell our mentees and colleagues—“if you’ve not read an article from that journal, you should think twice about submitting to it.”

Hopefully, these comments will resonate and inform our discussion tonight. I look forward to your comments and questions, and thank you again, Stephanie and Steve for inviting me.

(SD) Thank you, Bill. What can we do to ensure trust in scientific publications?

(WFB) Stephanie, I take this as a major responsibility, as the Editor. I need to establish transparent and clear guidelines and rules so that there’s no question that a manuscript that goes through our peer review process is trustworthy. In addition, if there is ever any claim or any suggestion of an ethical transgression, we must investigate it with due diligence.

(SD) Thank you very much. We are going to now move to Dr. Albertine.

Kurt H. Albertine (KA): Thank you, Steve and Stephanie, for this invitation, and also to Michelle for coordinating the practice sessions. I am honored and humbled to be an invited panelist. I will chat about the sense of urgency to publish in today’s new world, which includes alternatives to traditional peer-reviewed publications as well as personal marketing. I combine these two into a phrase that I refer to as publishing commerce. As already mentioned, peer-reviewed publications are the coin of the realm. They validate scholarship and make the scholarship available and permanent. Without peer-reviewed publications in the traditional context, the content and the science simply do not exist.

There are fluctuations in today’s publishing commerce. Value and dissemination are two very different elements today compared to 20, 30, or 50 years ago. One element of publishing commerce is value to one’s career; another element is value to the field. With respect to dissemination, traditional peer-reviewed journals have been the essence of communication. There are other avenues for dissemination today that I will address. Today, the sense of immediacy is evident and growing: life scientists want their paper published yesterday. An important question was asked by Bill: “Where does the money come from?” Any kind of publication has costs associated with it; those costs have to be considered, which requires homework by authors. Examples include the source of the funding and how content is used, which may surprise you.

So, what are the contributors today to the changing landscape of scientific publishing? The two that I will focus on are preprints and open access. Preprints are posted on community preprint servers. In the field of physics, preprints were adopted many decades ago, without fanfare or controversy. In the field of life sciences, I think the best way of describing the reaction to preprint posting is strong passions in a hypercompetitive world.

So, let’s review a little bit of the history related to preprints. In the 1960s (some of us on this panel can relate to those years), NIH mailed photocopies of draft manuscripts to groups of biologists. That was a short-lived experiment. Next, in 2003, arXiv opened a quantitative biology section, which I will return to in a moment. In 2007, Nature Publishing launched a server, Nature Precedings, but this venture folded in 2012 because it could not finance itself. In 2013, bioRxiv was launched by Cold Spring Harbor Laboratory. Not until 2017, just 3 years ago, was the concept of preprint servers in the life sciences endorsed by the UK Medical Research Council, the Wellcome Trust, the Howard Hughes Medical Institute, and the NIH.

However, there are reasons for caution regarding preprint servers. One reason is an evangelist movement amongst preprint protagonists. In 2016, a nonprofit, ASAP Bio, was created. This group deploys preprint ambassadors, enthusiasts who evangelize the merits of preprint server article distribution. Many life scientists remain wary. Wariness stems from concerns about competition, the potential platform for stealing ideas or data, and/or being scooped. Also, preprint servers can be a time-sink: if you have not spent time browsing them, you can spend hours perusing the mishmash of papers of varying quality.

Why not use preprint servers? Well, the most skeptical characterization that I found on the Web was “vanity communications.” Others criticize preprint server articles because they are a shortcut to avoid critical peer review. As Bill mentioned, the issue of transparency in publications requires awareness. For example, explicit identification as a preprint is necessary when submitting to a traditional peer-reviewed journal. I urge junior faculty with whom I work, who are going through faculty review for advancement, to find out what the local perception is of preprint server posts by their Department Chairperson, the school, and the institution at which the assessment occurs. Getting this insight is important because published perceptions regarding retention, promotion, and tenure suggest either neutral or negative views of the potential impact of preprint articles as coins of the academic realm in the life sciences.

Passions about preprint articles extend to what is referred to as preprint wars because of concern about open communications that have not been vetted by traditional peer review. The concern is that such communications may leave the public bewildered by a mishmash of content of uncertain scientific merit. There is an important question that is timely; namely, are tweets preprints? The big-picture worry is potential erosion in public trust in scientists and data.

What was surprising recently is a statement by PLoS ONE. PLoS ONE is a pioneering open access journal. The Communication Editor made a statement that preprints “do not diminish the need for reputable, peer reviewed journals.” This was quite a surprising statement coming from PLoS ONE.

Now, if we return to arXiv, more than two decades have passed since this preprint medium was founded. As I mentioned, physicists use this preprint server, but eventually they do publish in peer-reviewed journals. Today, many life sciences journals, including Pediatric Research, and funding agencies allow preprints.

I end with some questions for the audience. Which digital object identifier (DOI) is the DOI of record, the identifier assigned by the preprint server or the one assigned by the traditional peer-reviewed journal? If you are submitting to a preprint server, you need to ask that question, and other questions. Should the author retain copyright of a posted preprint? If so, which type of Creative Commons license does that preprint server offer authors? Is the publisher of that preprint server in business to make a profit for itself? So, I end with the following conundrum. To preprint or not to preprint, that is the question for life scientists. Thank you.

(SD) Thank you, Kurt. What metrics are important to the journal for publishing?

(KA) There are traditional and non-traditional metrics. The traditional metric has been impact factor, which is a journal-level metric. It has nothing to do with an author or a paper. Impact factor continues to be misused in its application. Fortunately, that misuse is gradually being deflected to article-level or author-level metrics. An example at the article level would be the Eigenfactor, or Article Influence score, which makes adjustments across disciplines. The H-index is an author-level metric. Relevant to preprint articles are Altmetrics. Altmetrics are both quantitative and qualitative indicators that are sourced from the Web. The source-list for Altmetrics is long, including citations, post-publication communications, peer review platforms, Mendeley (an online reference manager), mainstream and multi-media sources, research highlights, public policy sources, open syllabus projects, and even Wikipedia. This is particularly helpful if you teach, submit patents, or use social media.

There are a couple of points I want to end with. One is the strength of Altmetrics, which showcase the attention to, or influence of, research. This is done by the rapidity with which data or qualitative information accumulates, and it applies to more than traditional journal articles and books. There are some weaknesses of Altmetrics that one should be conscious of. They tell only part of the story. There is great concern about gaming and bias, particularly by those who are very effective at self-promotion. The data that are accumulated in Altmetrics are not validated, especially for retention, promotion, and tenure. Again, I always advise our folks who are going up for review to find out how their institution views Altmetrics and preprints when weighing advancement. Thank you.

(SD) Thank you, Kurt. Fantastic. We are now going to move to our third speaker, Dr. Bearer.

Cynthia F. Bearer (CB): I also want to express my gratitude to Steve and Stephanie and the APS and the SPR for inviting me to be on this panel, and to represent Pediatric Research. I’m just heading into my sixth year as the Editor in Chief, so I am sort of a “newbie” on the block here. I thought I would try to describe what I think is really important for people to know about, and that’s the instructions for authors. This is something that every journal has, tailored to that particular journal. You can find a lot of information in these descriptions if you read them all the way to the end, which, in my experience, almost nobody does.

For example, in Pediatric Research, we do publish original research articles; that is the main bulk of what we publish. We also publish commentaries, correspondence, insight pieces, and narrative medicine. All of these are described in the instructions for authors. So, for those people who are listening, you don’t have to publish only original research; you can publish quite a number of other types of articles. I really like it when people submit commentaries on other articles or letters to the Editor. Then, sometimes we can have a nice conversation occurring in the journal about a particular finding, reflecting not only the thoughts of other scientists who are experts in that area, but also the thoughts of families and patients who are impacted by that research.

In the instructions for authors you will also find the page charges. In some journals, you can publish and it doesn’t cost you anything. For other journals there is a page charge, and we used to list that in the very last part of our instructions for authors, the part that nobody reads. So, people would be surprised when, usually about 3 months after an article was published and went into print, they received an invoice for a couple hundred dollars in page charges. That is why it’s worth reading all the way through to the end, even though we now put the page charges right up front, so that people are forewarned that they have actually agreed to them.

I think reading the instructions for authors is very worthwhile, and we update them frequently. There is a new development in our instructions for authors. We wanted to look at our impact factor, because it is something that we do pay attention to as a journal; we are certainly rated against other pediatric journals in terms of impact factor. Although people are drifting away from treating the impact factor as the be-all and end-all of a journal, it is still held in some regard by people who are submitting their manuscripts. So, we wanted to figure out how to increase the impact factor of the journal by looking at the articles that received no citations. We wanted to identify such articles early on, so we could tell authors quickly. We could reject those papers without review, which would save time both for our editorial board and for reviewers, as well as guide the authors to an appropriate place to publish their work.

We went through an extensive data analysis, evaluating all our articles published within 2 years and their citations. We evaluated the number of citations in 2019 for articles published in 2018 and 2017. We also thought that the number of people we needed to invite to review an article might indicate that an article was not going to receive many citations. This was a unique dataset, because we had to go to the publisher for the number of citations, and we had to go back to our ScholarOne platform to look at the number of reviewers invited. Sure enough, we did find a relationship. Surprisingly, it didn’t indicate that articles for which we asked 25 reviewers were going to be cited any less than articles for which we asked two people who both said yes right away. What we did find was that the number of words in the title was associated with the number of times an article was cited. If you have more than 13 words in your title, your chances of not being cited are lower than for articles with 13 or fewer words in the title. I don’t know if anybody here knows what a MeSH term is, but it stands for MEdical Subject Headings, a database of words and phrases that PubMed uses to find an article. If one has three or more MeSH terms in one’s title, then one is far more likely to receive more citations. This is important because it increases your H-index, which Kurt also mentioned—the measure of the individual impact of the author. So now we have, in our instructions for authors, instructions on how to receive more citations for your article. That is why the instructions for authors are very important to read. Thank you.

(SD) Thank you, Cynthia. How can one become a reviewer and specifically what are the rewards for reviewing?

(CB) Reviewers are really what we count on in order to publish peer-reviewed articles. For Pediatric Research, our ScholarOne platform runs through every manuscript to help the editorial board find reviewers; it automatically lists reviewers who might be appropriate for a given article. ScholarOne pulls potential reviewers from the Web of Science. So, if you register yourself in the Web of Science, you might be more likely to be asked to be a reviewer.

Annually, Pediatric Research also asks members of the three societies to participate as a reviewer. We are the official journal of the APS, SPR, and European SPR. Every year, we have a campaign in the summer to ask members of those societies to volunteer to be reviewers for our journal.

The body of people who do the reviews is very important. Reviewers usually contribute the most to the turnaround time. Kurt was talking about how fast people want to see their articles accepted, and that really depends on getting reviewers and having the reviewers turn in their reviews on time. At Pediatric Research (I can’t speak for the other two journals), if your article is accepted and you’ve finished all the final paperwork, your article will be online in 3 to 5 business days. That counts as being published online, even though it’s not the final copyedited version, and it will be available in PubMed at that point.

Turnaround times are really important to most authors. The rewards of being a reviewer, I think, need some attention at our academic institutions. I don’t think that our departments and medical schools recognize this service nearly enough as a contributor to somebody’s promotion. There’s an organization called Publons that makes it easy for people to track the number of papers they’ve reviewed. Either the journal automatically sends the record of the review to Publons, or you can just send the thank-you letter that you receive from the journal to Publons. They put it in a bibliography for you, and if you need it for promotion, Publons will send the list to you.

The other reward for reviewing is that it really keeps you up to date in your field. I’m not a fan of the preprint world, because I really think everything needs to be peer-reviewed before it gets out there. The only way you can see a lot of data and research in your field is if you are reviewing articles that are submitted. It also keeps you in touch with the Editors who sit on your editorial board, who happen to be some of the leaders in your field. So, reviewing allows you to be in communication with them. At least for Pediatric Research, we feel that our society members have an obligation to review, especially other society members’ articles, just to support the field. Thank you.

(SD) Thank you, Cynthia. I want to thank all three of you for your introductory comments. We have a lot of questions from our audience. The first question is more focused on grant funding. Publications are critical for successful grant funding. I am going to pose this question specifically to Cynthia, as the Division Chief of Neonatology.

This question is about R01 funding, which is often not enough to support a lab. For example, an R01 may pay 25% of your salary while you also devote 10% of your time to clinical work. As someone who may be transitioning from a K to an R, how do you obtain additional funding, not only for the lab, but to protect your time?

(CB) I think you could put yourself on an R01 grant at up to 40% effort, and I think you need to have a conversation with your Department Chair about how that salary gap is covered. Typically, there’s an NIH ceiling on salary (I think this is around $199,000), and I know most neonatologists make more than that. That difference is what’s called the salary gap, and different organizations find the money to pay for it from different sources. It helps to talk to your Department Chair about how that’s going to be covered. At Case Western, we have a lot of internal funding. We have core facilities and a CTSA. These provide opportunities for resources and internal grants. If you can incorporate them into your research plan, that’s another way you can extend your funding.

There are also supplements available through R awards that people have used. There’s a minority supplement, and there are other supplements and RFAs for supplemental grants that can piggyback onto your R01. To tell you the truth, I’ve had a couple of R01s, and I’ve pretty much discovered that they can cover the cost of running the projects that I have proposed. I think the problem for funding occurs when you want to obtain preliminary data to compete for a second R01. You may have to find other funding to obtain those preliminary data so that you can apply for a second R01. Again, some institutions will give some of the indirect costs back to the PI in order to achieve this goal, but others won’t. So, it really helps to know the indirect-cost policies at your particular institution. It also helps to know the strategy for covering that salary gap.

(SD) Great, thank you, Cynthia. Kurt, do you have any other sage advice?

(KA) Sure. One thing that I encourage for young folks, and follow myself, in response to what Cynthia so beautifully described as the cost pressure of sustaining a laboratory in excess of a single R01, is to look for opportunities for a secondary project that may not require your direct leadership but uses your strengths. In other words, team science: you establish a research program with another individual where you are secondary in leading that project, but it expands your opportunities for productivity. It also expands your opportunities for other streams of potential funding. It also helps make up for time and resources when your primary project has run into a hole or is up against the wall, which can occur in any project. You may get waylaid for months or longer, so having a fallback allows you to continue your momentum, both through an active project and during the perpetual search for continuing funding. So, I think a primary and a secondary project, particularly in today’s world with its greater expenses, is a hedge against the consequences of going to a Division Chief or Department Chairman and simply saying, I don’t have a way to fill the salary gap or the research gap that exists.

(CB) Can I make another comment? I just want to build on what Kurt said. We realized with COVID-19 that a lot of people couldn’t get into their laboratories or do patient-oriented research; however, at Pediatric Research and at quite a few other journals, we saw a tremendous increase in the number of research articles submitted to us. We almost doubled the number from 5 years ago in what we call the COVID-affected months, February through July, the period for which we collected our data. Submissions were still going up at the end of July. We will have to take another look at the data in a year, but the number of submissions went through the roof. Only part of that increase was explained by anything related to COVID. The rest, I think, was just that people finally had time to write up a lot of their data and submit it to the journal. It was a good use of time, even though one wasn’t in a lab generating more data. A lot of data was written up and published during that period.

(KA) Over the last 6 months or so, I have worked with trainees in national K12 programs, and what has been stunningly exciting is what I refer to as a pandemic pivot. These young investigators, with their K awards or their first R01 awards, recognize that their technology is relevant to asking a question about or testing a hypothesis related to COVID. These investigators submitted a new application that they would otherwise not have submitted. The young investigators took a risk with their pivot to potentially reap benefits in terms of surprise publications and surprise grants. All of this occurred because they recognized an opportunity that was not in their normal trajectory, and they jumped on it. This pivot is very exciting.

(WFB) Yes, certainly we have seen this with The Journal of Pediatrics. We exceeded our annual average number of submissions sometime in September 2020, and it wasn’t just the over 550 COVID-related articles. This may be, as you both have suggested, people finding the time or the impetus to dig out what was in their file cabinet or lower on their priority list.

(SD) Thank you to all three of you for your insights. It is important to understand cost share, which is the financial gap that Cynthia highlighted. This is the gap that may occur for those whose salary is above the NIH cap. Additional resources may be available internally, through the CTSA, or through foundation funding.

The next question is specific to Bill. What was the website that you referenced that defines authorship? You mentioned it during your introduction.

(WFB) I was referring to the website of the International Committee of Medical Journal Editors (ICMJE).

(CB) Actually, every institution, as far as I know, and every School of Medicine has a policy on authorship too. So, you might want to check the website that Bill mentioned, but you should also look at your own institution’s policy. If there is a dispute about authorship, it’s your institution that’s going to resolve the issue.

(WFB) That’s a great point, Cynthia, because we are not the judge and the jury. We don’t adjudicate when these author disputes occur; we do our best to make sure that all of “our ducks are in line” and then turn it back to the institution.

The ICMJE website gives us what I would consider to be the rules of the game. However, similar to the lacrosse field, the referees can interpret the rules in various ways.

(KA) There’s a third resource available, called COPE, the Committee on Publication Ethics. Anyone can log in and read the discussion threads. You can query author problems, plagiarism problems, and obtain resource references. COPE is available worldwide, and most journals have access to this tremendous resource for their Editors. I use it more as an Editor than I expected.

(CB) Guidance is also outlined in the instructions for authors, so you can always find it there.

(SD) Great, thank you. I am now moving to predatory journals. How does one know a journal is a predatory journal? Bill, do you want to answer that?

(WFB) Well, it’s difficult. There used to be a “blacklist” (Beall’s list), compiled by a librarian from Denver, but that is no longer available. Quite frankly, I don’t know if we need to identify the scammers as much as we need to continue to identify and certify legitimate publishers. There are organizations committed to enhancing ethical integrity. As Kurt mentioned, COPE (the Committee on Publication Ethics) has guidelines. There’s the DOAJ, the Directory of Open Access Journals, and OASPA, the Open Access Scholarly Publishing Association. It really boils down to you as an individual: “Think. Check. Submit.” Think about what journal you’re going to submit to and check to make sure it has some legitimacy. My simple rule, as I mentioned, is if you’ve never read an article from that journal, it should not be your journal of first choice.

(SD) Can you use an article from a predatory journal, as a reference in a non-predatory journal? Is that a problem?

(WFB) Well, that’s difficult, because that means that the reviewers or the Editors are going to have to sort through the reference list. I’ve been in meetings where I’ve seen somebody present data and say this is based on a study published in XXXX, and it’s like “Abman’s Journal of Everything Scientific.” You think—this clearly can’t be a legitimate journal. We must make sure at our institutions that people really understand the “predatory” game.

(CB) We went through this at Pediatric Research, wondering if we should scan the references in submitted articles for references in predatory journals. There seem to be five databases; PubMed is one of them. We thought if the article is listed in PubMed, it’s not from a predatory journal. Unfortunately, there are a lot of start-up journals that aren’t predatory but aren’t listed in PubMed yet, so we didn’t think that was fair.

(WFB) That’s a great point. Let me extend it a bit. Some people submit to a predatory journal either because they are naive or because they want to play the game. They want to pad their CV. It is important for us, as academic institutions, Division Chiefs, and Department Chairs, to keep that in mind as we review CVs. If there are articles published in a predatory journal, ask “was the person naive or deceitful?”

(SD) In regard to predatory journals, isn’t it better to just get your work published in this format, in the “publish or perish” atmosphere of academia?

(WFB) The problem is these journals often disappear as soon as people are onto their game. Many of these have an address that is, for example, a strip mall in Denver. If these journals disappear, your article is erased; there is no archival record. Obviously, I’m very much against the idea.

(KA) As am I.

(CB) The other problem with predatory journals is that they will accept your article and then you can’t submit it someplace else. They will hang onto it forever. Some articles never get peer reviewed or pass through review. It can take months compared to the turnaround times at our three journals. Once you submit your manuscript, it’s not ethical to submit it someplace else. So, I would think twice before submitting to predatory journals. I mean, you may think you’re more likely to get it published, but you may wind up losing it.

(KA) Plus, you are going to have to pay $3500 to $5000 upfront. This is non-refundable, regardless of the outcome. Just another reminder: Editors and journals have a way of finding these things out, because all three journals represented here today use iThenticate, which, as Cynthia said, searches all published content and the web for plagiarism, including self-plagiarism. iThenticate provides editorial offices with protection from authors who may think, well, this one will pass, no one will catch it. I would not play that game, because your career is at risk.

(WFB) Kurt, I’m glad you mentioned that. All of the manuscripts that are submitted to our journal are put through the cross-check process. The difficulty is sorting out the data, because you get a list of how much overlap exists and where the overlap is placed. If it’s in the methods section, that may be OK. There are only so many ways you can write how you have done your analysis. You have to make a decision when you receive that printout, and often “overlap” is the reason that we reject.

(CB) Some journals, like our journal, don’t check every manuscript that comes in, because there’s a cost to iThenticate, and I think it’s per review. So, Pediatric Research won’t run iThenticate until we are fairly sure that the article is going to be accepted. At the time of the last revision, we tell the authors if the paper is not meeting the iThenticate score. An article can have up to 5% overlap with another publication at Pediatric Research. Again, it’s not the reference list, or the authors’ institutions, or the methods section that concern us, but the introduction, results, and discussion. We ask authors to edit so that they lower their iThenticate score, so the text is not taken verbatim from someplace else. Also, iThenticate can be set for sensitivity. I’d be curious, Kurt and Bill, how sensitively do your journals run iThenticate? You can have it pick up three words that are the same, or six words in a row that are the same. So, you can actually vary the sensitivity of the program. I think we’ve been using six words. We’re not down to three words.

(KA) Yes, we have gone as low as six words, but we pay particular attention to whole sentences and entire paragraphs in the Anatomical Record.

(SD) That was a great discussion. Is it acceptable to use a medical writer, for instance, to help with final grammar review, to format data and references, in order to help you meet general requirements?

(KA) For the Anatomical Record, and the other journals that are part of the American Association for Anatomy, the answer is yes, that is acceptable, with some caveats. Whenever a professional writer contributes, the article has to go back to the authors. The authors have to sign off that the meaning and the content of their paper was not changed by the professional writing service. We obtain English writing service from the publisher, which is Wiley, for only accepted papers. Emphasis is placed on improving grammar and punctuation, and word choice and phraseology for clearer meaning and understanding. We limit this service to accepted papers because the service is expensive for the Journal. Also, the editorial office, associate editors, and reviewers assess the quality of English writing during the submission and review steps.

(SD) Great, thank you, Kurt. Have you noticed any new types of scientific misconduct that have emerged in recent years that perhaps weren’t a concern in the past?

(WFB) I think we’re better at detecting misconduct, and I think there’s a greater awareness. We talked about this with plagiarism—being able to use tools such as iThenticate to cross-check, so overlap becomes more obvious. I think salami science is now more obvious—where you take a big study and slice it into little pieces. I think that is becoming more prominent and with COVID, we’ve seen a variant of this approach—submission of cases by more than one subdiscipline within an institution. That is clearly something that has gone on for years, but with COVID, everybody wants to get information out right away.

(KA) Stephanie, my answer is yes to your question, and it is something that has happened globally. By global, I mean journals published not only in North America but also in the UK and in Europe in 2019 and 2020. Globally, this has been seen in hundreds and hundreds of papers. The specific issue involves a company that is a processing plant for Western blots and mRNA measurements. Many laboratories signed contracts to send their samples to the company. Our journal received 11 such submissions with fabrication and falsification of data, mainly graphs of FACS, immunoblot, immunohistochemistry, and histology. Detection requires digital graphic analyzing software to see manipulation of axis labels, gel lanes, and tissue sections: for example, identical graphs or gels in different papers that have different X-axis or Y-axis labels (conditions). This issue has thrown a real monkey wrench into the peer review process and accountability. I have a stack, about eight inches tall, of printed papers that we combed through by hand to verify that FACS scans, gels, et cetera were the same ones used in multiple papers. The outcome of each investigation resulted in withdrawal of each submission or retraction of each publication because of concerns about the integrity of the data.

(WFB) Kurt, I assume you also were looking at image manipulation, which is so easy to perform now, with all these graphic tools.

(KA) Yes, unfortunately, it is too easy to do. Image manipulation is chipping away at the tree trunk of integrity of science, which we cannot afford.

(WFB) Journals got off to a bad start with COVID. You remember that the first paper published on COVID was subsequently retracted and that just set up even further skepticism among the public. Who can we trust?

(SD) Absolutely. How do you as Editors identify image manipulation?

(CB) We do have a lot of images in our manuscripts since we publish basic translational science. This is the first I’ve heard of these issues, so I’m sitting here, terrified right now. We don’t run images through a program like iThenticate. The only thing that might show up in our peer review is that our reviewers recognize that the article is very similar to something else that they’ve seen. That’s the only time that we’ve actually picked up that kind of problem. However, it hasn’t been a manipulation of an image, at least at this point. So, I’m going to have to go back and ask our publisher, how they actually screen for image manipulation.

(KA) I hate to add to your nightmares, Cynthia, but go to PubPeer. Be sitting down, because you will be shocked at what you see. It is a helpful website because they do use digital graphic analytical software and algorithms to look at pixels to detect whether individual pixels are manipulated. In fact, PubPeer identifies all papers that share the same manipulated digital content. Also, the title page and digital object identifier for each incriminated paper are provided. We used the reports by PubPeer to lead our investigations. Unfortunately, I had to get to know the PubPeer reports quite well because of the 11 submissions that we received that had dubious scientific integrity.

(CB) Did you know about these 11 submissions due to the website? Or did you know because all the articles had figures related to a certain company?

(KA) Actually, it was really simple. PubPeer contacted our editorial office and said, hey, look what has happened within your journal. Your journal has been victimized, which opened the door for investigation.

(SA) Can I interrupt? I just want to add that this is such an important topic. I usually don’t say anything at this stage, but I think this is so vital because the NIH and others have recognized issues with reproducibility of science as an ethical issue. How we solve problems, what we do, how we approach disease, how we train folks: these issues have huge implications. So, I would love to hear from each of our panelists: if you identify plagiarism or image manipulation, what do you do? Also, what should all of us do, since it’s a team sport? It’s the lab director, it’s the Section Chief, it’s the Chair, it’s the Dean, but as the Editor, how do you play into that?

(CB) We follow the COPE guidelines and a lot of times there are discussions between the editorial office and the publisher. Our publisher, Springer Nature, is really facile with knowing where the resources are, and where we should go to find out what our strategies should be, but the backbone is the COPE guideline. So, there are times when we reject the manuscript, but we tell the authors that we’re going to notify their institution regarding our findings.

(WFB) Let’s just say, with iThenticate and Crossref, we say this is too close, and we editorially reject the article. However, it sometimes gets to the point where you say, come on, this is more than just sloppiness. This is more than just cutting and pasting. This must be reported to the institution. There’s been talk about blackballing certain authors. I probably wouldn’t go that far. I think that this goes back to the institution. The institutional leadership needs to be informed, and then they can decide.

(KA) The policy in our journal is when we have a reviewer identify a potential manipulation of figure contents, we ask authors to send their original data files. We stop the review while we wait and we inform the authors of this pause. We give them a secure website into which to provide their original data files. What has been interesting is some authors do not like that kind of scrutiny of their original data, so the authors disappear. I think that says something about those individuals.

Within my lab, a second, naive pair of eyes looks at data and does data-history tracking to protect the integrity of our data. Otherwise, a hazard is not having daily, mentored oversight and training interaction to assure good science, with an expectation of ethical conduct.

(SD) How should a mentee address the following issue: he or she feels his or her mentor used her or his research idea and turned it into a grant to obtain funding? In other words, how does he or she deal with stolen research without fear of repercussion?

(KA) My institution recently had some experiences like the one you describe. The challenge is that a person who is responsible to another person runs the risk of intimidation, or worse, when identifying concerns about data. My institution created an Ombudsperson position that allows individuals who are in a precarious situation, such as having to answer up a hierarchy, to identify concerns. Protections are used. The process has been navigated successfully. I think that, short of having a mutually respectful relationship with a mentor, a sense of workplace safety should exist for confidential discussions about a problem. I remain pleased with how my university is addressing this important topic.

(SD) Thank you, Kurt. Should journals provide compensation for individual reviews, or for those on editorial boards? If there is no compensation, then how do you balance reviewing with all the other responsibilities that you have as a faculty member?

(CB) Well, we don’t pay our reviewers. I think that would be a conflict of interest. We don’t pay our editorial board either. It’s volunteer, but I think being on an editorial board and being a peer reviewer actually does help you with your promotion. These are non-monetary rewards. I think a lot of people probably just conduct reviews or complete editorial duties during their scholarly time. I know this is the approach at Rainbow Babies. I read this in the faculty handbook, because I had to sign that I read this handbook before I took my position. Specifically, a full-time clinical FTE is defined as working 36 hours a week, but is expected to work 50 to 55 hours a week total. So even if you’re a clinician educator, that gives you 14 to 19 hours a week that you’re spending on your scholarly activities. I don’t know how it works at other places, but I think that’s a pretty good deal. Peer reviewing would fall into this scholarly time.

We do offer our top reviewers an honorarium at the end of the year, because we like to honor those people. I don’t think it’s a great incentive, because I think of an incentive as similar to a Pavlovian response; the honorarium comes too far beyond the time when the actual review occurred to be considered an incentive. We also give tokens of appreciation to our editorial board and allow them to decide whether they’re going to actually take the token or re-invest it in the journal. Last year, about 30% of the Editorial Board decided to re-invest that token in the journal. With this resource, we started a new project called The Kids Science Project, where we invite teachers to partner with one of the editorial board members to design a project around one of our articles, which can be rewritten in language that their peers would understand, turned into a YouTube video, danced out, or something similar. They can be creative. At the completion of that project, we pay the teacher an honorarium for helping us. We actually just had our first project completed, and it should be online. That’s the only remuneration that anybody receives.

(KA) Yes, that is an interesting question. Peer review is a payback system in many ways. You submit a manuscript. You want someone to provide constructive feedback. So, reviewing is like any other public service, it is one of the obligations of the field.

(KA) We at The Anatomical Record do find that the Pavlovian approach works. We provide dark or milk chocolates. You get to people’s stomachs, you get to their hearts and their heads.

(CB) I once sent an editorial board member a bag of goldfish, because he finally found reviewers to review a paper on fish consumption after something like 27 invitations to review.

(SD) Do editors think that steps toward open data sharing and other measures to increase transparency, such as open peer reviewing, publishing peer review reports, would help build trust in the peer review process and the published science?

(KA) My answer is yes. It certainly increases transparency, but there is a cost. For example, I agreed to review a paper that had more raw data, excel spreadsheet files for sequencing results, than the length of the paper. As a reviewer, my cost is high to read and critique all of the supplemental data. Nonetheless, I think that the availability of these large datasets improves transparency. Such datasets also provide opportunities for discovery by someone who reads the datasets.

(SD) Another question has to do with reviewers working with trainees. Do you encourage that, and if so, where should it be disclosed?

(WFB) I think this is a great way to interact with young investigators and young trainees. We would certainly encourage this, but you have to disclose it. You might say this paper was reviewed with “so and so” for transparency—obviously when you receive a manuscript to review, this manuscript is not to be shared. It should be for your eyes only. However, we certainly would encourage you to ask young individuals to review with you. We love to have young individuals involved in the peer review process. We’ve had residents and fellows on our editorial board, and they have had tremendous input. So yes, I very strongly encourage it, but disclose it.

(CB) We also encourage this at Pediatric Research. Actually, our invitation letter to reviewers encourages them to ask a more junior member and to supervise them. It is disclosed that a junior member was involved in the process when they submit their review.

(SD) Another question is about articles that have beautiful graphics. Any suggestions of software that can be used to make infographics or flowcharts, for instance? Do any of you have any suggestions? Kurt do you have any suggestions?

(KA) My first suggestion would be to find a digital graphic artist at your institution and introduce yourself to him or her, and say I’d like to learn how to make a digital graphical abstract. Graphical abstracts are a terrific way to display intellectual content visually in a small compartment. Once you receive that lesson, you can be creative in illustrating your science to others.

Photoshop (I do not have any financial interest in Adobe) is certainly one route to go. There are other commercially available software packages for creating scientific artwork. An efficient way is to introduce yourself to the digital graphic artist at your institution. Be prepared to have fun and learn, and you probably will want to buy two large monitors and a more powerful computer.

(WFB) I was going to say there are also useful articles, websites, and textbooks. One author I am familiar with is Edward Tufte, who wrote Envisioning Information. He talks about how the world is multivariate, so your display should really be high dimensional, and he gives a number of useful examples. So that’s a nice quick read with some good graphics.

(CB) I attended one of his seminars; it was a present for my birthday. I wouldn’t recommend it; stick to the book. At Pediatric Research we look for mechanistic pathways in our basic translational articles. Then we ask our authors to send us a graphic explaining that pathway, which can be hand-drawn. Springer Nature actually has a graphic artist who interprets and converts that drawing into a graphic illustration. So, there are examples of this in the journal. I think some of the journals also do the infographic—what did you call it, Kurt?

(KA) The graphical abstract.

(CB) Yes, the graphical abstract. I think Archives might do that. I think they do this for the authors. I don’t think the authors actually do it. Some places you can get the graphic artists, as part of the package.

(SD) These are great suggestions.

(WFB) Not to digress, but I think the use of clear graphics in presentations also is key. Again, it’s knowing the rules, running your presentation by individuals, and making sure you’re telling your story clearly and concisely.

(SD) Absolutely. Authors often feel that the reviewer comments are all addressable with revision, and ultimately their manuscript may be rejected. Can you comment on how you address this issue?

(WFB) Certainly, on the review form for the Journal of Pediatrics, there is a box for comments to the author, and a box for confidential comments to the Editor. There may be a night and day difference. The authors only see the comments addressed to them. They don’t see the confidential comments. This can be dealt with in a number of ways. You can weave the comments into your rejection letter or your revision letter. Authors need to understand that there is that dichotomy.

(KA) I totally agree. I am managing a submission for which the recommendation of three reviewers was to reject. The submission reports an interesting finding, but it could have been presented differently to anticipate potential concerns raised by the reviewers. I reached out to the corresponding author to inform them of the reviewers’ comments, but also that I saw merit and value in their study. The author was delighted with my outreach and agreed to take the reviewers’ comments and my suggestions to compose a new version for submission. I informed the reviewers so that they were not blindsided. We await the new submission. I think that one of the enjoyable, and also challenging, roles of an Editor is juggling the input of reviewers with the potential for an impactful report of science. If they so choose, Editors have some latitude.

(CB) I would have to agree with that, too. We have a two-tiered editorial board. Eleanor Molloy, who is the Associate Editor in Chief, and I alternate which papers come to us. Then we have Section Editors that we can assign; the Section Editor can assign an Associate Editor, and the Associate Editor is the one who finds the reviewers. So, a lot of times, an article may go down a different pathway once it has been through all those Editors and out to reviewers. The decision may come back totally different. So, if I receive something back where the Editors are at odds with each other, or the Editors are at odds with the reviewers, I tend to have personal conversations with them. We have the option of a final decision of reject and resubmit. This option may be for something that’s really interesting but needs more data. We can use this option to take the manuscript down a different path of peer review. The authors would resubmit the manuscript, after discussion with me, and then we go to a different Section Editor and down a different path. This also breaks the chain.

(SD) This has been a fantastic discussion. We were unable to get to all of the questions. I want to remind everybody that this session was recorded and will be on the SPR and APS websites. I really want to thank Cynthia, Bill, and Kurt, for your very sage advice and knowledge on journals. I will now turn this over to my colleague, Steve.

(SA) Thanks to each of our panelists for an open and insightful discussion. This was a great session, covering much ground that is not often clearly expressed to those of us in academic medicine. Certainly, being an Editor is no easy task, and is clearly more than just pushing papers around and communicating decisions. There are many issues about ethics, integrity, and the quality of science. I think those topics were so nicely presented by everybody. So, thank you very much.