Research grants: Conform and be funded


Too many US authors of the most innovative and influential papers in the life sciences do not receive NIH funding, contend Joshua M. Nicholson and John P. A. Ioannidis.




Author information


  1. Joshua M. Nicholson is in the Department of Biological Sciences, Virginia Tech, Blacksburg, Virginia, USA.

  2. John P. A. Ioannidis is at the Stanford Prevention Research Center, Stanford, California, USA.



  1. Elena Martinelli said:

    I appreciate the need to investigate the NIH review system to ensure that the "best" research is funded (especially in a period of such tight funding availability). However, I am not convinced that the "best" and "most original" (as opposed to "conforming") research can be quantified by number of citations. Often the most highly cited papers and scientists are simply the most well-known figures in a specific field. These "highly cited scientists" commonly publish in high-impact journals more easily than "unknown" young investigators do, and this certainly helps increase their citation counts. Finally, more often than not, these scientists do not participate in study sections because they are "too busy" being "too famous". I personally think that the decision of whether to fund a project should be based ONLY on the merit of the proposed project. Once it is verified that the PI has enough "potential" to execute the research, he or she should not be discriminated against because of his or her publications. Moreover, young scientists should be encouraged to participate in study sections, where they can learn more about the process and might even be less biased by the relative "importance" of the name of the PI on the application.

  2. Pratik Desai said:

    I agree that people tend to fund research that is familiar or similar to their own, leading to an unintended conformist culture. However, this has to be balanced against the need for reviewers who are actually experts on the topic and are truly capable of judging the scientific merit, as well as the feasibility, of an application. Comment sections are already rife with complaints about grants rejected for silly or uninformed reasons, likely because one of the reviewers didn't understand the area. By adding some type of "stakeholder" who may or may not fully understand the issues, you are only worsening the 'bad reviewer' problem.

    The idea of guaranteeing funding to those who have authored one or two highly cited papers is also unconvincing. Citation metrics are popular because they are easy to calculate: people use what is available, and what is available are citation counts. Citation counts have numerous problems that are documented elsewhere. As an example, the developer of a statistical tool used by many labs can receive thousands of citations without being particularly innovative or groundbreaking. You can have a highly cited paper because your institution had access to a new technology and you were able to do one of the first studies on a topic that later turned out to be wrong. And just as study sections can be self-reinforcing, so can the author/reviewer networks at high-impact journals. You can certainly get a boost in citation counts by appearing in a high-impact journal, even if the paper turns out to be mediocre.

    But assuming that you did once publish a truly innovative, high-impact paper, should that guarantee what amounts to a 'free ride' for the rest of your career? Currently, NIH MERIT awards take into account a long track record of productivity, and a decade of 'free ride' for these awardees is generally very well deserved. Your proposal, which amounts to changing the criterion for MERIT awards to 'has published one paper with more than 1,000 citations', does not sound particularly good.

    In reality, the bigger picture is that the current problems are caused by abysmal success rates for everyone, famous and non-famous alike. The editorial in this issue correctly points that out. Lack of funding has caused what amounts to infighting among applicants, with various groups arguing that they should receive preferential treatment. These proposals include:

    • Give more funding to new/young investigators
    • Give more funding to famous people (this article)
    • Give more funding to those who have funding (to ensure continuity and stability)
    • Give more funding to those who don't have funding (spread the wealth)
    • Increase the size of grants
    • Decrease the size of grants
    • Increase the funding period
    • Decrease the funding period
    • Decrease the indirect funding
    • Increase the number of attempts

    and so on. The underlying problem is not that too few attempts are allowed, or that not enough famous people are being funded, or any of the other issues. The problem is that there is too little funding relative to the number of applicants.

  3. Jim Woodgett said:

    Ouch. There is a lot in this study that weakens its conclusions. An arbitrary cut-off of 1,000 citations was perhaps necessary for practical reasons, but setting it at such a high level is undermined by author repeat frequency: most 1,000-citation authors do not reproduce this level of citation, so these papers are anecdotal "black swans" and highly atypical of career behavior. Moreover, as the Nature editor who contacted several of these authors discovered, they have varied and justifiable explanations for not having current NIH funding. There is also no recognition of the effect of funding agencies such as HHMI, which skim off some of the cream of high-performing researchers, effectively removing them from the NIH system (it is more difficult to compete for NIH awards while holding substantial HHMI funding).

    There may well be network bias among study-section participants, but perhaps this is due to legitimate reasons such as learning to write better grants, interacting with fellow scientists, picking up new ideas, and learning from others. The best thing a young investigator can do is accept an invitation to a study section. This is not gaming the system; it's a reflection of the need to understand the myriad elements on which a proposal is judged. The Australian MRC proposed eliminating current grant holders from its reviewer pool due to "conflict of interest" but found the remaining pool insufficient. The Canadian CIHR is considering making all grantees obligate reviewers. Thus, there are major differences in approach. Surely, what matters most for effective screening of scientific ideas is that:

    1. The most meritorious applicants are identified to receive funding
    2. The process is transparent, lacking in bias and is as fair as possible
    3. The process does not influence or shape the type/style/content of applications

    All three elements depend on the quality of reviewers. If that is assured, the remaining issues (whether real or not) should take care of themselves. We do not train reviewers well, and we do not evaluate their performance (to weed out bad apples). Such quality-assurance methods are essential for effective peer review.

  4. Richard Ebright said:

    Ioannidis and Nicholson either are ignorant of how biomedical research is conducted or are willfully deceptive.

    Their analysis involved first, last, or single authors of publications with >=1000 citations.

    The terms of their analysis ensured unreliable, invalid conclusions, for two reasons:

    First, in the biomedical sciences, the norm is to assign first authorship to the persons who collect data--typically junior personnel in training and technical positions--rather than to the persons who conceive and design the research. As such, first authorship is not a reliable indicator of creativity and scientific impact.

    Second, in the biomedical sciences, the majority of publications with >=1000 citations are reviews and clinical-trial reports, both of which are strictly descriptive, and neither of which, even remotely, is a valid indicator of creativity and scientific impact.

  5. Joshua Cherry said:

    Nicholson and Ioannidis claim, based on current funding status of authors of extremely highly-cited articles, that the US National Institutes of Health (NIH) fails to fund innovative science, instead supporting mediocrity and conformity. This analysis is flawed in rationale and provides little insight into how the process can be improved.

    Ioannidis himself has stated that highly-cited publications do not necessarily represent the best science [1], and has suggested that they are more likely to report spurious results [1,2]. Most scientists can point to highly-cited work that is deeply flawed, and to innovative work that has received little attention. The highly-cited publications analyzed include many valuable contributions to science, but few are strikingly innovative, and some are mundane.

    Ioannidis reminds us elsewhere [3] that "it is often difficult to see what an author has truly done in a specific paper". Even if a publication is innovative, it would be wrong to conclude that the first and last authors are both exceptionally innovative.

    Perhaps most dubious is the implication that high citation is associated with non-conformity. Surely it is often the result of mainstream work in a popular, even fashionable, area. Nicholson himself has claimed that non-conforming views are relegated to obscure journals and ignored [4].

    It is unfortunate that such an unsupported and inflammatory conclusion was given a high-profile venue and accompanying media exposure. It is also a shame that the list of publications analyzed was not provided on the Nature web site. Public availability of raw data is important to science [5]. Inspection of the list would lead many to conclude that authorship of one of these publications is not a reliable indicator of exceptional innovativeness, non-conformity, or likelihood of submitting a meritorious grant proposal to NIH in the relevant timeframe. It would also reveal that the vast majority of the relevant publications cite NIH funding, undermining the conclusion that NIH does not fund innovative research.

    1. Ioannidis, J. P. A. & Panagiotou, O. A. (2011) Comparison of effect sizes associated with biomarkers reported in highly cited individual articles and in subsequent meta-analyses. JAMA 305, 2200-2210.

    2. Ioannidis, J. P. A. (2005) Contradicted and initially stronger effects in highly cited clinical research. JAMA 294, 218-228.

    3. Ioannidis, J. P. A. (2008) Measuring co-authorship and networking-adjusted scientific impact. PLoS ONE 3, e2778.

    4. Nicholson, J. M. (2012) Collegiality and careerism trump critical questions and bold new ideas: a student's perspective and solution. BioEssays 34, 448-450.

    5. Alsheikh-Ali, A. A., Qureshi, W., Al-Mallah, M. H. & Ioannidis, J. P. A. (2011) Public availability of published research data in high-impact journals. PLoS ONE 6, e24357.

  6. Peter Cary said:

    On behalf of John Ioannidis:

    I disagree with Richard Ebright's view that "first authorship is not a reliable indicator of creativity and scientific impact". Isn't a young scientist who first-authored a paper that 5-10 years later reaches the top 0.01% of citations worthy of independence? If this view is common among established senior investigators, it explains why young superstars abandon the NIH-academia path without even trying or applying.

    You find reviews and clinical trials "strictly descriptive, neither of which, even remotely, is a valid indicator of creativity and scientific impact." Top-0.01%-cited reviews in basic science advance pioneering ideas (reviews offer a liberty lacking in "original" papers), develop new hypotheses, and/or are written by field leaders. In clinical science, systematic reviews and meta-analyses are the most important original research (JAMA 2005;293:2362). In genomics, meta-analyses are responsible for almost everything we know about the genetics of common diseases. Clinical trials represent original, experimental, not "descriptive" research. Their past NIH funding ensured medical progress: for example, demonstrating the effectiveness of antihypertensive therapy, or the ineffectiveness of hormonal therapy and beta-carotene despite extensive promising "basic" research. Currently clinical research is delegated to industry (BMJ 2006;332:1061), with dire consequences for science, patients, and innovation, and derailed healthcare budgets.

    Independently, the President's Council of Advisors on Science and Technology reached identical conclusions, calling current NIH innovative grant mechanisms "tiny, almost invisible" (Science 2012;338:1274), with only 50 awards made this year in the three director's categories among 35,944 grants. NIH is the global beacon for research. It deserves a much larger budget and must efficiently support superb scientists of diverse scholarship. Having leading basic scientists dismiss clinical/translational scientists and their research – or vice versa – fails this goal.

  7. Peter Cary said:

    On behalf of John Ioannidis:

    Cherry perpetuates the misconception of focusing on the papers rather than the excellent scientists whose work is high-marked by top-cited papers (Nature 2011;477:529). We carefully used the term "influential" in framing our citation analysis. If one ignores objective citation data, identifying what is influential becomes so subjective that reviewers may consider only their own sub-field worthy of funding. This is ironically attested by the comments on our analysis, in which leading scientists discard all highly-cited papers that differ from what they do themselves. This is also what our analysis of similarity fingerprints showed: reviewers fund grants similar to their own work. Given this human limitation, extremely innovative projects suffer the most, since no reviewer is doing anything similar, and bias against novelty in grant review is well documented. Extremely innovative work is rare and difficult to anticipate in grant proposals. Even among research findings published in the top basic-science journals and heralded as major innovation promises, only 5% are successfully translated within 25 years (Am J Med 2003;114:477). What can be (partly) measured is researcher excellence (influence): excellence predicts further excellence (PNAS 2007;104:19193).
    I fully encourage transparent, detailed author contributions (JAMA 1997;278:579). NIH funding mechanisms could actually help enforce their routine adoption to optimize credit attribution. Until then, I have to trust that first and last authors fulfill Vancouver criteria and should get credit for their work.
    I made all my data immediately available to Cherry, Santangelo, Salzberg, and many other investigators who requested them, as promised in the commentary. A major advantage of citation data is that they are already publicly available, either for free (Google Scholar, Microsoft Academic Search) or to the thousands of subscribing institutions (Scopus, Web of Knowledge). Anyone can transparently and reproducibly generate a continuously updated list of the top-cited life/health-science papers in Scopus. Conversely, three weeks after my request, I have not yet received the raw re-analysis of our data by Santangelo, which subjectively eliminated two-thirds of the most influential papers.
    Our commentary noted that if extreme citation metrics were adopted for directly funding investigators, they should focus on unrefuted papers. Scientometrics are not perfect, but they can be very useful, and several countries (e.g. the UK and Switzerland) have started employing sophisticated citation metrics. I welcome experimental studies comparing different modes of funding allocation, but systems that ignore the science of measuring science are unlikely to perform well.

  8. Joshua Cherry said:

    Ioannidis writes that I "perpetuate the misconception of focusing on the papers rather than the excellent scientists whose work is high-marked by top-cited papers". This statement takes as fact the main assumption that is being disputed: that this citation analysis has reliably identified "excellent scientists". In fact Nicholson and Ioannidis have assumed something stronger: they repeatedly equate these authors with "innovative thinkers". Many have pointed out reasons to doubt this assumption a priori. Examination of the list of publications only adds to this doubt.

    Ioannidis' reference to the Vancouver criteria does not address my point. We all know that these criteria are often not met, but let us put that aside. Consider a truly innovative publication with more than one author. Must all such publications involve at least two exceptionally innovative thinkers? Do innovative thinkers never publish with more ordinary co-authors? How many innovative thinkers does it take to produce a highly-cited publication?

    Ioannidis did promptly provide me with the data when I requested them, for which I am grateful. However, few readers will take the time to do so. Many more would follow a hyperlink to a list of the 158 publications analyzed, enabling them to judge for themselves whether the publications identify exceptionally innovative thinkers. Most would conclude that they do not. They would also see additional problems, such as the fact that 10% of these publications have no particular connection to the life sciences or medicine, Scopus classifications notwithstanding. Absent such a readily accessible list, we can only discuss these publications in the abstract; inspection of the list would be far more enlightening for interested readers.
