After half a century of citation indices, several intriguing questions remain. Are the most highly cited papers the most important ones? Does science make progress mostly through evolution or through revolution? Are these two processes mutually exclusive or complementary, and which do high citations most reflect? Are surprising findings difficult to publish?
Highly cited papers are nodes in the network of the dissemination and discussion of scientific information. But citation counts alone cannot reveal why a paper is considered so important as to attract repeated mention by other researchers. To contribute to these debates, we surveyed the most-cited authors of biomedical research for their views on their own influential published work (refs 1–3).
We got some intriguing feedback. The vast majority of this elite group felt that their most important paper was indeed one of their most-cited ones. Yet they described most of their chart-topping work as evolutionary, not revolutionary.
Best of the best
We listed the 400 most-cited biomedical scientists in the period 1996–2011 (ref. 4). We selected each author's ten most-cited papers (adjusted for publication year) published in 2005–08, and asked them to score the papers (on a scale of 0 to 100) in six dimensions.
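The year adjustment matters because older papers have had longer to accrue citations. One plausible way to implement it (the survey's exact normalization is not specified here, so this is a sketch under assumptions) is to rank each paper by its citation count divided by the median count of same-year papers:

```python
from collections import defaultdict
from statistics import median

def top_ten_year_adjusted(papers):
    """Rank papers by citations relative to same-year peers.

    papers: list of dicts with 'year' and 'citations' keys
    (a hypothetical input format for illustration).
    """
    # Group citation counts by publication year.
    by_year = defaultdict(list)
    for p in papers:
        by_year[p['year']].append(p['citations'])

    # Median citations per year; floor at 1 to avoid dividing by zero.
    year_med = {y: max(median(c), 1) for y, c in by_year.items()}

    # Sort by year-normalized citation ratio, highest first.
    ranked = sorted(papers,
                    key=lambda p: p['citations'] / year_med[p['year']],
                    reverse=True)
    return ranked[:10]

# Hypothetical example: a 2008 paper with 40 citations outranks a
# 2005 paper with 100, because it outperforms its own cohort more.
papers = [{'year': 2005, 'citations': 100},
          {'year': 2005, 'citations': 50},
          {'year': 2008, 'citations': 40},
          {'year': 2008, 'citations': 10}]
print(top_ten_year_adjusted(papers)[0])  # → {'year': 2008, 'citations': 40}
```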
We restricted the period to 2005–08 because the perception of a paper's importance can change over time (ref. 5). Old, highly cited papers might become stereotyped (ref. 6): people unquestioningly treat them as canonical. Recent papers (those published within the past five years) have had insufficient time to accrue citations.
Overall, 123 scientists responded, scoring 1,214 papers between them. On average, investigators tended to give their blockbuster papers high scores for dimensions that reflect evolution: Continuous Progress, Broader Interest and Greater Synthesis (for definitions of these terms and extended data, see Supplementary Information). They gave the papers lower scores on average for dimensions that reflect revolution: Disruptive Innovativeness and Surprise (see 'Self assessment'). They also indicated that blockbuster papers were easy to publish, with some exceptions.
Twenty scientists (16%) felt that their most important paper published in 2005–08 was not among their top ten most cited. However, most of these 20 papers were still heavily cited (on average in the top 3% published in the same year in terms of citations; seven were in the top 15 papers that the author published in 2005–08). Authors scored these papers higher for Disruptive Innovativeness (in nine cases) and Surprise (in five cases) than their ten most-cited papers.
Fifty-two papers were appraised by at least two authors. We evaluated co-author agreement by comparing each score to the median of that author's scores on the same dimension. We counted a pair of responses as agreeing when both authors scored a paper above their own median, both below it, or both exactly at it. With random responses, the expected rate of agreement is 50%; the observed rate ranged from 74% to 86% across the six dimensions, although the sample size was limited.
Predictably, the strongest correlations were between Disruptive Innovativeness and Surprise, and between Surprise and Publication Difficulty. We had expected that papers would be perceived as either evolutionary or revolutionary, not both. A skew towards evolution or revolution would have shown up as a strong negative correlation between Disruptive Innovativeness and Continuous Progress, but this relationship was not statistically significant. Instead, Broader Interest scores correlated with strong scores for both revolution and evolution.
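Pairwise correlations of this kind can be computed directly from the per-paper dimension scores. A minimal sketch, using hypothetical scores and a plain Pearson coefficient (the survey's exact correlation statistic is not specified here):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-paper scores on two dimensions: papers rated as
# more disruptive also tend to be rated as more surprising.
disruptive = [90, 20, 70, 10, 60]
surprise   = [85, 25, 65, 15, 55]
print(round(pearson(disruptive, surprise), 2))
```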
Our survey, of course, has limitations. First, only just over 30% of the contacted authors replied. The non-responders might have given higher scores for revolution or evolution. Second, we assessed papers published only in a narrow time frame, in an effort to achieve uniformity. Third, authors may appraise their own work more positively than work by others.
A fourth caveat is that the sample of the most-cited scientists is biased in favour of individuals whose work has already been widely accepted. Those with paradigm-changing ideas that are not so accepted (and thus cited) would not be in this group. Finally, scientists whose citations are closer to the average may have scored these highly cited papers or their own middle-ranking papers quite differently.
We suspect that among work with more-average citation counts, evolution scores would have been even more prominent. True out-of-the-box innovation and major surprises are probably not common across the literature.
This small survey is a salutary reminder both of how much information citations can convey, and of how much about the science of science we have yet to understand. For example, given that revolutionary papers are quite rare, how might we identify them early? Do innovative papers make connections between areas of knowledge that are not typically made (ref. 7), or do they get cited early on by papers in remote fields?
In the future, with all the low-hanging fruit plucked, will we see (among the most-cited papers) a relative drop in the percentage of revolutionary papers and a corresponding increase in papers that provide a synthesis of the literature? How quickly does stereotyping of opinion about the importance of a paper happen, and how quickly and how much do such opinions change over time? What proportion of the most important papers across each of these six dimensions might be found among the output of the large majority of scientists with more-average citation profiles?
It would be particularly useful to know whether successful out-of-the-box ideas are generated and defended largely by the most influential scientists or by colleagues lower on the citation rankings. Would the opinion of scientists who cited the top papers that we examined square with the opinions summarized here? Are there other dimensions in addition to the six that we examined that might capture the essence of important work? For example, some respondents pointed to the significance of translational potential and social impact for research, which might have been captured only in part under our Broader Interest dimension. And it would be interesting to know whether there are major differences in the evolution-versus-revolution pattern in the physical sciences.
One way to answer some of these questions would be to survey those who cite the highly cited papers or investigators with more-modest citation rankings. We must continue to hone indices other than citation-based metrics to complement appraisal of scientific accomplishment (ref. 8).
- See News Feature page 550