Analysis

100 articles every ecologist should read


Abstract

Reading scientific articles is a valuable and major part of the activity of scientists. Yet, with the upsurge in the number of available articles and the increasing specialization of scientists, it has become difficult to identify, let alone read, important papers covering topics not directly related to one’s own specific field of research, or that are older than a few years. Our objective was to propose a list of seminal papers deemed to be of major importance in ecology, thus providing a general ‘must-read’ list for any new ecologist, regardless of particular topic or expertise. We generated a list of 544 papers proposed by 147 ecology experts (journal editorial members) and subsequently ranked via random-sample voting by 368 of 665 contacted ecology experts, covering 6 article types, 6 approaches and 17 fields. Most of the recommended papers were not published in the highest-ranking journals, nor did they have the highest number of mean annual citations. The articles proposed through the collective recommendation of several hundred experienced researchers probably do not represent an ‘ultimate’, invariant list, but they certainly contain many high-quality articles that are undoubtedly worth reading—regardless of the specific field of interest in ecology—to foster the understanding, knowledge and inspiration of early-career scientists.

The progress of science is built on the foundations of previous research—we take the flame of our predecessors and pass it faithfully to the next generation of scientists, and so it has always been. But this implies knowing the state of the art of our field, as well as being as aware as possible of progress in other relevant fields. Hence, science can be represented as an ever-growing brick wall of published evidence, which subsequent research bricks can add to—and sometimes challenge, erode or even smash. Scientific articles have more recently also started playing another role: as metrics of the progress of projects and of the ‘quality’ of researchers and institutions1. Regardless of the pros and cons of this additional function, which has been boosted by a parallel increase in the number of researchers2, it has produced an enormous increase in the number of peer-reviewed scientific articles. There are now well over 50 million peer-reviewed scientific articles in existence3, with an increase of 8–9% each year over the past several decades4. This means that over 1.5 million new articles are published each year across all scientific disciplines3.

This metric aspect of publishing has led to an increase in the competitive facet of the publication race, which has precipitated a rush by postgraduate students—encouraged by their supervisors—to focus on rapid publication5, which can inadvertently discourage students from developing a strong knowledge base in the sciences. This rush and the overwhelming load of available reading material make it difficult to remain at the forefront of the methodological and conceptual advances of one’s discipline. Furthermore, it becomes increasingly easy to overlook older papers that might nonetheless be essential for acquiring the necessary understanding of key concepts. Prospective and current postgraduate students are also confronted by another characteristic of modern research: the continued trend towards specialization of knowledge and expertise6,7, which does not favour integration of information on related topics, even from the same discipline.

These challenges are made more daunting by their synergy—too much information, but too little time to obtain, assimilate and process it all. It is self-evident that this harms scientists’ ability to be both rigorous and creative—two complementary features needed for high-quality research. Even experienced scientists find it difficult to push aside grant writing, supervision, meetings and teaching to make time for reading, and often end up reading only the latest ‘hot’ papers4. As online searching has become an increasingly common strategy for identifying needed journal articles8, readers may focus on direct and immediate knowledge needs to the detriment of more foundational reading. Unsurprisingly, important papers covering topics not directly related to one’s own specific field of research, or that are older than a few years, are even more difficult to identify, let alone read. It follows that defining which papers every ecologist—and certainly every ecology student—should take the time to read ought to become a priority to achieve satisfactory ecological literacy9.

Our aim was to collate a list of objectively chosen and ranked seminal papers deemed to be of major importance in ecology, thus providing a general ‘must-read’ list for any ecologist, regardless of particular topic or expertise. We defined a ‘must-read’ paper as one that should be read because it provides information that is particularly relevant for today’s ecologists. These can include well-known classics, lesser-known methodological gems, general demonstrations of fundamental principles or philosophical essays on ecological science. Our approach was to solicit a candidate list from ecology experts (journal editorial members) and then rank those papers according to a random-sample voting process undertaken by an even larger sample of ecology experts.

Results

The ecology experts proposed a total of 544 different papers. As we expected, the distribution of the number of times articles were proposed was highly right-skewed, with most (74%) papers proposed only once (Supplementary Fig. 2), illustrating the great initial diversity of papers proposed, but also the richness of the pool of important papers in our discipline. We then resampled this list of 544 papers for the voting phase without any restriction or distinction among them (that is, completely random samples of 20 from all 544 papers). Overall, 368 respondents voted on 1,558 separate samples of 20 papers, providing 12,410 individual-article votes in total (median = 23 (12–36) votes per article). Before analysis, we removed the few votes for papers that were identified as “Not known” by the respondent, but that they ranked regardless. We provide the list of the 100 highest-ranked articles in Box 1. In addition, we provide in the Supplementary Material a list of the 75 top-ranked papers that were indicated as ‘read’ by the respondents and that were not already in the overall top 100 list. Of note, the proportion of self-nominated papers (that is, papers co-authored by the proponents who suggested them) was low (5.5%), and these were weeded out during the voting procedure and subsequent ranking.

Although it was not our primary aim, the vote provided us with some interesting information on which papers the experienced members of our community deemed to be ‘must-read’. First, correlations involving only the top 100 ranked papers were similar to those for the full list of 544 ranked papers. We found no relationship between the all-article ranking and the 2014 impact factor of the journals in which the 544 papers were published. However, there was a positive relationship between the all-article ranking and the average number of article citations per year (Fig. 1), as measured by the Institute for Scientific Information (ISI) Web of Knowledge in 2014 (we obtained similar results using mean per-year Google Scholar citations). This might be at least partially due to the positive relationship between article age and its rank because older papers are generally ranked relatively higher (Fig. 1). However, the top-ranked papers did not have the highest numbers of citations; as an example, only one of the papers in our two lists belongs to the 100 most-cited papers in ecology, according to the ISI Web of Knowledge. The distribution of the ages of the 544 proposed papers shows two peaks: the first in the 1960s–1980s (and older), perhaps corresponding to more ‘classic’ papers, and the second in the 1990s–2000s.

Fig. 1: Relationships between the mean score of each article and the impact factor of the journal that published it, its number of citations and its age.

a,b, 2014 ISI impact factor of the journal that published the article. c,d, Number of article citations. e,f, Age of article (log10). The mean score of each article is the average score provided by voters, who gave one point for each selection of the Top 10 category, two points for Between 11–25, three points for Between 26–100 and four points for Not in the top “100” (see Methods). The top panels (a, c and e) are the results for all votes, whereas the bottom panels (b, d and f) are for ‘read-only’ articles (see text for details). Pran refers to the probability that a randomly generated order of the dependent variable results in a root mean-squared error (RMSE) less than or equal to the observed RMSE (over 10,000 iterations).

More interestingly, we examined the relationship between the number of times each article was proposed and (1) the number of times it received a vote, (2) its mean score after voting, (3) the article’s age in years and (4) the Web of Knowledge annual citation rate (Supplementary Fig. 3). Again using a randomization correlation, we found that papers proposed more often had in fact fewer overall votes, but a lower mean score (meaning that they were more highly ranked). The papers more frequently proposed were also older on average and had a higher citation rate. However, while all relationships were statistically non-random, they were also all rather weak given the skewness of the data.

For the proposed articles for which we had information on the gender of the proposer, women proposed 54 papers and men proposed 365 papers (a female-to-male proposing ratio of 1:6.8). Similarly, for articles for which we had information on the gender of the voter, there were 62 female and 292 male voters (a female-to-male voter ratio of 1:4.7). For the experience of voters (that is, < 10 years, 10–25 years or > 25 years), we had information for 1,516 sets of 20 randomly selected papers. Of these, 54 (3.6%), 786 (51.8%) and 676 (44.6%) were voted on by people with < 10 years, 10–25 years and > 25 years of experience, respectively. In other words, voters were more often male (82%) and on average highly experienced, as could be expected from a sample of editorial members in our highly gender-biased system10.

The distribution of the nominated papers shows that reviews do not dominate the must-read articles; rather, case studies are more common and conceptual papers make up approximately one-sixth of all papers (Fig. 2). Similarly, for the classification of approaches, modelling studies take up the largest proportion of all nominated papers, followed by argumentation papers, which generally correspond to reviews and opinions (Fig. 2 and Supplementary Table 1). The distribution of papers among ecological fields shows a predominance of community ecology, biodiversity distribution and population ecology and, to a lesser extent, evolutionary ecology, conservation biology and functional ecology (Fig. 2).

Fig. 2: Top 100 ‘must-read’ articles according to their type, field and approach.

a, Type. b, Field. c, Approach.

The article rankings differed markedly depending on whether the papers had been read or not. Most notably, the positive relationship between article age and its rank disappeared when considering read-only papers, as did the relationship with the mean annual citation rate. The median age of the 100 top-ranked papers (known and/or read) was 38 years (95% confidence interval: 11–80 years), but only 24 years for the read-only list (95% confidence interval: 7–60 years). On average, 42% of the 20 randomly selected papers in each sample were scored, whereas only an average of 20% of the papers in each sample were ‘not known’ (Supplementary Fig. 4). Only 10% of papers were both scored and ‘not known’ on average across all random samples of 20 papers.

Discussion

It could be considered counter-intuitive to suggest a ‘must-read’ list for any student in a scientific field as vast as ecology. The initial number of papers suggested individually by editorial members was higher than we had anticipated (544), confirming the diversity of our respondents and the wide span of this discipline, but also its wealth of important papers. However, this is put into better perspective when compared with the nearly half a million papers published in the field of ecology according to the Web of Knowledge database (http://webofknowledge.com). Another indication of this richness and breadth is the absence of a clearly emerging set of papers with disproportionately high scores, which could be due to the large and diverse community of scientists in our field.

Although our aim was to provide ecology scientists—especially those early in their career—with a compilation of essential ecology articles that they might have otherwise overlooked, our analyses revealed some important limitations. First, the list of the 100 most highly ranked papers contains many that are several decades old. Some of these pioneering papers describe landmark results or ideas, some are elegant in the concepts they present and some simply have not yet been made obsolete. This is despite the possibility that some historically important papers have been updated, improved, overturned or adequately summarized elsewhere since their publication, and that many of these later treatments probably did not make it onto the submitted list. This means that the list clearly cannot be used as an exclusive reading source to replace comprehensive reading in one’s discipline. In an age of fast-evolving knowledge and techniques, it is tempting to be sceptical of the value of reading such older papers; however, that older scientific articles are still deemed to be important by the ecological community suggests that ecologists still value them for acquiring a solid knowledge and understanding (and perhaps even culture) of the discipline. Older papers remain a safeguard against repeating errors already made or proposing ideas and hypotheses that have already received sufficient research attention.

Although some fields of ecology are more represented than others in our lists, especially community ecology and biodiversity distribution, 17 different fields were present in the final 100 papers, showing the rich diversity of this science. The lists also showed a rather balanced pool of scientific approaches and article types, although modelling papers, in particular, dominated. Most recommended papers were not published in the highest-ranking journals, nor did they have the highest number of mean annual citations, showing the limitations of using such citation-based indices as metrics of article or researcher impact11. Interestingly, the two lists we provide have only one paper in common with the 100 most-cited articles in ecology according to the Web of Knowledge database (http://esi.incites.thomsonreuters.com), confirming that citation-based criteria are inadequate for selecting background reading, according to acknowledged experts in ecology.

Another striking outcome is that the ranked list of articles differed substantially depending on the stringent criterion of the respondents having actually read them. Overall, only 23% of the 100 top-ranked papers in the all-article list were also in the top 100 of the read-only list. A remarkable example is the top-ranked paper in the all-article list12, which is entirely absent from the read-only top 100 (in fact, it was in 325th place in the latter ranking). The 77% difference between the two lists obviously does not imply that only 23% of the top-ranked papers had been read, since many respondents had read them; it means that enough respondents had not read them to change the final ranking substantially. It is likely that those articles recommended by scientists who have not actually read them would still be recommended as ‘must-read’, but with a lower ranking than other read papers. The 14-year difference in the median age of the two lists potentially emphasizes the ‘classic’ nature and high reputation of many articles in the primary (that is, including both read and not-read articles) list. The implication is that many senior ecologists recommended papers that they had not actually read, instead relying on the paper’s perceived reputation. Alternatively, even though many of the recommended papers had not been read per se, the proponents possibly knew enough of their content or main message via partial readings, discussions, related readings or their mentors’ previous recommendations. Instead of viewing the ranking anomaly between the two lists as problematic, we interpret it as a clear demonstration that defining essential-reading lists is not a futile exercise, because it highlights what even the most experienced researchers should ideally read.

Our approach explicitly targeted ecology articles and not evolution per se; although we did consider evolutionary ecology, the same exercise unambiguously targeting evolution would undoubtedly yield a different ranked list. Although we could clearly attribute some papers to particular fields, there is also an element of subjectivity in this choice, such that other authors would probably have classified some of them differently. The difference in the representation of the different ecology fields (and the under-representation of some of them) might largely reflect a discipline bias among the editorial members of the journals we targeted for respondents (or among those who responded to the survey), even though we strove to restrict our choice to journals in general ecology.

As such, these lists, proposed and ranked through the collective recommendation of several hundred experienced researchers in ecology, probably do not represent an ‘ultimate’, invariant selection, owing to limitations of the approach and the particular makeup of the respondents; however, they contain many high-quality articles that are undoubtedly worth reading, whatever the specific field of interest in ecology. Furthermore, digging deeper into these compiled lists of important articles could unearth other important articles that were overlooked in this exercise. We contend, then, that our endeavour has resulted in identifying important lists of articles to foster understanding, knowledge and inspiration, as well as lower the probability of re-inventing ecological wheels13. Two previous lists are worth mentioning in this regard: a book collating 40 ‘classic’ papers from 1887 to 1974 (ref. 14) and a celebration of the British Ecological Society’s centenary through 100 ‘landmark’ papers published in the society’s five journals15. Although the objectives—and therefore the contents—of these two lists differ from our own, students would certainly find complementary, valuable readings therein.

Being provided with such a long list might be daunting for students starting research. However, it is important to realize early that reading is essential for many aspects of research and is a major activity of scholars8. Following the increase in the availability of reading material, the average number of articles read per year by each science faculty member has increased over the past several decades, from an average of 150 articles read in 1977 to 250–300 in 2005 and 468 in 2012 (refs 8,16). Meanwhile, the average time spent reading has decreased by one-third8, in part because strategic reading and ‘flicking-bouncing’ are increasingly deployed17. Overall, this amounts to an estimated 448 h per year spent reading—equivalent to 56 eight-hour days every year16 or about six months over three years. The same authors report that researchers in the life sciences estimate spending 15.3 h per week reading scholarly content18.

The digitization of older publications and the increased online availability of nearly all peer-reviewed articles today mean that scientists now have quick access to many more articles than they did even a few decades ago17. Ironically, such a profusion of available articles has shifted how scientists select their primary reading material, towards pre-defined, personally oriented search terms rather than thematically based searches. This decline in library browsing and perusal could lead to a paucity of lateral exploration of secondarily (or even loosely) connected topics, and thus of potential findings that are unexpectedly relevant19.

It has also been suggested that the current use of massively available online articles might favour consensus towards a restricted number of more recent studies, thus narrowing the search field and the consequent ideas on which to base our own research19. Both phenomena argue for reading the older literature, as well as articles that are not directly related to one specific topic. Returning to our brick-wall metaphor, increasing specialization in ecological fields and the ever-increasing numbers of journals and published articles might therefore act to lay more ‘bricks’, without actually increasing the height, breadth or strength of the wall of knowledge. Our recommended papers are therefore the foundation of the wall, so without reading and understanding them, the quality of successive bricks will inevitably decrease such that the wall will lose robustness over time. We therefore hope the lists we have generated with the generous contribution of our peers will help in this regard.

Methods

To generate a list of ‘must-read’ papers, we faced two major challenges. (1) How does one define whether a published article is ‘important’? (2) How can we compare such articles objectively? The importance of scientific articles is difficult to assess and requires experience and knowledge; it is also subjective by nature and requires refraining from biasing choices towards one’s own, necessarily restricted field of expertise, despite familiarity being a necessary precursor to selection. For these reasons, we decided to rely on the expertise of acknowledged experts in ecology and asked them directly, as a community, which scientific articles they deemed most ‘important’ in the context described above. We thus contacted the editorial members of some of the most renowned journals in general ecology (those with the highest impact factors, avoiding journals that are either specialized or multidisciplinary). We contacted all the editorial members of the following journals: Trends in Ecology and Evolution, Ecology Letters, Ecology, Oikos, The American Naturalist, Ecology and Evolution and Ecography. We also contacted all the members of the Faculty of 1000 Ecology section (f1000.com/prime/thefaculty/ecol). What all these scientists have in common is that they have normally been selected as editors for their wide knowledge of ecology and their ability to assess the novelty, importance and potential disciplinary impact of submitted ecology research papers; by virtue of their appointment to such editorial boards, these people are ipso facto ecology ‘experts’.

We contacted all 665 of these editors by email to describe the project and ask them first to send us the details of three to five peer-reviewed papers (or more if they wished). The selection criterion was that these were papers the scientists “deemed each postgraduate student in ecology—regardless of their particular topic—should read by the time they finish their dissertation”, and that “any ecologist should also probably read”. We also specified that these could include “any type of research paper”, and that papers need not be strictly ‘ecological’ if they were still deemed essential to a general knowledge of ecology.

Collectively, the editorial members (147 respondents of the 665 contacted) nominated 544 different articles to include in the primary list (that is, 3.70 articles on average suggested by each person who replied). Once we obtained the list of nominated articles, we asked these same 665 experts to vote on each of them to obtain a ranking provided collegially by the community. As there were so many papers to assess and score, participants could not reasonably be requested to examine all 544 proposed articles and suggest a relative rank for each. This trade-off necessitated a resampling approach (see Analyses) to tally the relative rank of each article. Therefore, we provided each voter with a randomly generated sample of 20 papers from all the nominated papers in the original list. We asked the surveyed scientists to vote on the papers provided in at least one randomly generated sample of 20 papers, and preferably on five or more such samples. Participants could vote on as many papers as they wanted in each sample. In the randomly generated samples, each paper was presented with its full reference, an abstract (available by hovering the cursor over the entry) and a downloadable PDF of the full article (see Supplementary Fig. 1). The Ethics Committee of the Centre National de la Recherche Scientifique agreed that no ethics approval was deemed necessary for such a voluntary and anonymous survey.
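The random-sample design can be illustrated with a short R sketch. This is only a toy illustration under assumed object names (the survey itself was administered online, and the actual analysis code is in the repository cited under Code availability), but it shows the essence of drawing a 20-paper ballot without replacement from the 544 nominated articles:

# Toy sketch of ballot generation; object and function names are
# illustrative and not those of the original survey platform.
set.seed(2017)                 # for a reproducible example only
n_articles  <- 544             # papers nominated by the editorial members
sample_size <- 20              # papers presented to a voter in one ballot

draw_ballot <- function(article_ids, k = sample_size) {
  sample(article_ids, size = k, replace = FALSE)   # no repeats within a ballot
}

one_ballot <- draw_ballot(seq_len(n_articles))
one_ballot                     # 20 article indices for this voter to score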

We requested that the voter first provide for each of the 20 papers an ‘importance’ score, assigning each to one of four categories: Top 10, Between 11–25, Between 26–100 or Not in the top “100”. We also instructed respondents to provide information on how well they knew each paper via the responses “Read it”, “Know it” or “Don’t know it”. For each voter, we also asked her or his gender, country of education and scientific experience ( < 10 years, between 10 and 25 years or > 25 years). We gave one point for each selection of the Top 10 category, two points for Between 11–25, three points for Between 26–100 and four points for Not in the top “100”.
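To make the scoring concrete, the sketch below (again with hypothetical column names and values, not the original data format) encodes one ballot, maps the four categories onto the 1–4 point scale described above and discards votes cast on papers the respondent marked as ‘Don’t know it’, as noted in the Results:

# Hypothetical encoding of a single voter's responses.
score_map <- c("Top 10"             = 1,
               "Between 11-25"      = 2,
               "Between 26-100"     = 3,
               "Not in the top 100" = 4)

ballot_votes <- data.frame(
  article_id  = c(12, 87, 301),
  category    = c("Top 10", "Between 26-100", "Not in the top 100"),
  familiarity = c("Read it", "Know it", "Don't know it"),
  stringsAsFactors = FALSE
)

ballot_votes$score <- unname(score_map[ballot_votes$category])

# Remove votes on papers the respondent did not know but scored anyway
ballot_clean <- subset(ballot_votes, familiarity != "Don't know it")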

We also classified each of the 544 proposed papers into one of six types (review, case study, methodology, concept, career or opinion), one of 17 fields (general ecology, biodiversity distribution, community ecology, conservation biology, functional ecology, evolutionary ecology, population ecology, palaeoecology, molecular ecology/microbiology/genetics, behavioural ecology, chemical ecology, ecophysiology, landscape/spatial ecology, soil ecology, aquatic ecology, plant ecology, or macroecology/biogeography) and one of six approaches (laboratory experiment, field experiment, modelling, argumentation, data analysis or observation). Of course, some papers could belong to several types, fields or approaches, so we allowed repeat categories (see Results).

Analyses

We first averaged scores across all randomly sampled sets of submitted votes for each paper and then applied a simple rank to these averages (with ties averaged). This provided a rank from the top-voted (1) to the least-voted (544) article. Thus, the final rank avoids imposing any contrived magnitude on the differences between the arbitrary base scores (1 to 4).
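A minimal R sketch of this aggregation step, using a toy pooled vote table (the authoritative code is in the repository cited under Code availability), is:

# Toy pooled vote table; the real analysis pools all retained
# individual-article votes across ballots.
votes_all <- data.frame(
  article_id = c(1, 1, 1, 2, 2, 3, 3, 3),
  score      = c(1, 2, 1, 4, 3, 2, 2, 3)
)

# Average each article's score across every ballot in which it appeared
mean_scores <- aggregate(score ~ article_id, data = votes_all, FUN = mean)

# Lower mean score = more strongly recommended; tied scores share an averaged rank
mean_scores$rank <- rank(mean_scores$score, ties.method = "average")
mean_scores[order(mean_scores$rank), ]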

To test for correlations between different rankings, or between rankings and article age, citation rate and so on, we developed a resampling approach that avoided assumptions of normality, homoscedasticity or linearity. In brief, we took the raw average scores for each article (independent variable) and compared them with randomized orders of the corresponding correlate (dependent variable) for each test. For each randomized order over 10,000 iterations, we calculated a root mean-squared error (RMSErandom) and compared this with the observed RMSE between the two variables (RMSEobserved). When the probability that randomizations produced an RMSE less than or equal to the observed RMSE was small (that is, when the number of times RMSErandom ≤ RMSEobserved, divided by the 10,000 iterations, was < 0.05), we concluded that there was evidence of a correlation.
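The sketch below shows one way such a randomization test could be implemented in R. It is our reading of the description above (computing the RMSE directly between the two variables and permuting the correlate), not the repository code, and the data are simulated:

# Interpretation of the randomization test described in the text.
rmse <- function(x, y) sqrt(mean((x - y)^2))

randomization_test <- function(scores, correlate, n_iter = 10000) {
  obs  <- rmse(scores, correlate)
  rand <- replicate(n_iter, rmse(scores, sample(correlate)))
  mean(rand <= obs)   # P_ran: proportion of random RMSEs <= the observed RMSE
}

# Toy example with simulated data on a scale comparable to the mean scores
set.seed(1)
scores    <- runif(544, min = 1, max = 4)    # hypothetical mean article scores
correlate <- scores + rnorm(544, sd = 0.5)   # a variable positively related to them
randomization_test(scores, correlate)        # a small value suggests a correlation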

Life Sciences Reporting Summary

Further information on experimental design and reagents is available in the Life Sciences Reporting Summary.

Code availability

All R code needed to reproduce the analyses and results is given in the following repository: https://github.com/cjabradshaw/HIPE.

Data availability

All data generated or analysed during this study are included in this article (and its Supplementary Information files). All data files needed to reproduce the analyses and results are given in the following repository: https://github.com/cjabradshaw/HIPE.

Additional Information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Change history

  • Correction 16 November 2017

    This Article originally suggested that readers may be able to source PDF files of the papers analysed via a website that is involved in litigation over breach of copyright. This suggestion and the related URL have now been removed.

References

1. Ball, P. The mathematics of science’s broken reward system. Nature https://doi.org/10.1038/nature.2016.20987 (2016).

2. Ware, M. & Mabe, M. The STM Report: An Overview of Scientific and Scholarly Journal Publishing 4th edn (International Association of Scientific, Technical and Medical Publishers, The Hague, 2015).

3. Jinha, A. Article 50 million: an estimate of the number of scholarly articles in existence. Learn. Publ. 23, 258–263 (2010).

4. Landhuis, E. Scientific literature: information overload. Nature 535, 457–458 (2016).

5. Laurance, W. F., Useche, D. C., Laurance, S. G. & Bradshaw, C. J. A. Predicting publication success for biologists. Bioscience 63, 817–823 (2013).

6. Wray, K. B. Rethinking scientific specialization. Soc. Stud. Sci. 35, 151–164 (2005).

7. Hollingsworth, R. The snare of specialization. Bull. At. Sci. 40, 34–37 (1984).

8. Tenopir, C., King, D. W., Edwards, S. & Wu, L. Electronic journals and changes in scholarly article seeking and reading patterns. Aslib Proc. 61, 5–32 (2009).

9. McBride, B. B., Brewer, C. A., Berkowitz, A. R. & Borrie, W. T. Environmental literacy, ecological literacy, ecoliteracy: what do we mean and how did we get here? Ecosphere 4, 1–20 (2013).

10. Helmer, M., Schottdorf, M., Neef, A. & Battaglia, D. Gender bias in scholarly peer review. eLife 6, 1–18 (2017).

11. Bradshaw, C. J. A. & Brook, B. W. How to rank journals. PLoS ONE 11, e0149852 (2016).

12. Darwin, C. R. & Wallace, A. R. On the tendency of species to form varieties; and on the perpetuation of varieties and species by natural means of selection. Zool. J. Linn. Soc. 3, 45–62 (1858).

13. Holt, R. D. Cultural amnesia in the ecological sciences. Isr. J. Ecol. Evol. 53, 121–128 (2007).

14. Real, L. A. & Brown, J. H. Foundations of Ecology: Classic Papers with Commentaries (Univ. Chicago Press, Chicago, IL, 1991).

15. Anderson, M. C. et al. 100 Influential Papers Published in 100 Years of the British Ecological Society Journals (British Ecological Society, 2014).

16. Tenopir, C., Volentine, R. & King, D. W. Scholarly reading and the value of academic library collections: results of a study in six UK universities. Insights 25, 130–149 (2012).

17. Renear, A. H. & Palmer, C. L. Strategic reading, ontologies, and the future of scientific publishing. Science 325, 828–832 (2009).

18. Tenopir, C., Mays, R. & Wu, L. Journal article growth and reading patterns. New Rev. Inf. Network. 16, 4–22 (2011).

19. Evans, J. A. Electronic publication and the narrowing of science and scholarship. Science 321, 395–399 (2008).


Acknowledgements

We are grateful to the many participating editorial members, as well as to C. Albert and G. M. Luque for help with the survey and article management. We are also grateful to the members of B. Holt’s 2017 postgraduate seminar class ‘Advanced Community Ecology’ for their input to the paper. F.C. was supported by BNP Paribas and Agence Nationale de la Recherche (Invacost) grants, and C.J.A.B. was supported by BNP Paribas and Australian Research Council grants.

Author information

Affiliations

1. Ecologie, Systématique et Evolution, Univ. Paris-Sud, CNRS, AgroParisTech, Université Paris-Saclay, 91400 Orsay, France

    • Franck Courchamp
    •  & Corey J. A. Bradshaw
  2. Global Ecology, College of Science and Engineering, Flinders University, GPO Box 2100, Bedford Park, SA, 5001, Australia

    • Corey J. A. Bradshaw


Contributions

F.C. conceived and designed the study and collected the data. C.J.A.B. performed the analyses. F.C. wrote the original draft of the paper. F.C. and C.J.A.B. reviewed and edited the paper.

Competing interests

The authors declare no competing financial interests.

Corresponding author

Correspondence to Franck Courchamp.

Electronic supplementary material

  1. Supplementary Information

List of the 75 additional articles in the ‘read’ list, Supplementary Figures 1–4, Supplementary Tables 1–2 summary and Supplementary Note.

  2. Life Sciences Reporting Summary

  3. Supplementary Table 1

    Ranking of all the papers according to the category of ‘type’ (case study, review, concept, opinion, methodology, career).

  4. Supplementary Table 2

    Ranking of all the papers with the various variables, including the final rank, the average score, the number of votes (nVot), number of times proposed (nProp), the Impact Factor of the Journal (Ifjrnl), the number of citations in Web of Knowledge (citWoK) and Google Citation (citGoog) and the yearly number of citations in Web of Knowledge (citWoKyr) and Google Citation (citGoogyr).