One of the most universal trends in science and technology today is the growth of large teams in all areas, as solitary researchers and small teams diminish in prevalence1,2,3. Increases in team size have been attributed to the specialization of scientific activities3, improvements in communication technology4,5, or the complexity of modern problems that require interdisciplinary solutions6,7,8. This shift in team size raises the question of whether and how the character of the science and technology produced by large teams differs from that of small teams. Here we analyse more than 65 million papers, patents and software products that span the period 1954–2014, and demonstrate that across this period smaller teams have tended to disrupt science and technology with new ideas and opportunities, whereas larger teams have tended to develop existing ones. Work from larger teams builds on more-recent and popular developments, and attention to their work comes immediately. By contrast, contributions by smaller teams search more deeply into the past, are viewed as disruptive to science and technology and succeed further into the future—if at all. Observed differences between small and large teams are magnified for higher-impact work, with small teams known for disruptive work and large teams for developing work. Differences in topic and research design account for a small part of the relationship between team size and disruption; most of the effect occurs at the level of the individual, as people move between smaller and larger teams. These results demonstrate that both small and large teams are essential to a flourishing ecology of science and technology, and suggest that, to achieve this, science policies should aim to support a diversity of team sizes.
Data are available at http://lingfeiwu.github.io/smallTeams. Other related, relevant data are available from the corresponding author upon reasonable request.
Guimerà, R., Uzzi, B., Spiro, J. & Amaral, L. A. N. Team assembly mechanisms determine collaboration network structure and team performance. Science 308, 697–702 (2005).
Wuchty, S., Jones, B. F. & Uzzi, B. The increasing dominance of teams in production of knowledge. Science 316, 1036–1039 (2007).
Hunter, L. & Leahey, E. Collaborative research in sociology: trends and contributing factors. Am. Sociol. 39, 290–306 (2008).
Jones, B. F., Wuchty, S. & Uzzi, B. Multi-university research teams: shifting impact, geography, and stratification in science. Science 322, 1259–1262 (2008).
Xie, Y. “Undemocracy”: inequalities in science. Science 344, 809–810 (2014).
Milojević, S. Principles of scientific research team formation and evolution. Proc. Natl Acad. Sci. USA 111, 3984–3989 (2014).
Falk-Krzesinski, H. J. et al. Mapping a research agenda for the science of team science. Res. Eval. 20, 145–158 (2011).
Committee on the Science of Team Science. Enhancing the Effectiveness of Team Science (National Academies Press, Washington DC, 2015).
Leahey, E. From sole investigator to team scientist: trends in the practice and study of research collaboration. Annu. Rev. Sociol. 42, 81–100 (2016).
Paulus, P. B., Kohn, N. W., Arditti, L. E. & Korde, R. M. Understanding the group size effect in electronic brainstorming. Small Group Res. 44, 332–352 (2013).
Lakhani, K. R. et al. Prize-based contests can provide solutions to computational biology problems. Nat. Biotechnol. 31, 108–111 (2013).
Barber, S. J., Harris, C. B. & Rajaram, S. Why two heads apart are better than two heads together: multiple mechanisms underlie the collaborative inhibition effect in memory. J. Exp. Psychol. Learn. Mem. Cogn. 41, 559–566 (2015).
Minson, J. A. & Mueller, J. S. The cost of collaboration: why joint decision making exacerbates rejection of outside information. Psychol. Sci. 23, 219–224 (2012).
Greenstein, S. & Zhu, F. Open content, Linus’ law, and neutral point of view. Inf. Syst. Res. 27, 618–635 (2016).
Christensen, C. M. The Innovator’s Dilemma: The Revolutionary Book That Will Change the Way You Do Business (Harper Business, New York, 2011).
Klug, M. & Bagrow, J. P. Understanding the group dynamics and success of teams. R. Soc. Open Sci. 3, 160007 (2016).
Bak, P., Tang, C. & Wiesenfeld, K. Self-organized criticality: an explanation of the 1/f noise. Phys. Rev. Lett. 59, 381–384 (1987).
Davis, K. B. et al. Bose–Einstein condensation in a gas of sodium atoms. Phys. Rev. Lett. 75, 3969–3973 (1995).
Bose, S. N. Plancks Gesetz und Lichtquantenhypothese. Z. Physik 26, 178–181 (1924).
Einstein, A. Quantentheorie des einatomigen idealen Gases. Sitzungsberichte der Preussischen Akademie der Wissenschaften 1, 3 (1925).
March, J. G. Exploration and exploitation in organizational learning. Organ. Sci. 2, 71–87 (1991).
Funk, R. J. & Owen-Smith, J. A dynamic network measure of technological change. Manage. Sci. 63, 791–817 (2017).
Moody, J. The structure of a social science collaboration network: disciplinary cohesion from 1963 to 1999. Am. Sociol. Rev. 69, 213–238 (2004).
Ke, Q., Ferrara, E., Radicchi, F. & Flammini, A. Defining and identifying Sleeping Beauties in science. Proc. Natl Acad. Sci. USA 112, 7426–7431 (2015).
Wang, D., Song, C. & Barabási, A.-L. Quantifying long-term scientific impact. Science 342, 127–132 (2013).
Evans, J. A. Electronic publication and the narrowing of science and scholarship. Science 321, 395–399 (2008).
Gerow, A., Hu, Y., Boyd-Graber, J., Blei, D. M. & Evans, J. A. Measuring discursive influence across scholarship. Proc. Natl Acad. Sci. USA 115, 3308–3313 (2018).
Uzzi, B., Mukherjee, S., Stringer, M. & Jones, B. Atypical combinations and scientific impact. Science 342, 468–472 (2013).
Kuhn, T. S. The function of measurement in modern physical science. Isis 52, 161–193 (1961).
Collins, D. Organizational Change: Sociological Perspectives (Routledge, New York, 1998).
Jones, B. F. The burden of knowledge and the ‘death of the Renaissance man’: is innovation getting harder? Rev. Econ. Stud. 76, 283–317 (2009).
Alcácer, J., Gittleman, M. & Sampat, B. Applicant and examiner citations in U.S. patents: an overview and analysis. Res. Policy 38, 415–427 (2009).
Schulz, C., Mazloumian, A., Petersen, A. M., Penner, O. & Helbing, D. Exploiting citation networks for large-scale author name disambiguation. EPJ Data Sci. 3, 11 (2014).
Mutz, R., Bornmann, L. & Daniel, H.-D. Cross-disciplinary research: What configurations of fields of science are found in grant proposals today? Res. Eval. 24, 30–36 (2015).
Le, Q. & Mikolov, T. Distributed representations of sentences and documents. In Proc. 31st International Conference on Machine Learning (eds Xing, E. P. & Jebara, T.) 1188–1196 (PMLR, Beijing, 2014).
Correia, S. A feasible estimator for linear models with multi-way fixed effects. Preprint at http://scorreia.com/research/hdfe.pdf (2016).
Full text of Alfred Nobel’s will, available at https://www.nobelprize.org/alfred-nobel/full-text-of-alfred-nobels-will/ (accessed 25 September 2018).
We are grateful for support from AFOSR grants FA9550-15-1-0162 and FA9550-17-1-0089, the John Templeton Foundation’s grant to the Metaknowledge Network, DARPA’s Big Mechanism program grant 14145043, and National Science Foundation grants SBE 1158803, 1829344 and 1829366. We thank the University of Chicago Organizations and Markets seminar and the Swarma Club (Beijing), and we thank Clarivate Analytics for supplying the Web of Science data.
Nature thanks L. Bornmann, S. Wuchty and the other anonymous reviewer(s) for their contribution to the peer review of this work.
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data figures and tables
a, Citation tree visualization that illustrates the influence of focal papers, drawing on past work and passing ideas on to future work. ‘Roots’ are references and citations to them, with depth scaled to their publication date; ‘branches’ on the tree are citing articles, with height scaled to publication date and length scaled to the number of future citations. Branches curve downward (brown) if citing articles also cite the focal paper’s references, and upward (green) if they ignore them. b, Two articles of the same impact scale (the Bose–Einstein condensation and BTW-model articles) represented as citation trees, to illustrate how disruption distinguishes different contributions to science and technology. c, Citation tree visualization that characterizes the influence of eleven focal papers from teams of different sizes. Disruption (D), citations (N), publication year (Y) and team size (m) of each paper are shown in the bottom left corner of each tree.
We select 27,728,266 WOS papers with at least one citation published between 1954 and 2014. a, b, The distribution of disruption changes with team size (a); b magnifies the grey area shown in a. c, We test differences in the distribution of disruption between each pair of team sizes from one to ten using two-sample t-tests. The t-statistics are given in green cells, with the darkness of green proportional to the size of each t-statistic. Asterisks under the numbers indicate P values: *P ≤ 0.05, **P ≤ 0.01, ***P ≤ 0.001. All pairs of tested disruption distributions differ significantly from one another. d, e, The distribution of citations changes with team size (d); e magnifies the grey area shown in d. Both panels clearly demonstrate how small teams oversample more disruptive and less impactful work. f, We test differences in the distribution of citations between team sizes using two-sample Kolmogorov–Smirnov tests, which are recommended for long-tailed data. Numbers in cells show Kolmogorov–Smirnov statistics and the asterisks below them indicate P values. All pairs of tested citation distributions differ significantly from one another. Comparing disruption distributions with the Kolmogorov–Smirnov test reveals the same patterns of difference.
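The two comparison tests named in this caption are available off the shelf; a hedged sketch on synthetic data (the distributions and parameters below are illustrative stand-ins, not the paper’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic disruption scores for two team sizes; disruption is roughly
# symmetric around zero, so a two-sample t-test is appropriate here.
disruption_small = rng.normal(0.02, 0.10, 5000)
disruption_large = rng.normal(-0.01, 0.10, 5000)
t_stat, t_p = stats.ttest_ind(disruption_small, disruption_large)

# Citation counts are long-tailed, so the caption switches to the
# two-sample Kolmogorov-Smirnov test for those comparisons.
citations_small = rng.pareto(2.0, 5000)
citations_large = rng.pareto(1.5, 5000)
ks_stat, ks_p = stats.ks_2samp(citations_small, citations_large)
```

With samples this large, both tests reject easily; on the real data the same calls would simply be run per pair of team sizes.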
Extended Data Fig. 3 Decreasing disruption is robust across years, topics, authors, time periods and windows of disruption.
a, For research articles (24,174,022 WOS articles published between 1954 and 2014), patents (2,548,038 US patents assigned between 2002 and 2014) and software (26,900 GitHub repositories uploaded between 2011 and 2014), median citations (red curves, indexed by the right y axis) increase with team size from 1 to 100 (rather than 1 to 10 as in Figs. 2a–c, 4a–c), whereas the average disruption percentile (green curves, indexed by the left y axis) decreases with team size. For all datasets, we present work with one or more citations. Green dotted lines show the point at which D = 0, the transition from development to disruption. Bootstrapped 95% confidence intervals are shown as grey zones. b, Regression coefficients of disruption (rather than disruption percentile as in Fig. 3c) on team size, from linear regressions controlling for publication year, topic and author. The regression is based on 96,386,516 WOS research article records (articles are counted once for each scholar on whose publication record they appear) contributed by 38,000,470 name-disambiguated scholars. c, The negative correlation between disruption and team size holds across time periods. In contrast to the main body of the paper, which renders disruption in terms of percentiles, here we measure it in its native metric to highlight the shift over time. Earlier cohorts (red curves) are more disruptive than later cohorts. Nevertheless, with changes in team size, each cohort of papers traverses a majority of the total variation in disruption for that cohort. d–h, Decreasing disruption percentile and increasing citations with growing team size are robust to changes in the width of the time window of observation from 5 years to 40 years for 166,310 WOS articles published in 1970. i–m, As in d–h, but using the 24,174,022 WOS papers published between 1954 and 2014; we observe the same pattern.
a–c, Weighted moving-average technique for data smoothing. The relationship between team size and disruption may be noisy owing to lack of data when we analyse WOS articles from the same journal. As shown in a, less than 1% of articles in ‘Artificial Intelligence’ (a subfield of ‘Computer and Information Technology’) have more than six authors, but these articles contribute substantial variance to the data. We use the moving-average technique to limit noise in the data. More specifically, we define a parameter k, which sets the threshold value mk for team size m such that P(m > mk) < k. For any data point with a team size greater than mk, its disruption percentile DPm is updated to the average of its current value and the value of its left neighbour, DP(m−1), weighted by the corresponding sample sizes (the number of articles for a given team size). Panel a shows curves for the subfield ‘Artificial Intelligence’ before (blue dashed curve) and after (red curve) smoothing, in which the size of the blue circles is proportional to sample size. Panels b and c show how smoothing depends on the value of k across ten randomly selected subfields. In d–l, each curve corresponds to a journal (only journals with more than three data points are shown) and each panel corresponds to a subfield. There are 15,146 journals, 258 subfields and 10 major fields represented in our WOS data. Owing to the limited figure size, only four subfields are shown for each field. Curves are smoothed by setting the smoothing parameter k = 0.2. The darkness of each curve is proportional both to sample size and to the absolute value of the coefficient from regressing disruption percentile on team size, so that journals with more articles and stronger (negative or positive) relationships stand out from the background.
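The smoothing rule in a can be sketched in a few lines; the traversal order (left to right, reusing the already-smoothed left neighbour) and the variable names are assumptions, since the caption does not pin them down:

```python
def smooth_disruption(team_sizes, dp, counts, k=0.2):
    """Weighted moving-average smoothing of disruption percentiles.

    team_sizes: sorted team sizes m; dp: disruption percentile DP_m per size;
    counts: number of articles per size. Sizes above the threshold m_k, chosen
    so that P(m > m_k) < k, have DP_m replaced by the average of DP_m and its
    left neighbour DP_(m-1), weighted by the corresponding sample sizes."""
    total = sum(counts)
    # Walk from the largest size down to find the first index whose suffix
    # mass reaches k; everything to the right of it gets smoothed.
    idx, tail = len(team_sizes), 0
    for i in range(len(team_sizes) - 1, -1, -1):
        tail += counts[i]
        if tail / total >= k:
            idx = i + 1
            break
    out = list(dp)
    for i in range(max(idx, 1), len(out)):
        w, w_left = counts[i], counts[i - 1]
        out[i] = (w * out[i] + w_left * out[i - 1]) / (w + w_left)
    return out
```

For example, with counts [50, 30, 15, 5] and k = 0.2, only the largest team size is smoothed, pulled toward its better-sampled neighbour.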
Extended Data Fig. 5 Decreasing disruption is robust when controlling for task, institution, platform, project scale and alternative disruption measures.
a, b, Comparison between theoretical and empirical articles (a) and between review and non-review articles (b). a, We separate 4,258 papers from arXiv.org published between 1992 and 2003 into two groups on the basis of the number of figures they contain: 1,502 articles without figures and 2,756 articles with figures. The assumption is that empirical papers tend to contain more figures than theoretical papers23. We match these articles to the WOS datasets and observe that for both theoretical and empirical articles, the disruption percentile decreases with the growth of team size. b, We select two groups of WOS articles on the basis of journal name: 22,672 reviewing articles published across 48 journals that have both ‘annual’ and ‘review’ in the title, and their 1,338,808 references (reviewed articles). For both reviewing and reviewed articles, the disruption percentile decreases with team size. c, d, Comparison of US patents across classes and owners. We plot the disruption percentile against team size for the seven most popular classes of patents (92,175 patents) (c) and the top five companies legally assigned the most patents (21,261 patents) (d) from 2002 to 2009. The decrease in disruption with increasing team size holds broadly across classes and owners. The moving-average technique of Extended Data Fig. 4 is used to smooth the curves (smoothing parameter k = 0.1). As sample size decreases rapidly with team size in the patent data, we assigned equal weights across team sizes in applying the smoothing technique. e, f, Comparison of GitHub software projects across programming languages and code-base sizes. We plot the disruption percentile against team size for the seven most popular programming languages (18,702 projects) (e) and four scales of code-base sizes (24,853 code-bases) (f) from 2011 to 2014. The decrease in disruption with growth of team size holds broadly across programming languages and code-base sizes.
g, Simplified citation networks comprising focal papers (blue diamonds), references (grey circles) and subsequent work (rectangles). Subsequent work may cite: (1) only the focal work (i, green), (2) only its references (k, black) or (3) both the focal work and its references (j, brown). A reference identified as popular is coloured in red, and self-citations are shown by dashed lines (with corresponding subsequent work coloured in light brown). Five definitions of disruption are provided for comparison. D0 is the definition of disruption used in the main text. D1 is defined the same way as D0, but with self-citations excluded. D2 is defined the same way as D0, but only considers popular references. We identified as popular those references whose citations fell within the top quartile of the total citation distribution (≥24 citations). D3 simplifies D0 by measuring only the fraction of papers that cite the focal paper but not its references, among all papers citing the focal paper, which equals ni/(ni + nj). D4 is similar to D3, but counts citations rather than citing papers in calculating the fraction (for example, if a single referenced paper is cited five times, it receives a count of five rather than one in this measure). h, A citation network copied from g, with one additional citation edge (brown curve) added. As a consequence, some—but not all—disruption measure variants change. i, All disruption measures decrease with team size. D0 and D1 are indexed by the right y axis and the other disruption measures are indexed by the left y axis. One hundred thousand randomly selected WOS papers (97,188 papers remained after excluding missing data) are used to calculate these disruption values.
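The measures enumerated above reduce to simple set arithmetic over citing papers; a minimal sketch of D0 and D3, assuming the sets of citing papers are already available (the set layout and function names are illustrative, not from the paper’s code):

```python
def disruption_d0(citers_of_focal, citers_of_refs):
    """D0 = (n_i - n_j) / (n_i + n_j + n_k)."""
    n_i = len(citers_of_focal - citers_of_refs)  # cite focal work only
    n_j = len(citers_of_focal & citers_of_refs)  # cite focal work and refs
    n_k = len(citers_of_refs - citers_of_focal)  # cite references only
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0


def disruption_d3(citers_of_focal, citers_of_refs):
    """D3 = n_i / (n_i + n_j): ignores papers citing only the references."""
    n_i = len(citers_of_focal - citers_of_refs)
    n_j = len(citers_of_focal & citers_of_refs)
    return n_i / (n_i + n_j) if citers_of_focal else 0.0
```

A focal paper whose citers ignore its references scores near +1 (disruptive); one whose citers also cite its references scores near −1 (developing).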
a, We select 1,127,518 WOS articles published in 2010 and find that the probability of observing reference j of age t decreases exponentially with t, such that P(t) ~ e^(−λt). For larger teams P(t) decreases faster with t, suggesting that λ is determined by team size m. b, The relationship between m and λ (orange circles) can be fitted as λ ~ m^0.07 (red curve). c, From a and b, we can derive the dependency of E(t), the expected value of t, on m by integrating P(t) from zero to maximum t. This gives E(t) ~ 1/λ ~ m^(−0.07). Empirical data (blue rectangles) are consistent with this prediction (red curve). d, The probability of observing reference j with k citations decreases with k, supporting the relationship P(k) ~ k^(−α). To control the time window, we include only references published in 2005. For larger teams P(k) decreases more slowly with k, suggesting that α is affected by m. e, The empirical relationship between m and α (purple circles) and the fitting function α ~ m^(−0.05) (red curve). f, From d and e, we can derive the dependency of E(k), the expected value of k, on m by integrating P(k) from minimum to maximum k. This gives E(k) ~ 1 + 1/(α − 2) ~ 1 + 1/(m^(−0.05) − 2). The empirical data (green triangles) are consistent with this prediction (red curve).
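The expected values quoted in c and f follow from elementary integrals; a minimal sketch, assuming normalized distributions (an exponential on [0, ∞) and a power law on [1, ∞)) and suppressing constants in the ~ relations:

```latex
E(t) = \int_0^{\infty} t\,\lambda e^{-\lambda t}\,\mathrm{d}t
     = \frac{1}{\lambda} \sim m^{-0.07},
\qquad
E(k) = \int_1^{\infty} k\,(\alpha - 1)\,k^{-\alpha}\,\mathrm{d}k
     = \frac{\alpha - 1}{\alpha - 2}
     = 1 + \frac{1}{\alpha - 2} \quad (\alpha > 2).
```

The second integral converges only for α > 2; the fitted scaling of α with m then carries through to E(k) as stated in the caption.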
a, b, The decay of citations to WOS articles changes with team size and disruption. We selected 95,474 papers with 200–300 citations from 1954 to 2014, and plot the probability of being cited against article age. Longer delays in citation are observed for smaller teams (a) and more disruptive work (b). In b, purple (37,805 papers), blue (4,931 papers) and green (26,698 papers) curves correspond to the 0–10, 55–65 and 90–100 percentiles of disruption, respectively. In both panels, curves are smoothed by a running average with a time window of five years. The coloured area shows one standard deviation of these averages. c, d, The Sleeping Beauty index24 captures a delayed burst of attention by calculating convexity in the citation distribution of a particular work over time. The index is highest when a paper is not cited for some substantial period before receiving its maximum (which corresponds to belated appreciation), zero if the paper is cited linearly in the years following publication, and negative if citations chart a concave function with time (which traces early fame diminishing thereafter). We observe that the Sleeping Beauty index percentile decreases markedly with team size (c) and increases with disruption (d) across fields. e, f, The negative correlation between disruption percentile and impact in the short term (within 10 years) turns positive in the long term (over 30 years) for the 166,310 papers published in 1970 (e). The same pattern is observed when all 24,174,022 papers from 1954 to 2014 are used (f). g, h, Achieving substantial citation attention for disruptive work occurs over the long term, if at all, whereas the risk of failure from disruption occurs over both the short and long term. Arrows trace the distance between the mean of future citation success (g) or failure (h) from developing to disrupting work produced by teams of each specified size.
The probability of becoming one of the top 1% most-cited articles is higher for developing teamwork (negative disruption, the origin of arrows) within 20 years and higher for disrupting teamwork (positive disruption, the target of arrows) over 30 years across team sizes (g). The probability of becoming one of the bottom 10% least-cited articles is almost always higher for disrupting teamwork than for developing teamwork across team sizes and time windows (h).
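The Sleeping Beauty index used in c, d follows the definition of Ke et al.: sum, up to the peak citation year, the gap between a straight line drawn from the first-year count to the peak-year count and the actual yearly counts, each term scaled by max(1, c_t). A sketch under the assumption that yearly citation counts are available as a list:

```python
def sleeping_beauty_index(citations):
    """citations[t] = citations received t years after publication.
    Returns 0 for linear citation growth, a large positive value for a
    long dormancy followed by a burst, and a negative value for concave
    (early-peaking relative to the reference line) histories."""
    t_m = max(range(len(citations)), key=lambda t: citations[t])
    if t_m == 0:
        return 0.0  # peak in the first year: no dormancy to measure
    c0, cm = citations[0], citations[t_m]
    slope = (cm - c0) / t_m  # reference line from (0, c0) to (t_m, cm)
    return sum((slope * t + c0 - citations[t]) / max(1, citations[t])
               for t in range(t_m + 1))
```

The max(1, c_t) denominator keeps uncited years from dividing by zero while still rewarding long dormant stretches.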
a–f, The decline of small teams. a, b, Evolution of team-size distributions over time for WOS articles (a) and US patents (b). The distributions skew towards large teams over time. c, d, Average team size of articles increased from 2 to 5.5 between 1954 and 2014, and for patents team size increased from 1.7 to 2.7 between 1976 and 2014. e, f, Percentage of small teams (in which the number of team members m ≤ 3) decreased from 91% to 37% for articles, and from 94% to 74% for patents during the period of observation. g, The ripple effect. We select 2,640 small teams (m ≤ 3) from WOS articles that are among the top 1% in number of citations received, as well as those among the top 1% of the Sleeping Beauty index distribution24. We analyse the citations to these articles and find that the fraction of large teams (m > 3) increases over time. The red curve shows the average fraction of citations from large teams and the pink area spans one standard deviation. The selected 2,640 small-team articles are eventually cited by 657,946 large-team articles.
a, b, Changes in journal-based combinatorial novelty with team size from WOS articles. We calculate the pairwise combinatorial novelty of journals in the references of an article using a previously published novelty measure28. This novelty measure is computed as the tenth-percentile value of z-scores for the likelihood that reference sources combine, so a lower value of this index indicates higher novelty28. Here we convert this measure to percentiles and subtract from 100 to improve readability, such that a higher score indicates greater novelty. It seems natural that a larger team would provide access to a wider span of literature. We find that novelty does increase with team size, but with diminishing marginal increases in novelty with each additional team member. Beyond a team size of ten, novelty decreases sharply (a). The probability of observing papers within the top 5% of the novelty distribution increases, and then decreases, with team size. The dotted line shows the null model in which the probability of high novelty is invariant to team size (b). c, d, Calculation of combinatorial novelty in a different way. We select 241,648 papers published in American Physical Society journals, 1990–2010, and analyse the probability of two-way (pairwise) and three-way combinations of ‘Physics and Astronomy Classification Scheme’ codes using the Jaccard index. As with the novelty measure used in a and b, a lower Jaccard index indicates higher novelty; we therefore convert it into percentiles and subtract from 100 such that a higher score indicates greater novelty. Again, we observe diminishing marginal increases in novelty with the growth of team size. e, f, We select 8,232,630 PubMed papers from between 1990 and 2010 and analyse the probability of two-way and three-way combinations of medical subject headings using Jaccard indices. The same diminishing marginal increase in novelty is observed in this context.
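The Jaccard-based novelty in c–f reduces to set overlap between the papers tagged with each pair of codes; a hedged sketch, where the `corpus` mapping from code to paper ids is an assumed data layout:

```python
from itertools import combinations


def pairwise_jaccard_novelty(codes, corpus):
    """For each pair of classification codes on a paper, compute the Jaccard
    index between the sets of corpus papers tagged with each code. Lower
    values indicate rarer (more novel) combinations; the caption then
    converts these to percentiles and subtracts from 100."""
    scores = []
    for a, b in combinations(sorted(set(codes)), 2):
        papers_a = corpus.get(a, set())
        papers_b = corpus.get(b, set())
        union = papers_a | papers_b
        scores.append(len(papers_a & papers_b) / len(union) if union else 0.0)
    return scores
```

Three-way novelty would follow the same pattern with `combinations(..., 3)` and a three-set intersection and union.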
Extended Data Fig. 10 Small, disruptive teams contribute disproportionately to Nobel Prizes and are underrepresented with government funding.
a, Underfunded small-team, disruptive research. Disruption percentile versus team size for WOS papers either not annotated as funded, or funded by the largest government agencies around the world. The 477,702 funded papers cover the period 2004–2014, and include 198,103 for the NSF, 80,448 for the NSFC, 81,296 for the ERC and EC, 75,881 for the DFG and 58,275 for the JSPS. These papers are published across 7,325 journals, and a paper may be funded by multiple agencies. The average disruption of these papers is −0.0024, ranking in the bottom 31.0% of all WOS papers in the same period. We select 5,305,534 papers without any funding annotations from the same 7,325 journals and same time period (2004–2014) as a control group (dashed curve). The dashed grey line shows the mean disruption percentile for the control group. b, We select 191,717 papers published between 2008 and 2014 that acknowledged the NSF with a grant number and retrieved grant sizes from the NSF website, including 140,972 papers funded at up to 1 million US dollars, 24,370 papers at 1–5 million US dollars and 26,375 papers at more than 5 million US dollars. The green and red zones mark two regions of interest: small-team (three or fewer members), disruptive (positive disruption) papers in green and large-team developing work in red. The probability of observing small-team disruptive papers among NSF-funded papers is almost half that of observing them in the control group. c, We select 877 Nobel-Prize-winning papers covering the period 1902–2009, including 316 papers in Physiology or Medicine, 284 papers in Physics and 277 papers in Chemistry. We select 3,372,570 papers from the same 178 journals and same time period (1902–2009) as a control group (dashed curve). The average disruption of the Nobel-Prize-winning papers is 0.10, ranking among the top 2% of all WOS papers from the same period.
d, The probability of observing small-team disruptive papers is nearly three times as high in Nobel-Prize-winning papers as in the control group.
Wu, L., Wang, D. & Evans, J.A. Large teams develop and small teams disrupt science and technology. Nature 566, 378–382 (2019). https://doi.org/10.1038/s41586-019-0941-9