Abstract
Humans communicate using systems of interconnected stimuli or concepts—from language and music to literature and science—yet it remains unclear how, if at all, the structure of these networks supports the communication of information. Although information theory provides tools to quantify the information produced by a system, traditional metrics do not account for the inefficient ways that humans process this information. Here, we develop an analytical framework to study the information generated by a system as perceived by a human observer. We demonstrate experimentally that this perceived information depends critically on a system’s network topology. Applying our framework to several real networks, we find that they communicate a large amount of information (having high entropy) and do so efficiently (maintaining low divergence from human expectations). Moreover, we show that such efficient communication arises in networks that are simultaneously heterogeneous, with high-degree hubs, and clustered, with tightly connected modules—the two defining features of hierarchical organization. Together, these results suggest that many communication networks are constrained by the pressures of information transmission, and that these pressures select for specific structural features.
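The two quantities at the heart of this framework can be sketched numerically. For a random walk on an undirected network, the entropy is \(S=-\sum_i \pi_i \sum_j P_{ij}\log_2 P_{ij}\) with \(\pi_i=k_i/2m\); as one concrete model of human expectations we use the exponentially discounted memory form \(\hat{P}=(1-\eta)P(I-\eta P)^{-1}\), which is an assumption here based on the memory parameter η described in the Extended Data captions. The function names below are illustrative, not the authors' code:

```python
import numpy as np

def transition_matrix(A):
    """Row-normalize an adjacency matrix into random-walk transition probabilities."""
    return A / A.sum(axis=1, keepdims=True)

def stationary_distribution(A):
    """For an undirected network, pi_i = k_i / 2m (degree over total degree)."""
    k = A.sum(axis=1)
    return k / k.sum()

def entropy(A):
    """Entropy rate S = -sum_i pi_i sum_j P_ij log2 P_ij of the random walk."""
    P = transition_matrix(A)
    pi = stationary_distribution(A)
    terms = np.zeros_like(P)
    mask = P > 0
    terms[mask] = P[mask] * np.log2(P[mask])
    return float(-np.sum(pi[:, None] * terms))

def expectations(A, eta=0.8):
    """Assumed learner expectations: Phat = (1 - eta) P (I - eta P)^{-1}."""
    P = transition_matrix(A)
    return (1 - eta) * P @ np.linalg.inv(np.eye(len(P)) - eta * P)

def kl_divergence(A, eta=0.8):
    """D_KL between true transitions P and expectations Phat, weighted by pi."""
    P = transition_matrix(A)
    Phat = expectations(A, eta)
    pi = stationary_distribution(A)
    terms = np.zeros_like(P)
    mask = P > 0
    terms[mask] = P[mask] * np.log2(P[mask] / Phat[mask])
    return float(np.sum(pi[:, None] * terms))
```

On a 4-cycle, every node has two equiprobable neighbours, so the entropy is exactly 1 bit; the KL divergence is strictly positive because the discounted expectations spread mass onto two-step neighbours.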
Data availability
Source data for Fig. 1, Supplementary Figs. 3–5 and Supplementary Tables 1–11 are provided in Supplementary Data File 1. Source data for Fig. 2 and Supplementary Fig. 1 are provided in Supplementary Data File 2. Source data for the networks in Fig. 3, Table 1 and Supplementary Figs. 6–9 are either publicly available or provided in Supplementary Data File 3 (see Supplementary Table 12 for details).
Code availability
The code that supports the findings of this study is available from the corresponding author upon reasonable request.
Change history
15 February 2021
A Correction to this paper has been published: https://doi.org/10.1038/s41567-020-0985-7
Acknowledgements
We thank E. Horsley, H. Ju, D. Lydon-Staley, S. Patankar, P. Srivastava and E. Teich for feedback on earlier versions of the manuscript. We thank D. Zhou for providing the code used to parse the texts. D.S.B., C.W.L. and A.E.K. acknowledge support from the John D. and Catherine T. MacArthur Foundation, the Alfred P. Sloan Foundation, the ISI Foundation, the Paul G. Allen Family Foundation, the Army Research Laboratory (W911NF-10-2-0022), the Army Research Office (Bassett-W911NF-14-1-0679, Grafton-W911NF-16-1-0474, DCIST-W911NF-17-2-0181), the Office of Naval Research, the National Institute of Mental Health (2-R01-DC-009209-11, R01-MH112847, R01-MH107235, R21-MH106799), the National Institute of Child Health and Human Development (1R01HD086888-01), the National Institute of Neurological Disorders and Stroke (R01-NS099348) and the National Science Foundation (BCS-1441502, BCS-1430087, NSF PHY-1554488 and BCS-1631550). L.P. is supported by an NSF Graduate Research Fellowship. The content is solely the responsibility of the authors and does not necessarily represent the official views of any of the funding agencies.
Author information
Contributions
C.W.L. and D.S.B. conceived the project. C.W.L. designed the framework and performed the analysis. C.W.L. and A.E.K. performed the human experiments. C.W.L. wrote the manuscript and Supplementary Information. L.P., A.E.K. and D.S.B. edited the manuscript and Supplementary Information.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Extended data
Extended Data Fig. 1 Distributions of network effects over individual subjects.
a–e, Distributions over subjects of the different reaction time effects: the entropic effect (n = 177), or the increase in reaction times with increasing produced information (a); the extended cross-cluster effects (n = 173), or the differences in reaction times between internal and cross-cluster transitions (b), between boundary and cross-cluster transitions (c), and between internal and boundary transitions (d) in the modular graph; and the modular effect (n = 84), or the difference in reaction times between the modular network and random k = 4 networks (e). f–j, Distributions over subjects of the corresponding effects on error rates: the entropic effect (f), the extended cross-cluster effects (g–i) and the modular effect (j).
Extended Data Fig. 2 Correlations between different network effects across subjects.
a, Pearson correlations between the entropic and extended cross-cluster effects on reaction times (n = 142 subjects). b, Pearson correlations between the entropic and extended cross-cluster effects on error rates (n = 142 subjects). In a and b, the modular effects on reaction times and error rates are not shown because they were measured in a different population of subjects. c, Pearson correlation between the impact on reaction time and error rate for the entropic effect (n = 177 subjects), the extended cross-cluster effects (n = 173 subjects) and the modular effect (n = 84 subjects). Statistically significant correlations are indicated by P values less than 0.001 (***), less than 0.01 (**) and less than 0.05 (*).
Extended Data Fig. 3 KL divergence of real networks for different values of η.
a, KL divergence of fully randomized versions of the real networks listed in Supplementary Table 12 (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) compared with the true value (\({D}_{\,\text{KL}}^{\text{real}\,}\)) as η varies from zero to one. Every real network maintains lower KL divergence than the corresponding randomized network across all values of η. b, Difference between the KL divergence of real and fully randomized networks as a function of η. c, KL divergence of degree-preserving randomized versions of the real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\) as η varies from zero to one. The real networks display lower KL divergence than the degree-preserving randomized versions across all values of η. d, Difference between the KL divergence of real and degree-preserving randomized networks as a function of η. All networks are undirected, and each line is calculated using one randomization of the corresponding real network.
Extended Data Fig. 4 KL divergence of real networks under the power-law model of human expectations.
a, KL divergence of fully randomized versions of the real networks listed in Supplementary Table 12 (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) compared with the true value (\({D}_{\,\text{KL}}^{\text{real}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (9) with \(f(t)=(t+1)^{-\alpha}\), and we allow α to vary between 1 and 10. The real networks maintain lower KL divergence than the randomized networks across all values of α. b, Difference between the KL divergence of real and fully randomized networks as a function of α. c, KL divergence of degree-preserving randomized versions of the real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\) as α varies from 1 to 10. The real networks display lower KL divergence than the degree-preserving randomized versions across all values of α. d, Difference between the KL divergence of real and degree-preserving randomized networks as a function of α. All networks are undirected, and each line is calculated using one randomization of the corresponding real network.
Extended Data Fig. 5 KL divergence of real networks under the factorial model of human expectations.
a, KL divergence of fully randomized versions of the real networks listed in Supplementary Table 12 (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) compared with the exact value (\({D}_{\,\text{KL}}^{\text{real}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (9) with \(f(t)=1/t!\). b, KL divergence of degree-preserving randomized versions of the real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\). In both cases, the real networks maintain lower KL divergence than the randomized versions. Data points and error bars (standard deviations) are estimated from 10 realizations of the randomized networks.
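The power-law and factorial models above differ only in the weight function f(t) applied at memory depth t. A minimal numerical sketch of the series \(\hat{P}\propto\sum_{t\ge 0}f(t)P^{t+1}\) (truncated at a finite depth, which is an approximation; the function name is illustrative) is:

```python
import math
import numpy as np

def expectations_f(P, f, t_max=60):
    """Expectations Phat_ij proportional to sum_{t>=0} f(t) (P^{t+1})_ij, rows renormalized.
    P: row-stochastic transition matrix; f: weight over memory depth t,
    e.g. f = lambda t: (t + 1) ** -2 (power law) or f = lambda t: 1 / math.factorial(t)."""
    Phat = np.zeros_like(P)
    Pt = np.eye(len(P))
    for t in range(t_max):
        Pt = Pt @ P          # Pt now holds P^{t+1}
        Phat += f(t) * Pt
    return Phat / Phat.sum(axis=1, keepdims=True)
```

As a sanity check, choosing the exponential weights \(f(t)=\eta^t\) recovers (up to truncation error) the closed form \((1-\eta)P(I-\eta P)^{-1}\), since the normalization absorbs the \(1-\eta\) prefactor.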
Extended Data Fig. 6 Entropy and KL divergence of directed versions of real networks.
a, Entropy of directed versions of the real networks listed in Supplementary Table 12 (S^{real}) compared with fully randomized versions (S^{rand}). Entropy is calculated directly from Eq. (2) with the stationary distribution \(\pi\) computed numerically. b, KL divergence of directed versions of the real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) compared with fully randomized versions (\({D}_{\,\text{KL}}^{\text{rand}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (10) with η set to the average value 0.80 from our human experiments. c, Entropy of randomized versions of directed real networks with in- and out-degrees preserved (S^{deg}) compared with S^{real}. d, KL divergence of degree-preserving randomized versions of directed real networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)) compared with \({D}_{\,\text{KL}}^{\text{real}\,}\). Data points and error bars (standard deviations) are estimated from 100 realizations of the randomized networks.
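For directed networks the stationary distribution is no longer proportional to degree and, as the caption notes, must be computed numerically. A simple power-iteration sketch (assuming an ergodic walk, for example one restricted to the largest strongly connected component; the function name is illustrative) is:

```python
import numpy as np

def stationary_directed(P, tol=1e-12, max_iter=100_000):
    """Stationary distribution pi satisfying pi = pi P for a directed random walk,
    found by power iteration from the uniform distribution."""
    n = len(P)
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = pi @ P
        if np.abs(new - pi).sum() < tol:
            return new
        pi = new
    return pi
```

Because each step multiplies by a row-stochastic matrix, the iterate stays normalized, and convergence is geometric in the second-largest eigenvalue modulus of P.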
Extended Data Fig. 7 Entropy and KL divergence of temporally evolving versions of real networks.
a, Entropy of temporally evolving versions of the real networks listed in Supplementary Table 12 (S^{real}) compared with fully randomized versions (S^{rand}). Each line represents a sequence of growing networks and each symbol represents the final version of the network. b, KL divergence of evolving versions of the real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) compared with fully randomized versions (\({D}_{\,\text{KL}}^{\text{rand}\,}\)). Expectations \(\hat{P}\) are defined as in Eq. (10) with η set to the average value 0.80 from our human experiments. c, Entropy of temporally evolving versions of real networks (S^{real}) compared with degree-preserving randomized versions (S^{deg}). d, KL divergence of temporally evolving versions of real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) compared with degree-preserving randomized versions (\({D}_{\,\text{KL}}^{\text{deg}\,}\)). Across all panels, each point along the lines represents an average over five realizations of the randomized networks.
Extended Data Fig. 8 Evolution of the difference in entropy and KL divergence between real networks and randomized versions.
a, Difference between the entropy of temporally evolving real networks (S^{real}) and the entropy of fully randomized versions of the same networks (S^{rand}) plotted as a function of the fraction of the final network size. Each line represents a sequence of growing networks that culminates in one of the communication networks studied in the main text. b, Difference between the KL divergence of temporally evolving real networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) and that of fully randomized versions (\({D}_{\,\text{KL}}^{\text{rand}\,}\)) plotted as a function of the fraction of the final network size. When calculating the KL divergences, the expectations \(\hat{P}\) are defined as in Eq. (10) with η set to the average value 0.80 from our human experiments. Across both panels, each point along the lines represents an average over five realizations of the randomized networks.
Extended Data Fig. 9 Comparison of directed citation and language networks.
a, Out-degrees \({k}_{i}^{\,\text{out}\,}={\sum }_{j}{G}_{ij}\) of nodes in the arXiv HepTh citation network compared with the in-degrees \({k}_{i}^{\,\text{in}\,}={\sum }_{j}{G}_{ji}\) of the same nodes; we find a weak Spearman’s correlation of r_{s} = 0.18. b, Out-degrees compared with in-degrees of nodes in the Shakespeare language (noun transition) network; we find a strong correlation of r_{s} = 0.92. c, Entries in the stationary distribution π_{i} for different nodes in the citation network compared with the node-level entropy S_{i}; we find a weak negative Spearman’s correlation of r_{s} = −0.09. d, Entries in the stationary distribution compared with node-level entropies in the language network; we find a strong Spearman’s correlation of r_{s} = 0.87.
Extended Data Fig. 10 Comparison of allword transition networks and noun transition networks.
a,b, Difference between the KL divergence of language (word transition) networks (\({D}_{\,\text{KL}}^{\text{real}\,}\)) and that of degree-preserving randomized versions of the same networks (\({D}_{\,\text{KL}}^{\text{deg}\,}\)). We consider networks of transitions between all words (a) and networks of transitions between nouns only (b). c,d, Difference between the average clustering coefficient of language networks (CC^{real}) and that of degree-preserving randomized versions of the same networks (CC^{deg}), where transitions are considered between all words (c) or only between nouns (d). In all panels, data points and error bars (standard deviations) are estimated from 100 realizations of the randomized networks, and the networks are undirected.
Supplementary information
Supplementary Information
Supplementary discussion, Figs. 1–21 and Tables 1–12.
Reporting Summary
Source data
Source Data Fig. 1
Source data for Fig. 1, Supplementary Figs. 3–5 and Supplementary Tables 1–11.
Source Data Fig. 2
Source data for Fig. 2 and Supplementary Fig. 1.
Source Data Fig. 3
Source data for the networks in Fig. 3, Table 1 and Supplementary Figs. 6–9.
Cite this article
Lynn, C.W., Papadopoulos, L., Kahn, A.E. et al. Human information processing in complex networks. Nat. Phys. 16, 965–973 (2020). https://doi.org/10.1038/s41567-020-0924-7