

Neurocomputational mechanism of real-time distributed learning on social networks


Social networks shape our decisions by constraining what information we learn and from whom. Yet, the mechanisms by which network structures affect individual learning and decision-making remain unclear. Here, by combining a real-time distributed learning task with functional magnetic resonance imaging, computational modeling and social network analysis, we studied how humans learn from observing others’ decisions on seven-node networks with varying topological structures. We show that learning on social networks can be approximated by a well-established error-driven process for observational learning, supported by an action prediction error encoded in the lateral prefrontal cortex. Importantly, learning is flexibly weighted toward well-connected neighbors, according to activity in the dorsal anterior cingulate cortex, but only insofar as social observations contain secondhand, potentially intertwining, information. These data suggest a neurocomputational mechanism of network-based filtering on the sources of information, which may give rise to biased learning and the spread of misinformation in an interconnected society.
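The learning rule summarized above can be caricatured in a few lines. This is a purely illustrative sketch, not the authors' implementation: the ±1 action coding, the learning rate and the multiplicative use of the relative-degree (RD) weight are assumptions for exposition.

```python
def update_belief(e_old, observed_action, neighbor_degree, total_local_degree, alpha=0.3):
    """Error-driven belief update, weighted by the observed neighbor's relative degree (RD).

    e_old              -- current belief expectation, in [-1, 1]
    observed_action    -- neighbor's observed choice, coded +1 or -1
    neighbor_degree    -- number of connections the observed neighbor has
    total_local_degree -- summed degree of all of the observer's neighbors
    """
    rd = neighbor_degree / total_local_degree   # weight toward well-connected neighbors
    ape = observed_action - e_old               # action prediction error
    return e_old + alpha * rd * ape             # weighted error-driven update
```

Observing a well-connected neighbor (high RD) moves the belief further per observation than observing a peripheral one, which is the network-based filtering the abstract attributes to dACC activity.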


Fig. 1: Task schematic.
Fig. 2: DeGroot learning model.
Fig. 3: Behavioral evidence.
Fig. 4: rLPFC tracks the value estimate of aPE in S1, S2 and S3.
Fig. 5: Activity in the dACC correlates with RD values in S2 and S3.
Fig. 6: Activity in the dACC does not correlate with RD values in S1.
Fig. 7: VMPFC tracks the value estimate of updated Enew at the time of observation in S1, S2 and S3.


Data availability

Data underlying the findings of this study are available at Open Science Framework:

Code availability

Code supporting the findings of this study is available at Open Science Framework:


Acknowledgements


We thank Y. Yin, Y. Wang and Y. Dong for assistance with intranet setup, data collection and code validation. We also thank the National Center for Protein Sciences and the high-performance computing platform at the Center for Life Sciences at Peking University for facilitating data acquisition and computation. This work is supported by NSFC (32071095 to L.Z.), STI2030-Major Projects (2022ZD0205104 to L.Z.), China Postdoctoral Science Foundation (2022TQ0013 to Q.M.) and Center for Life Sciences at Peking University.

Author information

Contributions

Y.J. and L.Z. designed the study. Y.J. conducted the experiments. Y.J., Q.M. and L.Z. analyzed the data. Y.J. and L.Z. wrote the paper.

Corresponding author

Correspondence to Lusha Zhu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Neuroscience thanks A. Bhandari, S. Sul and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Stimuli.

(a) Network structures and private signals. Blue/yellow represents the color of the private signal at a specific node. Circle/square represents the location of a behavioral/fMRI participant, which was assigned pseudo-randomly (Methods). Network structures were displayed using the ‘force-directed’ algorithm implemented in MATLAB (R2017a), with minor adjustments to node coordinates to avoid overlapping edges. During the experiment, the network structure was presented to all participants, but each private signal was known only to the corresponding participant. (b) The empirical distribution of discriminability indices calculated from the pre-experiment simulation based on the DeGroot, Baseline, and Bayesian learning models (Methods). The radar plot illustrates the level of discriminability between any two candidate models. Each axis represents a model pair. For any point on an axis, the distance from the center of the plot corresponds to the discriminability of the given model pair (ranging from 0 to 1). The dotted, dashed, and solid lines in black represent, respectively, the 99th, 95th, and 90th percentiles of discriminability indices computed from all 128 × 853 candidate stimuli. The red line represents the discriminability averaged over the 40 sets of stimuli employed in the experiment. The discriminability index is defined as the proportion of choices simulated by one model, given a network structure and private signals on the network, that disagree with the simulation from the other model, and is averaged over decisions in S2 and S3 and across 7 agents simulated from different network locations (Methods). (c) Selected networks span reasonable ranges of the topological features relevant for information transmission (Methods). Each gray/red histogram depicts the distribution of the network parameter computed by pooling candidate/selected stimuli.
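The discriminability index defined in (b) reduces to a disagreement rate over matched sequences of simulated choices. A minimal sketch (the list-based interface is an assumption, not the authors' code):

```python
def discriminability(choices_model_a, choices_model_b):
    """Fraction of decisions on which two models' simulated choices disagree.

    Both inputs are equal-length sequences of simulated choices for the
    same network structure and private signals.
    """
    assert len(choices_model_a) == len(choices_model_b)
    disagreements = sum(a != b for a, b in zip(choices_model_a, choices_model_b))
    return disagreements / len(choices_model_a)
```

Averaging this quantity over S2/S3 decisions and the seven simulated network positions would yield the per-stimulus index described in the caption.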

Extended Data Fig. 2 The DeGroot learning model explains multiple aspects of behavior across stages.

The actual vs. model-predicted choice consensus (a) and choice accuracy (b). Choice consensus is defined as the proportion of participants embedded on the same network who choose the same (dominant) option at a given stage in a game. Choice accuracy is defined as the proportion of participants on the same network whose choices are consistent with the most likely underlying state given the distribution of private signals in the game. (c) Model-predicted choice difficulty vs. reaction time (RT), a widely used empirical measure for choice difficulty. Model-derived choice difficulty is defined as the entropy of the softmax action probability calculated from the DeGroot learning model (that is, more difficult when the model-derived action probability is closer to 0.5). To control for the influences of the initial guesses (which are affected by the distribution of private signals), effects of initial guesses (S0) are subtracted from all the above measures. Each dot represents the average value across all subject groups given a stage and a game, colored by the stage (N = 40 games).
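Model-derived choice difficulty, as defined in (c), is the entropy of the softmax action probability. For a binary choice this is just the binary entropy function, maximal when the action probability is 0.5 (a sketch; measuring entropy in bits is an assumption):

```python
import math

def choice_difficulty(p):
    """Binary entropy of the model-derived action probability p, in bits.

    Maximal (1.0) when p = 0.5, i.e., when the model is most uncertain,
    matching the caption's definition of a difficult choice.
    """
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)
```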

Extended Data Fig. 3 Model comparisons based on (a) individual-level and (b) network-level estimations.

Goodness-of-fit comparisons are based on the in-sample BIC scores (left), out-of-sample prediction power based on a five-fold cross-validation procedure (middle), and model frequencies and exceedance probability (Pexc) calculated by Bayesian model selection (right; Methods). Two variants of the Bayesian learning models (Noisy-Bayes and Baseline-Bayes) are considered (see Methods and Supplementary Note 2). Error bars in the middle panels of (a) represent mean ± intersubject SEM, those of (b) represent mean ± intergame SEM, and those in the right panels of (a, b) represent estimated model frequencies ± SD of the Dirichlet distribution. Individual-level analyses are based on N = 209 behavioral subjects, and network-level analyses are based on N = 40 games. *** P < 0.001, two-sided paired t-tests, all Bonferroni-corrected.

Extended Data Fig. 4 Activity in the right LPFC is correlated with the observed action and belief expectation estimate, with opposing signs.

(a) Neural betas with respect to two components in an action prediction error (aPE) signal, the observed actions (1 if observation matches the observer’s prior choice; otherwise, −1) and belief expectation estimates (Eold) associated with the observer’s prior decision. The beta values were separately extracted from the rLPFC cluster as identified by aPE estimates in Fig. 4a. (b) Visualization of rLPFC responses to observed actions and belief expectation estimates. Consistent with the aPE assumption, the mean rLPFC activity is higher when an observation is consistent with the observer’s prior decision (observation = 1) than when the observation differs from the observer’s prior decision (observation = −1). Also, the mean rLPFC activity demonstrates a negative main effect for high- vs. low-value estimates of belief expectation (Eold), based on median splits on belief expectation estimates for each fMRI participant. Each dot in violin plots represents a subject. Error bars represent intersubject SEM in the fMRI sample (N = 25). * P < 0.05, *** P < 0.001, two-sided t-tests.
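The opposing signs reported above are exactly what one would expect if rLPFC activity reflects a single aPE signal computed as the observation minus the belief expectation. A sketch under that assumption (the ±1 coding follows the caption; the subtraction form is the standard prediction-error convention, not verified against the authors' code):

```python
def action_prediction_error(observation, e_old):
    """aPE = observed action minus belief expectation.

    observation -- +1 if the neighbor's choice matches the observer's
                   prior choice, -1 otherwise
    e_old       -- belief expectation tied to the observer's prior decision
    """
    # The opposing signs fall out of the subtraction: aPE rises with the
    # observation and falls as e_old grows.
    return observation - e_old
```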

Extended Data Fig. 5 Behavioral and neural evidence that learning is not limited to the most-connected (MC) neighbor but also to the other (non-MC) neighbors.

(a) Mixed-effects logistic regression for each separate learning stage. The regression analyses are similar to those in Fig. 3a but with the following 3 regressors, serially orthogonalized: (i) the unweighted sum of observations of all neighbors, (ii) the action by the MC neighbor scaled by her degree centrality, and (iii) the ND-weighted sum of observations for all non-MC neighbors. Serial orthogonalization ensures the regression coefficient for the third regressor reflects only the variances in choice behavior that can be uniquely explained by this last regressor (see also Methods and Supplementary Table 4 for corresponding model-based analyses). (b) Whole-brain analyses show significant neural responses at the onsets of observations from non-MC neighbors (all thresholded at cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001). (c) ROI analyses comparing effects between the MC and non-MC neighbors. Mean fMRI activity is separately extracted for the MC and non-MC neighbors from respective ROIs and binned by the corresponding estimate values. *** P < 0.001, two-sided z-tests. Error bars in (a) represent the SE of fixed-effect estimates in the logistic regression of the behavioral sample (N = 209); error bars in (c) represent the intersubject SEM of the fMRI sample (N = 25).
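Serial orthogonalization, as applied to the three regressors in (a), projects out of each regressor everything explained by its predecessors, so only the final regressor's unique variance survives. A minimal Gram–Schmidt sketch in plain Python (an illustration of the procedure, not the authors' analysis code):

```python
def serially_orthogonalize(regressors):
    """Serially orthogonalize a list of equal-length 1-D regressors.

    Each regressor is replaced by its residual after projecting out all
    earlier (already-orthogonalized) regressors, so the last one retains
    only variance not explained by its predecessors.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    out = []
    for x in regressors:
        x = [float(v) for v in x]
        for q in out:
            coef = dot(x, q) / dot(q, q)          # projection onto q
            x = [xi - coef * qi for xi, qi in zip(x, q)]  # remove it
        out.append(x)
    return out
```

After the transform, each later regressor is orthogonal to all earlier ones, which is what licenses the unique-variance interpretation in the caption.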

Extended Data Fig. 6 Evidence for neighbor-by-neighbor neural representations.

(a) ROI analyses comparing neural responses between the 1st vs. 2nd, odd- vs. even-numbered, 1st half vs. 2nd half observations within a learning stage. The violin plots in the upper panels show the effect sizes within each corresponding ROI (two-sided t-tests, all P < 0.05, uncorrected). The lower panels visualize the effects by plotting the mean BOLD activity extracted from each ROI against bins of ascending values. (b) Whole-brain analyses demonstrating that the observed neural activation could not be entirely attributed to the game-to-game variations. Statistical parametric maps show neural responses at observation onsets to learning variables demeaned within each game (clusters in red), overlaid with the activation with respect to the original values (clusters in yellow). All thresholded at cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001, except for RD-related activation in S2 and S3, which is thresholded at Punc. < 0.001 with cluster size K > 20. The reduced RD-related activation following the removal of between-game variation is consistent with the definition of RD, as it contains both neighbor-by-neighbor (neighbor degree) and game-by-game (total local degree) variances. Error bars represent intersubject SEM in the fMRI sample (N = 25). n.s., not significant; * P < 0.05; all two-sided paired t-tests, uncorrected.

Extended Data Fig. 7 Choice-related neural activity at decision time.

(a) BOLD activity in the orbitofrontal cortex (OFC) is positively correlated with the value estimate of belief expectation for the chosen option at the time of decision submission (cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001). (b) BOLD activity in the anterior cingulate cortex (ACC) and neighboring medial prefrontal cortex (MPFC) at decision submission reflects the model-derived tendency of modifying one’s prior estimation. Left: BOLD activity at decision time in the ACC/MPFC, precuneus, and inferior parietal lobule (not shown) is higher when a subject revises her previous decision than when the subject sticks to the same decision (cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001). No region shows a decreased response to switch vs. stay at choice time at the same statistical threshold. Right: Results of mixed-effects linear regression show that the mean fMRI signal extracted from the identified ACC/MPFC cluster (peak voxel MNI coordinates: x, y, z = −6, 44, 17; as shown in the left panel) at the decision time is negatively correlated with the amount of change in the model-derived belief expectation from the beginning to the end of the corresponding learning stage (that is, belief change estimates). The x-axis represents the model-derived belief change within a learning stage, rounded to the nearest integer for illustration. Error bars represent intersubject SEM in the fMRI sample (N = 25).

Extended Data Fig. 8 Robustness and specificity of the encoding of RD values in the dACC in S2 and S3.

(a) Extent of the dACC responses to RD values in S2 and S3. Yellow clusters reflect the maximal extent of activation to RD, without controlling for any decision variables. Red clusters reflect the activation to the residuals of RD, after being orthogonalized against 10 variables-of-no-interest as parametric modulators (GLM3, Methods). All thresholded at cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001. (b–c) The visual cortex, but not the dACC, tracks the observees’ visual centralities in the network display (b) and the visual distance between the observer and observee’s locations (c). Visual centrality is defined as the Euclidean distance between the observee’s network position and the centroid of the network display. All thresholded at cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001. (d) Example networks where the observees are associated with the same visual centralities (left) or the same visual distance between the observer and observee (right), yet the dACC activity varies with the RD values. (e) Overlay of RD-related activation in S1, S2, and S3 in a single map. For illustration purposes, all maps are shown at Punc. < 0.001 with K > 10. (f) Whole-brain conjunction analyses for RD correlates. Left: A three-way conjunction on RD correlates among S1, S2, and S3 identified no significant overlap in either the positive or negative responses to RD (Methods). Right: A two-way conjunction analysis between S2 and S3 on RD correlates identified a significant overlap for the positive correlations with RD in the dACC between S2 and S3. No overlap was identified in the whole-brain conjunction analysis for the negative correlation with RD between S2 and S3 at the same threshold. All thresholded at cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001. Error bars represent intersubject SEM in the fMRI sample (N = 25).

Extended Data Fig. 9 Whole-brain and ROI analyses for RD correlates at observation onsets in S1.

(a) Brain regions where BOLD activity correlates with RD values at observation onsets in S1 (cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001; same as Fig. 6a but shown in different cuts; GLM1, Methods). (b–c) ROI analyses of each region in (a), with respect to neighbor’s degree (numerator of RD) and total local degree (TLD; denominator of RD), respectively. Unlike the dACC, where fMRI signals correlated with both neighbor degree and TLD in S2 and S3, the neural encoding of RD identified in S1 is driven by one of the two components in an RD signal. That is, a cluster in the visual cortex tracks only neighbor degree, whereas clusters in the precuneus, posterior cingulate cortex (PCC), and visual cortex track only TLD. Effect sizes in these regions in S2 and S3 are also included for completeness. Colored violin plots represent significant effects. Each dot represents a subject (N = 25). * P < 0.05, ** P < 0.01, *** P < 0.001, two-sided t-tests, Bonferroni-corrected.
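The RD signal dissected here is the ratio of the two components the caption names: a neighbor's degree over the observer's total local degree. Computing it for every neighbor requires only the local adjacency structure (a sketch; the dict-of-sets representation is an assumption):

```python
def relative_degrees(adjacency, observer):
    """Relative degree (RD) of each neighbor of `observer`.

    RD = neighbor's degree / total local degree (TLD), where TLD is the
    summed degree of all of the observer's neighbors. `adjacency` maps
    each node to the set of its neighbors.
    """
    neighbors = adjacency[observer]
    total_local_degree = sum(len(adjacency[n]) for n in neighbors)
    return {n: len(adjacency[n]) / total_local_degree for n in neighbors}
```

By construction the RD values across an observer's neighbors sum to 1, so RD acts as a normalized attention weight over information sources.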

Extended Data Fig. 10 Whole-brain psychophysiological interaction (PPI) analysis testing differential functional connectivity with the VMPFC in S2 and S3 vs. S1.

(a) Left: Increased functional connectivity between the seed region in the VMPFC (6-mm sphere around the peak activation as identified in Fig. 7a) and a cluster in the anterior cingulate cortex (ACC) at observation onsets in S2 and S3, relative to S1 (cluster-wise FWE-corrected P < 0.05, with cluster-forming threshold Punc. < 0.001; Methods). Right: No systematic difference in the effect sizes of functional coupling between S2 and S3 (two-sided paired t-test, t24 = −1.20, P = 0.242), as revealed by the PPI betas extracted from the significant cluster in the ACC as identified in the left panel. Each dot represents a subject (N = 25). (b) Overlay of the whole-brain PPI activation (green; as in (a)), RD correlates in S2 and S3 (blue; as in Fig. 5a), the overlap between the first two activation maps (red), and the result of a formal whole-brain conjunction analysis between regions correlating with RD values in S2 and S3 and regions demonstrating differential functional connectivity with the VMPFC seed region in S2 and S3 vs. S1 (yellow; cluster-wise FWE-corrected Pconj < 0.05, with cluster-forming threshold Punc. < 0.001).

Supplementary information

Supplementary Information

Supplementary Figs. 1–6, Tables 1–5 and Notes 1–3.

Reporting Summary

Animated illustration of the real-time distributed learning game.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Jiang, Y., Mi, Q. & Zhu, L. Neurocomputational mechanism of real-time distributed learning on social networks. Nat Neurosci 26, 506–516 (2023).


