
Population Structure Promotes the Evolution of Intuitive Cooperation and Inhibits Deliberation

Scientific Reports, volume 8, Article number: 6293 (2018)

Abstract

Spatial structure is one of the most studied mechanisms in evolutionary game theory. Here, we explore the consequences of spatial structure for a question which has received considerable empirical and theoretical attention in recent years, but has not yet been studied from a network perspective: whether cooperation relies on intuitive predispositions or deliberative self-control. We examine this question using a model which integrates the “dual-process” framework from cognitive science with evolutionary game theory, and considers the evolution of agents who are embedded within a social network and only interact with their neighbors. In line with past work in well-mixed populations, we find that selection favors either the intuitive defector strategy which never deliberates, or the dual-process cooperator strategy which intuitively cooperates but uses deliberation to switch to defection when doing so is payoff-maximizing. We find that sparser networks (i.e., smaller average degree) facilitate the success of dual-process cooperators over intuitive defectors, while also reducing the level of deliberation that dual-process cooperators engage in; and that these results generalize across different kinds of networks. These observations demonstrate the important role that spatial structure can have not just on the evolution of cooperation, but on the co-evolution of cooperation and cognition.

Introduction

Understanding the evolution of cooperation, which is collectively beneficial but individually costly, is a major focus of research in a wide range of fields including computer science, psychology, economics, and evolutionary biology. To that end, a great deal of work has illuminated various mechanisms which can promote the evolution of cooperative behavior1. In recent years, work on the evolution of cooperation has begun to consider not just cooperative or non-cooperative choices, but also the cognitive processes underlying these choices2,3,4 (for a mini-review, see ref.5). This work has explored cognition using the “dual-process” framework6,7,8,9, in which decisions are made based on two different cognitive processes: (1) Automatic, intuitive and relatively effortless yet inflexible processes; versus (2) controlled, deliberate and relatively effortful but flexible processes.

Motivating the theoretical investigation of intuition, deliberation and the evolution of cooperation is a body of empirical work using economic game experiments10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31. In these studies, participants make incentivized choices about paying costs to benefit others, and are experimentally induced to rely relatively more on intuition or deliberation. For example, participants may be placed under time pressure, made to complete another cognitively demanding task while making their decision, or simply asked to respond using their intuition or careful reasoning. A meta-analysis of studies examining positively non-zero-sum cooperation games of the type typically studied in evolutionary game theory models (e.g., the Prisoner’s Dilemma, PD) found that intuitive gut responses tend to support cooperation, while deliberation undermines cooperation in games where defection is strictly payoff-maximizing (e.g., 1-shot PDs) but supports cooperation in games where cooperation can pay off (e.g., repeated games)32. [Note that while a subsequent multi-lab pre-registered replication project raised questions about a causal effect of time pressure on cooperation in 1-shot social dilemmas11, 66% of participants in the time pressure condition of those experiments did not respond within the allotted time12, and a more recent pre-registered study solved this non-compliance problem and confirmed prior conclusions that cooperation was higher under time pressure than time delay24]. Similar results regarding intuitive cooperation were also found in a field experiment on real-world helping behavior33, and when analyzing interviews with people who risked their lives to save strangers34. This pattern of results was explained by a verbal theory, the Social Heuristics Hypothesis (SHH)16,35. 
The SHH postulates that typically advantageous (i.e., long-run payoff maximizing) behaviors become automatized as intuitive default responses, whereas deliberation can override these intuitive defaults to better match the strategic details of the current situation at hand.

Evolutionary game theoretic models of the co-evolution of cognition and cooperation sought to explore this question formally, conceptualizing intuition versus deliberation as a trade-off between ease and flexibility, and asking what intuitive and deliberative behaviors would be favored by natural selection2,3,4. The results have indicated that when future consequences are sufficiently likely, natural selection favors a “dual-process cooperation” strategy that accords with the experimental results: this strategy is intuitively predisposed to cooperate, but uses deliberation to overrule that predisposition and instead defect when doing so is payoff-maximizing (e.g., in 1-shot anonymous interactions).

This prior work on the co-evolution of cooperation and cognition, however, has only considered well-mixed populations. A separate (and much older) line of work examining structured populations has shown that the topology of interaction affects the evolution of cooperation36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53. In particular, non-random interactions can facilitate the evolution of cooperation, even allowing cooperation to succeed in 1-shot Prisoner’s Dilemma games. Experimental work has also explored the role of structure, and although some studies have found little impact of structure on cooperation54,55,56,57,58,59,60, it has been shown that structure does promote cooperation when particular theoretically-derived conditions are satisfied61.

None of this work on population structure, however, has considered the role of intuition versus deliberation. Here we bridge these two approaches to investigate the effect of interaction structure on the co-evolution of cognition and cooperation. We explore conditions (network structures and frequency of one-shot versus repeated games) under which natural selection favors costly deliberation over intuition, as well as cooperative over selfish intuitive responses. In doing so, we shed light on the role of network structure in shaping not only our actions, but also the thought processes that give rise to those actions.

To do so, we adapt a model of the co-evolution of cooperation and cognition proposed for well-mixed populations2. In each generation, agents play a series of Prisoner's Dilemma (PD) games in which they can either choose to always defect (ALLD) or to play the reciprocal strategy tit-for-tat (TFT), which cooperates in the first period and then copies the partner's move from the previous period. The PD games come in one of two types: with probability 1-p it is a one-shot anonymous game (in which defecting is strictly payoff-maximizing); whereas with probability p it is an infinitely repeated game (such that it is payoff-maximizing to play the same strategy as the partner). Cognition is modeled as follows. In each game, each agent can either choose her strategy using a generalized intuitive strategy that is independent of the game type; or she can pay a cost (stochastically sampled decision-by-decision) to deliberate and tailor her strategy choice to whether the game is one-shot or repeated. Thus, each agent has a strategy vector that contains the following four elements: S_i, the probability of playing TFT when the agent decides intuitively and is agnostic to the game type; S_1, the probability of playing TFT in one-shot PDs when the agent deliberates and tailors her strategy; S_r, the probability of playing TFT in repeated PDs when the agent deliberates and tailors her strategy; and T, the maximum cost which the agent is willing to pay to deliberate. In each interaction, the agent's cost of deliberation (d*) is drawn from the uniform distribution on [0,1]. The cognitive processing mode (intuition versus deliberation) is then determined by the cost threshold T: if d* ≤ T, the agent pays the cost and deliberates, and if d* > T, the agent plays both game types with the same generalized strategy S_i.
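The per-interaction cognition step described above can be sketched in code as follows (a minimal Python illustration of the published model description; the function name and the dictionary representation of the strategy vector are our own assumptions, not the authors' implementation):

```python
import random

def choose_strategy(agent, is_repeated):
    """Return (plays_TFT, deliberation_cost_paid) for one interaction.

    `agent` maps the strategy-vector elements from the text:
    'S_i' (intuitive TFT prob.), 'S_1' (deliberative TFT prob., 1-shot),
    'S_r' (deliberative TFT prob., repeated), 'T' (cost threshold).
    """
    d_star = random.random()              # deliberation cost ~ Uniform[0, 1]
    if d_star <= agent['T']:              # pay the cost and tailor the choice
        p_tft = agent['S_r'] if is_repeated else agent['S_1']
        cost = d_star
    else:                                 # rely on the generalized intuition
        p_tft = agent['S_i']
        cost = 0.0
    return random.random() < p_tft, cost
```

For example, a dual-process cooperator (S_i = 1, S_r = 1, S_1 = 0) with a positive threshold T intuitively plays TFT, but defects in one-shot games whenever the sampled cost falls below T.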

Our key contribution is to add interaction structure to this model. We do so by specifying a network in which agents are represented as nodes and only interact with their immediate neighbors, and in which agents update their strategies via a death-birth process with exponential fitness. In this evolutionary dynamic, every agent has a fixed strategy vector during each generation and accumulates payoffs across games with all of her neighbors; at the end of each generation, an agent is randomly selected to update, and her strategy is replaced by that of a neighbor selected with probability proportional to an exponential function of the neighbors' game scores (or, with probability u, a mutation occurs and a randomly drawn strategy is substituted instead)62. This process can represent either genetic evolution, in which case the updating agent dies and the replacing agent reproduces, or cultural evolution/social learning, in which case the updating agent imitates the replacing agent's strategy. Within this model setup, we examine the influence of population structure on the co-evolution of cooperation and cognition by determining the impact of varying the average number of neighbors on the evolutionary outcomes.
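One possible rendering of this death-birth update with exponential fitness is sketched below (our own minimal Python version; the data structures and the function name are assumptions, not the authors' code):

```python
import math
import random

def death_birth_update(payoffs, neighbors_of, strategies, u=0.01, w=4.0):
    """One step of the death-birth process with exponential fitness.

    A random agent is chosen to update; a neighbor's strategy replaces
    hers with probability proportional to exp(w * payoff), and with
    probability u a fresh random strategy is substituted (mutation).
    Returns the index of the updated agent.
    """
    focal = random.randrange(len(strategies))
    if random.random() < u:               # mutation: random strategy vector
        strategies[focal] = [random.random() for _ in range(4)]  # (S_i, S_1, S_r, T)
        return focal
    nbrs = neighbors_of[focal]
    weights = [math.exp(w * payoffs[j]) for j in nbrs]
    model = random.choices(nbrs, weights=weights, k=1)[0]
    strategies[focal] = list(strategies[model])  # copy the chosen neighbor
    return focal
```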

Results

We begin by examining the evolutionary outcomes on cycles, networks in which each agent is connected to k/2 neighbors on each side (for a total of k neighbors). In particular, we consider the average value of each strategy parameter in steady state, and ask how these values vary based on p (probability of repeated games) and k (number of neighbors). Figure 1A shows the average value of S_i (intuitive response) as a function of p and k. To summarize the impact of k on S_i, we fit a sigmoid function to the S_i curve for each value of k and then use that fit to find the critical value of p at which S_i equals 0.5 (which we refer to as p*), representing the probability of repeated games at which the predominant intuitive strategy transitions from defection to cooperation (Fig. 1B). Figure 2A shows the value of the deliberation cost threshold T as a function of p for different values of k, and Fig. 2B summarizes the results by showing the maximum value of T over all p's (T_max) for each value of k.
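Extracting p* from simulated S_i curves could be implemented along these lines (a hedged sketch using scipy's curve_fit; the exact sigmoid parameterization the authors used is not specified, so this two-parameter logistic form is an assumption):

```python
import numpy as np
from scipy.optimize import curve_fit

def critical_p(p_values, s_i_values):
    """Fit a logistic sigmoid to S_i(p) and return p* where the fit crosses 0.5.

    For a logistic centered at p_star, the curve equals 0.5 exactly at p = p_star,
    so the fitted center is the critical probability of repeated games.
    """
    def sigmoid(p, p_star, slope):
        return 1.0 / (1.0 + np.exp(-slope * (p - p_star)))
    (p_star, _slope), _cov = curve_fit(sigmoid, p_values, s_i_values,
                                       p0=[0.5, 10.0], maxfev=10000)
    return p_star
```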

Figure 1

Network structure promotes the evolution of intuitive cooperation: as the density of network connections decreases, it becomes easier for selection to favor intuitive cooperators. Shown is the average intuitive response (S_i, probability of playing TFT) across different values of p (probability of repeated games) for cycles with different numbers of neighbors (k) per node. (A) S_i for six representative values of k across the full range of p. (B) Critical value of p at which S_i = 0.5, across the full range of k. As k decreases, the transition from S_i = 0 to S_i = 1 occurs at lower values of p.

Figure 2

Network structure reduces the amount of deliberation: as the density of network connections decreases, the maximum cost agents are willing to pay to deliberate decreases. (A) Cost threshold of deliberation (T) for six representative values of k across the full range of probabilities of repeated games (p). (B) Maximum value of the cost threshold of deliberation over all p values (T_max) as a function of the number of neighbors in the cycle (k). As k decreases, dual-process cooperators engage in less deliberation (T_max decreases).

We see that for high values of k (highly connected networks) the results match those found previously in well-mixed populations2: for low values of p, intuitive defectors (ID) who never deliberate (S_i = 0, T = 0) are dominant (the deliberative strategies S_1 and S_r are rarely used, and thus are dominated by neutral drift and hover around 0.5 - Fig. 3); but once p becomes sufficiently high, the dual-process cooperator (DC) strategy (S_i = 1, S_r = 1, S_1 = 0, T = c(1-p)) dominates (and as T approaches 0, S_1 and S_r are used less and less, and thus get pulled back towards 0.5 by neutral drift - Fig. 3).

Figure 3

Network structure has little qualitative impact on deliberative responses in repeated games, but promotes deliberative cooperation in one-shot games. (A) Deliberative response in repeated games (S_r) and (B) deliberative response in 1-shot games (S_1) across different values of the probability of repeated games (p), for different numbers of neighbors in the cycle (k).

As the number of neighbors in the network (and thus the density of connections) decreases, however, we observe marked impacts on both S_i and T: the emergence of the dual-process cooperator strategy (S_i = 1) takes place at lower values of p, and these dual-process cooperators engage in less deliberation (T_max decreases). These results are visualized more fully in Fig. 4, which shows heatmaps of S_i and T as a function of k and p. In sum, we see that as in the well-mixed population, there are only two dominant strategies: for high values of k and low values of p, the intuitive defectors (ID) who never deliberate are dominant, while for low values of k and high values of p, the dual-process cooperator (DC) strategy dominates. Rather than introducing new successful strategies, interaction structure makes it easier for the DC strategy to succeed, and reduces the amount of deliberation DC agents are willing to engage in.

Figure 4

Dual-process cooperators evolve when it is sufficiently likely that games are repeated and/or the network is sufficiently sparse. (A) Probability of intuitive cooperation (S_i) and (B) cost threshold of deliberation (T) as a function of the number of neighbors in the cycle (k) and the probability of repeated games (p). The black lines in both panels represent the values of k and p at which the dominant strategy transitions from ID to DC. There are only two dominant strategies: for high values of k and low values of p the intuitive defectors (ID) who never deliberate are dominant, while for low values of k and high values of p the dual-process cooperator (DC) strategy dominates. Lower values of k make it easier for the DC strategy to succeed, and reduce the amount of deliberation DC agents are willing to engage in.

So far, we have examined the evolutionary dynamics on cycles, which have a homogeneous structure. We now demonstrate the robustness of our results to various network structures that are heterogeneous (i.e., in which not all agents have the same number of neighbors). To do so, we generated heterogeneous networks using the following network models: Watts-Strogatz Small-World63, Barabási-Albert Scale-Free64, and Erdős-Rényi65 random networks. Figure 5 summarizes how changing the average degree k influences the evolutionary outcomes for each network structure. We see that p* and T_max follow an extremely similar pattern across all network structures, showing that our results are robust to heterogeneous networks. Furthermore, the extreme level of similarity suggests that the effect of structure on the co-evolution of cooperation and cognition is mainly driven by the sparsity of connections within the network, rather than by other properties of the network structure.

Figure 5

Similar evolutionary dynamics are observed across varying network structures. (A) Critical value of the probability of repeated games p at which S_i = 0.5 and (B) maximum value of the cost threshold of deliberation T over all values of p, as a function of average network degree k for Cycle, Watts-Strogatz Small-World, Barabási-Albert Scale-Free, and Erdős-Rényi random networks. The results obtained for cycles are robust to networks with heterogeneous degree, suggesting that the effect of structure on the co-evolution of cooperation and cognition is mainly driven by the sparsity of connections within the network.

Discussion

Our results demonstrate how network topology, and in particular sparsity of connections, can have an important impact on the co-evolution of cooperation and cognition: a small number of neighbors per agent results in higher cooperation even when repeated interactions are rare, and it also increases the tendency of agents to rely on intuitive impulses. This was true across a range of different network structures. More broadly, our results show the robustness of dual-process cooperation to relaxing the unrealistic assumption of random matching: even in structured populations, evolution only ever favors agents who (i) always intuitively defect, or (ii) are intuitively predisposed to cooperate but who, when deliberating, switch to defection if it is in their self-interest to do so. However, the specific conditions under which the transition between these two strategies occurs depend strongly on the network structure, with sparser networks allowing dual-process cooperation to dominate for lower probabilities of repeated games; and the specifics of the dual-process strategy also vary with population structure, such that sparser networks lead to less willingness to deliberate.

Why is this so? Reducing the number of neighbors (k) in the cycle leads to greater assortment66: when agents only interact with their neighbors in a sparse network, the emergence of clusters in which agents interact with other agents who have similar strategies is facilitated. The formation of clusters makes cooperators more likely to interact with other cooperators and collect the mutual benefits of cooperation. This increases cooperators' payoffs relative to defectors and helps stabilize cooperation. Hence, decreasing k makes cooperation more beneficial in the 1-shot game and results in a transition to DC at lower values of p. For very low values of k (k < b/c, where b and c are, respectively, the benefit and cost of cooperation42), the favored strategy is always cooperation across all p values. This increase in assortment also reduces the value of deliberation (and thus the cost T that agents are willing to pay to deliberate), because as the likelihood that your partner has the same strategy as you increases, it becomes less beneficial to switch to defection in 1-shot games: strategies that switch to defection will wind up interacting with other strategies that also defect. The results we present here using structured populations correspond nicely to prior work2 that varied assortment mathematically, without explicitly modeling population structure.
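As a concrete check of the simple rule from ref. 42 under the payoff values used later in the Model section (b = 4, c = 1), a one-line helper (hypothetical, for illustration only):

```python
def cooperation_always_favored(k, b=4.0, c=1.0):
    """Simple rule of ref. 42: on a regular graph of degree k,
    death-birth updating favors cooperation when k < b/c."""
    return k < b / c
```

With b/c = 4, the rule holds only for k = 2 among the even degrees considered here, consistent with cooperation being favored at all p only on the sparsest cycles.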

In our model, we made the simplifying assumption that the cost of deliberation was drawn from a uniform distribution, a choice that has been shown to influence the evolutionary dynamics3: other distributions allow the success of a strategy which intuitively defects and uses deliberation to cooperate in repeated games. Considering the impact of network structure with alternative distributions of deliberation costs is therefore an important direction for future research. Similarly, we assumed that intuition is totally insensitive to context whereas deliberation is perfectly sensitive. As previous work has shown that allowing intuition to be somewhat sensitive to context can allow the success of a “dual-process attender” strategy which distinguishes between one-shot and repeated games even when using intuition4, the interaction between context-sensitive intuition and spatial structure is also an important direction for future work.

Another promising extension is to consider a case where the probability of repeated games is not the same for all agents, but is instead a function of how agents are connected through the network: in real-world settings, people who interact within a densely connected community are more likely to be engaged in interactions that carry future consequences, while people who are socially distant from each other are less concerned about their future interactions. Hence, making the value of p heterogeneous as a function of agents' local structure is a natural future direction. It would also be informative to consider the co-evolution of cooperation and cognition on dynamic networks, where the structure is not fixed but instead can evolve over time or be altered by the agents67. Finally, our findings regarding the impact of interaction structure on the co-evolution of cognition and cooperation also suggest that extending models of the evolution of intuition and deliberation in anti-coordination contexts (e.g., using snowdrift games) and in non-cooperative contexts68,69,70 to include non-random interactions will be a valuable direction for future work. The topology of interaction is a key feature of our world, and has important impacts on both how we act and how we think.

Model

Our results are produced using agent-based simulations. In our simulations, agents play with, and adopt the strategies of, their immediate neighbors in a network with population size N = 100. In each encounter between two agents, the players can choose to either play TFT or ALLD. With probability 1-p the game is a 1-shot PD with payoff matrix \([\begin{array}{cc}R=(b-c) & S=-c\\ T=b & P=0\end{array}]\) and with probability p they play an infinitely repeated PD in which agents play the stage game from the 1-shot PD each period, yielding an average payoff per round of \([\begin{array}{cc}R=(b-c) & S=0\\ T=0 & P=0\end{array}]\), where b = 4 and c = 1 in our simulations. The strategy of agent \(i\in \{1,\ldots ,N\}\) is characterized by \({s}_{i}=({S}_{i}^{i},{S}_{r}^{i},{S}_{1}^{i},{T}^{i})\), specifying the behavior of the agent when she decides intuitively (TFT with probability \({S}_{i}^{i}\), independent of the game type) and when she decides deliberately (TFT with probability \({S}_{1}^{i}\) in the 1-shot game and TFT with probability \({S}_{r}^{i}\) in the repeated game). \({T}^{i}\) determines the maximum cost which the agent is willing to pay to deliberate: in each interaction, a random cost d* is drawn from the uniform distribution on [0,1]; if d* ≤ \({T}^{i}\) the agent deliberates and tailors her strategy, playing TFT in the 1-shot game with probability \({S}_{1}^{i}\) and in the repeated game with probability \({S}_{r}^{i}\); otherwise she uses the intuitive strategy and plays TFT with the same probability \({S}_{i}^{i}\) regardless of game type. Once all agents have completed interactions with all their neighbors, payoffs are calculated and evolution occurs. We use a Moran death-birth process with an exponential payoff function. In each generation, an agent i is randomly selected to change strategy.
Agent i then adopts the strategy of another agent j in her neighborhood, selected with probability proportional to \(W({s}_{j}\,\to {s}_{i})=\exp (w{\pi }_{j})\), where \({\pi }_{j}\) is the accumulated payoff of agent j averaged over her degree and w is the selection intensity. With probability u a mutation occurs and a random strategy is adopted instead. In our simulations, we used w = 4 for the selection intensity (strong selection) and u = 0.01 for the mutation rate.
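The expected per-game payoff implied by the two payoff matrices above can be written out as follows (a sketch under our own naming; q_self and q_other denote the realized TFT probabilities of the two players for the game at hand, after the intuition/deliberation step has been resolved):

```python
def expected_payoff(q_self, q_other, repeated, b=4.0, c=1.0):
    """Expected payoff to a player who cooperates (plays TFT) with
    probability q_self against a partner with probability q_other,
    for one game of the given type, using the matrices from the text."""
    if repeated:
        # average per round: only mutual TFT earns b - c; all other cells are 0
        return q_self * q_other * (b - c)
    # one-shot PD: R = b - c, S = -c, T = b, P = 0
    return (q_self * q_other * (b - c)
            + q_self * (1.0 - q_other) * (-c)
            + (1.0 - q_self) * q_other * b)
```

For instance, a defector facing a TFT player earns b = 4 in the one-shot game but 0 per round in the repeated game, which is what makes deliberate defection pay only in one-shot interactions.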

To generate homogeneous networks, we used cycles where the number of neighbors for each node \({\rm{k}}\in \{2,4,\ldots ,40\}\). To generate heterogeneous network structures with varying average degree, we used the following network models: Watts-Strogatz Small-World networks, where the number of neighbors for each node in the ring structure \({\rm{k}}\in \{2,4,\ldots ,40\}\) and the rewiring probability \({{\rm{p}}}_{{\rm{rw}}}=0.2\); Barabási-Albert Scale-Free networks, where the number of edges added at each time step \(m\in \{1,2,\ldots ,23\}\); and Erdős-Rényi random networks, where the probability of edge creation \({{\rm{p}}}_{{\rm{ec}}}\in \{0.02,0.04,\ldots ,0.4\}\). We ran the simulations for different values of p (discretized with resolution 0.1) on each network structure. At the beginning of each simulation run, each agent's strategy is initialized from independent uniform distributions. Each simulation run continued until no more than one agent updated its strategy over a consecutive window of \({10}^{4}\) generations, as in prior work71,72. To enhance convergence, we used a noise threshold of ε = 0.05 for strategy differences, below which agents do not adopt a new strategy. All results are averaged over 1000 initializations. The simulations were run in parallel on the Yale computing cluster.
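The four network families with the parameter ranges listed above can be generated with networkx (a sketch; the function and parameter names here are ours, and setting the rewiring probability to 0 reduces the Watts-Strogatz model to the ring lattice used for cycles):

```python
import networkx as nx

def make_network(kind, n=100, k=4, m=2, p_ec=0.04, p_rw=0.2, seed=0):
    """Generate one of the population structures used in the simulations."""
    if kind == "cycle":
        # ring lattice: k/2 neighbors on each side, no rewiring
        return nx.watts_strogatz_graph(n, k, 0.0, seed=seed)
    if kind == "small_world":
        return nx.watts_strogatz_graph(n, k, p_rw, seed=seed)
    if kind == "scale_free":
        return nx.barabasi_albert_graph(n, m, seed=seed)
    if kind == "random":
        return nx.erdos_renyi_graph(n, p_ec, seed=seed)
    raise ValueError(kind)
```

For example, make_network("cycle", k=4) yields a 100-node ring in which every agent has exactly four neighbors.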

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. 1.

    Nowak, M. A. Five rules for the evolution of cooperation. Science 314, 1560–1563 (2006).

  2. 2.

    Bear, A. & Rand, D. G. Intuition, deliberation, and the evolution of cooperation. Proceedings of the National Academy of Sciences 113, 936–941 (2016).

  3. 3.

    Jagau, S. & van Veelen, M. A general evolutionary framework for the role of intuition and deliberation in cooperation. 1, 0152 (2017).

  4. 4.

    Bear, A., Kagan, A. & Rand, D. G. Co-Evolution of Cooperation and Cognition: The Impact of Imperfect Deliberation and Context-Sensitive Intuition. Proc Roy Soc B 284 (2017).

  5. 5.

    Bear, A. & Rand, D. G. The value of information. Nature Human Behavior 1, 1–2, https://doi.org/10.1038/s41562-017-0156 (2017).

  6. 6.

    Sloman, S. A. The empirical case for two systems of reasoning. Psychological Bulletin 119, 3 (1996).

  7. 7.

    Kahneman, D. A perspective on judgment and choice: Mapping bounded rationality. American Psychologist 58, 697–720 (2003).

  8. 8.

    Evans, J. S. B. & Stanovich, K. E. Dual-process theories of higher cognition advancing the debate. Perspectives on Psychological Science 8, 223–241 (2013).

  9. 9.

    Stanovich, K. E. & West, R. F. Individual Differences in Rational Thought. Journal of Experimental Psychology: General. 127, 161–188 (1998).

  10. 10.

    Rand, D. G. Social dilemma cooperation (unlike Dictator Game giving) is intuitive for men as well as women. Journal of experimental social psychology 73, 164–168 (2017).

  11. 11.

    Bouwmeester, S. et al. Registered Replication Report: Rand, Greene, and Nowak (2012). Perspectives on Psychological Science, 1745691617693624 (2017).

  12. 12.

    Rand, D. G. Reflections on the Time-Pressure Cooperation Registered Replication Report. Perspectives on Psychological Science, 1745691617693625 (2017).

  13. 13.

    Dickinson, D. L. & McElroy, T. Sleep restriction and circadian effects on social decisions. European Economic Review 97, 57–71 (2017).

  14. 14.

    Rand, D. G., Newman, G. E. & Wurzbacher, O. Social context and the dynamics of cooperative choice. Journal of Behavioral Decision Making 28, 159–166 (2015).

  15. 15.

    Ma, Y., Liu, Y., Rand, D. G., Heatherton, T. F. & Han, S. Opposing Oxytocin Effects on Intergroup Cooperative Behavior in Intuitive and Reflective Minds. Neuropsychopharmacology 40, 2379–2387 (2015).

  16. 16.

    Rand, D. G. et al. Social Heuristics Shape Intuitive Cooperation. Nature Communications 5, 3677 (2014).

  17. 17.

    Rand, D. G. & Kraft-Todd, G. T. Reflection Does Not Undermine Self-Interested Prosociality. Frontiers in Behavioral Neuroscience 8, 300 (2014).

  18. 18.

    Verkoeijen, P. P. J. L. & Bouwmeester, S. Does Intuition Cause Cooperation? PLoS ONE 9, e96654 (2014).

  19. 19.

    Cone, J. & Rand, D. G. Time Pressure Increases Cooperation in Competitively Framed Social Dilemmas. PLoS ONE 9, e115756 (2014).

  20. 20.

    Rand, D. G., Greene, J. D. & Nowak, M. A. Spontaneous giving and calculated greed. Nature 489, 427–430 (2012).

  21. 21.

    Tinghög, G. et al. Intuition and cooperation reconsidered. Nature 497, E1–E2 (2013).

  22. 22.

    Lotz, S. Spontaneous Giving Under Structural Inequality: Intuition Promotes Cooperation in Asymmetric Social Dilemmas. PLoS ONE 10, e0131562 (2015).

  23. 23.

    Lohse, J. Smart or Selfish - When Smart Guys Finish Nice. University of Heidelberg Department of Economics Discussion Paper Series (2014).

  24. 24.

    Everett, J., Ingbretsen, Z., Cushman, F. A. & Cikara, M. Deliberation erodes cooperative behaviour – even towards competitive outgroups, even when using a control condition, and even when controlling for sample bias. Journal of Experimental Social Psychology 73, 76–81 (2017).

  25. 25.

    Rand, D. G., Brescoll, V. L., Everett, J. A. C., Capraro, V. & Barcelo, H. Social heuristics and social roles: Intuition favors altruism for women but not for men. Journal of Experimental Psychology: General 145, 389–396 (2016).

  26. 26.

    Capraro, V. & Cococcioni, G. Rethinking spontaneous giving: Extreme time pressure and ego-depletion favor self-regarding reactions. Sci. Rep. 6, 27219 (2016).

  27. 27.

    Capraro, V. & Cococcioni, G. Social setting, intuition, and experience in laboratory experiments interact to shape cooperative decision-making. Proc Roy Soc B (2015).

  28. 28.

    Schulz, J. F., Fischbacher, U., Thöni, C. & Utikal, V. Affect and fairness: Dictator games under cognitive load. Journal of Economic Psychology 41, 77–87 (2014).

  29. 29.

    Cornelissen, G., Dewitte, S. & Warlop, L. Are Social Value Orientations Expressed Automatically? Decision Making in the Dictator Game. Personality and Social Psychology Bulletin 37, 1080–1090 (2011).

  30. 30.

    Døssing, F., Piovesan, M. & Wengstrom, E. Cognitive Load and Cooperation. Review of Behavioral Economics 4, 69–81 (2017).

  31. 31.

    Strømland, E., Tjøtta, S. & Torsvik, G. Cooperating, fast and slow: Testing the social heuristics hypothesis. CESifo Working Paper Series No. 5875. Available at SSRN: http://ssrn.com/abstract=2780877 (2016).

  32. 32.

    Rand, D. G. Cooperation, fast and slow: Meta-analytic evidence for a theory of social heuristics and self-interested deliberation. Psychological Science 27, 1192–1206 (2016).

  33. Artavia-Mora, L., Bedi, A. S. & Rieger, M. Intuitive Help and Punishment in the Field. European Economic Review 92, 133–145 (2017).

  34. Rand, D. G. & Epstein, Z. G. Risking Your Life Without a Second Thought: Intuitive Decision-Making and Extreme Altruism. PLoS ONE 9, e109687 (2014).

  35. Peysakhovich, A. & Rand, D. G. Habits of Virtue: Creating Norms of Cooperation and Defection in the Laboratory. Management Science 62, 631–647 (2016).

  36. Nowak, M. A. & May, R. M. Evolutionary games and spatial chaos. Nature 359, 826–829 (1992).

  37. Ellison, G. Learning, Local Interaction, and Coordination. Econometrica 61, 1047–1071 (1993).

  38. Lindgren, K. & Nordahl, M. G. Evolutionary dynamics of spatial games. Physica D 75, 292–309 (1994).

  39. Killingback, T. & Doebeli, M. Spatial Evolutionary Game Theory: Hawks and Doves Revisited. Proceedings of the Royal Society of London B: Biological Sciences 263, 1135–1144 (1996).

  40. Nakamaru, M., Matsuda, H. & Iwasa, Y. The Evolution of Cooperation in a Lattice-Structured Population. Journal of Theoretical Biology 184, 65–81 (1997).

  41. Hauert, C. & Doebeli, M. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature 428, 643–646 (2004).

  42. Ohtsuki, H., Hauert, C., Lieberman, E. & Nowak, M. A. A simple rule for the evolution of cooperation on graphs and social networks. Nature 441, 502–505 (2006).

  43. Szabó, G. & Fáth, G. Evolutionary games on graphs. Physics Reports 446, 97–216 (2007).

  44. Helbing, D. & Yu, W. The outbreak of cooperation among success-driven individuals under noisy conditions. Proceedings of the National Academy of Sciences 106, 3680–3685 (2009).

  45. Nowak, M. A., Tarnita, C. E. & Antal, T. Evolutionary dynamics in structured populations. Philosophical Transactions of the Royal Society B: Biological Sciences 365, 19–30 (2010).

  46. Tarnita, C. E., Ohtsuki, H., Antal, T., Fu, F. & Nowak, M. A. Strategy selection in structured populations. Journal of Theoretical Biology 259, 570 (2009).

  47. Taylor, P. D., Day, T. & Wild, G. Evolution of cooperation in a finite homogeneous graph. Nature 447, 469–472 (2007).

  48. Hauert, C. & Szabó, G. Game theory and physics. American Journal of Physics 73, 405–414 (2005).

  49. Allen, B., Traulsen, A., Tarnita, C. E. & Nowak, M. A. How mutation affects evolutionary games on graphs. Journal of Theoretical Biology 299, 97–105 (2012).

  50. Perc, M. et al. Statistical physics of human cooperation. Physics Reports 687, 1–51 (2017).

  51. Perc, M. Phase transitions in models of human cooperation. Physics Letters A 380, 2803–2808 (2016).

  52. Szolnoki, A. & Perc, M. Resolving social dilemmas on evolving random networks. EPL (Europhysics Letters) 86, 30007 (2009).

  53. Szolnoki, A. & Perc, M. Evolutionary dynamics of cooperation in neutral populations. New Journal of Physics 20, 013031 (2018).

  54. Grujić, J. et al. A comparative analysis of spatial Prisoner’s Dilemma experiments: Conditional cooperation and payoff irrelevance. Sci. Rep. 4, 4615 (2014).

  55. Grujić, J., Röhl, T., Semmann, D., Milinski, M. & Traulsen, A. Consistent Strategy Updating in Spatial and Non-Spatial Behavioral Experiments Does Not Promote Cooperation in Social Networks. PLoS ONE 7, e47718 (2012).

  56. Traulsen, A., Semmann, D., Sommerfeld, R. D., Krambeck, H.-J. & Milinski, M. Human strategy updating in evolutionary games. Proceedings of the National Academy of Sciences 107, 2962–2966 (2010).

  57. Suri, S. & Watts, D. J. Cooperation and Contagion in Web-Based, Networked Public Goods Experiments. PLoS ONE 6, e16836 (2011).

  58. Gracia-Lázaro, C. et al. Heterogeneous networks do not promote cooperation when humans play a Prisoner’s Dilemma. Proceedings of the National Academy of Sciences 109, 12922–12926 (2012).

  59. Grujić, J., Fosco, C., Araujo, L., Cuesta, J. A. & Sánchez, A. Social Experiments in the Mesoscale: Humans Playing a Spatial Prisoner’s Dilemma. PLoS ONE 5, e13749 (2010).

  60. Rand, D. G., Arbesman, S. & Christakis, N. A. Dynamic social networks promote cooperation in experiments with humans. Proceedings of the National Academy of Sciences 108, 19193–19198 (2011).

  61. Rand, D. G., Nowak, M. A., Fowler, J. H. & Christakis, N. A. Static Network Structure Can Stabilize Human Cooperation. Proceedings of the National Academy of Sciences (2014).

  62. Antal, T., Nowak, M. A. & Traulsen, A. Strategy abundance in 2 × 2 games for arbitrary mutation rates. Journal of Theoretical Biology 257, 340–344 (2009).

  63. Watts, D. J. & Strogatz, S. H. Collective dynamics of ‘small-world’ networks. Nature 393, 440–442 (1998).

  64. Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).

  65. Erdős, P. & Rényi, A. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5, 17–60 (1960).

  66. Fletcher, J. A. & Doebeli, M. A simple and general explanation for the evolution of altruism. Proceedings of the Royal Society of London B: Biological Sciences 276, 13–19 (2009).

  67. Perc, M. & Szolnoki, A. Coevolutionary games – A mini review. Biosystems 99, 109–125 (2010).

  68. Toupo, D. F. P., Strogatz, S. H., Cohen, J. D. & Rand, D. G. Evolutionary game dynamics of controlled and automatic decision-making. Chaos 25, 073120 (2015).

  69. Rand, D. G., Tomlin, D., Bear, A., Ludvig, E. A. & Cohen, J. D. Cyclical population dynamics of automatic versus controlled processing. Psychological Review 124, 626–642 (2017).

  70. Tomlin, D. A., Rand, D. G., Ludvig, E. & Cohen, J. D. The evolution and devolution of cognitive control: The costs of deliberation in a competitive world. Sci. Rep. 5, 11002 (2015).

  71. Gianetto, D. A. & Heydari, B. Network Modularity is essential for evolution of cooperation under uncertainty. Scientific Reports 5 (2015).

  72. Mosleh, M. & Heydari, B. Fair Topologies: Community Structures and Network Hubs Drive Emergence of Fairness Norms. Scientific Reports 7 (2017).

Acknowledgements

We gratefully acknowledge helpful comments from Adam Bear, and funding from the Templeton World Charity Foundation (grant no. TWCF0209), the Defense Advanced Research Projects Agency NGS2 program (grant no. D17AC00005), and the National Institutes of Health (grant no. P30-AG034420).

Author information

Affiliations

  1. Department of Psychology, Yale University, New Haven, CT, 06511, USA

    • Mohsen Mosleh
    •  & David G. Rand
  2. Department of Economics, Yale University, New Haven, CT, 06511, USA

    • David G. Rand
  3. School of Management, Yale University, New Haven, CT, 06511, USA

    • David G. Rand

Contributions

M.M. and D.G.R. designed the research, M.M. performed the research and analyzed the data, and M.M. and D.G.R. wrote the paper.

Competing Interests

The authors declare no competing interests.

Corresponding authors

Correspondence to Mohsen Mosleh or David G. Rand.

About this article

DOI

https://doi.org/10.1038/s41598-018-24473-1
