
A unified framework of direct and indirect reciprocity

Abstract

Direct and indirect reciprocity are key mechanisms for the evolution of cooperation. Direct reciprocity means that individuals use their own experience to decide whether to cooperate with another person. Indirect reciprocity means that they also consider the experiences of others. Although these two mechanisms are intertwined, they are typically studied in isolation. Here, we introduce a mathematical framework that allows us to explore both kinds of reciprocity simultaneously. We show that the well-known ‘generous tit-for-tat’ strategy of direct reciprocity has a natural analogue in indirect reciprocity, which we call ‘generous scoring’. Using an equilibrium analysis, we characterize under which conditions either of the two strategies can maintain cooperation. With simulations, we additionally explore which kind of reciprocity evolves when members of a population engage in social learning to adapt to their environment. Our results draw unexpected connections between direct and indirect reciprocity while highlighting important differences regarding their evolvability.


Fig. 1: A unifying framework for direct and indirect reciprocity.
Fig. 2: An equilibrium analysis reveals when direct or indirect reciprocity can sustain cooperation.
Fig. 3: Evolutionary dynamics of direct and indirect reciprocity.
Fig. 4: Impact of mutations on either direct or indirect reciprocity.
Fig. 5: Co-evolution of conditional cooperation and information use.

Data availability

The raw data generated for the main text, which were used to create Figs. 3–5, are available at https://osf.io/brnvx/?view_only=4adc0b791a3640df88c94362d0f164e6. The raw data for the Extended Data figures are available from the authors upon request.

Code availability

All simulations and numerical calculations were performed with MATLAB R2014a and Python 2.7. The Python scripts used to simulate the game dynamics, numerically calculate the players’ expected payoffs and simulate the evolutionary process are available online at https://osf.io/brnvx/?view_only=4adc0b791a3640df88c94362d0f164e6.

References

1. Trivers, R. L. The evolution of reciprocal altruism. Q. Rev. Biol. 46, 35–57 (1971).

2. Sugden, R. The Economics of Rights, Co-operation and Welfare (Blackwell, 1986).

3. Nowak, M. A. Five rules for the evolution of cooperation. Science 314, 1560–1563 (2006).

4. Sigmund, K. The Calculus of Selfishness (Princeton Univ. Press, 2010).

5. Axelrod, R. & Hamilton, W. D. The evolution of cooperation. Science 211, 1390–1396 (1981).

6. Nowak, M. A. & Sigmund, K. Tit for tat in heterogeneous populations. Nature 355, 250–253 (1992).

7. Hauert, C. & Schuster, H. G. Effects of increasing the number of players and memory size in the iterated prisoner’s dilemma: a numerical approach. Proc. R. Soc. B 264, 513–519 (1997).

8. Press, W. H. & Dyson, F. J. Iterated prisoner’s dilemma contains strategies that dominate any evolutionary opponent. Proc. Natl Acad. Sci. USA 109, 10409–10413 (2012).

9. Hilbe, C., Nowak, M. A. & Sigmund, K. The evolution of extortion in iterated prisoner’s dilemma games. Proc. Natl Acad. Sci. USA 110, 6913–6918 (2013).

10. Stewart, A. J. & Plotkin, J. B. Collapse of cooperation in evolving games. Proc. Natl Acad. Sci. USA 111, 17558–17563 (2014).

11. Szolnoki, A. & Perc, M. Evolution of extortion in structured populations. Phys. Rev. E 89, 022804 (2014).

12. Akin, E. in Ergodic Theory, Advances in Dynamics (ed. Assani, I.) 77–107 (de Gruyter, 2016).

13. Pan, L., Hao, D., Rong, Z. & Zhou, T. Zero-determinant strategies in iterated public goods game. Sci. Rep. 5, 13096 (2015).

14. Hao, D., Rong, Z. & Zhou, T. Extortion under uncertainty: zero-determinant strategies in noisy games. Phys. Rev. E 91, 052803 (2015).

15. McAvoy, A. & Hauert, C. Autocratic strategies for iterated games with arbitrary action spaces. Proc. Natl Acad. Sci. USA 113, 3573–3578 (2016).

16. Ichinose, G. & Masuda, N. Zero-determinant strategies in finitely repeated games. J. Theor. Biol. 438, 61–77 (2018).

17. Hilbe, C., Chatterjee, K. & Nowak, M. A. Partners and rivals in direct reciprocity. Nat. Hum. Behav. 2, 469–477 (2018).

18. García, J. & van Veelen, M. No strategy can win in the repeated prisoner’s dilemma: linking game theory and computer simulations. Front. Robot. AI 5, 102 (2018).

19. Reiter, J. G., Hilbe, C., Rand, D. G., Chatterjee, K. & Nowak, M. A. Crosstalk in concurrent repeated games impedes direct reciprocity and requires stronger levels of forgiveness. Nat. Commun. 9, 555 (2018).

20. Nowak, M. A. & Sigmund, K. Evolution of indirect reciprocity by image scoring. Nature 393, 573–577 (1998).

21. Leimar, O. & Hammerstein, P. Evolution of cooperation through indirect reciprocity. Proc. R. Soc. B 268, 745–753 (2001).

22. Ohtsuki, H. & Iwasa, Y. How should we define goodness? Reputation dynamics in indirect reciprocity. J. Theor. Biol. 231, 107–120 (2004).

23. Santos, F. P., Santos, F. C. & Pacheco, J. M. Social norm complexity and past reputations in the evolution of cooperation. Nature 555, 242–245 (2018).

24. Sigmund, K. Moral assessment in indirect reciprocity. J. Theor. Biol. 299, 25–30 (2012).

25. Nax, H. H., Perc, M., Szolnoki, A. & Helbing, D. Stability of cooperation under image scoring in group interactions. Sci. Rep. 5, 1–7 (2015).

26. Fischbacher, U., Gächter, S. & Fehr, E. Are people conditionally cooperative? Evidence from a public goods experiment. Econ. Lett. 71, 397–404 (2001).

27. Grujić, J. et al. A comparative analysis of spatial prisoner’s dilemma experiments: conditional cooperation and payoff irrelevance. Sci. Rep. 4, 4615 (2014).

28. Wedekind, C. & Milinski, M. Cooperation through image scoring in humans. Science 288, 850–852 (2000).

29. Okada, I., Yamamoto, H., Sato, Y., Uchida, S. & Sasaki, T. Experimental evidence of selective inattention in reputation-based cooperation. Sci. Rep. 8, 14813 (2018).

30. Molleman, L., van den Broek, E. & Egas, M. Personal experience and reputation interact in human decisions to help reciprocally. Proc. R. Soc. B 280, 20123044 (2013).

31. Hilbe, C., Martinez-Vaquero, L. A., Chatterjee, K. & Nowak, M. A. Memory-n strategies of direct reciprocity. Proc. Natl Acad. Sci. USA 114, 4715–4720 (2017).

32. Uchida, S. & Sasaki, T. Effect of assessment error and private information on stern-judging in indirect reciprocity. Chaos Solitons Fractals 56, 175–180 (2013).

33. Hilbe, C., Schmid, L., Tkadlec, J., Chatterjee, K. & Nowak, M. A. Indirect reciprocity with private, noisy, and incomplete information. Proc. Natl Acad. Sci. USA 115, 12241–12246 (2018).

34. Raub, W. & Weesie, J. Reputation and efficiency in social interactions: an example of network effects. Am. J. Sociol. 96, 626–654 (1990).

35. Pollock, G. & Dugatkin, L. A. Reciprocity and the emergence of reputation. J. Theor. Biol. 159, 25–37 (1992).

36. Roberts, G. Evolution of direct and indirect reciprocity. Proc. R. Soc. B 275, 173–179 (2007).

37. Nakamaru, M. & Kawata, M. Evolution of rumours that discriminate lying defectors. Evol. Ecol. Res. 6, 261–283 (2004).

38. Seki, M. & Nakamaru, M. A model for gossip-mediated evolution of altruism with various types of false information by speakers and assessment by listeners. J. Theor. Biol. 407, 90–105 (2016).

39. Ohtsuki, H. Reactive strategies in indirect reciprocity. J. Theor. Biol. 227, 299–314 (2004).

40. Nowak, M. A. & Sigmund, K. The dynamics of indirect reciprocity. J. Theor. Biol. 194, 561–574 (1998).

41. Berger, U. Learning to cooperate via indirect reciprocity. Games Econ. Behav. 72, 30–37 (2011).

42. Brandt, H. & Sigmund, K. The logic of reprobation: assessment and action rules for indirect reciprocation. J. Theor. Biol. 231, 475–486 (2004).

43. Uchida, S. Effect of private information on indirect reciprocity. Phys. Rev. E 82, 036111 (2010).

44. Martinez-Vaquero, L. A. & Cuesta, J. A. Evolutionary stability and resistance to cheating in an indirect reciprocity model based on reputation. Phys. Rev. E 87, 052810 (2013).

45. Nakamura, M. & Masuda, N. Indirect reciprocity under incomplete observation. PLoS Comput. Biol. 7, e1002113 (2011).

46. Tanabe, S., Suzuki, H. & Masuda, N. Indirect reciprocity with trinary reputations. J. Theor. Biol. 317, 338–347 (2013).

47. Szabó, G. & Tőke, C. Evolutionary prisoner’s dilemma game on a square lattice. Phys. Rev. E 58, 69–73 (1998).

48. Traulsen, A., Pacheco, J. M. & Nowak, M. A. Pairwise comparison and selection temperature in evolutionary game dynamics. J. Theor. Biol. 246, 522–529 (2007).

49. Fudenberg, D. & Imhof, L. A. Imitation processes with small mutations. J. Econ. Theory 131, 251–262 (2006).

50. Fudenberg, D., Nowak, M. A., Taylor, C. & Imhof, L. A. Evolutionary game dynamics in finite populations with strong selection and weak mutation. Theor. Popul. Biol. 70, 352–363 (2006).

51. Imhof, L. A. & Nowak, M. A. Stochastic evolutionary dynamics of direct reciprocity. Proc. R. Soc. B 277, 463–468 (2010).

52. Wu, B., Gokhale, C. S., Wang, L. & Traulsen, A. How small are small mutation rates? J. Math. Biol. 64, 803–827 (2012).

53. McAvoy, A. Comment on ‘Imitation processes with small mutations’. J. Econ. Theory 159, 66–69 (2015).

54. Imhof, L. A., Fudenberg, D. & Nowak, M. A. Evolutionary cycles of cooperation and defection. Proc. Natl Acad. Sci. USA 102, 10797–10800 (2005).

55. García, J. & Traulsen, A. The structure of mutations and the evolution of cooperation. PLoS ONE 7, e35287 (2012).

56. van Segbroeck, S., Pacheco, J. M., Lenaerts, T. & Santos, F. C. Emergence of fairness in repeated group interactions. Phys. Rev. Lett. 108, 158104 (2012).

57. Stewart, A. J. & Plotkin, J. B. From extortion to generosity, evolution in the iterated prisoner’s dilemma. Proc. Natl Acad. Sci. USA 110, 15348–15353 (2013).

58. Stewart, A. J. & Plotkin, J. B. The evolvability of cooperation under local and non-local mutations. Games 6, 231–250 (2015).

59. Santos, F. P., Santos, F. C. & Pacheco, J. M. Social norms of cooperation in small-scale societies. PLoS Comput. Biol. 12, e1004709 (2016).

60. Hauser, O., Hilbe, C., Chatterjee, K. & Nowak, M. A. Social dilemmas among unequals. Nature 572, 524–527 (2019).

61. Hauert, C., Traulsen, A., Brandt, H., Nowak, M. A. & Sigmund, K. Via freedom to coercion: the emergence of costly punishment. Science 316, 1905–1907 (2007).

62. Sigmund, K., De Silva, H., Traulsen, A. & Hauert, C. Social learning promotes institutions for governing the commons. Nature 466, 861–863 (2010).

63. García, J. & Traulsen, A. Leaving the loners alone: evolution of cooperation in the presence of antisocial punishment. J. Theor. Biol. 307, 168–173 (2012).

64. Hauert, C. & Imhof, L. Evolutionary games in deme structured, finite populations. J. Theor. Biol. 299, 106–112 (2012).

65. Lee, Y., Iwasa, Y., Dieckmann, U. & Sigmund, K. Social evolution leads to persistent corruption. Proc. Natl Acad. Sci. USA 116, 13276–13281 (2019).

66. Brandt, H. & Sigmund, K. The good, the bad and the discriminator: errors in direct and indirect reciprocity. J. Theor. Biol. 239, 183–194 (2006).

67. Selten, R. Reexamination of the perfectness concept for equilibrium points in extensive games. Int. J. Game Theory 4, 25–55 (1975).

68. Karlin, S. & Taylor, H. M. A First Course in Stochastic Processes 2nd edn (Academic, 1975).

69. Nowak, M. A., Sasaki, A., Taylor, C. & Fudenberg, D. Emergence of cooperation and evolutionary stability in finite populations. Nature 428, 646–650 (2004).


Acknowledgements

This work was supported by the European Research Council Consolidator Grant 863818 (ForM-SMArt) (to K.C.), the European Research Council Starting Grant 279307: Graph Games (to K.C.), and the European Research Council Starting Grant 850529: E-DIRECT (to C.H.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Author information


Contributions

L.S., K.C., C.H. and M.A.N. all conceived the study, performed the analysis, discussed the results and wrote the manuscript.

Corresponding author

Correspondence to Laura Schmid.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Peer review information Nature Human Behaviour thanks Matjaz Perc, Alexander Stewart and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Schematic representation of the model.

a, We consider a population of size n. To illustrate the basic workings of our model, we focus on three arbitrary players that are fully interchangeable in all their abilities. b, Each player has a separate finite-state automaton with two possible states G and B for each co-player. The current state is marked in bold. In this example, player 1 considers player 2 as good and player 3 as bad. c, In each round, two players are chosen at random to interact in a prisoner’s dilemma. Players cooperate if they consider their co-player to be good and they defect otherwise. The other population members do not participate in the game, but they observe its outcome at no cost to themselves. d, After the interaction, both active players update their respective automata, depending on their strategy and on the co-player’s action. In addition, each observer independently updates her automata with respect to players 1 and 2 with probability λ each. e–h, We can mathematically describe how player i’s automaton with respect to player j changes over time by distinguishing four possible events. First, player j is not chosen to interact, such that player i’s automaton remains unaffected (e); second, players i and j interact with each other and update their respective states accordingly (f); third, player j interacts with someone else, but player i does not take this interaction into account (g); fourth, player j interacts with someone else, and player i updates j’s state accordingly (h).
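The interaction-and-observation round in panels a–d can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ code: the direct-update rule for the two active players (adopt the co-player’s last action as the new state) is a stand-in assumption, since the actual strategies are parametrized more generally.

```python
import random

def play_round(states, lam, rng=random):
    """One round of the sketched model.

    states[i][j] is True if player i currently considers player j good.
    Two random players interact in a prisoner's dilemma; each cooperates
    iff it views the other as good. Every observer then independently
    updates its view of each active player with probability lam.
    """
    n = len(states)
    i, j = rng.sample(range(n), 2)
    action_i = states[i][j]          # cooperate iff co-player is seen as good
    action_j = states[j][i]
    # Direct update of the two active players; a tit-for-tat-like rule is
    # used here as a placeholder for the general strategy (an assumption).
    states[i][j] = action_j
    states[j][i] = action_i
    # Each observer updates its automata with probability lam (receptivity).
    for k in range(n):
        if k in (i, j):
            continue
        if rng.random() < lam:
            states[k][i] = action_i
        if rng.random() < lam:
            states[k][j] = action_j
    return action_i, action_j
```

With an all-good population the round is self-reinforcing: both players cooperate and every update re-assigns a good state.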

Extended Data Fig. 2 Competition between conditional cooperators and defectors.

We compare the performance of conditional cooperators with strategy (1, 1, 1/3, λ) in a population of defectors, (0, 0, 0, λ). We consider four scenarios, depending on whether players use direct (a,c) or indirect (b,d) reciprocity and depending on whether pairs interact only a few times (a,b) or often (c,d). Each panel shows the payoff of cooperators and defectors depending on how many of the 50 population members are cooperators, for b = 5 and c = 1. In all four cases we find bistability (as indicated by the arrows on the x-axis). That is, defectors have the higher payoff when there are few cooperators and the lower payoff when there are many cooperators. However, the threshold number of cooperators necessary to make cooperation beneficial differs. Indirect reciprocity has the lower threshold when there are only a few rounds, because cooperators are better able to restrict the payoff of defectors (as indicated by the smaller slope of the red line in b compared to a). Direct reciprocity has the lower threshold when there are many rounds. Here, a few cooperators already suffice to invade the defectors. In contrast, for indirect reciprocity cooperators need to establish a critical mass because their payoffs increase nonlinearly.
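The bistability described here is summarized by the point where the two payoff curves cross. The sketch below assumes nothing about the underlying model; it simply locates that threshold given payoffs per number of cooperators (the arrays are made-up toy numbers, not simulation output; in the figure the payoffs come from the full model with b = 5, c = 1).

```python
def invasion_threshold(coop_payoff, defect_payoff):
    """Return the smallest number of cooperators k at which cooperators
    earn at least as much as defectors, or None if no such k exists.

    coop_payoff[k] and defect_payoff[k] are expected payoffs in a
    population containing k cooperators (hypothetical inputs).
    """
    for k, (pc, pd) in enumerate(zip(coop_payoff, defect_payoff)):
        if pc >= pd:
            return k
    return None

# Toy illustration of bistability: defectors do better while cooperators
# are rare; cooperators do better once they pass a critical mass.
coop = [0.5, 1.0, 2.0, 3.5, 4.0]
defe = [1.5, 1.4, 1.9, 1.3, 1.0]
```

For the toy numbers above, the threshold sits at two cooperators; comparing such thresholds across the four panels is what the arrows on the x-axes visualize.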

Extended Data Fig. 3 Impact of different model parameters on the co-evolution of direct and indirect reciprocity.

We show how our evolutionary results in Fig. 5 are affected as we change different parameters of our model. In each panel, we vary one parameter and hold all others constant. We consider the same three scenarios as in Fig. 5a–c: few interactions and unreliable information (blue), intermediate interactions and reliable information (orange), and many interactions and unreliable information (green). We employ two complementary simulation techniques. In the upper panels, each data point represents the average of a single simulation. Each such simulation was run long enough that the averages converge and are independent of the initial condition, which typically happens after 107 mutant strategies have been introduced into the population. In the lower panels, each data point represents the average of 200 simulations with a random initial population. Here, each simulation only introduces 105 mutant strategies. For the parameters, we consider variation in the benefit-to-cost ratio (a,b), the population size (c,d), the selection strength (e,f), and the mutation rate (g,h). Our simulations suggest that each of these parameters can have a considerable impact on the evolving cooperation rates and the players’ propensity to adopt indirect reciprocity. For example, for the orange curve in panel e, we observe that the effect of selection strength on cooperation can be non-monotonic. We further discuss these dependencies in Extended Data Fig. 4 and SI Section 5. In general, however, we recover the following regularities from Fig. 5: (i) Substantial cooperation only evolves in the second and third scenario (that is, for the cooperation rates, the blue curve is systematically below the other curves). (ii) If cooperation evolves, players prefer indirect reciprocity when there are intermediately many interactions and outside information is reliable. They prefer direct reciprocity when there are many interactions and when outside information is noisy (that is, for the proportion of indirect reciprocity, the orange curve is systematically above the green curve).
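The social learning process underlying these simulations builds on the pairwise-comparison rule with selection strength β (refs. 47, 48). A minimal sketch of one imitation step, assuming strategies and payoffs are stored in parallel lists; the paper’s full process additionally includes mutation, which is omitted here.

```python
import math
import random

def fermi(beta, payoff_diff):
    """Imitation probability under the Fermi rule: the chance a learner
    copies a role model whose payoff exceeds its own by payoff_diff,
    with selection strength beta."""
    x = max(-700.0, min(700.0, -beta * payoff_diff))  # overflow guard
    return 1.0 / (1.0 + math.exp(x))

def imitation_step(strategies, payoffs, beta, rng=random):
    """One pairwise-comparison update: pick a learner i and a role model
    j at random; i adopts j's strategy with probability
    fermi(beta, payoffs[j] - payoffs[i])."""
    i, j = rng.sample(range(len(strategies)), 2)
    if rng.random() < fermi(beta, payoffs[j] - payoffs[i]):
        strategies[i] = strategies[j]
    return strategies
```

At β = 0 imitation is random (probability 1/2); as β grows, payoff differences dominate, which is why selection strength reshapes the curves in panels e,f.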

Extended Data Fig. 4 Impact of selection strength on indirect reciprocity.

As shown in the upper panel of Extended Data Fig. 3e, selection can sometimes have a non-monotonic effect on cooperation. For intermediate interactions and reliable information (δ = 0.9, ε = 0.001, depicted by the orange curve in Extended Data Fig. 3e), we have observed that the evolving cooperation rate is 53.4% for β = 1, increases to 77.3% for β = 10, and reduces to 61.5% for β = 100. Here we present additional simulations to shed further light on this non-monotonicity. a,b, We considered initial resident populations that either adopt a defective strategy or a conditionally cooperative strategy. We recorded how long the evolutionary process takes until the resident strategy is replaced, and what the cooperation rate of the invading strategy is. Dots show the outcome of individual simulations, and the curves represent averages. The results suggest that the non-monotonicity of cooperation is not due to a reduced stability of cooperative strategies. They remain highly robust even for large selection strengths. Moreover, when selection is strong, they are typically invaded only by other cooperative strategies. c–e, Next, we recorded the distribution of cooperation over time for three different selection strengths for the process considered in Extended Data Fig. 3e. We find that this distribution becomes more extreme with increasing selection strength: individuals either become highly cooperative or highly non-cooperative. However, the proportion of non-cooperative populations grows faster than the proportion of cooperative populations.

Extended Data Fig. 5 Evolution of cooperation for players with intermediate degrees of receptivity.

In Figs. 3–5 of the main text, we explore situations in which individuals can choose strategies that either take only direct information into account (λ = 0) or take all information into account (λ = 1). Here we repeat these simulations in a setup where intermediate values of λ are permitted. To this end, we define γ as the probability that a player’s decision is based on the co-player’s behavior towards third parties, see Eq. (10) in Methods. For 0 ≤ λ ≤ 1 we obtain 0 ≤ γ ≤ γmax = (n − 2)/(n − 1). a,b, We repeat the simulations in Fig. 3a,b for various values of γ. We observe that cooperation is never most likely to evolve for intermediate values of γ. Either most cooperation evolves for γ = γmax (in panel a), or for γ = 0 (in panel b). c,d, Similarly, we repeat the simulations in Fig. 4d,f for various values of γ. Again, the average cooperation rates for intermediate γ are strictly between the results for γ = 0 and γ = γmax. e–h, Finally, we repeat the simulations shown in Fig. 5a–d, allowing for mutant strategies (y, p, q, λ) that lead to arbitrary values of γ between 0 and γmax. Especially for larger error rates, we observe that the evolving cooperation rates are now smaller. Nevertheless, the general patterns of Fig. 5 remain: (i) When there are only few rounds and many observation errors, cooperation does not evolve. (ii) When there are intermediately many rounds and few errors, cooperation evolves and players tend to put more weight on indirect information (that is, γ tends to be larger than 1/2). In particular, strategies with γ ≈ γmax are most abundant. (iii) When there are many rounds and intermediately many errors, cooperation evolves and players tend to put more weight on direct information. Here, players are most likely to adopt a strategy with γ ≈ 0. See SI Section 5.4 for details.
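The bound on γ can be written out directly. A small sketch; the stated intuition is our reading (a fraction 1/(n − 1) of a co-player’s interactions involve the focal player herself and therefore count as direct rather than indirect information), not a quotation from the paper.

```python
def gamma_max(n):
    """Upper bound gamma_max = (n - 2)/(n - 1) on the probability that a
    decision is based on the co-player's behaviour towards third
    parties, in a population of size n >= 2."""
    assert n >= 2
    return (n - 2) / (n - 1)
```

The bound approaches 1 for large populations, so with many players almost all of a co-player’s interactions can be third-party information.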

Extended Data Fig. 6 Effect of different types of errors and incomplete information on cooperation.

a, To explore how sensitive our results are to different kinds of errors and incomplete information, we have repeated the rare mutation simulations shown in Fig. 5d, reproduced here. b, While the baseline model assumes that only indirect observations are subject to perception errors, here we explore the effects when direct observations are equally prone to errors. We find that cooperation is substantially reduced compared to the baseline scenario. Moreover, direct reciprocity is only favoured for very large continuation probabilities. c, We have also explored the effect of additional implementation errors on cooperation. To this end, we assume here that players mis-implement their intended action with fixed probability e = 0.01. Compared to the baseline model without such errors, we find that there is less cooperation and less direct reciprocity. d, To mimic the dynamics that arises when defectors strategically conceal their bad actions, we have also considered a model in which defective actions are misperceived with probability ε, whereas cooperative actions are always observed faithfully. Because this assumption reduces the total rate at which errors occur compared to the baseline scenario, we observe more cooperation and players are more reliant on indirect reciprocity. e, Here we assume that individuals observe third-party interactions only with probability ν = 0.01. Due to the scarcity of information, players who take any third-party information into account are almost indistinguishable from those players who do not. As a result, cooperation is largely independent of observation errors, and the region in which indirect reciprocity is favoured has vanished. Unless noted otherwise, all parameters are the same as in Fig. 5d.
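The error variants above amount to different observation channels between an action and its perception. A hedged sketch (the function name and signature are ours): the one-sided channel corresponds to panel d, where only defections are misperceived with probability ε, while the symmetric channel flips any observed action with probability ε as in the baseline perception-error model.

```python
import random

def observe(action, eps, one_sided=True, rng=random):
    """Return the action (True = cooperate) as perceived by an observer.

    one_sided=True: only defections are misperceived as cooperation with
    probability eps, mimicking defectors concealing bad actions.
    one_sided=False: any action is flipped with probability eps.
    """
    if one_sided:
        if not action and rng.random() < eps:
            return True   # a defection is mistaken for cooperation
        return action
    return (not action) if rng.random() < eps else action
```

Implementation errors (panel c) act on the action itself before anyone observes it, and incomplete observation (panel e) means `observe` is called only with probability ν; both can be layered on top of this channel.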

Extended Data Fig. 7 Direct and indirect reciprocity for finite-state automata with three states.

In an extension of our model, we allow players to assign more nuanced reputations to their co-players. We illustrate this approach by considering finite-state automata with three states: good (G), neutral (N) and bad (B), with G as the initial state. We assume n − 1 residents employ the respective finite-state automaton strategy, while the remaining player uses either ALLC or ALLD. We simulate the players’ payoffs for various values of λ ∈ [0, 1]. We consider three different automaton strategies employed by the residents. The automata differ in how they deal with co-players that are assigned a neutral reputation. a, Players with the first automaton A1 are fully cooperative when they encounter a co-player with neutral reputation. This strategy can sustain cooperation among itself. However, a single ALLC player obtains approximately the same payoff as the residents, and hence can invade by (almost) neutral drift (d). b, According to the second automaton A2, players cooperate against neutral opponents with 50% probability. This strategy can be invaded by ALLC for all λ > 0 (e). c, According to A3, players defect against co-players with a neutral reputation. This strategy is not stable against ALLC for λ > 0 (f), and residents fail to cooperate with each other altogether.
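The three automata differ only in their action against a neutral co-player. A sketch of the action rules as described above; the assumption that all three cooperate against G and defect against B follows the two-state model, and the automata’s state-transition rules are not reproduced here.

```python
import random

# Cooperation probability against a co-player in state G(ood),
# N(eutral) or B(ad), per the figure description.
ACTION_RULES = {
    "A1": {"G": 1.0, "N": 1.0, "B": 0.0},  # fully cooperative vs neutral
    "A2": {"G": 1.0, "N": 0.5, "B": 0.0},  # cooperate vs neutral w.p. 1/2
    "A3": {"G": 1.0, "N": 0.0, "B": 0.0},  # defect vs neutral
}

def cooperates(automaton, co_state, rng=random):
    """Sample an action (True = cooperate) toward a co-player in co_state."""
    return rng.random() < ACTION_RULES[automaton][co_state]
```

Since G is the initial state, the behaviour against N only matters once interactions or observations have moved some co-players out of G, which is what separates the three automata’s stability properties.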

Extended Data Fig. 8 Evolutionary competition between finite state automata, ALLC, and ALLD.

We have explored the evolutionary dynamics when population members can choose between ALLC, ALLD, and one of the three finite-state automata introduced in Extended Data Fig. 7. a–c, First, we have explored the limit of rare mutations, using the same game payoffs as in Extended Data Fig. 7, and a fixed receptivity λ = 0.1. The numbers in each circle denote how often the respective strategy is played on average. Arrows illustrate how likely a single mutant is to fix in the respective resident population. Solid arrows indicate that the fixation probability is larger than the neutral 1/n, whereas for dotted arrows this probability is smaller than neutral. We find that only the first automaton A1 can outperform both ALLC and ALLD. d–f, In a next step, we have explored the same scenario for a positive mutation rate μ = 0.01. The triangles represent the possible population compositions. Each corner corresponds to a homogeneous population, whereas the center corresponds to a perfectly mixed population. The color code reflects how often we observe the respective population composition over the course of evolution. We find that most of the time, populations are either in the neighborhood of ALLD, or they represent some mixture between the automaton strategy and ALLC. g–i, We have re-run the simulations in panels d–f, but now varying either the benefit of cooperation, the selection strength, or the mutation rate. In all cases, we observe that the first automaton is most favorable to cooperation. Interestingly, we observe the largest cooperation rate for intermediate mutation rates. This result, however, may be due to the fact that players can only choose from an unbalanced strategy space, as discussed in detail in SI Section 6.3.
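In the rare-mutation limit, the arrows in panels a–c are computed from fixation probabilities. Under the pairwise-comparison (Fermi) process, these admit a standard closed form; the sketch below evaluates that textbook formula (refs. 48–51) and is not the authors’ code.

```python
import math

def fixation_probability(pi_mut, pi_res, n, beta):
    """Fixation probability of a single mutant in a resident population
    of size n under the pairwise-comparison process.

    pi_mut(k) and pi_res(k) give the expected payoffs of a mutant and a
    resident when k mutants are present. Under the Fermi rule, the ratio
    of backward to forward transition probabilities at k mutants is
    exp(-beta * (pi_mut(k) - pi_res(k))).
    """
    total, prod = 1.0, 1.0
    for k in range(1, n):
        prod *= math.exp(-beta * (pi_mut(k) - pi_res(k)))
        total += prod
    return 1.0 / total

# Neutral check: with identical payoffs the result is exactly 1/n,
# the threshold that separates solid from dotted arrows.
rho_neutral = fixation_probability(lambda k: 1.0, lambda k: 1.0, n=50, beta=1.0)
```

A strategy whose mutants fix with probability above 1/n against a resident is favored to invade it, which is the comparison the arrow styles encode.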

Extended Data Fig. 9 Performance of leading-eight strategies under direct and indirect reciprocity.

a, Previous research has suggested that there are eight stable third-order strategies of indirect reciprocity that can sustain cooperation22, called the leading eight, L1–L8. They consist of two components, an assessment rule and an action rule. The assessment rule determines how players evaluate each other’s actions, depending on the previous reputations of the involved players. The action rule determines how to interact in the game, depending on one’s own reputation and on the reputation of the co-player. b–i, To explore the stability of these strategies, we consider a population in which n − 1 players adopt one of the leading-eight strategies. The remaining player either adopts ALLC or ALLD. Our results for λ > 0 reflect previous findings33: in the presence of perception errors, all leading-eight strategies are susceptible to invasion by either ALLC or ALLD. Only for λ = 0 (when perception errors are absent) are the leading-eight strategies stable against both mutant strategies.
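A leading-eight strategy is essentially a pair of lookup tables. As an illustration only: the entries below follow ‘stern judging’, a commonly discussed member of the family, and are not copied from this paper’s tables; a full third-order rule would additionally condition the assessment on the donor’s own previous reputation.

```python
# Assessment rule: (recipient's reputation, donor's action) -> donor's
# new reputation, in the spirit of 'stern judging' (an assumption).
ASSESS = {
    ("G", "C"): "G",  # cooperating with a good player is good
    ("G", "D"): "B",  # defecting against a good player is bad
    ("B", "C"): "B",  # cooperating with a bad player is bad
    ("B", "D"): "G",  # punishing a bad player is good
}

# Action rule: (own reputation, co-player's reputation) -> action,
# C = cooperate, D = defect.
ACT = {
    ("G", "G"): "C", ("G", "B"): "D",
    ("B", "G"): "C", ("B", "B"): "D",
}

def step(own_rep, co_rep):
    """One donor decision plus the observer's reassessment of the donor."""
    action = ACT[(own_rep, co_rep)]
    return action, ASSESS[(co_rep, action)]
```

Justified defection (`("B", "D") -> "G"`) is exactly what breaks down under perception errors: a misperceived action makes observers reassign reputations inconsistently, which is why stability requires λ = 0 here.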

Extended Data Fig. 10 Evolutionary dynamics of the leading-eight.

Similar to Extended Data Fig. 8 for finite state automata, this figure explores how each of the leading-eight fares in an evolutionary competition against ALLC and ALLD for a fixed receptivity λ = 0.1. a–h, When mutations are rare, only ‘Judging’ (L8) is played in notable proportions. However, in the presence of perception errors, this strategy tends to assign a bad reputation to other players with the same strategy, such that everyone defects eventually33. i–p, When mutations are more common, some of the leading-eight strategies can stably coexist with ALLC. We observe such cooperative coexistences for L1, L2, and L7. q–s, These three strategies also yield substantial cooperation rates when we vary the benefit of cooperation, the selection strength, and the mutation rate. With respect to mutation, we again observe that intermediate mutation rates are most favorable to cooperation. However, this finding may not be robust, because the strategy space is again unbalanced. For a more detailed discussion, see SI Section 6.4.

Supplementary information

Supplementary Information

Supplementary Discussion, Supplementary References and Supplementary Figs. 1 and 2.

Reporting summary


About this article


Cite this article

Schmid, L., Chatterjee, K., Hilbe, C. et al. A unified framework of direct and indirect reciprocity. Nat Hum Behav (2021). https://doi.org/10.1038/s41562-021-01114-8
