Direct and indirect reciprocity are key mechanisms for the evolution of cooperation. Direct reciprocity means that individuals use their own experience to decide whether to cooperate with another person. Indirect reciprocity means that they also consider the experiences of others. Although these two mechanisms are intertwined, they are typically studied in isolation. Here, we introduce a mathematical framework that allows us to explore both kinds of reciprocity simultaneously. We show that the well-known ‘generous tit-for-tat’ strategy of direct reciprocity has a natural analogue in indirect reciprocity, which we call ‘generous scoring’. Using an equilibrium analysis, we characterize under which conditions either of the two strategies can maintain cooperation. With simulations, we additionally explore which kind of reciprocity evolves when members of a population engage in social learning to adapt to their environment. Our results draw unexpected connections between direct and indirect reciprocity while highlighting important differences regarding their evolvability.
The raw data generated for the main text, which were used to create Figs. 3–5, are available at https://osf.io/brnvx/?view_only=4adc0b791a3640df88c94362d0f164e6. The raw data for the Extended Data Figures are available from the authors upon request.
All simulations and numerical calculations were performed with MATLAB R2014a and Python 2.7. The Python scripts used to simulate the game dynamics, numerically calculate the players’ expected payoffs and simulate the evolutionary process are available online at https://osf.io/brnvx/?view_only=4adc0b791a3640df88c94362d0f164e6.
Trivers, R. L. The evolution of reciprocal altruism. Q. Rev. Biol. 46, 35–57 (1971).
Sugden, R. The Economics of Rights, Co-operation and Welfare (Blackwell, 1986).
Nowak, M. A. Five rules for the evolution of cooperation. Science 314, 1560–1563 (2006).
Sigmund, K. The Calculus of Selfishness (Princeton Univ. Press, 2010).
Axelrod, R. & Hamilton, W. D. The evolution of cooperation. Science 211, 1390–1396 (1981).
Nowak, M. A. & Sigmund, K. Tit for tat in heterogeneous populations. Nature 355, 250–253 (1992).
Hauert, C. & Schuster, H. G. Effects of increasing the number of players and memory size in the iterated prisoner’s dilemma: a numerical approach. Proc. R. Soc. B 264, 513–519 (1997).
Press, W. H. & Dyson, F. J. Iterated prisoner’s dilemma contains strategies that dominate any evolutionary opponent. Proc. Natl Acad. Sci. USA 109, 10409–10413 (2012).
Hilbe, C., Nowak, M. A. & Sigmund, K. The evolution of extortion in iterated prisoner’s dilemma games. Proc. Natl Acad. Sci. USA 110, 6913–6918 (2013).
Stewart, A. J. & Plotkin, J. B. Collapse of cooperation in evolving games. Proc. Natl Acad. Sci. USA 111, 17558–17563 (2014).
Szolnoki, A. & Perc, M. Evolution of extortion in structured populations. Phys. Rev. E 89, 022804 (2014).
Akin, E. in Ergodic Theory, Advances in Dynamics (ed. Assani, I) 77–107 (de Gruyter, 2016).
Pan, L., Hao, D., Rong, Z. & Zhou, T. Zero-determinant strategies in iterated public goods game. Sci. Rep. 5, 13096 (2015).
Hao, D., Rong, Z. & Zhou, T. Extortion under uncertainty: zero-determinant strategies in noisy games. Phys. Rev. E 91, 052803 (2015).
McAvoy, A. & Hauert, C. Autocratic strategies for iterated games with arbitrary action spaces. Proc. Natl Acad. Sci. USA 113, 3573–3578 (2016).
Ichinose, G. & Masuda, N. Zero-determinant strategies in finitely repeated games. J. Theor. Biol. 438, 61–77 (2018).
Hilbe, C., Chatterjee, K. & Nowak, M. A. Partners and rivals in direct reciprocity. Nat. Hum. Behav. 2, 469–477 (2018).
García, J. & van Veelen, M. No strategy can win in the repeated prisoner’s dilemma: linking game theory and computer simulations. Front. Robot. AI 5, 102 (2018).
Reiter, J. G., Hilbe, C., Rand, D. G., Chatterjee, K. & Nowak, M. A. Crosstalk in concurrent repeated games impedes direct reciprocity and requires stronger levels of forgiveness. Nat. Commun. 9, 555 (2018).
Nowak, M. A. & Sigmund, K. Evolution of indirect reciprocity by image scoring. Nature 393, 573–577 (1998).
Leimar, O. & Hammerstein, P. Evolution of cooperation through indirect reciprocity. Proc. R. Soc. B 268, 745–753 (2001).
Ohtsuki, H. & Iwasa, Y. How should we define goodness? Reputation dynamics in indirect reciprocity. J. Theor. Biol. 231, 107–120 (2004).
Santos, F. P., Santos, F. C. & Pacheco, J. M. Social norm complexity and past reputations in the evolution of cooperation. Nature 555, 242–245 (2018).
Sigmund, K. Moral assessment in indirect reciprocity. J. Theor. Biol. 299, 25–30 (2012).
Nax, H. H., Perc, M., Szolnoki, A. & Helbing, D. Stability of cooperation under image scoring in group interactions. Sci. Rep. 5, 1–7 (2015).
Fischbacher, U., Gächter, S. & Fehr, E. Are people conditionally cooperative? Evidence from a public goods experiment. Econ. Lett. 71, 397–404 (2001).
Grujic, J. et al. A comparative analysis of spatial prisoner’s dilemma experiments: conditional cooperation and payoff irrelevance. Sci. Rep. 4, 4615 (2014).
Wedekind, C. & Milinski, M. Cooperation through image scoring in humans. Science 288, 850–852 (2000).
Okada, I., Yamamoto, H., Sato, Y., Uchida, S. & Sasaki, T. Experimental evidence of selective inattention in reputation-based cooperation. Sci. Rep. 8, 14813 (2018).
Molleman, L., van den Broek, E. & Egas, M. Personal experience and reputation interact in human decisions to help reciprocally. Proc. R. Soc. B 280, 20123044 (2013).
Hilbe, C., Martinez-Vaquero, L. A., Chatterjee, K. & Nowak, M. A. Memory-n strategies of direct reciprocity. Proc. Natl Acad. Sci. USA 114, 4715–4720 (2017).
Uchida, S. & Sasaki, T. Effect of assessment error and private information on stern-judging in indirect reciprocity. Chaos Solitons Fractals 56, 175–180 (2013).
Hilbe, C., Schmid, L., Tkadlec, J., Chatterjee, K. & Nowak, M. A. Indirect reciprocity with private, noisy, and incomplete information. Proc. Natl Acad. Sci. USA 115, 12241–12246 (2018).
Raub, W. & Weesie, J. Reputation and efficiency in social interactions: an example of network effects. Am. J. Sociol. 96, 626–654 (1990).
Pollock, G. & Dugatkin, L. A. Reciprocity and the emergence of reputation. J. Theor. Biol. 159, 25–37 (1992).
Roberts, G. Evolution of direct and indirect reciprocity. Proc. R. Soc. B 275, 173–179 (2007).
Nakamaru, M. & Kawata, M. Evolution of rumours that discriminate lying defectors. Evol. Ecol. Res. 6, 261–283 (2004).
Seki, M. & Nakamaru, M. A model for gossip-mediated evolution of altruism with various types of false information by speakers and assessment by listeners. J. Theor. Biol. 407, 90–105 (2016).
Ohtsuki, H. Reactive strategies in indirect reciprocity. J. Theor. Biol. 227, 299–314 (2004).
Nowak, M. A. & Sigmund, K. The dynamics of indirect reciprocity. J. Theor. Biol. 194, 561–574 (1998).
Berger, U. Learning to cooperate via indirect reciprocity. Games Econ. Behav. 72, 30–37 (2011).
Brandt, H. & Sigmund, K. The logic of reprobation: assessment and action rules for indirect reciprocation. J. Theor. Biol. 231, 475–486 (2004).
Uchida, S. Effect of private information on indirect reciprocity. Phys. Rev. E 82, 036111 (2010).
Martinez-Vaquero, L. A. & Cuesta, J. A. Evolutionary stability and resistance to cheating in an indirect reciprocity model based on reputation. Phys. Rev. E 87, 052810 (2013).
Nakamura, M. & Masuda, N. Indirect reciprocity under incomplete observation. PLoS Comput. Biol. 7, e1002113 (2011).
Tanabe, S., Suzuki, H. & Masuda, N. Indirect reciprocity with trinary reputations. J. Theor. Biol. 317, 338–347 (2013).
Szabó, G. & Tőke, C. Evolutionary prisoner’s dilemma game on a square lattice. Phys. Rev. E 58, 69–73 (1998).
Traulsen, A., Pacheco, J. M. & Nowak, M. A. Pairwise comparison and selection temperature in evolutionary game dynamics. J. Theor. Biol. 246, 522–529 (2007).
Fudenberg, D. & Imhof, L. A. Imitation processes with small mutations. J. Econ. Theory 131, 251–262 (2006).
Fudenberg, D., Nowak, M. A., Taylor, C. & Imhof, L. A. Evolutionary game dynamics in finite populations with strong selection and weak mutation. Theor. Popul. Biol. 70, 352–363 (2006).
Imhof, L. A. & Nowak, M. A. Stochastic evolutionary dynamics of direct reciprocity. Proc. R. Soc. B 277, 463–468 (2010).
Wu, B., Gokhale, C. S., Wang, L. & Traulsen, A. How small are small mutation rates? J. Math. Biol. 64, 803–827 (2012).
McAvoy, A. Comment on ‘Imitation processes with small mutations’. J. Econ. Theory 159, 66–69 (2015).
Imhof, L. A., Fudenberg, D. & Nowak, M. A. Evolutionary cycles of cooperation and defection. Proc. Natl Acad. Sci. USA 102, 10797–10800 (2005).
García, J. & Traulsen, A. The structure of mutations and the evolution of cooperation. PLoS ONE 7, e35287 (2012).
van Segbroeck, S., Pacheco, J. M., Lenaerts, T. & Santos, F. C. Emergence of fairness in repeated group interactions. Phys. Rev. Lett. 108, 158104 (2012).
Stewart, A. J. & Plotkin, J. B. From extortion to generosity, evolution in the iterated prisoner’s dilemma. Proc. Natl Acad. Sci. USA 110, 15348–15353 (2013).
Stewart, A. J. & Plotkin, J. B. The evolvability of cooperation under local and non-local mutations. Games 6, 231–250 (2015).
Santos, F. P., Santos, F. C. & Pacheco, J. M. Social norms of cooperation in small-scale societies. PLoS Comput. Biol. 12, e1004709 (2016).
Hauser, O., Hilbe, C., Chatterjee, K. & Nowak, M. A. Social dilemmas among unequals. Nature 572, 524–527 (2019).
Hauert, C., Traulsen, A., Brandt, H., Nowak, M. A. & Sigmund, K. Via freedom to coercion: the emergence of costly punishment. Science 316, 1905–1907 (2007).
Sigmund, K., De Silva, H., Traulsen, A. & Hauert, C. Social learning promotes institutions for governing the commons. Nature 466, 861–863 (2010).
García, J. & Traulsen, A. Leaving the loners alone: evolution of cooperation in the presence of antisocial punishment. J. Theor. Biol. 307, 168–173 (2012).
Hauert, C. & Imhof, L. Evolutionary games in deme structured, finite populations. J. Theor. Biol. 299, 106–112 (2012).
Lee, Y., Iwasa, Y., Dieckmann, U. & Sigmund, K. Social evolution leads to persistent corruption. Proc. Natl Acad. Sci. USA 116, 13276–13281 (2019).
Brandt, H. & Sigmund, K. The good, the bad and the discriminator – errors in direct and indirect reciprocity. J. Theor. Biol. 239, 183–194 (2006).
Selten, R. Reexamination of the perfectness concept for equilibrium points in extensive games. Int. J. Game Theory 4, 25–55 (1975).
Karlin, S. & Taylor, H. M. A First Course in Stochastic Processes 2nd edn (Academic, 1975).
Nowak, M. A., Sasaki, A., Taylor, C. & Fudenberg, D. Emergence of cooperation and evolutionary stability in finite populations. Nature 428, 646–650 (2004).
This work was supported by the European Research Council CoG 863818 (ForM-SMArt) (to K.C.), the European Research Council Start Grant 279307: Graph Games (to K.C.), and the European Research Council Starting Grant 850529: E-DIRECT (to C.H.). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.
The authors declare no competing interests.
Peer review information Nature Human Behaviour thanks Matjaž Perc, Alexander Stewart and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
a, We consider a population of size n. To illustrate the basic workings of our model, we focus on three arbitrary players that are fully interchangeable in all their abilities. b, Each player has a separate finite-state automaton with two possible states G and B for each co-player. The current state is marked in bold. In this example, player 1 considers player 2 as good and player 3 as bad. c, In each round, two players are chosen at random to interact in a prisoner’s dilemma. Players cooperate if they consider their co-player to be good and they defect otherwise. The other population members do not participate in the game, but they observe its outcome at no cost to themselves. d, After the interaction, both active players update their respective automata, depending on their strategy and on the co-player’s action. In addition, each observer independently updates her automata with respect to players 1 and 2 with probability λ each. e–h, We can mathematically describe how player i’s automaton with respect to player j changes over time by distinguishing four possible events. First, player j is not chosen to interact, such that player i’s automaton remains unaffected (e); second, players i and j interact with each other and update their respective states accordingly (f); third, player j interacts with someone else, but player i does not take this interaction into account (g); fourth, player j interacts with someone else, and player i updates j’s state accordingly (h).
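The update cycle in panels a–h can be sketched in code. The following is a minimal illustrative sketch, not the paper’s implementation: states are coded G = 1 and B = 0, and we assume a reactive assessment rule that assigns a good state with probability p after an observed cooperation and with probability q after an observed defection (the names and the (p, q) parameterization are our assumptions, inferred from the strategy notation used in the captions below).

```python
import random

def step(states, p, q, lam, rng):
    """One round of the sketched reputation dynamics.

    states[i][j] is player i's current state (G=1, B=0) for co-player j.
    A random pair interacts; each cooperates iff it sees the other as good.
    The two active players always update each other; every observer updates
    its state for each active player independently with probability lam.
    """
    n = len(states)
    i, j = rng.sample(range(n), 2)
    act_i = states[i][j]            # player i cooperates iff it sees j as good
    act_j = states[j][i]            # player j cooperates iff it sees i as good
    for k in range(n):
        if k in (i, j):
            continue
        if rng.random() < lam:      # observer k updates its view of player i
            states[k][i] = 1 if rng.random() < (p if act_i else q) else 0
        if rng.random() < lam:      # observer k updates its view of player j
            states[k][j] = 1 if rng.random() < (p if act_j else q) else 0
    # the two active players update each other based on the co-player's action
    states[i][j] = 1 if rng.random() < (p if act_j else q) else 0
    states[j][i] = 1 if rng.random() < (p if act_i else q) else 0
    return states
```

With p = 1 (always reward observed cooperation), an all-good population stays all-good under this sketch, which matches the intuition that errors or defections are needed to seed bad reputations.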
We compare the performance of conditional cooperators with strategy (1, 1, 1/3, λ) in a population of defectors, (0, 0, 0, λ). We consider four scenarios, depending on whether players use direct (a,c) or indirect (b,d) reciprocity and depending on whether pairs interact only a few times (a,b) or often (c,d). Each panel shows the payoff of cooperators and defectors depending on how many of the 50 population members are cooperators, for b = 5 and c = 1. In all four cases we find bistability (as indicated by the arrows on the x-axis). That is, defectors have the higher payoff when there are few cooperators and the lower payoff when there are many cooperators. However, the threshold number of cooperators necessary to make cooperation beneficial differs. Indirect reciprocity has the lower threshold when there are only a few rounds, because cooperators are better able to restrict the payoff of defectors (as indicated by the smaller slope of the red line in b compared to a). Direct reciprocity has the lower threshold when there are many rounds. Here, a few cooperators already suffice to invade the defectors. In contrast, under indirect reciprocity cooperators need to establish a critical mass because their payoffs increase nonlinearly.
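The per-interaction payoffs behind these panels follow from the benefit b and cost c. A minimal sketch, assuming the standard donation-game parameterization of the prisoner’s dilemma (the exact payoff matrix is our inference from the b = 5, c = 1 values given in the caption):

```python
def donation_payoffs(coop_1, coop_2, b=5.0, c=1.0):
    """Payoffs for one prisoner's dilemma interaction in donation-game form.

    A cooperator pays cost c to confer benefit b on the co-player. Assumed
    parameterization: mutual cooperation yields b - c each, a lone defector
    earns b while the exploited cooperator earns -c, mutual defection yields 0.
    """
    p1 = (b if coop_2 else 0.0) - (c if coop_1 else 0.0)
    p2 = (b if coop_1 else 0.0) - (c if coop_2 else 0.0)
    return p1, p2
```

For b = 5 and c = 1 this gives mutual-cooperation payoffs of 4 each, versus 5 for a defector meeting a cooperator, which is the social dilemma driving the bistability described above.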
Extended Data Fig. 3 Impact of different model parameters on the co-evolution of direct and indirect reciprocity.
We show how our evolutionary results in Fig. 5 are affected as we change different parameters of our model. In each panel, we vary one parameter and leave all others constant. We consider the same three scenarios as in Fig. 5a–c: few interactions and unreliable information (blue), intermediate interactions and reliable information (orange), and many interactions and unreliable information (green). We employ two complementary simulation techniques. In the upper panels, each data point represents the average of a single simulation. Each such simulation was run long enough that the averages converge and are independent of the initial condition; this typically happens after 10^7 mutant strategies have been introduced into the population. In the lower panels, each data point represents the average of 200 simulations with a random initial population; here, each simulation introduces only 10^5 mutant strategies. For the parameters, we consider variation in the benefit-to-cost ratio (a,b), the population size (c,d), the selection strength (e,f), and the mutation rate (g,h). Our simulations suggest that each of these parameters can have a considerable impact on the evolving cooperation rates and on the players' propensity to adopt indirect reciprocity. For example, for the orange curve in panel e, we observe that the effect of selection strength on cooperation can be non-monotonic. We discuss these dependencies further in Extended Data Fig. 4 and SI Section 5. In general, however, we recover the following regularities from Fig. 5: (i) Substantial cooperation only evolves in the second and third scenarios (that is, for the cooperation rates, the blue curve is systematically below the other curves). (ii) If cooperation evolves, players prefer indirect reciprocity when there are intermediately many interactions and outside information is reliable.
They prefer direct reciprocity when there are many interactions and when outside information is noisy (that is, for the proportion of indirect reciprocity, the orange curve is systematically above the green curve).
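The selection strength β enters such simulations through the imitation step. A minimal sketch of the standard pairwise-comparison (Fermi) rule widely used in this literature (cf. refs. 59 and 60 above); that this matches the paper’s exact update rule is our assumption:

```python
import math

def imitation_probability(pi_a, pi_b, beta):
    """Pairwise-comparison (Fermi) rule.

    Probability that a focal player with payoff pi_a copies the strategy of a
    randomly chosen role model with payoff pi_b. The selection strength beta
    scales how strongly payoff differences bias imitation: beta = 0 yields
    random copying (probability 1/2), large beta approaches deterministic
    imitation of the better-paid player.
    """
    return 1.0 / (1.0 + math.exp(-beta * (pi_b - pi_a)))
```

This makes the non-monotonic β-dependence discussed above easy to probe numerically: the same payoff configuration gives nearly neutral dynamics at small β and nearly deterministic dynamics at large β.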
As shown in the upper panel of Extended Data Fig. 3e, selection can sometimes have a non-monotonic effect on cooperation. For intermediate interactions and reliable information (δ = 0.9, ε = 0.001, depicted by the orange curve in Extended Data Fig. 3e), we observed that the evolving cooperation rate is 53.4% for β = 1, increases to 77.3% for β = 10, and drops to 61.5% for β = 100. Here we present additional simulations to shed further light on this non-monotonicity. a,b, We considered initial resident populations that adopt either a defective strategy or a conditionally cooperative strategy. We recorded how long the evolutionary process takes to replace the resident strategy, and what the cooperation rate of the invading strategy is. Dots show the outcomes of individual simulations, and the curves represent averages. The results suggest that the non-monotonicity of cooperation is not due to a reduced stability of cooperative strategies: they remain highly robust even for large selection strengths. Moreover, when selection is strong, they are typically invaded only by other cooperative strategies. c–e, In a next step, we recorded the distribution of cooperation over time for three different selection strengths for the process considered in Extended Data Fig. 3e. We find that this distribution becomes more extreme with increasing selection strength: individuals either become highly cooperative or highly non-cooperative. However, the proportion of non-cooperative populations grows faster than the proportion of cooperative populations.
In Figs. 3–5 of the main text, we explore situations in which individuals can choose strategies that either take only direct information into account (λ = 0) or take all information into account (λ = 1). Here we repeat these simulations in a setup where intermediate values of λ are permitted. To this end, we define γ as the probability that a player’s decision is based on the co-player’s behavior towards third parties; see Eq. (10) in Methods. For 0 ≤ λ ≤ 1 we obtain 0 ≤ γ ≤ γmax ≔ (n − 2)/(n − 1). a,b, We repeat the simulations in Fig. 3a,b for various values of γ. We observe that cooperation is never most likely to evolve for intermediate values of γ: most cooperation evolves either for γ = γmax (in panel a) or for γ = 0 (in panel b). c,d, Similarly, we repeat the simulations in Fig. 4d,f for various values of γ. Again, the average cooperation rates for intermediate γ lie strictly between the results for γ = 0 and γ = γmax. e–h, Finally, we repeat the simulations shown in Fig. 5a–d, allowing for mutant strategies (y, p, q, λ) that lead to arbitrary values of γ between 0 and γmax. Especially for larger error rates, we observe that the evolving cooperation rates are now smaller. Nevertheless, the general patterns of Fig. 5 remain: (i) When there are only a few rounds and many observation errors, cooperation does not evolve. (ii) When there are intermediately many rounds and few errors, cooperation evolves and players tend to put more weight on indirect information (that is, γ tends to be larger than 1/2); in particular, strategies with γ ≈ γmax are most abundant. (iii) When there are many rounds and intermediately many errors, cooperation evolves and players tend to put more weight on direct information; here, players are most likely to adopt a strategy with γ ≈ 0. See SI Section 5.4 for details.
a, To explore how sensitive our results are to different kinds of errors and incomplete information, we have repeated the rare-mutation simulations shown in Fig. 5d, reproduced here. b, While the baseline model assumes that only indirect observations are subject to perception errors, here we explore the effects when direct observations are equally prone to errors. We find that cooperation is substantially reduced compared to the baseline scenario. Moreover, direct reciprocity is only favoured for very large continuation probabilities. c, We have also explored the effect of additional implementation errors on cooperation. To this end, we assume here that players mis-implement their intended action with fixed probability e = 0.01. Compared to the baseline model without such errors, we find that there is less cooperation and less direct reciprocity. d, To mimic the dynamics that arise when defectors strategically conceal their bad actions, we have also considered a model in which defective actions are misperceived with probability ε, whereas cooperative actions are always observed faithfully. Because this assumption reduces the total rate at which errors occur compared to the baseline scenario, we observe more cooperation, and players rely more on indirect reciprocity. e, Here we assume that individuals observe third-party interactions only with probability ν = 0.01. Due to the scarcity of information, players who take any third-party information into account are almost indistinguishable from those players who do not. As a result, cooperation is largely independent of observation errors, and the region in which indirect reciprocity is favoured has vanished. Unless noted otherwise, all parameters are the same as in Fig. 5d.
In an extension of our model, we allow players to assign more nuanced reputations to their co-players. We illustrate this approach by considering finite-state automata with three states: good (G), neutral (N) and bad (B), with G as the initial state. We assume n − 1 residents employ the respective finite-state automaton strategy, while the remaining player uses either ALLC or ALLD. We simulate the players’ payoffs for various values of λ ∈ [0, 1]. We consider three different automaton strategies employed by the residents. The automata differ in how they deal with co-players that are assigned a neutral reputation. a, Players with the first automaton A1 are fully cooperative when they encounter a co-player with neutral reputation. This strategy can sustain cooperation among itself. However, a single ALLC player obtains approximately the same payoff as the residents, and hence can invade by (almost) neutral drift (d). b, According to the second automaton A2, players cooperate against neutral opponents with 50% probability. This strategy can be invaded by ALLC for all λ > 0 (e). c, According to A3, players defect against co-players with a neutral reputation. This strategy is not stable against ALLC for λ > 0 (f), and residents fail to cooperate with each other altogether.
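The action rules that distinguish A1, A2 and A3 can be sketched compactly. This is an illustrative sketch of only the action side described above (all three automata cooperate with G and defect against B, differing in the cooperation probability against N); the automata’s state-transition rules are not reproduced here, and the dictionary encoding is our own:

```python
import random

# Cooperation probability against a co-player in the neutral (N) state,
# as described in the caption: A1 always cooperates, A2 with 50%, A3 never.
P_COOP_NEUTRAL = {"A1": 1.0, "A2": 0.5, "A3": 0.0}

def cooperates(strategy, co_player_state, rng=random):
    """Action of a resident with the given three-state automaton strategy."""
    if co_player_state == "G":
        return True                 # cooperate with co-players seen as good
    if co_player_state == "B":
        return False                # defect against co-players seen as bad
    return rng.random() < P_COOP_NEUTRAL[strategy]   # state "N"
```

The stability differences reported in panels d–f then come down to how often this neutral-state branch is triggered along the automata’s (unshown) transition paths.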
We have explored the evolutionary dynamics when population members can choose between ALLC, ALLD, and one of the three finite-state automata introduced in Extended Data Fig. 7. a–c, First, we have explored the limit of rare mutations, using the same game payoffs as in Extended Data Fig. 7, and a fixed receptivity λ = 0.1. The numbers in each circle denote how often the respective strategy is played on average. Arrows illustrate how likely a single mutant is to fix in the respective resident population. Solid arrows indicate that the fixation probability is larger than the neutral 1/n, whereas for dotted arrows this probability is smaller than neutral. We find that only the first automaton A1 can outperform both ALLC and ALLD. d–f, In a next step, we have explored the same scenario for a positive mutation rate μ = 0.01. The triangles represent the possible population compositions. Each corner corresponds to a homogeneous population, whereas the center corresponds to a perfectly mixed population. The color code reflects how often we observe the respective population composition over the course of evolution. We find that most of the time, populations are either in the neighborhood of ALLD, or they represent some mixture between the automaton strategy and ALLC. g–i, We have re-run the simulations in panels d–f, but now varying either the benefit of cooperation, the selection strength, or the mutation rate. In all cases, we observe that the first automaton is most favorable to cooperation. Interestingly, we observe the largest cooperation rate for intermediate mutation rates. This result, however, may be due to the fact that players can only choose from an unbalanced strategy space, as discussed in detail in SI Section 6.3.
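The comparison against the neutral fixation probability 1/n can be made concrete with the standard closed-form expression for the fixation probability of a single mutant under pairwise-comparison dynamics in the rare-mutation limit (cf. refs. 60–62 above). A minimal sketch; the payoff-difference function is a caller-supplied placeholder, not the paper’s payoffs:

```python
import math

def fixation_probability(n, beta, dpi):
    """Fixation probability of a single mutant in a resident population.

    Standard result for the pairwise-comparison (Fermi) process:
        rho = 1 / (1 + sum_{k=1}^{n-1} prod_{j=1}^{k} exp(-beta * dpi(j))),
    where dpi(j) is the mutant's payoff minus the resident's payoff when
    j mutants are present. A neutral mutant (dpi == 0) yields rho = 1/n.
    """
    total = 1.0
    prod = 1.0
    for k in range(1, n):
        prod *= math.exp(-beta * dpi(k))
        total += prod
    return 1.0 / total
```

A solid arrow in panels a–c then corresponds to `fixation_probability(...) > 1/n` under the respective payoff configuration, and a dotted arrow to the opposite inequality.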
a, Previous research has suggested that there are eight stable third-order strategies of indirect reciprocity that can sustain cooperation22, called the leading eight, L1–L8. They consist of two components, an assessment rule and an action rule. The assessment rule determines how players evaluate each other’s actions, depending on the previous reputations of the involved players. The action rule determines how to interact in the game, depending on one’s own reputation and on the reputation of the co-player. b–i, To explore the stability of these strategies, we consider a population in which n − 1 players adopt one of the leading-eight strategies. The remaining player adopts either ALLC or ALLD. Our results for λ > 0 reflect previous findings33: in the presence of perception errors, all leading-eight strategies are susceptible to invasion by either ALLC or ALLD. Only for λ = 0 (when perception errors are absent) are the leading-eight strategies stable against both mutant strategies.
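The assessment/action structure of such strategies is easy to make concrete. As one well-known member of the family, the following sketches the second-order norm commonly called ‘stern judging’; this concrete choice is ours for illustration, while the figure itself covers all eight strategies L1–L8:

```python
GOOD, BAD = 1, 0

def assess(donor_cooperated, recipient_rep):
    """Stern-judging assessment rule: new reputation assigned to the donor.

    A donor earns a good reputation by cooperating with a good recipient or
    by defecting against a bad one ('justified defection'); any other
    combination earns a bad reputation.
    """
    justified = (donor_cooperated and recipient_rep == GOOD) or \
                (not donor_cooperated and recipient_rep == BAD)
    return GOOD if justified else BAD

def act(recipient_rep):
    """Action rule: cooperate iff the recipient is currently seen as good."""
    return recipient_rep == GOOD
```

The susceptibility to perception errors described above arises because a single misperceived action feeds back through `assess`, so observers can end up disagreeing about who deserves a good reputation.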
Similar to Extended Data Fig. 8 for finite-state automata, this figure explores how each of the leading-eight strategies fares in an evolutionary competition against ALLC and ALLD for a fixed receptivity λ = 0.1. a–h, When mutations are rare, only ‘Judging’ (L8) is played in notable proportions. However, in the presence of perception errors, this strategy tends to assign a bad reputation to other players with the same strategy, such that eventually everyone defects33. i–p, When mutations are more common, some of the leading-eight strategies can stably coexist with ALLC. We observe such cooperative coexistence for L1, L2, and L7. q–s, These three strategies also yield substantial cooperation rates when we vary the benefit of cooperation, the selection strength, and the mutation rate. With respect to mutation, we again observe that intermediate mutation rates are most favorable to cooperation. However, this finding may not be robust, because the strategy space is again unbalanced. For a more detailed discussion, see SI Section 6.4.
Schmid, L., Chatterjee, K., Hilbe, C. et al. A unified framework of direct and indirect reciprocity. Nat Hum Behav (2021). https://doi.org/10.1038/s41562-021-01114-8