
Coevolution of teaching ability and cooperation in spatial evolutionary games


Individuals with higher reputation can spread their social strategies more easily. At the same time, one's reputation changes according to one's past behavior, which leads to markedly different teaching abilities among players. To explore the effect of reputation-dependent teaching ability, we consider a coevolutionary model in which the reputation score affects the strategy-updating rule in spatial evolutionary games. More precisely, the updating probability increases if the partner has a positive reputation and decreases otherwise. This simple design effectively captures the influence of teaching ability on strategy adoption. Numerical results focus on the proportion of cooperators under different amplitudes of reputation change and different reputation ranges. Under this dynamics, the fraction of cooperators grows over a wide range of parameters. In addition, to validate the generality of this mechanism, we also employ the snowdrift game. Moreover, the evolution of cooperation on the Erdős-Rényi random graph is studied for the prisoner's dilemma game. Our results may be conducive to understanding the emergence and sustainability of cooperation during strategy adoption in reality.


The prevalence of cooperative or altruistic behavior is a ubiquitous phenomenon among selfish and unrelated agents, ranging from biological spheres to social communities1,2,3,4. Therefore, understanding the evolution of cooperation is one of the enduring conundrums of the behavioral sciences5,6,7 and has attracted considerable attention from biologists, economists and physicists. Evolutionary game theory is one of the most fruitful frameworks for investigating this problem, based on the so-called social dilemmas in which social conflicts are modeled as competition among individuals8,9,10. Among these social dilemmas, the prisoner's dilemma game (PDG) and the snowdrift game (SDG) are the most prominent metaphors for pairwise interactions11.

In the original game, two players (agents) simultaneously choose between cooperation (C) and defection (D). If both players cooperate, each receives the reward R; if both defect, each receives only the punishment P. If the two players choose different actions, the defector obtains the highest payoff, the temptation T, while the cooperator receives only the sucker's payoff S. For the PDG, the payoffs must satisfy the rankings T > R > P > S and 2R > T + S. Mutual defection is the only stable state, or Nash equilibrium, of the game, and is therefore the rational choice. In other words, although mutual cooperation yields the highest collective payoff, rational players always defect regardless of what the opponent chooses12. For the SDG, the payoff ranking is T > R > S > P. This minor variation significantly changes the optimal strategy: the game has two pure Nash equilibria, in which one defects when the opponent cooperates and cooperates when the opponent defects13. From these equilibria, it is easy to see that the SDG is more supportive of cooperation than the PDG. However, the Nash equilibria of these two classical games contradict the fact that cooperation is widely observed in nature. Consequently, some additional mechanisms must be maintaining cooperation in reality.
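The different equilibrium structures of the two games can be checked with a few lines of code. The sketch below uses hypothetical payoff values of our own choosing that satisfy the stated rankings (they are not taken from the paper) and computes best responses directly from the payoff tables:

```python
# Illustrative payoff values satisfying the PDG ranking T > R > P > S
# (and 2R > T + S) and the SDG ranking T > R > S > P.
# These specific numbers are hypothetical, chosen only for the demonstration.
PDG = {"T": 1.5, "R": 1.0, "P": 0.1, "S": 0.0}
SDG = {"T": 1.5, "R": 1.0, "S": 0.5, "P": 0.0}

def best_response(p, opponent):
    """Return the strategy maximizing my payoff against a fixed opponent move."""
    # My payoff as a function of (my move, opponent's move).
    mine = {("C", "C"): p["R"], ("C", "D"): p["S"],
            ("D", "C"): p["T"], ("D", "D"): p["P"]}
    return max(("C", "D"), key=lambda s: mine[(s, opponent)])

# PDG: defection is the best response to both moves (dominant strategy),
# so mutual defection is the unique Nash equilibrium.
assert best_response(PDG, "C") == "D" and best_response(PDG, "D") == "D"
# SDG: the best response is the opposite of the opponent's move,
# giving the two pure equilibria described above.
assert best_response(SDG, "C") == "D" and best_response(SDG, "D") == "C"
```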

Over the past decades, various mechanisms have been proposed across many disciplines to understand the emergence and maintenance of cooperation. Paradigmatic examples include kin selection14, direct and indirect reciprocity15,16,17, group selection18, noise19,20, extortion strategies21, and reward and punishment22,23,24,25. Furthermore, some effective strategies are also used, such as tit-for-tat26,27 and win-stay lose-shift28,29,30. In particular, spatial structure31 has been identified as one of the most effective factors in enhancing cooperation. In spatial evolutionary games, players interact only with their nearest neighbors on a regular lattice. Cooperators can resist exploitation by defectors through forming clusters, which protect the cooperators located in their interior. Following this pioneering work, a great number of promoting mechanisms have been studied; see, for example, the survey articles32,33. Complex networks, which have connectivity distributions similar to real-world complex systems such as air transportation networks and the Internet, provide a uniform framework for understanding common cooperative behaviors34,35,36,37.

Teaching activity38,39 is an important process in the evolution of cooperation; it refers to the influence, or reproduction rate, of individuals. Players with high influence are more likely to be imitated than individuals with low influence, i.e., they have a higher teaching ability. In previous works38,39, teaching ability was a control variable that remained unchanged during the evolution. In reality, however, teaching ability changes continuously; consequently, we adopt a coevolutionary model in this paper. In addition, it is not hard to imagine that players with higher reputation can spread their social strategies more easily. For example, a company that always completes its production tasks on time, in accordance with its contracts with other enterprises, will acquire a higher and higher reputation. More and more firms will then not only be inclined to cooperate or deal with it, but will also be more likely to imitate its operational management or technologies. Conversely, companies with a poor reputation will not be imitated. Therefore, the assumption of using the reputation score to represent teaching ability is reasonable. Reputation is a class of individual information about one's past behavior and changes according to that behavior. It has effectively promoted the evolution of cooperation in games of indirect reciprocity. As a classic theoretical model of reputation, image scoring, in which cooperative behavior increases the reputation score by one unit and defection decreases it by one unit, has been studied extensively40. It has been shown that cooperation can be evidently enhanced with the aid of reputation. Migration based on reputation has been introduced into the spatial PDG41. Individuals can adjust their partnerships on the basis of local information about reputation42. The time scale of selection and updating changes if reputation is introduced43,44.
Some coevolutionary models relating time scale and cooperation have been studied in previous works45,46, and the results show that cooperation can be promoted when an individual with a high payoff holds a successful strategy for a longer time. In the present paper, strategy updating and teaching ability share the same time scale. In addition, cognitive ability based on reputation has been studied, suggesting that the reputation mechanism can be seen as a universally applicable promoter of cooperation that works on various interaction networks and in different types of evolutionary games47,48,49,50,51. However, a player cannot ignore a partner's reputation (teaching ability) when updating his strategy, because reputation carries a great deal of information about the partner. Obviously, one's teaching ability can directly affect a partner's decision. Generally speaking, a player is more likely to adopt the strategy of a partner with a good reputation and to reject that of a partner with a bad reputation. For example, virtuous people usually spread their ideas easily in reality. This form of connection between reputation and a partner's teaching ability has not been studied in previous work. Therefore, a more realistic scenario acknowledges that a player makes decisions by taking teaching ability into consideration.

Based on the above facts, we propose in the present paper a modified updating rule that incorporates the partner's reputation to describe teaching ability. It is assumed that individuals acquire reputation without extra cost because reputation information can spread among neighbors by gossip. The PDG and SDG are employed to model social dilemmas, with interactions driven by different topologies. In this paper, we consider the regular lattice (with the von Neumann or the Moore neighborhood, i.e., degree k equal to 4 or 8 for each vertex, respectively) and the Erdős-Rényi (ER) random graph. For the ER random graph, the average degree \(\bar{k}\) is equal to 4. Simulation results show that a higher level of cooperation appears when teaching ability takes effect during the decision-making process.


Teaching ability, represented by the reputation score Ri, is introduced into the strategy-updating rule to explore its influence on the emergence of cooperative behavior in spatial evolutionary games. The influence of one's teaching ability changes during the evolution of the game. The amplitude of change of Ri is δ (>0) at each step: choosing cooperation increases Ri by δ, while choosing defection decreases it by δ. Additionally, Ri ∈ [−α, α] (α > 0), which means that the value of reputation saturates, whether it is good or bad. The reputation score Ri has an important effect on strategy updating: qualitatively, the probability that player i's strategy is adopted by a partner becomes larger if Ri is positive and smaller otherwise. Furthermore, δ/α is the fluctuation ratio of reputation and represents the intensity of teaching ability. In the following results, the size of the regular lattice ranges from 100 × 100 to 200 × 200, the size of the ER random graph is 10000, and the Monte Carlo (MC) simulation is run for 61000 steps. The details of the interactions between agents and their corresponding payoffs are summarized in the Methods section.
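The reputation dynamics just described can be sketched in a few lines. The function name below is our own; the rule itself (shift by ±δ, then saturate at ±α) follows the text:

```python
def update_reputation(R_i, strategy, delta, alpha):
    """Shift the reputation score by +delta for cooperation and -delta
    for defection, then clamp it to the saturation interval [-alpha, alpha]."""
    R_i += delta if strategy == "C" else -delta
    return max(-alpha, min(alpha, R_i))

# A persistent cooperator's score saturates at +alpha
# (here with delta = 1.5 and alpha = 3.0, as in some of the figures).
R = 0.0
for _ in range(5):
    R = update_reputation(R, "C", 1.5, 3.0)
assert R == 3.0
# A single defection then pulls the score back down by delta.
assert update_reputation(R, "D", 1.5, 3.0) == 1.5
```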

We start by examining the effect of the new strategy adoption rule on the persistence of cooperation. As shown in Fig. 1, two different neighborhood sizes are compared to analyze the impact of strategy selection on the evolution of cooperation on a regular lattice (200 × 200) with periodic boundary conditions. More concretely, panels (a) and (b) correspond to the results for the von Neumann neighborhood and the Moore neighborhood52, respectively.

Figure 1

Concentration of cooperators ρc vs the ratio δ/α in the prisoner's dilemma game, for α = 1.0, 2.0 and 3.0. For both panels, the results are obtained on a 200 × 200 regular lattice. The results in panel (a) are acquired with the von Neumann neighborhood, while those in panel (b) correspond to the Moore neighborhood. As the value of δ/α increases, the effect of teaching ability becomes more pronounced. Parameter b = 1.15.

For δ = 0 the model degenerates to the traditional version, and the normalized payoff difference (Pi − Pj)/k is the sole determinant of strategy updating. According to previous research31, cooperators can form clusters to prevent invasion by defectors, which is called network reciprocity. However, the cooperators located on the edges of the cooperative clusters are prone to defecting as the value of b increases, which eventually results in the dissolution of the cooperative clusters (in the two traditional cases, the values of b at which cooperators vanish are less than 1.1).

As shown in panels (a) and (b), once δ > 0 the evolution of the whole system becomes qualitatively different because teaching ability is taken into account: the normalized payoff difference (Pi − Pj)/k and the teaching ability jointly determine the strategy updating. Here b is fixed at 1.15. For each curve in Fig. 1, the fraction of cooperators ρc increases monotonically with the fluctuation ratio of reputation δ/α. Although the temptation to defect b is high, this mechanism effectively guides players toward cooperation, so that cooperators survive in the system; in some cases cooperators even dominate the whole network. The introduction of teaching ability makes individuals update their strategies based on both the normalized payoff difference and the teaching ability, and under this seemingly reasonable updating rule cooperation becomes the dominant strategy. Furthermore, the range of reputation α has a great impact on the evolution of cooperation. The speed and extent with which cooperators appear and spread are clearly more remarkable for α = 3 than for α = 1 or α = 2. However, the gap between the curves for α = 2 and α = 3 is smaller than that between α = 1 and α = 2, suggesting that the effect of the range of reputation saturates. These results clearly indicate that the evolution of cooperation is greatly promoted under the newly introduced mechanism.

It remains interesting to elucidate how this new mechanism promotes cooperation. To provide answers, we show some characteristic snapshots on a 100 × 100 square lattice (von Neumann neighborhood) in Fig. 2, where green and red represent cooperators and defectors, respectively. The parameter b is fixed at 1.17 in all snapshots. Looking first at the upper row, the snapshots are given for t = 0, 5, 10, 100 and 60000. Initially, cooperators and defectors are scattered uniformly over the lattice. As described earlier, cooperators would die out in the traditional version under these conditions. However, the evolution is markedly different once teaching ability (δ = 1.5 and α = 3.0) is incorporated. Compared with the traditional case, players take more information (the teaching ability) into account when making decisions. Cooperative clusters protect cooperators even though the value of b is high. Many cooperators survive at the stable stage, even though the fraction of cooperators falls at the beginning of the evolution.

Figure 2

Typical strategy distributions of cooperators (green) and defectors (red) on a 100 × 100 square lattice. Snapshots in the upper row are given at t = 0, 5, 10, 100 and 60000 steps. The lower panels show the distribution of strategies at the 60000th step, with amplitudes of reputation change δ = 0, 0.75, 1.5, 2.25 and 3.0 from left to right. The parameters are b = 1.17, α = 3 and k = 4.

To compare with Fig. 1, we also show the distribution of strategies at the 60000th Monte Carlo step (MCS) in the lower panels of Fig. 2. The parameters are δ = 0, 0.75, 1.5, 2.25 and 3.0 from left to right, with α = 3.0; the other settings are the same as in the upper row. For δ = 0, cooperators still cannot survive because the selection intensity is not enough to resist the temptation to defect. As δ increases, many large, compact clusters form steadily once the mechanism takes effect; for example, the territory of the defectors shrinks steadily as δ increases from 0 to 3. A larger teaching ability means that information other than the payoff becomes increasingly important for individuals. Such a consideration is reasonable: one weighs many things besides profit when making a decision. These results illustrate that this mechanism can facilitate network reciprocity remarkably. Based on this fact, it is not hard to understand that cooperators dominate the whole system when δ reaches its maximum under the same conditions. The simulated phenomena imply that cooperators can survive or even thrive owing to an appropriate level of teaching ability.

Note that, from the above analyses, two major factors affect the probability of strategy updating: the normalized payoff difference and the value of reputation. Therefore, it is necessary to study individuals' preference between strategies. What follows is an observation of the temporal traits of the strategy retention rate. As shown in Fig. 3, the parameter b is fixed at 1.2 in every panel and δ/α = 0.75 except in the traditional case. Here ρcc denotes the rate at which a cooperator remains a cooperator between two rounds; analogously, ρdd is the rate at which defectors retain the defective strategy over time. In the traditional case, ρcc quickly reaches 0 and ρdd quickly reaches 1, since the temptation to defect is high; consequently, cooperators soon become extinct, as the payoff of defectors exceeds that of cooperators. However, cooperators can occupy a certain territory when teaching ability (α = 1.0) is taken into consideration. For α = 2.0, ρcc first drops while ρdd shows the completely opposite trend, which shows that the overall environment is still unfavorable to the persistence of cooperation in the early stage, even with the help of different teaching abilities. After that stage, ρcc rises rapidly while ρdd falls rapidly, and ρcc even exceeds ρdd. Since the cooperative strategy is then more likely to be the reference for imitation, it is not hard to understand why cooperation spreads widely in Fig. 2. For α = 4.0, the evolutionary trend is similar to that for α = 2.0, but the gap between the two curves keeps widening and the cooperation retention rate approaches 1. This means that the prevalence of cooperation is positively related to the value of α. Based on these results, we conclude that the incorporation of teaching ability influenced by reputation adjusts the microscopic preferences of players and accelerates the dissemination of cooperation. These promoting effects are consistent with the aforementioned results.

Figure 3

Time evolution of the strategy retention rate ρc(d)→c(d). ρcc represents the rate at which cooperators continue to cooperate and ρdd denotes the rate at which defectors retain the defective strategy. δ/α is fixed at 0.75 and the panels correspond to α = 1.0, 2.0 and 4.0, except for the traditional case. With increasing α, the readiness to cooperate gradually increases. The depicted results are obtained on a 200 × 200 square lattice, with k = 4 and b = 1.2.

It is also worth considering how the critical threshold bc changes with the fluctuation ratio δ/α. Fig. 4 shows simulation results on a 200 × 200 square lattice (k = 4), where bc denotes the threshold at which cooperators die out. For the traditional version, we see a flat line (black) at the bottom of Fig. 4, indicating that players care only about payoffs, so that defection becomes the rational choice. For the other three curves, however, bc increases monotonically from left to right, which means that the living space of cooperators is enlarged as δ/α increases. Players with good reputations restrain selfish agents from adopting defection; for example, the cooperators located on the edges of clusters in Fig. 2 are more loyal to cooperation. This result fully demonstrates that the new rule promotes the survival of cooperators among selfish players.

Figure 4

Critical threshold values b = bc at which the evolution reaches the pure D phase (extinction of cooperators). It can be observed that bc increases monotonically with the fluctuation ratio of reputation δ/α. These results show that the space occupied by cooperators is enlarged. The simulation is executed on a 200 × 200 square lattice with the von Neumann neighborhood.

Lastly, it is worth exploring the robustness and generality of the above observations on different networks and with different evolutionary game models. Here we set δ/α = 1 for all curves in Fig. 5. The MC simulation results in the left panel are for the prisoner's dilemma game on the ER random graph, which has the same average degree (\(\bar{k}\) = 4) and size (N = 10^4 nodes) as the regular lattice. As shown, a clear promotion effect on the evolution of cooperation can be observed compared with the traditional version (δ = 0). The evolution of cooperation is effectively strengthened as δ increases in the PDG, qualitatively consistent with the results obtained on the regular network. As an example, the critical value bc exceeds 2.0 when δ = 3.0, which implies that cooperators can survive or even thrive over a large range of values of b. The right panel of Fig. 5 depicts the fraction of cooperators ρc in the SDG on the regular network (200 × 200 and k = 4) as a function of the parameter r. Likewise, cooperation is obviously enhanced. Altogether, these results support the conclusion that teaching ability influenced by reputation is a universally effective way to sustain and promote cooperation, regardless of the underlying game and interaction network.

Figure 5

Left panel: fraction of cooperators ρc as a function of the parameter b for different values of δ in the prisoner's dilemma game on the ER graph, with N = 10^4 nodes and average degree \(\bar{k}\) = 4. Right panel: fraction of cooperators ρc in the snowdrift game as a function of the parameter r for different values of δ on the regular lattice (size 200 × 200, degree k = 4).


In sum, we have proposed a coevolutionary model to investigate the impact of teaching ability influenced by reputation on the evolution of cooperation in spatial evolutionary games. This model emphasizes the connection between strategy adoption and teaching ability when human behavior is modeled. This form of strategy adoption reflects the fact that players with a high reputation can spread their strategies easily, and vice versa. Compared with the traditional case, this coevolutionary process undoubtedly conforms better to real situations. Numerical simulations show that the amplitude of reputation change δ and the reputation range α have a significant impact on the persistence of cooperation. Cooperative clusters emerge easily under this updating dynamics, and players with good reputations restrain selfish agents from adopting defection. In addition, the robustness of the enhancement effect is checked on the ER graph for the prisoner's dilemma game, and the promoting effects are confirmed in the snowdrift game as well. These results illustrate that this new mechanism has a certain degree of universality, since it works effectively for both the prisoner's dilemma game and the snowdrift game on two kinds of networks (the regular square lattice and the ER random graph). This work may be conducive to understanding cooperative behavior in complex economic systems as well as human society.


In this work, the evolutionary PDG and SDG are employed to explore the role of teaching ability influenced by reputation, with every player occupying a vertex of the underlying network. To test the robustness of the impact of this newly introduced mechanism on the evolution of cooperation, different network topologies, including the regular lattice and the Erdős-Rényi (ER) random graph, are taken into consideration. For simplicity but without loss of generality, we consider the so-called weak PDG31, characterized by the temptation to defect T = b, the reward for mutual cooperation R = 1, the punishment for mutual defection P = 0 and the sucker's payoff S = 0. The outcome of the game therefore depends only on the parameter b, where 1 < b < 2 quantifies the temptation to defect and represents the advantage of defectors over cooperators. For the SDG, the rescaled payoffs are T = 1 + r, R = 1, S = 1 − r and P = 0, where 0 < r < 1 is the so-called cost-to-benefit ratio and the payoffs still satisfy the ranking T > R > S > P. Initially, each individual i is designated as a cooperator (si = C) or a defector (si = D) with equal probability and is assigned a reputation score Ri. To avoid any preferential influence, we set Ri = 0 before the game. Reputation is important in human society: it reflects one's history or status and is accessible to all members of one's community. Individuals with different reputation scores have different influences on the players interacting with them; consequently, we use the reputation score to represent teaching ability. Moreover, it is assumed that reputation spreads among neighbors by gossip and is evaluated, without cost, under the simple protocol below.
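The two payoff parameterizations above can be encoded in a single small function. The function name and default values are our own; the payoff entries follow the text exactly (weak PDG: T = b, R = 1, P = S = 0; SDG: T = 1 + r, R = 1, S = 1 − r, P = 0):

```python
def pair_payoff(s_i, s_j, game="PDG", b=1.15, r=0.5):
    """Payoff to player i from a single pairwise interaction.
    Weak PDG: T=b, R=1, P=S=0.  SDG: T=1+r, R=1, S=1-r, P=0."""
    if game == "PDG":
        T, R, S, P = b, 1.0, 0.0, 0.0
    else:  # SDG
        T, R, S, P = 1.0 + r, 1.0, 1.0 - r, 0.0
    table = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
    return table[(s_i, s_j)]

# A defector exploiting a cooperator earns the temptation T = b in the PDG,
# while a cooperator facing a defector in the SDG still earns S = 1 - r > 0.
assert pair_payoff("D", "C", game="PDG", b=1.15) == 1.15
assert pair_payoff("C", "D", game="SDG", r=0.5) == 0.5
```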

The game is simulated with the following Monte Carlo (MC) procedure. First, player i obtains his total payoff Pi by playing the game with his nearest neighbors. Next, player i randomly chooses a neighbor j as the reference target, who acquires payoff Pj in the same way. Last, all agents synchronously update their strategies according to the probability:

$$Pro{b}_{i}=\frac{1}{1+\exp [({P}_{i}-{P}_{j})/kK-{R}_{j}]},\qquad (1)$$

where K represents the intensity of selection. Without loss of generality, we set K = 0.1 in this paper unless stated otherwise53. Player i adopts j's strategy depending on the normalized payoff difference (Pi − Pj)/k and on Rj, where k is the degree of player i. Note that, on average, each player has one chance per full MC step to adopt one of his neighbors' strategies. As mentioned above, we assume that players have local information about their nearest neighbors; consequently, each neighbor's reputation is known to the focal player, and Rj directly affects Probi. Furthermore, it is more realistic that reputation changes during the evolution. We let δ > 0 denote the amplitude of change of reputation: Ri increases by δ when i is a cooperator and decreases by δ when i chooses defection. Additionally, Ri ∈ [−α, α] (α > 0), which means that the value of reputation saturates, whether it is good or bad, and δ/α is the fluctuation ratio of reputation. According to formula (1), player i prefers to adopt player j's strategy if Rj > 0; conversely, player i tends to reject player j's strategy if j is notorious. This simple design effectively describes the influence of teaching ability on strategy adoption.
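The adoption probability of formula (1) is a one-liner. The sketch below uses our own function name and reads the denominator in the exponent as k times K, which is our interpretation of the typeset formula:

```python
import math

def adoption_probability(P_i, P_j, R_j, k, K=0.1):
    """Probability that player i adopts neighbor j's strategy:
    a Fermi-like rule on the payoff difference normalized by k*K,
    shifted by j's reputation R_j, as in formula (1).
    The grouping (P_i - P_j)/(k*K) is our reading of the formula."""
    return 1.0 / (1.0 + math.exp((P_i - P_j) / (k * K) - R_j))

# With equal payoffs, a neighbor with positive reputation is imitated
# with probability above 1/2, a notorious neighbor with probability below 1/2.
p_good = adoption_probability(P_i=2.0, P_j=2.0, R_j=3.0, k=4)
p_bad = adoption_probability(P_i=2.0, P_j=2.0, R_j=-3.0, k=4)
assert p_good > 0.5 > p_bad
```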

Finally, a whole Monte Carlo step (MCS) is complete once the above procedures have been carried out. For the regular lattice, the MC results presented in the Results section are obtained on populations comprising 100 × 100 up to 200 × 200 agents, with the von Neumann or the Moore neighborhood (degree k equal to 4 or 8 for each agent, respectively). For the ER random graph, the size is N = 10^4 and the average degree is \(\bar{k}\) = 4. The fraction of cooperators ρc is obtained by averaging over the last 1000 of the total 61000 full MCS, and the final results are averaged over 10–20 independent runs to guarantee accuracy.
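Putting the pieces together, the full MCS loop can be sketched at toy scale. This is a self-contained illustration under stated assumptions, not the authors' code: the weak PDG on a small periodic lattice with the von Neumann neighborhood (k = 4), synchronous imitation of one random neighbor per player with the reputation-shifted probability, then the reputation update. The lattice size and step count are far below the paper's, and all names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(L=20, b=1.15, delta=1.5, alpha=3.0, K=0.1, steps=50):
    """One toy-scale run: returns the final fraction of cooperators."""
    strat = rng.integers(0, 2, size=(L, L))          # 1 = C, 0 = D, equal probability
    rep = np.zeros((L, L))                           # R_i = 0 initially
    shifts = [(0, 1), (0, -1), (1, 0), (-1, 0)]      # von Neumann neighbors, k = 4

    def payoff_field(s):
        # Weak PDG: C meeting C pays 1, D exploiting C pays b, all else 0.
        pay = np.zeros_like(s, dtype=float)
        for dx, dy in shifts:
            n = np.roll(np.roll(s, dx, axis=0), dy, axis=1)  # periodic boundaries
            pay += np.where((s == 1) & (n == 1), 1.0, 0.0)
            pay += np.where((s == 0) & (n == 1), b, 0.0)
        return pay

    for _ in range(steps):
        pay = payoff_field(strat)
        pick = rng.integers(0, 4, size=(L, L))       # each player's reference neighbor
        new = strat.copy()
        for idx, (dx, dy) in enumerate(shifts):
            nb_s = np.roll(np.roll(strat, dx, axis=0), dy, axis=1)
            nb_p = np.roll(np.roll(pay, dx, axis=0), dy, axis=1)
            nb_r = np.roll(np.roll(rep, dx, axis=0), dy, axis=1)
            # Reputation-shifted Fermi rule, as in formula (1).
            prob = 1.0 / (1.0 + np.exp((pay - nb_p) / (4 * K) - nb_r))
            take = (pick == idx) & (rng.random((L, L)) < prob)
            new[take] = nb_s[take]                   # synchronous update
        strat = new
        # Reputation: +delta for cooperators, -delta for defectors, clamped.
        rep = np.clip(rep + np.where(strat == 1, delta, -delta), -alpha, alpha)
    return strat.mean()                              # fraction of cooperators

rho_c = simulate()
assert 0.0 <= rho_c <= 1.0
```

In a production run one would enlarge the lattice, run tens of thousands of steps and average the last ones over independent realizations, as described above.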


  1. 1.

    Jain, S. & Krishna, S. A model for the emergence of cooperation, interdependence and structure in evolving networks. Proc. Natl. Acad. Sci. USA 98, 543–547 (2001).

  2. 2.

    Ohtsuki, H., Hauert, C., Lieberman, E. & Nowak, M. A. A simple rule for the evolution of cooperation on graphs and social networks. Nature 441, 502–505 (2006).

  3. 3.

    Brandt, H., Hauert, C. & Sigmund, K. Punishment and reputation in spatial public goods games. Proc. R. Soc. Lond. B 270, 1099–1104 (2003).

  4. 4.

    Xia, C. Y., Meng, X. K. & Wang, Z. Heterogeneous coupling between interdependent lattices promotes the cooperation in the prisoner’s dilemma game. PloS ONE 10, e0129542 (2015).

  5. 5.

    Nowak, M. A. Five rules for the evolution of cooperation. Science 314, 1560–1563 (2006).

  6. 6.

    Santos, F. C., Santos, M. D. & Pacheco, J. M. Social diversity promotes the emergence of cooperation in public goods games. Nature 454, 213–216 (2008).

  7. 7.

    Wu, Y., Zhang, Z. & Chang, S. Effect of self-interaction on the evolution of cooperation in complex topologies. Physica A 481, 191–197 (2017).

  8. 8.

    Wu, Y., Chang, S., Zhang, Z. & Deng, Z. Impact of social reward on the evolution of the cooperation behavior in the complex networks. Sci. Rep. 7, 41076 (2017).

  9. 9.

    Perc, M. Coherence resonance in a spatial prisoner’s dilemma game. New J. Phys. 8, 22 (2006).

  10. 10.

    Wang, Z., Wang, L., Yin, Z. Y. & Xia, C. Y. Inferring reputation promotes the evolution of cooperation in spatial social dilemma games. PLoS ONE 7, e40218 (2012).

  11. 11.

    Doebeli, M. & Hauert, C. Models of cooperation based on the prisoner’s dilemma and the snowdrift game. Ecol. Lett. 8, 748–766 (2005).

  12. 12.

    Gómez-Gardeñes, J., Reinares, I., Arenas, A. & Floría, L. M. Evolution of cooperation in multiplex networks. Sci. Rep. 2, 620 (2012).

  13. 13.

    Hauert, C. & Doebeli, M. Spatial structure often inhibits the evolution of cooperation in the snowdrift game. Nature 428, 643–646 (2004).

  14. 14.

    Hamilton, W. D. The genetical evolution of social behaviour. ii. J. Theor. Biol. 7, 17–52 (1964).

  15. 15.

    Trivers, R. L. The evolution of reciprocal altruism. Q. Rev. Biol. 46, 35–57 (1971).

  16. 16.

    Panchanathan, K. & Boyd, R. Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature 432, 499–502 (2004).

  17. 17.

    Chen, X., Schick, A., Doebeli, M., Blachford, A. & Wang, L. Reputation-based conditional interaction supports cooperation in well-mixed prisoner’s dilemmas. PloS ONE 7, e36260 (2012).

  18. 18.

    Traulsen, A. & Nowak, M. A. Evolution of cooperation by multilevel selection. Proc. Natl. Acad. Sci. USA 103, 10952–10955 (2006).

  19. 19.

    Vukov, J., Szabó, G. & Szolnoki, A. Cooperation in the noisy case: prisoner’s dilemma game on two types of regular random graphs. Phys. Rev. E 73, 067103 (2006).

  20. Szolnoki, A., Vukov, J. & Szabó, G. Selection of noise level in strategy adoption for spatial social dilemmas. Phys. Rev. E 80, 056112 (2009).

  21. Mao, Y., Xu, X., Rong, Z. & Wu, Z.-X. The emergence of cooperation-extortion alliance on scale-free networks with normalized payoff. EPL 122, 50005 (2018).

  22. Li, X. et al. Punishment diminishes the benefits of network reciprocity in social dilemma experiments. Proc. Natl. Acad. Sci. USA 115, 30–35 (2018).

  23. Jiménez, R., Lugo, H., Cuesta, J. A. & Sánchez, A. Emergence and resilience of cooperation in the spatial prisoner’s dilemma via a reward mechanism. J. Theor. Biol. 250, 475–483 (2008).

  24. Helbing, D., Szolnoki, A., Perc, M. & Szabó, G. Punish, but not too hard: how costly punishment spreads in the spatial public goods game. New J. Phys. 12, 083005 (2010).

  25. Wang, Z., Xia, C. Y., Meloni, S., Zhou, C. S. & Moreno, Y. Impact of social punishment on cooperative behavior in complex networks. Sci. Rep. 3, 3055 (2013).

  26. Imhof, L. A., Fudenberg, D. & Nowak, M. A. Tit-for-tat or win-stay, lose-shift? J. Theor. Biol. 247, 574–580 (2007).

  27. Baek, S. K. & Kim, B. J. Intelligent tit-for-tat in the iterated prisoner’s dilemma game. Phys. Rev. E 78, 011125 (2008).

  28. Nowak, M. A. & Sigmund, K. A strategy of win-stay, lose-shift that outperforms tit-for-tat in the prisoner’s dilemma game. Nature 364, 56–58 (1993).

  29. Chen, X., Fu, F. & Wang, L. Promoting cooperation by local contribution under stochastic win-stay-lose-shift mechanism. Physica A 387, 5609–5615 (2008).

  30. Amaral, M. A., Wardil, L., Perc, M. & da Silva, J. K. Stochastic win-stay-lose-shift strategy with dynamic aspirations in evolutionary social dilemmas. Phys. Rev. E 94, 032317 (2016).

  31. Nowak, M. A. & May, R. M. Evolutionary games and spatial chaos. Nature 359, 826–829 (1992).

  32. Perc, M., Gómez-Gardeñes, J., Szolnoki, A., Floría, L. M. & Moreno, Y. Evolutionary dynamics of group interactions on structured populations: a review. J. R. Soc. Interface 10, 20120997 (2013).

  33. Perc, M. & Szolnoki, A. Coevolutionary games – a mini review. BioSystems 99, 109–125 (2010).

  34. Poncela, J., Gómez-Gardeñes, J., Floría, L. M. & Moreno, Y. Robustness of cooperation in the evolutionary prisoner’s dilemma on complex networks. New J. Phys. 9, 184 (2007).

  35. Du, W. B., Cao, X. B., Liu, R. R. & Wang, Z. Effects of inertia on evolutionary prisoner’s dilemma game. Commun. Theor. Phys. 58, 451–455 (2012).

  36. Wu, Y., Zhang, B. & Zhang, S. Probabilistic reward or punishment promotes cooperation in evolutionary games. Chaos Solitons Fractals 103, 289–293 (2017).

  37. Cardillo, A., Gómez-Gardeñes, J., Vilone, D. & Sánchez, A. Coevolution of strategies and update rules in complex prisoner’s dilemma networks. New J. Phys. 12, 103034 (2010).

  38. Szolnoki, A. & Szabó, G. Cooperation enhanced by inhomogeneous activity of teaching for evolutionary prisoner’s dilemma games. EPL 77, 30004 (2007).

  39. Szolnoki, A., Perc, M. & Szabó, G. Diversity of reproduction rate supports cooperation in the prisoner’s dilemma game on complex networks. Eur. Phys. J. B 61, 505–509 (2008).

  40. Nowak, M. A. & Sigmund, K. Evolution of indirect reciprocity by image scoring. Nature 393, 573–577 (1998).

  41. Cong, R., Wu, B., Qiu, Y. & Wang, L. Evolution of cooperation driven by reputation-based migration. PLoS ONE 7, e35776 (2012).

  42. Fu, F., Hauert, C., Nowak, M. A. & Wang, L. Reputation-based partner choice promotes cooperation in social networks. Phys. Rev. E 78, 026117 (2008).

  43. Wu, Z.-X., Rong, Z. & Holme, P. Diversity of reproduction time scale promotes cooperation in spatial prisoner’s dilemma games. Phys. Rev. E 80, 036106 (2009).

  44. Rong, Z., Wu, Z.-X., Hao, D., Chen, M. Z. Q. & Zhou, T. Diversity of timescale promotes the maintenance of extortioners in a spatial prisoner’s dilemma game. New J. Phys. 17, 033032 (2015).

  45. Rong, Z., Wu, Z.-X. & Wang, W.-X. Emergence of cooperation through coevolving time scale in spatial prisoner’s dilemma. Phys. Rev. E 82, 026101 (2010).

  46. Rong, Z., Wu, Z.-X. & Chen, G. Coevolution of strategy-selection time scale and cooperation in spatial prisoner’s dilemma game. EPL 102, 68005 (2013).

  47. Hauert, C. Replicator dynamics of reward & reputation in public goods games. J. Theor. Biol. 267, 22–28 (2010).

  48. Ohtsuki, H. & Iwasa, Y. How should we define goodness? Reputation dynamics in indirect reciprocity. J. Theor. Biol. 231, 107–120 (2004).

  49. Semmann, D., Krambeck, H.-J. & Milinski, M. Strategic investment in reputation. Behav. Ecol. Sociobiol. 56, 248–252 (2004).

  50. Li, Y. The evolution of reputation-based partner-switching behaviors with a cost. Sci. Rep. 4, 5957 (2014).

  51. Fehr, E. Human behaviour: don’t lose your reputation. Nature 432, 449–450 (2004).

  52. Huang, K., Zheng, X., Li, Z. & Yang, Y. Understanding cooperative behavior based on the coevolution of game strategy and link weight. Sci. Rep. 5, 14783 (2015).

  53. Chen, W., Wu, T., Li, Z. & Wang, L. Coevolution of aspirations and cooperation in spatial prisoner’s dilemma game. J. Stat. Mech. 2015, P01032 (2015).



Acknowledgements
This project was supported in part by the National Basic Research Program (2012CB955804), the Major Research Plan of the National Natural Science Foundation of China (91430108), the National Natural Science Foundation of China (11771322), the Major Program of Tianjin University of Finance and Economics (ZD1302) and the Graduate School of Tianjin University of Finance and Economics (2017TCB06). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

S.Z. and Z.Z. initiated the idea. Z.Z., Y.W. and Y.L. built the model. S.Z., Z.Z., Y.W. and Y.X. performed analysis. All authors contributed to the scientific discussion and revision of the article.

Correspondence to Shuhua Zhang.

Ethics declarations

Competing Interests

The authors declare no competing interests.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit
