Imitation dynamics on networks with incomplete information

Imitation is an important learning heuristic in animal and human societies. Previous explorations report that the fate of individuals with cooperative strategies is sensitive to the protocol of imitation, leading to a conundrum about how different styles of imitation quantitatively impact the evolution of cooperation. Here, we take a different perspective on the personal and external social information required by imitation. We develop a general model of imitation dynamics with incomplete information in networked systems, which unifies classical update rules including the death-birth and pairwise-comparison rule on complex networks. Under pairwise interactions, we find that collective cooperation is most promoted if individuals neglect personal information. If personal information is considered, cooperators evolve more readily with more external information. Intriguingly, when interactions take place in groups on networks with low degrees of clustering, using more personal and less external information better facilitates cooperation. Our unifying perspective uncovers intuition by examining the rate and range of competition induced by different information situations.


I. INTRODUCTION
Quantitatively understanding the evolution of collective behaviour in animal and human societies is a fundamental question in modern science [1][2][3]. Evolutionary game theory provides a prominent mathematical metaphor to quantify behavioural strategies of individuals, related payoffs, and how they change under the influence of natural selection [4][5][6][7][8][9]. Unlike in unstructured populations, where natural selection favours free riders [10][11][12], network structure serves as a basic mechanism that promotes cooperation [13] through non-random and local interactions [14][15][16][17][18]. The basic intuition dates back to Hamilton [19,20], who argued that the "viscosity" arising from limited (i.e., local) dispersal leads to altruists benefiting from proximity to genetic relatives. This intuition has been profoundly influential in evolutionary theory, and it is partially responsible for our current understanding of how cooperation evolves in networked systems.
When scrutinizing the evolution of collective cooperation on networks, researchers find that one of the key factors determining the fate of cooperation is the update rule, i.e., the rule that specifies how individuals change their strategies over time [14,17,21]. Indeed, network structures and update rules are two sides of the same coin, with the former acting as the substrate and the latter driving the evolution of the entire system. Imitation-based update rules are commonly used in previous studies, since imitating successful peers via social comparison is an important learning heuristic in both animal and human societies [22,23]. Intriguingly, previous studies have shown that whether cooperation evolves depends sensitively on the protocol of imitation: forgoing one's own strategy and imitating successful neighbours by comparing all neighbours' payoffs (the so-called "death-birth" update rule) allows cooperation to evolve if the benefit-to-cost ratio exceeds a positive threshold; comparing the payoff of a random neighbour with one's own and imitating based on this payoff difference (the so-called "pairwise-comparison" update rule) instead makes cooperation disfavoured by natural selection irrespective of the benefit-to-cost ratio [14,15,24]. Such qualitatively different results induced by distinct mechanisms of imitation raise important questions about how these mechanisms influence the evolution of cooperation. So far, few studies provide clear and satisfactory answers.
To address these questions, we start by examining the information required by different imitation-based update rules. Two kinds of information are considered: personal and external social information. The former refers to an individual's own strategies and payoffs, while the latter refers to those of one's neighbours. From this perspective, the aforementioned two update rules (and other classical imitation-based rules as well) can be clearly differentiated: the death-birth update rule requires no personal information but full social information; the pairwise-comparison update rule needs both personal and social information and weights them equally. This suggests that the amount of personal and social information required, together with the relative weighting of personal to social information, may serve as indicators to quantify the impact of imitation-based update rules on the evolution of cooperation. To undertake a thorough investigation, we propose a new class of imitation-based update rules called "imitation with incomplete social information". Under this rule, the amount of personal and social information and the relative importance of personal to social information during strategy updating are all tunable, covering a wide range of information requirements for strategy updating and recovering classical imitation-based update rules as special cases.
Employing this new class of update rules, we first derive analytical conditions for cooperation to prevail over defection in pairwise social dilemmas. These conditions reveal that the evolution of cooperation fares best when individuals ignore their own information and instead imitate more successful social peers, irrespective of the number of peers (at least two) used for comparison. In group social dilemmas, the same result holds if the degree of clustering in the network is sufficiently high; otherwise, it is better to rely more on personal information and use less social information. This finding arises mainly from the low overlap between individuals' first-order and second-order neighbours, which makes it easier for defectors to exploit cooperators through group interactions when the network is sparse. Finally, we demonstrate that our findings are robust to heterogeneity in network structure as well as to the individualized utilization of social information. Our results thus highlight the degree to which social information affects the evolution of collective cooperation in networked systems.

A. Games and payoffs
We are interested in conflicts of interest arising in groups. Consider a group of size n, consisting of individuals of type C ("cooperator" for example) or D ("defector" for example). Suppose that f_C(n_C) and f_D(n_C) are the respective payoffs to types C and D when there are n_C total cooperators in the group. A simple but highly influential model of a social dilemma was proposed by Dawes [29] as possessing two properties: all individuals prefer widespread cooperation to widespread defection (f_C(n) > f_D(0)), yet each individual obtains a higher payoff from defecting than from cooperating, whatever the others do. Here, we consider two kinds of these social dilemmas: a donation game, which involves pairwise interactions, and a public goods game, which involves interactions in larger groups.

The networked system we consider consists of N individuals arranged on the nodes of a network, whose structure represents the relationships between individuals. At each time step, individuals interact with neighbours and obtain payoffs from these interactions. In the donation game, every individual interacts with each neighbour separately [12,14,18]. Cooperators (C) pay a cost of c to provide a neighbour with a benefit b, while defectors (D) pay no costs and provide no benefits. This pairwise "donation game" can be summarised by the payoff matrix [15,30,31]

(C, C) = b − c,  (C, D) = −c,  (D, C) = b,  (D, D) = 0,

where each entry gives the payoff to the row player against the corresponding column player. Instead of interacting with each neighbour separately, each game could consist of group interactions, wherein every individual organises a multi-player game [32,33] involving all of its neighbours. If an individual has d neighbours, then they participate in d + 1 group interactions, with one initiated by the focal individual and d initiated by the neighbours. A cooperator pays a cost c in each game, and the total costs from cooperators are then enhanced by a multiplication factor and divided among all members of the group. When there are n_C (0 ⩽ n_C ⩽ n) cooperators in a group of n individuals, the respective payoffs for defectors and cooperators are

f_D(n_C) = r c n_C / n  and  f_C(n_C) = r c n_C / n − c,

where r is the multiplication factor for the public good. When 1 < r < n, the players in this game are confronted with a social dilemma, wherein the strategy to maximise individual payoffs (namely, defection) deviates from the collectively optimal choice (namely, cooperation).
In either kind of social dilemma, an individual i's payoff, u_i, is calculated as the average of their payoffs over all interactions. This payoff is then transformed into fitness, F_i, by the mapping F_i = e^{δu_i}, where δ ⩾ 0 is the intensity of selection [12,15,18]. The selection intensity reflects the contribution of game interactions to the fitness of i, which we assume to be weak. The case of neutral drift corresponds to δ = 0, where cooperators and defectors are indistinguishable from the standpoint of reproductive success.
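To make the payoff and fitness definitions concrete, here is a minimal sketch in Python (the function names are ours, not from the paper; the donation-game payoff is averaged over a node's pairwise games, and the public-goods payoff follows the sharing rule described above):

```python
import math

def donation_payoff(my_coop, neighbor_coops, b, c):
    """Average payoff from pairwise donation games with each neighbour.
    my_coop: 1 if the focal individual cooperates, else 0.
    neighbor_coops: list of neighbours' strategies (1 = C, 0 = D)."""
    per_game = [b * nb - c * my_coop for nb in neighbor_coops]
    return sum(per_game) / len(per_game)

def pgg_payoff(my_coop, n_cooperators, n, r, c=1.0):
    """Payoff from one public goods game in a group of size n that
    contains n_cooperators cooperators (focal individual included)."""
    share = r * c * n_cooperators / n   # equal share of the enhanced pot
    return share - c * my_coop          # cooperators additionally paid c

def fitness(avg_payoff, delta):
    """Exponential payoff-to-fitness mapping F_i = exp(delta * u_i)."""
    return math.exp(delta * avg_payoff)

# A cooperator with neighbours (C, C, D), b = 3, c = 1:
# per-game payoffs are (2, 2, -1), so the average is 1.
assert donation_payoff(1, [1, 1, 0], 3, 1) == 1.0
# At delta = 0 (neutral drift), every individual has fitness 1.
assert fitness(1.0, 0.0) == 1.0
```

Note that the neutral-drift check makes the role of δ explicit: selection only differentiates strategies when δ > 0.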

B. Imitation dynamics
Imitation-based rules are commonly used in exploring the evolution of cooperation on complex networks [5,14,18,30,34]. Instead of viewing behaviour change as a result of death, birth, and replacement, imitation models have the property that agents remain alive but can periodically copy the behaviour of others. Popular update rules such as "death-birth" (DB) and "imitation" (IM) [7,14,17,18,24,30,35,36], as well as "pairwise-comparison" (PC) [15,21,27,28], all have natural interpretations in terms of strategy revision in a cultural context [34]. However, these update rules (Fig. 1a-c) lie at the extreme ends of a spectrum, in that they assume an individual has access to information about either all neighbours (DB and IM, the distinction being that imitation is not compulsory under IM) or only one neighbour (PC) when deciding whether (and whom) to imitate.
After interactions, a single individual i is selected uniformly at random to update its strategy. The set of neighbours of i whose information (including strategies and payoffs at the current time step) is accessible to i is denoted by Ω_i. We note that j ∈ Ω_i only if j is a neighbour of i, so |Ω_i| ⩽ d_i, where d_i is the degree of i. If this inequality is strict, then i has incomplete social information during imitation. Once social information is determined, the relative importance of i's personal information is quantified by θ ∈ [0, 1). For any j ∈ Ω_i, the weight associated to j is (1 − θ)/|Ω_i|, so the total weight associated to all neighbours for comparison is 1 − θ. Under the "imitation with incomplete social information" (abbreviated as "IMisi") update rule (Fig. 1d), i imitates the strategy of j ∈ Ω_i with probability

[(1 − θ) F_j / |Ω_i|] / [θ F_i + (1 − θ) (Σ_{k∈Ω_i} F_k) / |Ω_i|].   (3)

Otherwise, individual i does not imitate anyone and retains its own strategy. For complete social information (|Ω_i| = d_i), IMisi reduces to the canonical DB (θ = 0) and IM (θ = 1/(d_i + 1)) update rules. PC corresponds to a single accessible neighbour weighted equally with oneself (|Ω_i| = 1 and θ = 1/2), for which Eq. (3) recovers the Fermi-type imitation probability F_j/(F_i + F_j). For the parameters of interest, this update rule defines an absorbing Markov chain, which eventually ends in a state where all individuals adopt the same strategy (either all-C or all-D). As a result, we consider the fixation probability of cooperators (resp. defectors), ρ_C (resp. ρ_D), which represents the probability that one randomly-placed cooperator (resp. defector) invades and replaces a population of defectors (resp. cooperators). The metric we use to evaluate whether selection favours cooperators over defectors is the value of ρ_C − ρ_D. Specifically, cooperators are favoured relative to defectors [11,14,18] if ρ_C > ρ_D. Under neutral drift (δ = 0), both ρ_C and ρ_D take the value 1/N. We note that for the class of imitation dynamics we consider, under weak selection, the condition ρ_C > ρ_D is equivalent to the commonly-used alternative condition ρ_C > 1/N, which measures the effects of selection on ρ_C relative to its neutral value [14,18,37].
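As a sketch of how these weights translate into update probabilities, the snippet below assumes the fitness-weighted form in which the focal individual carries weight θ and each of the s accessible neighbours carries weight (1 − θ)/s (our reading of the rule described above, not code from the paper); it then checks that DB, IM, and PC emerge as special cases:

```python
def imitation_probabilities(F_i, F_neighbors, theta):
    """Per-neighbour imitation probabilities under a fitness-weighted
    IMisi-style rule: the focal individual carries weight theta, and each
    of the s accessible neighbours carries weight (1 - theta) / s.
    Returns (probs, p_keep): the probability of imitating each neighbour
    and the probability of retaining one's own strategy."""
    s = len(F_neighbors)
    total = theta * F_i + (1 - theta) * sum(F_neighbors) / s
    probs = [(1 - theta) * F_j / s / total for F_j in F_neighbors]
    return probs, theta * F_i / total

# Sanity checks against the classical special cases on a degree-4 node:
F_self, F_nb = 1.0, [1.0, 2.0, 3.0, 4.0]

# DB: theta = 0, full social information -> imitate j with prob F_j / sum(F_k).
probs, keep = imitation_probabilities(F_self, F_nb, theta=0.0)
assert abs(probs[3] - 4.0 / 10.0) < 1e-12 and keep == 0.0

# IM: theta = 1/(d+1) -> every payoff (own included) weighted equally.
probs, keep = imitation_probabilities(F_self, F_nb, theta=1.0 / 5.0)
assert abs(probs[3] - 4.0 / 11.0) < 1e-12

# PC: s = 1, theta = 1/2 -> Fermi-like probability F_j / (F_i + F_j).
probs, keep = imitation_probabilities(F_self, [2.0], theta=0.5)
assert abs(probs[0] - 2.0 / 3.0) < 1e-12
```

The PC check also illustrates why the exponential fitness mapping matters: with F = e^{δu}, the ratio F_j/(F_i + F_j) is exactly the familiar Fermi function of the payoff difference.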
Here, we are primarily interested in sets Ω_i that are chosen randomly, subject to the constraint of having fixed size |Ω_i| = s for some parameter s, which represents the amount of social information. When individuals neglect personal information (θ = 0) and randomly select one neighbour to imitate at each time step (s = 1), the evolutionary process is equivalent to neutral drift and is independent of δ. Therefore, we mainly focus on the cases where either θ > 0 or s > 1. To simplify the expressions we present, we assume that the network is unweighted, undirected, and regular of degree d, with no self-loops. This assumption is not crucial for deriving results on the IMisi rule, but, in line with previous studies [14,24,38], we find it useful for providing intuition about the impact of incomplete information on the evolution of cooperation under the imitation processes we consider. At the conclusion, we briefly consider heterogeneous networks.

C. Pairwise social dilemmas
To investigate the influence of incomplete information on the fate of cooperators, we first consider IMisi for pairwise interactions on regular graphs. For the cases where either θ > 0 or s > 1, we show in Methods that weak selection favours cooperators over defectors whenever b/c > (b/c)* > 0, where the critical ratio (b/c)* is given in Eq. (4). To investigate how θ and s affect (b/c)*, we start from the scenario where individuals ignore their own information during strategy updating (θ = 0). In this case, Eq. (4) reduces to (b/c)* = d(N − 2)/(N − 2d) for any 1 < s ⩽ d, and we find that the amount of social information used during imitation has no impact on the fate of cooperators (Fig. 2a). Note that the canonical DB rule [14,24,30,38] is a special case of IMisi with s = d and θ = 0 (Fig. 1e), and for large populations we obtain a well-known rule [14], namely lim_{N→∞} (b/c)* = d. When individuals treat all information the same (including both personal and social information), meaning θ = 1/(s + 1), we obtain the critical ratio given in Eq. (5). Even with complete social information, such as in standard DB, a remarkable property of pairwise interactions on regular networks is that the critical benefit-to-cost ratio depends on only the size, N, and the degree, d, of the network. This critical ratio was first derived for vertex-transitive graphs [24], which look the same from every vertex, and subsequently extended to regular graphs [38]. For IMisi, too, we find that pairwise interactions give a critical ratio that depends on just N, d, θ, and s (Eq. (4)). However, when considering group interactions, an individual's payoff can be affected by both one- and two-step (i.e., first- and second-order) neighbours, which suggests that clustering in the network plays a role in the evolution of cooperation. To simplify the expressions we report for group interactions, we now assume that the network is vertex-transitive of degree d, a slightly stronger notion of symmetry than regularity.
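For the θ = 0 case, the critical ratio quoted above is easy to evaluate numerically (a small illustration; the helper name is ours):

```python
def critical_bc_ratio_theta0(N, d):
    """Critical benefit-to-cost ratio (b/c)* = d(N - 2)/(N - 2d) when
    personal information is ignored (theta = 0, any 1 < s <= d)."""
    assert N > 2 * d, "a finite positive threshold requires N > 2d"
    return d * (N - 2) / (N - 2 * d)

# Independent of s, and approaching the famous b/c > d rule as N grows:
print(critical_bc_ratio_theta0(100, 4))    # ≈ 4.26
print(critical_bc_ratio_theta0(10**6, 4))  # ≈ 4.00
```

The second call illustrates the limit lim_{N→∞} (b/c)* = d stated in the text.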
D. Group social dilemmas

In the public goods game, we find that cooperators evolve whenever r > r*, where the critical multiplication factor r* is given in Eq. (6). Here, C is the global clustering coefficient of the graph, which quantifies the overlap of first-order and second-order neighbours (for an explicit expression, see Methods). As shown in Fig. 3i, the critical multiplication factor, r*, decreases as the clustering coefficient C increases, which means that highly clustered network structures generally promote the evolution of cooperation by reducing the barriers for selection to favour cooperators.
Although the critical ratio of Eq. (6) looks quite different from that of Eq. (4), there are some notable qualitative similarities between the two kinds of interactions. For example, when the relative importance of personal information takes extreme values (θ → 0 or θ → 1), the critical multiplication factor, r*, is independent of s. Despite these similarities, our findings for group interactions differ when 0 < θ < 1. Indeed, we find that there exists a critical threshold C* for the clustering coefficient, which satisfies both ∂r*/∂s < 0 if and only if C > C* and ∂r*/∂θ > 0 if and only if C > C*. What this means is that, when C > C*, the more social information that is used and the less that individuals weight their own information, the easier it is for cooperation to be favoured over defection (Fig. 3g). However, when C < C*, the results are reversed: the more social information that is used and the less that individuals weight their own information, the harder it is for cooperation to be favoured over defection (Fig. 3h). Intuitively, if the network has a low level of clustering, then cooperative clusters are not robust and are easily exploited by defectors. In this case, it is better for cooperators to retain their strategy in order to increase the likelihood of survival during strategy competition to fill a vacancy. To verify our theoretical results, we perform numerical simulations on two graphs with different clustering coefficients: C = 0.5 > C* (Fig. 3a) and C = 0 < C* (Fig. 3b). The effect of social information on cooperation is completely the opposite for large and small clustering coefficients (Fig. 3c, d, e, and f). When C = 0, increasing social information and decreasing the weight of personal information increases r* and thus impedes the evolution of cooperation, but this effect is reversed when C = 0.5. In addition to our explorations on regular graphs, we confirm that our findings are robust to heterogeneous network structures such as scale-free [55] and small-world networks [54] (see Methods and Supplementary Fig. S2). Moreover, we also numerically demonstrate the existence of the critical clustering coefficient determining the impact of social information in Supplementary Note 4 and Supplementary Fig. S8.
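The global clustering coefficient invoked here can be computed directly from an adjacency structure; the sketch below uses the standard triples-based definition (the paper's explicit expression is in its Methods, so treat this as a generic stand-in):

```python
from itertools import combinations

def global_clustering(adj):
    """Global clustering coefficient: (3 * number of triangles) divided by
    the number of connected triples (paths of length two). It measures the
    overlap between first- and second-order neighbourhoods.
    adj: dict mapping node -> set of neighbours (undirected, no self-loops)."""
    triangles = 0  # counted once per centre vertex, i.e. 3x per triangle
    triples = 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        triples += k * (k - 1) // 2  # connected triples centred at v
        triangles += sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return triangles / triples if triples else 0.0

# Complete graph on 4 nodes: every triple closes, so C = 1.
K4 = {v: {u for u in range(4) if u != v} for v in range(4)}
assert global_clustering(K4) == 1.0

# 4-cycle: no triangles at all, so C = 0.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert global_clustering(C4) == 0.0
```

The two test graphs mirror the simulation setup above: a triangle-rich structure (C above the threshold) versus a triangle-free one (C = 0).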

E. The rate and range of competition induced by the IMisi update rule
Intuitively, the evolutionary dynamics generated by the IMisi rule can be understood as involving two competitive relationships.First, when an individual is selected to change its strategy, the focal individual competes with its neighbours to avoid imitation and retain its strategy.If it fails, the neighbours then compete to be the role model for imitation.
Regarding the evolutionary process, the spread of cooperation can be understood using a random walk on networks. Denote by u^(n) the expected payoff to an individual at the end of an n-step random walk from a cooperator (see Supplementary Note 2.3). Theoretically, we show that weak selection favours cooperators whenever

2θ (s/d) [u^(0) − u^(1)] + (1 − θ) ((s − 1)/(d − 1)) [u^(0) − u^(2)] > 0.   (8)

The first term in this summation, weighted by θ, is associated to competition between one-step neighbours. The weight s/d is the probability that a fixed neighbour is part of a focal individual's information set. The factor of 2 arises from the two kinds of competition between one-step neighbours: the first occurs when a cooperator is chosen as the focal individual and competes to retain its strategy; the second occurs when a neighbour of the focal individual is a cooperator and is included in the focal individual's social information set. The remaining term in Eq. (8), weighted by 1 − θ, is associated to competition between two-step neighbours. Given a focal individual and a neighbour chosen for comparison, the probability that a fixed neighbour among the remaining nodes is part of the information set is (s − 1)/(d − 1). Competition between these neighbours can be understood by placing a cooperator at one location and comparing the respective payoffs of the two players. Fig. 4 illustrates the selection condition of Eq. (8).
It is difficult for cooperators to prevail in competition with one-step neighbours. Specifically, the expected payoff of a focal cooperator is always less than that of a random first-order neighbour, namely u^(0) < u^(1), because the first-order neighbours of the focal cooperator always have at least one cooperative neighbour. However, competition with second-order neighbours is the key to the success of the evolution of cooperation. A cooperator succeeds in such competition when its neighbour starts to cooperate: the payoff of the focal cooperator then increases, and an aggregation of cooperators forms. Hence, increasing the relative weight of competition with second-order neighbours can promote cooperation.
An individual's personal information and the number of social peers account for the range and rate of competition. Individuals may compete with first-order neighbours, second-order neighbours, or both (Fig. 4), depending on the information encoded in θ and s. As Eq. (8) shows, no competition occurs under neutral drift (θ = 0, s = 1). If individuals neglect personal information and consider more than one piece of social information (θ = 0 and s > 1), individuals compete only with their second-order neighbours for expansion, implying that the amount of social information has no impact on the critical value (b/c)* (Figs. 2a, 3c, 3d). Otherwise (θ > 0), if individuals consult only one piece of social information (s = 1) or depend almost exclusively on their own information (θ → 1), individuals compete only with their first-order neighbours, which is harmful to the evolution of cooperation given that u^(0) < u^(1). In the remaining conditions, with θ > 0 and s > 1, individuals compete with both first-order and second-order neighbours, and increasing the amount of social information s while decreasing the weight of personal information θ raises the relative weight of competition with second-order neighbours, namely (1 − θ)(s − 1)/(d − 1), compared to that with first-order neighbours, 2θs/d. This explains why using less personal information and more social information better facilitates cooperation.
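The trade-off described in this paragraph can be summarised numerically; the weights 2θs/d and (1 − θ)(s − 1)/(d − 1) are taken verbatim from the text, while the helper itself is our illustration:

```python
def competition_weights(theta, s, d):
    """Relative weights of first- and second-order competition under IMisi:
    first-order weight 2*theta*s/d, second-order weight
    (1 - theta)*(s - 1)/(d - 1), as read off the selection condition."""
    first = 2 * theta * s / d
    second = (1 - theta) * (s - 1) / (d - 1)
    return first, second

d = 5
# theta = 0, s > 1: only second-order competition remains (DB-like) ...
assert competition_weights(0.0, 3, d)[0] == 0.0
# ... while s = 1 leaves only the (cooperation-hindering) first-order term.
assert competition_weights(0.5, 1, d)[1] == 0.0
# More social information and less personal information shift the balance
# toward second-order competition:
f1, s1 = competition_weights(0.4, 2, d)
f2, s2 = competition_weights(0.2, 4, d)
assert s2 / f2 > s1 / f1
```

The final assertion is exactly the qualitative claim above: raising s and lowering θ increases the relative weight of second-order competition.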
Our findings shed light on the well-known, perplexing result [15,17] that regular networks promote the evolution of cooperation under DB but not under PC. Competing only with first-order neighbours is what makes PC fail to promote cooperation. In contrast, the expected payoff of a focal cooperator can exceed that of a random second-order neighbour under DB as long as b/c is above a threshold. The nature of this threshold, as it depends on the amount of social information, the relative weightings, and the network structure, is captured by Eq. (4) in donation games and by Eq. (6) in public goods games.

F. Heterogeneity in external information
So far, we have explored the scenario in which different individuals use the same amount of external social information (the same value of s). Considering that different individuals may have different abilities for collecting and processing social information, we next consider the scenario of heterogeneous social information. Let s_i denote the number of neighbours that individual i selects at random for comparison. We compare three distributions for s_i (homogeneous, uniform, and Gaussian) in a population of size N = 100 and degree d = 5. Let n(s) be the number of individuals having s_i = s. The homogeneous distribution fixes s_i at 3 for all individuals, i.e., n(3) = 100. For the uniform distribution, we use n(1) = n(2) = n(3) = n(4) = n(5) = 20. For the Gaussian distribution, we use n(3) = 48, n(2) = n(4) = 20, and n(1) = n(5) = 6. In each case, the mean of s_i is 3.
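The three distributions of s_i can be written down explicitly and checked to share the same mean (a small bookkeeping sketch; `expand` is our helper name):

```python
def expand(counts):
    """Expand a {s: n(s)} table into the list of per-individual values s_i."""
    return [s for s, n in sorted(counts.items()) for _ in range(n)]

# The three distributions used in the text (N = 100, d = 5):
homogeneous = expand({3: 100})
uniform     = expand({1: 20, 2: 20, 3: 20, 4: 20, 5: 20})
gaussian    = expand({1: 6, 2: 20, 3: 48, 4: 20, 5: 6})

# All three cover 100 individuals and have mean s_i = 3, so they differ
# only in their spread around that mean.
for dist in (homogeneous, uniform, gaussian):
    assert len(dist) == 100 and sum(dist) / len(dist) == 3.0
```

Only the spread differs between the three cases, which is exactly the quantity the comparison below isolates.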
When individuals do not take their own information into account during strategy updating (θ = 0), we find that the critical benefit-to-cost ratio remains the same across the different distributions of s_i (Fig. 5a, b). This means that, when θ = 0, heterogeneity in social information across individuals does not qualitatively alter the results obtained under the homogeneous distribution. However, if individuals instead consider their own information during strategy updating, heterogeneity in social information usage generally hinders the fixation of cooperation (Fig. 5c, d).
These results again highlight the role that an individual's own information plays in the evolution of cooperation: it acts as a switch. When personal information is neglected during strategy updating, heterogeneity in the usage of social information has no impact on the evolution of cooperation; whenever personal information is considered, such heterogeneity generally inhibits the evolution of cooperation. This switching effect can be intuitively and approximately explained by our previous theoretical analysis. When θ = 0, we have shown that the amount of social information used does not affect (b/c)*. When θ = 1/(s + 1), we see that (b/c)* decreases as s grows whenever 1 ⩽ (N − 2)/(N − 2d) < s ⩽ d (see Eq. (5)), and that the rate of this decrease slows down as s increases. This explains the inhibitory effect of the heterogeneous usage of social information: when the number of neighbours s_i that an individual consults deviates from the average value s, individuals with s_i < s induce an inhibitory effect on cooperators that cannot be counterbalanced by the positive effects of those individuals with s_i > s. This also explains why we observe that the homogeneous distribution is superior to the uniform and Gaussian distributions: a smaller standard error (Fig. 5) indicates that fewer individuals use s_i ̸= s.

III. DISCUSSION
Many classical evolutionary processes in networked systems can be interpreted as intelligent, cultural, and arising from imitation dynamics. Although these processes are abstractions of reality and cannot capture every intricacy of intelligent animal and human behaviour, they are often amenable to mathematical analysis, which yields important insights into how traits spread through systems. In the overwhelming majority of these models, the imitation mechanism lies at an extreme end of the spectrum, involving either complete or very limited external information. Furthermore, they frequently assume that individuals interact with only first-order neighbours. In this study, we have considered a natural family of parametrised update rules, which includes classical imitation processes as special cases. We have analysed this model in terms of general payoff relationships, which allows for the study of traditional social dilemmas with first-order neighbours, like the donation game, as well as group interactions with individuals farther afield, including public goods games. Our framework can be easily extended to investigate imitation dynamics based on nonlinear multi-player games [42,43] and general group interactions [44]. In addition, how different types of personal and external information affect the evolution of cooperation in heterogeneous networks deserves further theoretical investigation. Exploring the independent impacts of personal information and social information is also worthwhile [45].
For the prosperity of altruistic behaviour in donation games, individuals should ignore personal information and rely more on social peers for comparison.The situation is more nuanced in public goods games, as clustering in the network plays a greater role.There, another critical threshold appears for the clustering coefficient.Above this threshold, cooperation is more easily favoured by weighting one's own success less and using more neighbours for comparison.
Below this threshold, these findings are flipped. In fact, the appearance of clustering coefficients is interesting in and of itself, even when restricted to a classical mechanism like DB. Clustering is absent from the analysis of donation games altogether, and our results show that the critical multiplication factor in public goods games is a monotonically decreasing function of the clustering coefficient, reflecting the fact that cooperation in these games is favoured most when there is significant overlap between first- and second-order neighbours. The differing results between pairwise and group interactions are mainly due to the sparsity of connections: with sparse connections, defectors easily exploit cooperators through group interactions, even cooperators inside cooperative clusters. In such settings, it is better for cooperators to weight their own success more when deciding whether to imitate a neighbour.
The first-order competition to resist strategy change is an instance of the so-called "status quo bias" in economics and psychology [46,47]. People tend to keep their present behaviour, especially when they are successful. Our results show that, for the emergence of cooperation, such behaviour is not necessarily conducive to the well-being of the community. Learning from better-performing individuals, represented by second-order competition, is often a more efficient way to promote the spread of altruism, and such a process has been shown to play an important role in human decision-making [48][49][50][51]. Our study provides possible intuition for how coupling this inherent human psychological tendency to incomplete social information influences the emergence of collective cooperation.
The IMisi rule and its selection condition, Eq. (8), raise the question of relationships to classical imitation rules on weighted graphs. For example, under IM dynamics, when an individual itself is weighted by θ_s and its neighbours are weighted by θ_n, the probability that i imitates neighbour j is θ_n h_ij F_j / (θ_s F_i + θ_n Σ_k h_ik F_k), where h_ij stands for the weight of the edge between i and j. Although such an update rule is evidently distinct from that of Eq. (3), it is not immediately obvious that this remains so under the assumption of weak selection. By way of analogy, stochastic payoff schemes can be reduced to deterministic models (i.e., in expectation) under weak selection [34]. In the present model, intriguingly, one cannot generally find weights θ_s and θ_n such that the weak-selection dynamics generated match those of Eq. (3) (see Methods). Therefore, generically, IMisi constitutes a new class of imitation mechanisms.
From a modelling perspective, our approach departs from the standard paradigm of fixing the update rule and varying the system structure. Instead, we fix a class of (regular) networks and study the effects of changing the parameters of the update rule on the evolution of cooperation. This approach is similar in spirit to that of Grafen and Archetti [52], who studied the effects of the range of density dependence on the evolution of altruism, which in turn illuminated why update rules with global competition for reproduction do not favour cooperation while others, involving more localised competition, can. Recently, there has also been a focus on classifying the pertinent update rules for (meta-)populations of fixed structure [53]. Such update rules can involve several steps (birth, death, and migration at various levels), and it is an open problem how each of these steps affects the evolution of density-dependent behaviours. Although the specific motivation for our study is quite different from these earlier works, it fits into the theme of understanding how the microscopic details of reproduction and survival affect the evolutionary dynamics of a population, which will continue to be an important task going forward.

IV. METHODS

A. Notation and payoff calculation
The population consists of N individuals, and its structure is represented by a d-regular graph, G. The state of the population can be represented by a binary vector, x ∈ {0, 1}^N, where x_i = 1 indicates that individual i is a cooperator and x_i = 0 a defector. Let us consider random walks on G in discrete time. For a random walk on the regular graph G, the probability of a one-step walk from node i to node j is p_ij = 1/d if they are connected; otherwise, p_ij = 0. We denote by p_ij^(m) the probability of going from i to j in an m-step random walk. Since the graph is regular, the unique stationary distribution places weight lim_{m→∞} p_ij^(m) = 1/N on node j. Let u_i^(m) be the expected average payoff of an individual at the end of an m-step random walk from individual i. Under pairwise interactions in the donation game, a cooperator pays a cost c to offer its opponent a benefit b, and a defector pays nothing and provides no benefit. Thus, the average payoff is

u_i^(m) = −c x_i^(m) + b x_i^(m+1),   (10)

where x_i^(m) = Σ_{j∈G} p_ij^(m) x_j represents the probability that an individual at the end of an m-step random walk from individual i is a cooperator. Intuitively, the first term in Eq. (10) represents the expected cost incurred by the individuals m steps away if they cooperate. The second term represents the benefits that these individuals receive when their neighbours cooperate. For group interactions, a cooperator pays a cost c in each game, and the total cost from cooperators is then enhanced by a multiplication factor r and divided among all members of the group (i.e., the focal individual who organises the game and its d neighbours). Without loss of generality, here we set c = 1. The corresponding expression for u_i^(m) is given in Eq. (11); for its detailed derivation, please refer to Supplementary Note 2.3.
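Under our reading of Eq. (10), the quantities u_i^(m) can be computed by iterating the one-step transition operator; the sketch below implements this for the donation game on a small graph (a minimal illustration under that assumption, not the paper's code):

```python
def walk_payoffs(adj, x, b, c, max_m=2):
    """Expected donation-game payoff u_i^(m) at the end of an m-step random
    walk from each node i, via u^(m) = -c * x^(m) + b * x^(m+1), where
    x_i^(m) is the probability that an m-step walk from i ends at a
    cooperator. adj: dict node -> list of neighbours; x: strategy vector."""
    n = len(x)

    def step(vec):
        # One application of the random-walk operator P on a regular graph.
        return [sum(vec[j] for j in adj[i]) / len(adj[i]) for i in range(n)]

    xs = [[float(v) for v in x]]
    for _ in range(max_m + 1):
        xs.append(step(xs[-1]))  # x^(m+1) = P x^(m)
    return [[-c * xs[m][i] + b * xs[m + 1][i] for i in range(n)]
            for m in range(max_m + 1)]

# 4-cycle with a single cooperator at node 0, b = 3, c = 1:
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
u = walk_payoffs(adj, [1, 0, 0, 0], b=3, c=1)
# The lone cooperator pays c in each of its games and receives nothing
# (u_0^(0) = -1), while each neighbour receives b in half of its games.
assert u[0][0] == -1.0 and u[0][1] == 1.5
```

Averaging such quantities over a cooperator's position yields the walk payoffs u^(m) that enter the selection condition of Eq. (8).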

B. General condition for the success of cooperators
Let D(x) be the expected instantaneous rate of change in the frequency of strategy C. We have D(x) = Σ_{i∈G} x_i (b_i(x) − d_i(x)), where b_i(x) is the probability that i replaces one of its neighbours and d_i(x) is the probability that it is replaced by its neighbours [18]. Intuitively, D(x) > 0 represents a net increase of cooperators. Under neutral drift (δ = 0), D(x) = 0. Thus, we have D(x) = δ ∂D(x)/∂δ + O(δ²). Consequently, under weak selection (0 < δ ≪ 1), the condition for cooperation to be favoured over defection is

⟨∂D(x)/∂δ⟩_• > 0,    (12)

where ⟨•⟩_• represents the expectation over states arising under neutral drift. Intuitively, Eq. (12) guarantees that the average difference between the birth and death rates of cooperators (x_i = 1) is larger than zero, indicating a net increase in the population of cooperators.
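The claim that D(x) vanishes under neutral drift can be checked numerically. The sketch below computes D(x) = Σ_i x_i (b_i(x) − d_i(x)) for one concrete instantiation: a death-birth-style update on a cycle with fitness F = 1 + δu. Both the fitness mapping and the payoff vector are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def rate_of_change(adj, x, u, delta):
    """D(x) = sum_i x_i * (b_i(x) - d_i(x)) for a death-birth-style update:
    a uniformly random individual is replaced (d_i = 1/N) by a neighbour
    chosen proportionally to fitness F = 1 + delta * u (illustrative)."""
    N = len(x)
    F = 1.0 + delta * u
    b = np.zeros(N)
    for i in range(N):
        for j in np.flatnonzero(adj[i]):   # i competes to replace neighbour j
            b[i] += (1.0 / N) * F[i] / F[np.flatnonzero(adj[j])].sum()
    d = np.full(N, 1.0 / N)                # the replaced site is uniform
    return float(np.sum(x * (b - d)))

N = 5
adj = np.zeros((N, N))
for i in range(N):
    adj[i, (i + 1) % N] = adj[i, (i - 1) % N] = 1
x = np.array([1, 1, 0, 0, 0], dtype=float)
u = np.array([0.5, 0.5, 1.5, 0.0, 1.5])    # illustrative payoff vector

print(rate_of_change(adj, x, u, delta=0.0))   # neutral drift: exactly 0
```

With δ = 0 all fitnesses cancel, so b_i = d_i = 1/N and D(x) = 0 in every state; once δ > 0 the rate is generically nonzero, which is what makes the first-order term δ ∂D/∂δ the relevant quantity under weak selection.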
Based on Eq. (12), we derive a general condition for cooperation to be favoured over defection, given in Eq. (13). The condition is evaluated conditioned on x_i = 1, meaning that individual i is a cooperator. Here u_i^(0) is the payoff of the cooperator, and u_i^(m) (m ≥ 1) is the payoff of a random mth-order neighbour of the cooperator. As demonstrated in Eq. (12), to ensure that the expected instantaneous rate of change of cooperators remains greater than zero, the average payoffs of cooperators must exceed those of their surrounding competitors. Eq. (13) states that for cooperators to be favoured, the net result of the three types of competition that a cooperator engages in should be positive: (i) competition with a random first-order neighbour for not being replaced, with payoff difference u_i^(0) − u_i^(1), occurring with weight θ; (ii) competition with a random first-order neighbour to replace it, again with payoff difference u_i^(0) − u_i^(1), occurring with weight θ; and (iii) competition with one of the second-order neighbours for finally replacing its first-order neighbours, with payoff difference u_i^(0) − u_i^(2), occurring with weight (1 − θ). Here, (s − 1)/(d − 1) is the probability for a second-order neighbour to be randomly selected to participate in the competition, given that the cooperator has already been selected.

C. Condition for success under pairwise and group interactions.
To calculate condition (13), we introduce the coalescing random walk, which is a collection of random walks that step independently until two of them meet [18]. Let τ_ij denote the expected coalescence time between i and j under the discrete-time coalescing random walk. Analogous to other imitation-based update rules, we have τ_ii = 0, and for i ≠ j the coalescence times satisfy the recurrence given in Eq. (14). Suppose i and j are the two ends of a random walk of length m. We let τ^(m) = Σ_{i,j∈G} p_ij^(m) τ_ij / N, which represents the expectation of τ_ij over all possible choices of i and j in the stationary distribution of the random walk. According to a previous study [18], for m_1, m_2 ≥ 0 we obtain the recurrence in Eq. (15). Letting τ⁺_ii = 1/(1 − θ) + Σ_{j∈G} p_ij τ_ij be the expected remeeting time in the discrete-time random walk, we obtain Eq. (16), where p_ii^(m) denotes the probability that an m-step random walk terminates at its starting position i. In particular, for regular graphs, we have τ⁺_ii = N/(1 − θ) and p_ii^(m) = p^(m) for all i ∈ G [18]. For regular graphs with degree d, we then obtain Eq. (17). Substituting Eqs. (15), (16), and (17) into Eq. (12) yields, for pairwise interactions, Eq. (18), and similarly, for group interactions, Eq. (19). Solving ⟨∂D/∂δ⟩_• > 0, we recover conditions (4) and (6) for the success of cooperators.

D. Simulations on heterogeneous networks
We performed simulations under different amounts of social information on two well-known classes of heterogeneous networks: small-world networks [54] and Barabási-Albert networks [55]. For both network classes, the average degree is set to d = 6, and we take the minimum degree of each network to be 3. Due to degree heterogeneity, the number of neighbours may vary across individuals. We perform the simulations for 1 ≤ s ≤ 3 and for the case where individuals know all the social information. Two relative weights of personal information are considered, namely θ = 0 and θ = 1/(s + 1).
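A single IMisi update step underlies all of these simulations. The sketch below follows the weighting described for the IMisi rule (weight θ on the focal individual's own strategy and (1 − θ)/s on each of the s accessible neighbours); the fitness mapping F = 1 + δu and the proportional-imitation form are illustrative assumptions standing in for Eq. (3), which is not reproduced here.

```python
import random

def imisi_probabilities(u_self, u_neighbours, s, theta, delta):
    """Sketch of one IMisi update step: the focal individual weights its own
    strategy by theta and each of s randomly accessible neighbours by
    (1 - theta) / s; weights are multiplied by fitness (assumed F = 1 + delta*u)
    and normalised into probabilities [keep own, imitate 1, ..., imitate s]."""
    accessible = random.sample(u_neighbours, s)   # only s of d neighbours are visible
    w = [theta * (1 + delta * u_self)]
    w += [(1 - theta) / s * (1 + delta * u) for u in accessible]
    z = sum(w)
    return [wi / z for wi in w]

# Under neutral drift (delta = 0) the fitnesses cancel: the focal individual
# keeps its strategy with probability theta and imitates each visible
# neighbour with probability (1 - theta) / s.
probs = imisi_probabilities(u_self=0.5,
                            u_neighbours=[0.5, 1.5, 0.0, 1.5, 0.5],
                            s=3, theta=0.25, delta=0.0)
print(probs)  # -> [0.25, 0.25, 0.25, 0.25]
```

Setting θ = 0 with s = d recovers DB-like updating, while θ = 0.5 with s = 1 recovers PC-like updating, mirroring the unification summarised in Fig. 1e.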
When s > 1 and θ = 0, the critical benefit-to-cost ratio (b/c)* is the same for various amounts of social information (Supplementary Figs. S1a, c and S2a, c). This indicates that if an individual's personal information is neglected, the amount of social information has no impact on the evolution of cooperation. When θ = 1/(s + 1), the critical ratio (b/c)* decreases as s increases, meaning that cooperation is promoted (Supplementary Figs. S1b, d and S2b, d). These results are consistent with our findings on regular graphs.
Compared with our results on regular graphs, heterogeneity does affect the evolution of cooperation. Under the donation game, for regular graphs, (b/c)* = 6.68 when θ = 0, N = 100, and d = 6. Barabási-Albert networks have inhibitory effects on the evolution of cooperation ((b/c)* ≈ 7.2), but small-world networks can slightly promote cooperation ((b/c)* ≈ 6.4). However, this effect of small-world networks is not strong at θ = 1/(s + 1). In the public goods game, small-world networks have larger clustering coefficients [54], which may be the reason why these kinds of networks better promote cooperation.

FIG. 1. Illustration of imitation dynamics with incomplete information. a, Under the death-birth (DB) update rule, an individual i (the focal individual, marked by a black circle) is randomly selected to update its strategy; it forgoes its own strategy and imitates one of its neighbours with a probability proportional to fitness [15]. The link between i and the neighbour that i imitates is highlighted by a bold black line. The orange and blue filled circles represent cooperators and defectors, respectively. b, Under the imitation (IM) update rule, an individual i is selected at random to evaluate its strategy. The individual either keeps its current strategy or imitates a neighbour's strategy with a probability proportional to fitness [25,26]. c, Under the pairwise-comparison (PC) update rule, an individual i is chosen at random to evaluate its strategy, and a neighbouring individual is chosen at random as a role model [27,28]. Individual i either adopts this neighbour's strategy or retains its own, with a probability proportional to fitness. d, Under imitation with incomplete social information (IMisi), only s (of d = 5) neighbours' information is accessible to the focal individual, and the relative importance of i's personal to external social information is quantified by θ (namely, the weights for all accessible neighbours are identical and equal to (1 − θ)/s). Neighbours whose information is not accessible to the focal individual are represented by grey filled circles. Under the IMisi rule, the focal individual may imitate a cooperative or non-cooperative strategy from the s accessible neighbours or keep its own strategy (see Eq. (3) for the corresponding probabilities). e, The IMisi rule is a general imitation-based update rule that unifies classical rules, including DB, IM, and PC, by adjusting the value of θ and the level of social information ϕ = s/d: DB (ϕ = 1, θ = 0), IM (ϕ = 1, θ = 1/(d + 1)), neutral drift (ϕ = 1/d, θ = 0), and PC (ϕ = 1/d, θ = 0.5).

FIG. 3. Effects of incomplete information on the fixation of cooperation in group social dilemmas. We consider two regular networks with different clustering coefficients (C). a, On the graph with C = 0.5, the payoffs of the public pools organised by cooperators are 2r. b, On the graph with C = 0, the payoffs of the groups organised by the cooperators on both sides decrease. We perform simulations of the fixation probability difference ρC − ρD as a function of the multiplication factor r. Markers are from numerical simulations and lines from the corresponding linear curve fitting. When individuals ignore their own information (θ = 0), r* is the same for different amounts of social information s (c, d). The arrow points to the value of r* derived theoretically under weak selection (Eq. (6)). When individuals treat both kinds of information equally (θ = 1/(s + 1)), a small amount of social information s makes r* larger for C = 0.5 > C* (e). The influence is totally reversed when C = 0 < C* (f). We draw the critical r* as a function of the weight of personal information θ (Eq. (6)) for the networks in a and c. As θ goes up, r* increases when C = 0.5 > C* (g) and decreases when C = 0 < C* (h). The critical r* is a decreasing function of the clustering coefficient C for the multi-player game when θ = 0.9 (i). The curves converge when C = C* and then diverge, reversing the influence of the amount of social information s. Here, c = 1, and other parameters are the same as those in Fig. 2.

FIG. 4. Intuition about competition and the evolutionary success of cooperators. The selection condition for cooperators to be favoured relative to defectors involves three kinds of competition, at two ranges. Individuals marked with a black circle are those changing strategy, and individuals marked with a purple circle and linked by purple lines are those competing. a, Conditioned on a cooperator (orange solid circle) being chosen as the focal individual (black circle) to evaluate its strategy, this cooperator competes with a first-order neighbour (purple circle) to retain its strategy. b, Conditioned on a cooperator being a one-step neighbour of the focal individual (black circle), this cooperator competes to be a candidate (purple circle) for imitation. c, Once the focal individual (black circle) decides to imitate some neighbour, a neighbouring cooperator competes with the other neighbours (purple circle) to fill the vacancy. d, Putting these three kinds of competition together, one obtains the selection condition reported in Eq. (8). Here, u^(n) is the expected payoff to an individual at the end of an n-step random walk from a cooperator.

FIG. 2. Effects of incomplete information on the fixation of cooperation in pairwise social dilemmas. Here, we present simulations of the fixation probability difference ρC − ρD of cooperation and defection as a function of the benefit-to-cost ratio, b/c. Markers are from numerical simulations and lines from the corresponding linear curve fitting. The vertical arrows point to the values of (b/c)* derived theoretically under weak selection (Eq. (4)). a, If individuals ignore their own information (θ = 0), then the (b/c)* above which cooperation is favoured is the same for different amounts of social information s > 1. b, When individuals treat social and personal information as equally important (θ = 1/(s + 1)), (b/c)* decreases as s grows. c, We also illustrate (b/c)* as a function of the weight of personal information, θ, for different amounts of social information, s, according to Eq. (4). All curves converge to the same value of (b/c)* as θ → 0, suggesting that cooperation is favoured most when individuals neglect their personal information. Here, we set N = 100, δ = 0.01, and d = 6 in a and b, and d = 15 in c.

FIG. 5. Effects of heterogeneous social information on the fixation of cooperation. For three different distributions of external social information (homogeneous, uniform, and Gaussian), we present the fixation probability difference ρC − ρD for pairwise and group social dilemmas on regular graphs. Markers are from numerical simulations and lines from the corresponding linear curve fitting. The vertical arrows point to the values of (b/c)* and r* derived theoretically. When personal information is not considered (θ = 0), we find that information heterogeneity does not change the critical values (i.e., (b/c)* and r* as shown in Figs. 2 and 3) across the different distributions for pairwise (a) and group (b) interactions. When an individual's personal information is taken into account, the results change (c, d), showing that the homogeneous distribution generates the smallest values of (b/c)* and r*. The mean of the homogeneous, uniform, and Gaussian distributions of social information is 3, and the standard error is 0, 1.41, and 0.90, respectively.