Contextual centrality: going beyond network structure

Centrality is a fundamental network property that ranks nodes by their structural importance. However, the network structure alone may not predict successful diffusion in many applications, such as viral marketing and political campaigns. We propose contextual centrality, which integrates structural positions, the diffusion process, and, most importantly, relevant node characteristics. It nicely generalizes and relates to standard centrality measures. We test the effectiveness of contextual centrality in predicting the eventual outcomes in the adoption of microfinance and weather insurance. Our empirical analysis shows that the contextual centrality of first-informed individuals has higher predictive power than that of other standard centrality measures. Further simulations show that when the diffusion occurs locally, contextual centrality can identify nodes whose local neighborhoods contribute positively. When the diffusion occurs globally, contextual centrality signals whether diffusion may generate negative consequences. Contextual centrality captures more complicated dynamics on networks than traditional centrality measures and has significant implications for network-based interventions.

Individuals, institutions, and industries are increasingly connected in networks where the behavior of one individual entity may generate a global effect [1][2][3] . Centrality is a fundamental network property which captures an entity's ability to impact macro processes, such as information diffusion on social networks 1 , cascading failures in financial institutions 3 , and the spreading of market inefficiencies across industries 2 .
Many interesting studies have found that the structural positions of individual nodes in a network explain a wide range of behaviors and consequences. Degree centrality predicts who is the first to be infected in a contagion 4 . Eigenvector centrality corresponds to the incentives to maximize social welfare 5 . Katz centrality is proportional to one's power in strategic interactions in network games 6 . Diffusion centrality depicts an individual's capability of spreading information in a diffusion 7 . These centrality measures operate similarly, aiming to reach a large crowd via diffusion, and depend solely on the network structure.

arXiv:1805.12204v2 [cs.SI] 4 Mar 2019
However, several pieces of empirical evidence show that reaching a large crowd may generate adverse effects. For example, sales on Groupon, public announcements of popular items on Goodreads, and online video platforms succeed in reaching a large population of customers. However, studies show that these strategies lower online reviews by reaching the population of people who hold negative opinions about the product [8][9][10] . Let us consider two further motivating examples to demonstrate the importance of accounting for node characteristics. Example 1. Viral marketing. During a viral marketing campaign, the marketing department aims to attract more individuals to adopt the focal product. If we have ex-ante information about the customers' likelihood of adoption, we can utilize it to better target individuals who have higher chances of adoption and avoid wasting resources on those who are unlikely to adopt. Example 2. Political campaign. Typical Get-Out-The-Vote (GOTV) campaigns include direct mail, phone calls, and social-network advertisement 11,12 . However, rather than merely aiming to transform nonvoters into voters, a GOTV strategy should target voters who are more likely to vote for the campaigner's candidate.
In this paper, we introduce contextual centrality, which builds upon diffusion centrality proposed in Banerjee et al. and captures relevant node characteristics in the objective of the diffusion 13,7 . Moreover, contextual centrality aggregates these characteristics over one's neighborhood, which is defined by the diffusion process. It generalizes and nests degree, eigenvector, Katz, and diffusion centrality. When the spreadability (the product of the diffusion rate p and the largest eigenvalue λ_1 of the adjacency matrix) and the diffusion period T are large, contextual centrality scales linearly with eigenvector, Katz, and diffusion centrality. The sign of the scale factor is determined by the joint distribution of nodes' contributions to the objective of the diffusion and their corresponding structural positions.
We perform an empirical analysis of the diffusion of microfinance and weather insurance, showing that the contextual centrality of the first-informed individuals better predicts the adoption decisions than that of the other centrality measures mentioned above. Moreover, simulations on synthetic data show how network properties and node characteristics collectively influence the performance of different centrality measures. Further, we illustrate the effectiveness of contextual centrality over a wide range of diffusion rates with simulations on real-world networks and relevant node characteristics in viral marketing and political campaigns.

Contextual centrality

The diffusion process in this paper follows the independent cascade model 14 . It starts from an initial active seed. When node u becomes active, it has a single chance to activate each currently inactive neighbor v with probability P_uv, where P ∈ R^{N×N}. We follow the terminology by Koschutzki to categorize degree, eigenvector, and Katz centrality as reachability-based centrality measures 15 . Reachability-based centrality measures aim to score a node v by the expected number of individuals activated if v is activated initially, s(v, A, P), and hence tend to rank higher those nodes that can reach more nodes in the network. In particular,

s(v, A, P) = Σ_i r_i(v, A, P),  (1)

where r_i(v, A, P) denotes the probability that i is activated if v is initially activated 14,16,17 . In practice, s(v, A, P) is hard to estimate, and different reachability-based centrality measures estimate it in different ways.
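For concreteness, r_i(v, A, P) and s(v, A, P) can be estimated by Monte Carlo simulation of the cascade. The following is an illustrative sketch, not the code used in this study; the function names are ours:

```python
import numpy as np

def simulate_ic(A, P, seed, rng):
    """One run of the independent cascade model; returns the set of activated nodes.

    Each newly activated node u gets a single chance to activate each
    currently inactive neighbor v, succeeding with probability P[u, v]."""
    active = {seed}
    frontier = [seed]
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in np.nonzero(A[u])[0]:
                if v not in active and rng.random() < P[u, v]:
                    active.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return active

def estimate_reach(A, P, seed, runs=2000, rng=None):
    """Monte Carlo estimate of s(v, A, P): the expected number of activated nodes."""
    if rng is None:
        rng = np.random.default_rng(0)
    return float(np.mean([len(simulate_ic(A, P, seed, rng)) for _ in range(runs)]))

# toy path graph 0-1-2-3 with homogeneous diffusion rate p = 0.5;
# the expectation is 1 + 0.5 + 0.25 + 0.125 = 1.875
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(estimate_reach(A, 0.5 * A, seed=0))
```

On a path graph the estimate can be checked analytically, since node i is reached only through the chain of its predecessors; for general graphs the simulation is the fallback the text alludes to.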
Diffusion centrality extends and generalizes these standard centrality measures 13 . It operates on the assumption that the activation probability of an individual i is correlated with the number of times i 'hears' the information originating from the individual to be scored. Diffusion centrality measures how extensively the information spreads as a function of the initial node 13 . In other words, diffusion centrality scores node v by the expected number of times some piece of information originating from v is heard by others within a finite number of time periods T:

s(v, A, P, T) = Σ_i r_i(v, A, P, T),  (2)

where r_i(v, A, P, T) is the expected number of times individual i receives the information if v is seeded. Eq. (2) has at least two advantages over Eq. (1). First, r_i(v, A, P, T) is computationally more efficient to obtain than r_i(v, A, P), which requires tedious simulations. Second, it nests degree, eigenvector, and Katz centrality 7 i .
Both Eq. (1) and (2) implicitly assume that every reached individual contributes equally to the objective of the diffusion. With heterogeneous contributions y, the cascade payoff of initially activating node v is

s_c(v, A, P) = Σ_i y_i r_i(v, A, P).  (3)

Similar to diffusion centrality, we score nodes with the following approximated cascade payoff, s_c(v, A, p, T), with heterogeneous y:

s_c(v, A, p, T) = Σ_i y_i r_i(v, A, p, T).  (4)

This formulation generalizes diffusion centrality and inherits its nice properties in nesting existing reachability-based centrality measures. Moreover, it is easier to compute than Eq. (3) ii . With this scoring function, we now formally propose contextual centrality.

i It is worth noting that Eq. (1) and (2) differ in a couple of ways. First, since r_i(v, A, P, T) is the expected number of times i hears a piece of information, it may exceed 1. Meanwhile, since r_i(v, A, P) is the probability that i receives the information, it is bounded by 1. Second, in the independent cascade model, each activated individual has a single chance to activate the non-activated neighbors. However, with the random walks of information transmission in contextual centrality, each activated individual has multiple chances, with decaying probability, to activate their neighbors.
Definition 1 Contextual centrality (CC) approximates the cascade payoff within a given number of time periods T as a function of the initial node, accounting for individuals' contributions to the cascade payoff:

CC(A, P, T, y) = Σ_{t=0}^{T} (P • A)^t y.  (5)
Heterogeneous diffusion rates across individuals are difficult to collect and estimate in real-world applications. Therefore, in the following analysis, we assume a homogeneous diffusion rate p across all edges; hence, P • A in Eq. (5) reduces to pA. Similar to diffusion centrality, contextual centrality is a random-walk-based centrality, where (pA)^t measures the expected number of walks of length t between each pair of individuals and T is the maximum walk length considered. Since T is the longest communication period, a larger T indicates a longer period for diffusion (e.g., a movie that stays in the market for a long period), while a smaller T indicates a shorter diffusion period (e.g., a coupon that expires soon). On the one hand, when pλ_1 is larger than 1, CC approaches infinity as T grows. On the other hand, when pλ_1 < 1, CC is finite even for T = ∞, which can be understood as a lack of virality, expressed in a fizzling out of the diffusion process with time. In fact, the specific value of pλ_1 can be used to bound the maximum possible CC given the norm of the score vector y. As presented in Proposition 1 in the Supporting Information, the upper bound for CC grows with pλ_1 and the norm of the score vector.
ii The computational complexity of the algorithm to score according to Eq. (3) by simulation is far higher. The computational complexity of the formulation (5) is O(NMT), where M is the average degree and T is the number of walk lengths inspected: we repeat the operation of multiplying a vector of length N by a sparse matrix with an average of M entries per row, T times. This significantly reduces the run time.
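The O(NMT) iteration can be sketched in a few lines. This is an illustrative implementation of Eq. (5) with homogeneous p (the function name is ours); dense numpy is used for brevity, but each product would be sparse in practice:

```python
import numpy as np

def contextual_centrality(A, y, p, T):
    """CC = sum_{t=0}^{T} (p A)^t y, computed with T matrix-vector products.

    With a sparse adjacency matrix of average degree M, each product
    costs O(N M), for O(N M T) overall."""
    v = np.asarray(y, dtype=float).copy()  # t = 0 term
    cc = v.copy()
    for _ in range(T):
        v = p * (A @ v)                    # walk-length-(t+1) term
        cc = cc + v
    return cc

# sanity check against the explicit sum of matrix powers on a small graph
rng = np.random.default_rng(1)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T             # undirected, no self-loops
y = rng.normal(size=6)
direct = sum(np.linalg.matrix_power(0.2 * A, t) @ y for t in range(5))
assert np.allclose(contextual_centrality(A, y, p=0.2, T=4), direct)
```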
Let us further illustrate the relationship between CC and diffusion centrality, DC for short iii . We can represent y as y = σ(y) · z + ȳ · 1, where σ(y), ȳ, and z are the standard deviation, the mean, and the z-score normalization of y, respectively. Using the linearity of CC with respect to y, we can write

CC(A, p, T, y) = σ(y) · CC(A, p, T, z) + ȳ · DC(A, p, T).  (6)

Eq. (6) shows the trade-off between the standard deviation σ(y) and the mean ȳ of the contribution vector in CC. When ȳ dominates over σ(y), network topology is more important in CC, and it produces rankings similar to DC. If the graph is Erdos-Renyi and T is small enough, then, on expectation, the term ȳ · DC dominates as the size of the network approaches infinity, as presented in Theorem 1 in the Supporting Information. However, when σ(y) dominates over ȳ, CC and DC generate very different rankings.
The relevant node characteristics y provide the ex-ante estimate of one's contribution. Whether to incorporate y is the main difference between our centrality and existing centrality measures. In real-world data, the observation or estimation of y can be noisy, biased, or stochastic. Therefore, we discuss the robustness of contextual centrality in response to perturbations in y in the Supporting Information.
We define the following terms, which we use throughout the paper:

• Spreadability (pλ_1) captures the capability of the campaign to diffuse on the network, depending on the diffusion probability (p) via a certain communication channel and the largest eigenvalue (λ_1) of the network.

• Standardized average contribution (ȳ/σ(y)) is computed as the average of the contributions normalized by their standard deviation. The sign of ȳ/σ(y) indicates whether the average contribution is positive or not. Moreover, the larger the magnitude of ȳ/σ(y), the more homogeneous the contributions are.
iii In Banerjee et al. 13 , DC = Σ_{t=1}^{T} (pA)^t 1. To derive the preceding relationship between CC and DC, we add the score of reaching the first seeded individual into computing diffusion centrality. Hence, DC = Σ_{t=0}^{T} (pA)^t 1.

Properties of contextual centrality when pλ_1 > 1 and T is large

Let us first provide the approximation of contextual centrality in this condition, which reveals one of the prominent advantages of contextual centrality. By the Perron-Frobenius Theorem, we have |λ_j| ≤ λ_1 for every j. Moreover, if we assume that the graph is non-periodic, then in fact |λ_j| < λ_1 for all j ≠ 1. Note that the typical random graph is not periodic, so this assumption is reasonable. Thus, when pλ_1 > 1, the term (pλ_1)^t grows exponentially faster than (pλ_j)^t for j ≠ 1, so the j = 1 term dominates for sufficiently large values of T, and we obtain the approximation for contextual centrality (CC_approx):

CC_approx = Σ_{t=0}^{T} (pλ_1)^t (U_1^T y) U_1.  (7)

This approximation reveals some desirable properties of contextual centrality. Crucially, CC_approx is simply a scalar multiple of the leading eigenvector U_1 when pλ_1 > 1 and T is large. Therefore, the sign of U_1^T y determines the direction of the relationship between CC_approx and eigenvector centrality. By the Perron-Frobenius Theorem, all elements in this leading eigenvector are nonnegative. Thus, when U_1^T y < 0, pλ_1 > 1, and T is large, the approximated cascade payoff, Eq. (4), is nonpositive for seeding any individual, so the campaigner should select a diffusion channel with a lower diffusion rate to take advantage of the local neighborhoods with positive contributions. Eq. (7) naturally suggests the following relationships between CC_approx and eigenvector centrality.
• If U_1^T y > 0, CC_approx and eigenvector centrality produce the same rankings.
• If U_1^T y < 0, CC_approx and eigenvector centrality produce the opposite rankings.
The approximation does not hold when U_1^T y = 0, which is also unlikely to happen in practice; hence, we disregard this case. Similarly, we can relate contextual centrality to diffusion centrality (C_Diffusion) and Katz centrality (C_Katz):

CC_approx = [ (U_1^T y)/(U_1^T 1) · Σ_{t=0}^{T}(pλ_1)^t / Σ_{t=1}^{T}(pλ_1)^t ] C_Diffusion = [ (U_1^T y)/(U_1^T 1) · (1 − αλ_1) Σ_{t=0}^{T}(pλ_1)^t ] C_Katz,  (8)

where α is the decay factor in Katz centrality. Similar to Eq. (7), all terms on the right-hand side of Eq. (8) are positive except for U_1^T y, which likewise determines the direction of the relationship.
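These sign relationships are easy to verify numerically. The sketch below is our own construction: it builds a random graph with pλ_1 = 1.5, flips y so that both signs of the primary contribution U_1^T y are exercised, and checks the alignment of CC with eigenvector centrality:

```python
import numpy as np

def contextual_centrality(A, y, p, T):
    # CC = sum_{t=0}^{T} (p A)^t y via repeated matrix-vector products
    v = np.asarray(y, dtype=float).copy()
    cc = v.copy()
    for _ in range(T):
        v = p * (A @ v)
        cc += v
    return cc

rng = np.random.default_rng(0)
A = (rng.random((30, 30)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T

lam, U = np.linalg.eigh(A)
u1 = np.abs(U[:, -1])              # leading eigenvector; nonnegative by Perron-Frobenius
p = 1.5 / lam[-1]                  # spreadability p * lambda_1 = 1.5 > 1

y = rng.normal(size=30)
if u1 @ y < 0:
    y = -y                         # ensure the primary contribution U1^T y is positive
cc_pos = contextual_centrality(A, y, p, T=30)
cc_neg = contextual_centrality(A, -y, p, T=30)   # primary contribution now negative

assert np.corrcoef(cc_pos, u1)[0, 1] > 0.95      # same ordering as eigenvector centrality
assert np.corrcoef(cc_neg, u1)[0, 1] < -0.95     # reversed ordering
```

The gap between (pλ_1)^T and (pλ_j)^T for j ≠ 1 makes the leading-eigenvector term dominate quickly, which is why T = 30 already gives near-perfect correlation.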

Results
Predictive power of contextual centrality

We study two real-world empirical settings: the adoption of microfinance in 43 Indian villages iv and the adoption of weather insurance in 47 Chinese villages v . In each setting, there is a set of first-informed households in each village who went on to spread the information. We evaluate the adoption outcome of all other households in the village which are not first-informed. We use the adoption likelihood as the contribution vector y in computing contextual centrality, predicted with a model based on the adoption decisions of the first-informed households. Similar to Banerjee et al. 13 , we evaluate the R^2 of a linear regression model, where the independent variable is the average centrality of first-informed households and the dependent variable is the fraction of non-first-informed households in a village which adopted, controlling for the village size. In Fig. 1, we show how the R^2 for various measures of centrality varies with pλ_1, where the choice of p influences the two centrality measures that account for the diffusion process: diffusion centrality and contextual centrality. We see that contextual centrality outperforms all other standard centrality measures, which indicates that marketing campaigners or social planners will benefit from using contextual centrality as the seeding strategy to maximize participation. This result also highlights that utilizing ex-ante information about customers' likelihood of adoption helps to design better targeting strategies. Similar results without control variables and with more control variables are presented in the Supporting Information as a robustness check.
iv The data is made public by Banerjee et al. 13 .
v The data is made public by Cai et al. 18 .

Performance of contextual centrality relative to other centralities on random networks

To better understand CC's performance with respect to different parameters (pλ_1, ȳ/σ(y)), we next perform simulations on randomly generated synthetic networks and contribution vectors (y). To compare the performance of contextual centrality against the other centralities, we use the 'relative change' vi , calculated as (a − b)/max(|a|, |b|), where a is CC's average payoff and b is the maximum average payoff of the other centralities. Fig. 2 shows the relative change for varying ȳ/σ(y) and pλ_1 on three different types of simulated graphs. We can see that CC performs well when ȳ < 0, when pλ_1 < 1, and when ȳ/σ(y) is small in magnitude. We will now discuss each of these cases in more detail.
When ȳ < 0, maximizing the reach of the cascade is not ideal because that will result in a cascade payoff which more closely reflects ȳ. CC differs from the other centralities in that it does not try to maximize the reach of the cascade. Note the dark blue diagonal band present in all plots in Fig. 2. Since the magnitude of the relative change exceeds 1 only when the values being compared have opposite signs, this region shows that there are many settings where the standardized average contribution is negative, yet CC achieves a positive payoff while the other centralities do not.

vi We chose 'relative change' for comparison since it gives a sense of when the payoffs differ from those of the optimal centrality while keeping the magnitudes of the payoffs in perspective. This measure has some desirable properties. First, its value is necessarily between -2 and 2, so our scale for comparison is consistent across scenarios. Second, its magnitude does not exceed 1 unless a and b differ in sign, so we can tell if a centrality gets a positive average payoff while the rest do not.
When pλ_1 is small, it is essential to seed an individual whose local neighborhood has high individual contributions, since there is not much risk of diffusing to individuals with low contributions vii .
This highlights CC's advantage in discriminating the local neighborhoods with positive payoffs from those with negative payoffs while the other centralities cannot.
When ȳ/σ(y) is small in magnitude, CC takes advantage of the greater relative variations between contributions. As ȳ/σ(y) → +∞, Eq. (6) tells us that CC will seed similarly to DC, which explains why CC loses some of its advantage. However, as ȳ/σ(y) → −∞, Eq. (6) tells us that CC will seed opposite to DC, which explains why CC maintains an advantage.
We briefly comment on the regions where CC does not seem to offer an advantage. CC appears to do slightly worse when ȳ > 0 and pλ_1 is a bit greater than 1. We expected that when pλ_1 > 1, CC would offer less of an advantage since the cascade reaches most individuals in the network; moreover, Eq. (7) (and to some extent Eq. (6)) suggests that CC should seed similarly to the other centralities rather than worse. Note that in Fig. 2d, which averages over the samples of all three graph types, CC performs better. Thus, we conclude that this is due to the high variance of the cascade payoffs in this region. CC also seems to perform poorly when ȳ = 0. This is because, as pλ_1 increases, the payoff more closely reflects ȳ, which means that CC's average payoff and the maximum average payoff of the other centralities are both very close to 0 and often have different signs, so the relative change appears large. In these two regions where CC does not seem to offer an advantage, no single centrality dominates the rest, which shows that there are large relative variations in these regions. Figures similar to Fig. 2 for all other centrality measures used in this paper can be found in the Supporting Information.

vii As an extreme case, consider pλ_1 = 0. In this case, the diffusion rate is 0, so seeding an individual with a high individual payoff makes much more sense than seeding an individual with high topological importance.

Figure 2: Relative change, (a − b)/max(|a|, |b|), where a is CC's average payoff and b is the maximum average payoff of the other centralities, for varying values of ȳ/σ(y) and pλ_1. The plots correspond to the results on random networks generated according to the Barabasi-Albert, Erdos-Renyi, and Watts-Strogatz models, and all models combined.
Performance of contextual centrality relative to other centralities on real-world networks

Next, we analyze the performance of contextual centrality in achieving the cascade payoff, as defined in Eq. (3), using simulations in three real-world settings: the adoption of microfinance, the adoption of weather insurance, and a political voting campaign. To compare the performance of contextual centrality against the maximum of the other centrality measures in each condition, we use the 'relative change' as before.

Approximation of contextual centrality and the importance of primary contribution

A negative contextual centrality score indicates that seeding with the particular node will generate a negative payoff.
Therefore, we design a seeding strategy in which we seed only if the maximum of contextual centrality is nonnegative. As shown by the blue dashed and solid lines in Fig. 4, the new seeding strategy, "seed nonnegative", performs better than always seeding the top-ranked individual. Building upon Eq. (7), we introduce a variation of eigenvector centrality, "eigenvector adjusted", as the product of eigenvector centrality and the primary contribution (U_1^T y). This variation performs on par with contextual centrality as pλ_1 grows large, as expected according to Eq. (7), and greatly outperforms eigenvector centrality viii . Comparing the strategies in Fig. 4, the new strategy of accounting for the sign of the centrality measures improves the average payoffs by an order of magnitude.

Figure 4: For 'eigenvector adjusted' centrality, we multiply eigenvector centrality by the primary contribution U_1^T y. For 'seed nonnegative', we seed only if the maximum of the centrality measure is nonnegative; otherwise the strategy is named 'seed always'.
This pattern also highlights the importance of the primary contribution in campaign strategies. We present figures for the analogous variations of the other centralities in the Supporting Information.
Homophily and the maximum of contextual centrality

Homophily is a long-standing phenomenon in social networks that describes the tendency of individuals with similar characteristics to associate with one another 19 . The strength of homophily is measured by the differences in the contributions of neighbors, Σ_{i,j}^{N} A_ij (y_i − y_j)^2 . We analyze the relationship between the strength of homophily and the approximated cascade payoff obtained by seeding the highest-ranked node in contextual centrality. After controlling for ȳ/σ(y) and pλ_1, we regress the maximum of contextual centrality on the strength of homophily of the network, separately for three conditions of ȳ/σ(y). When the spreadability is small, stronger homophily tends to correlate with a larger approximated cascade payoff across all graph types. This result shows that stronger homophily of the network predicts a higher approximated cascade payoff at small spreadability. The relationship is strongest when the network is Barabasi-Albert and ȳ/σ(y) > 0. As the spreadability increases further, the correlation between contextual centrality and homophily drops dramatically, and we therefore exclude the scenarios where pλ_1 > 1.

viii Another variation of eigenvector centrality is to adjust eigenvector centrality by ȳ. Note that the sign of U_1^T y does not always equal that of ȳ. When the signs differ, seeding only when U_1^T y is positive produces a higher cascade payoff when pλ_1 is not too large. However, as pλ_1 further increases and the diffusion saturates most of the network, the sign of ȳ predicts that of the cascade payoff. Nevertheless, large values of pλ_1 are less interesting than small ones, which occur more frequently in real life. We present the average cascade payoff comparing the two strategies when ȳ(U_1^T y) < 0 in the Supporting Information.

Figure 5: Homophily and maximum of contextual centrality when pλ_1 < 1. We regress the maximum of contextual centrality on homophily after controlling for ȳ/σ(y) and pλ_1. The y-axis shows the OLS coefficients of homophily (with the vertical line as the 95% confidence interval), and the x-axis corresponds to the three types of networks. We perform the analysis separately for ȳ/σ(y) larger than, smaller than, and equal to zero.
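The neighbor-contrast sum above is straightforward to compute. The following is our own minimal sketch; note that smaller sums correspond to neighbors holding more similar contributions:

```python
import numpy as np

def neighbor_contrast(A, y):
    """Sum over adjacent pairs of squared contribution differences,
    sum_{i,j} A_ij (y_i - y_j)^2.

    Smaller values mean neighbors hold more similar contributions
    (stronger homophily in y)."""
    diff = y[:, None] - y[None, :]
    return float(np.sum(A * diff**2))

# path graph 0-1-2-3: clustered contributions vs. alternating ones
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
clustered = np.array([1.0, 1.0, -1.0, -1.0])     # like attracts like
alternating = np.array([1.0, -1.0, 1.0, -1.0])   # every neighbor differs
assert neighbor_contrast(A, clustered) < neighbor_contrast(A, alternating)
```

Because A is symmetric, each edge is counted twice; this only rescales the measure and does not affect comparisons or regression coefficients up to a constant factor.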

Discussion
Contextual centrality sheds light on the understanding of node importance in networks by emphasizing node characteristics relevant to the objective of the diffusion, beyond the structural topology alone, which is vital for a wide range of applications such as marketing or political campaigns on social networks. Notably, nodal contributions to the objective, the diffusion probability, and the network topology jointly determine an effective campaign strategy. The simulations in this study demonstrate that exposing a large portion of the population to the diffusion is not always desirable. When the spreadability is small, contextual centrality effectively ranks highest the nodes whose local neighborhoods generate larger cascade payoffs. When the spreadability is large, the primary contribution tends to predict the sign of the approximated cascade payoff. This suggests that when the primary contribution is negative, the campaigner should reduce the spreadability of the campaign to take advantage of the individuals whose local neighborhoods generate positive approximated cascade payoffs in aggregation. Resorting to campaign channels with lower diffusion probability and less viral features, such as direct mail, can reduce spreadability. Moreover, as the standardized average contribution increases, the contribution vector becomes comparatively more homogeneous and comparatively less important than the network structure.
Therefore, when the average contribution is positive, seeding with contextual centrality becomes similar to seeding with diffusion centrality.
Contextual centrality emphasizes the importance of incorporating node characteristics that are exogenous to the network structure and the dynamic process. More broadly, contextual centrality provides a generic framework for future studies to analyze the joint effect of network structure, nodal characteristics, and the dynamic process. Beyond applications on social networks, contextual centrality can be applied to a wide range of networks, such as biological networks (e.g., ranking the importance of genes by using the size of their evolutionary family as the contribution vector 20 ) and financial networks.

Methods
In this study, we compare contextual centrality with diffusion centrality and other widely adopted reachability-based centrality measures: degree, eigenvector, and Katz centrality. We compute degree centrality by taking the degree of each node, normalized by N − 1. We compute eigenvector centrality by taking the leading eigenvector U_1 with unit length and nonnegative entries. We compute Katz centrality as Σ_{t=0}^{∞} (αA)^t 1, setting α, which must be strictly less than λ_1^{−1}, to 0.9 · λ_1^{−1}. We compute diffusion centrality as Σ_{t=1}^{T} (pA)^t 1. For both diffusion and contextual centrality, we set T = 16, except for the microfinance setting in Indian villages, where we set T as done by Banerjee et al. 13 .
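These baseline measures can be computed in a few lines. The sketch below follows the definitions just given (the function name is ours; dense linear algebra is used for clarity):

```python
import numpy as np

def standard_centralities(A, p, T):
    """Degree, eigenvector, Katz (alpha = 0.9 / lambda_1), and diffusion
    centrality, following the definitions in this section."""
    n = A.shape[0]
    lam, U = np.linalg.eigh(A)
    lam1 = lam[-1]                                   # largest eigenvalue

    degree = A.sum(axis=1) / (n - 1)
    eigvec = np.abs(U[:, -1])                        # leading eigenvector, unit length

    alpha = 0.9 / lam1                               # strictly below 1 / lambda_1
    # sum_{t=0}^{inf} (alpha A)^t 1 = (I - alpha A)^{-1} 1, since alpha*lambda_1 < 1
    katz = np.linalg.solve(np.eye(n) - alpha * A, np.ones(n))

    diffusion = np.zeros(n)
    v = np.ones(n)
    for _ in range(T):
        v = p * (A @ v)
        diffusion += v                               # sum_{t=1}^{T} (p A)^t 1
    return degree, eigvec, katz, diffusion

# star graph: the hub should dominate every measure
A = np.zeros((4, 4)); A[0, 1:] = A[1:, 0] = 1.0
deg, ev, katz, dc = standard_centralities(A, p=0.2, T=5)
assert np.isclose(deg[0], 1.0) and ev[0] == ev.max() and dc[0] == dc.max()
```

The closed form used for Katz centrality relies on the geometric series converging, which is exactly why α must stay strictly below λ_1^{−1}.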
Simulations of the diffusion process in each setting follow the independent cascade model 14 . For each centrality, the highest ranked node is set to be the initial seed. We compute cascade payoff by summing up the individual contributions of all the nodes reached in the cascade. For each parameter tested in different settings, we run 100 simulations.
In the empirical analysis of microfinance in Indian villages and weather insurance in Chinese villages, we build models to predict the adoption likelihood to use as y in computing contextual centrality. For each setting, we use the data provided in Banerjee et al. 13 .

Many of the centrality measures above can be expressed through the recurrence c_t = αAc_{t−1} + β, where α is a constant and β is the individual contribution of each node in the network. It is, of course, possible to parameterize α, β, or A by t as well, but for simplicity let us assume they remain constant.
Expanding this recurrence, we get

c_T = (αA)^T c_0 + Σ_{t=0}^{T−1} (αA)^t β.

Now if we substitute α = p, β = y, and c_0 = y, then c_T is exactly equal to CC. Substitutions can be done for all the centralities discussed above and are summarized in Table 1.
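The substitution can be checked directly. This small construction of ours iterates the recurrence and compares it against the explicit sum Σ_{t=0}^{T} (pA)^t y:

```python
import numpy as np

def recurrence(A, alpha, beta, c0, T):
    """Iterate c_t = alpha * A c_{t-1} + beta for T steps."""
    c = np.asarray(c0, dtype=float).copy()
    for _ in range(T):
        c = alpha * (A @ c) + beta
    return c

rng = np.random.default_rng(2)
A = (rng.random((5, 5)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T
y = rng.normal(size=5)
p, T = 0.3, 6

# substituting alpha = p, beta = y, c_0 = y yields CC = sum_{t=0}^{T} (p A)^t y
cc = sum(np.linalg.matrix_power(p * A, t) @ y for t in range(T + 1))
assert np.allclose(recurrence(A, p, y, y, T), cc)
```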
Contextual centrality is developed upon and generalizes diffusion centrality, but there are two important differences. First, all nodes passed through by the random walk contribute positively and homogeneously in diffusion centrality, while the main advantage of contextual centrality is allowing for heterogeneous contributions. Second, the random walk of contextual centrality starts from the chosen seed, while that of diffusion centrality starts from the neighbors of the chosen seed. Under the condition that y is positive and constant for all entries, contextual centrality inherits the nice nesting properties of diffusion centrality, which encompasses and spans the gap between degree centrality, eigenvector centrality, and Katz centrality. In particular, CC is proportional to degree centrality when T = 1, proportional to eigenvector centrality as T → ∞ when p ≥ λ_1^{−1}, and proportional to Katz centrality when T = ∞ and p < λ_1^{−1}. The proof can be found in Banerjee et al. 22 .

Table 1: Centralities defined by c_t = αAc_{t−1} + β (columns: centrality, α, β, c_0, t).
Contextual centrality is also similar to Katz centrality, but we highlight two crucial differences. First, contextual centrality is more general in that p can be larger than λ_1^{−1}, and it provides essential insights into this regime. Second, we allow T to vary according to the specific setting, while in Katz centrality the diffusion period T is infinite. T carries important implications: for a product that is effective only for a short period, such as a coupon that expires within a day, T is relatively small, compared with the diffusion of a new phone that will be on the market for much longer.

Relationship between approximated cascade payoff and cascade payoff
Contextual centrality aims to maximize objective (4), which approximates the cascade payoff in objective (3) under an independent cascade model. In Fig. 6, we analyze the Spearman's and Pearson correlations between the two as a function of spreadability. Both correlation measures decrease as spreadability increases from 0 to 1 and increase afterward. For the most part, the Spearman's correlation between the two is higher than the Pearson correlation and is around 0.9 or higher. Note that pλ_1 = 1 is the phase transition in network contagion with the Susceptible-Infected (SI) model and is known as the epidemic threshold 23 . This may explain why we see different behavior close to pλ_1 = 1.

Game-theoretic interpretation of contextual centrality with local interactions
Following the setup in Ballester et al., suppose agent i chooses an action a_i to maximize the quadratic utility

u_i(a) = αa_i − (1/2)a_i^2 + β Σ_j A_ij a_i a_j ,

where α is a scalar and α > 0. Taking the first-order condition, it is easy to prove that the strategy in Nash equilibrium is a* = (I − βA)^{−1} α1 ∝ c_Katz, which is proportional to the Katz centrality.
In the previous setup, Ballester et al. assume that the marginal benefit is homogeneous and positive.
We relax this constraint, allowing the marginal benefit to vary across individuals (y_i) and to take on negative values.
With this, suppose agent i chooses an action a_i according to the following utility function:

u_i(a) = y_i a_i − (1/2)a_i^2 + β Σ_j A_ij a_i a_j .

With this variant, the equilibrium strategy becomes

a* = (I − βA)^{−1} y .  (12)

Eq. (12) has the exact same form as CC when T → ∞, βλ_1 < 1, and β = p. Hence, we see that contextual centrality approximates agents' equilibrium actions with heterogeneous marginal utilities in this condition.
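The correspondence between the equilibrium and CC can be verified numerically. In this sketch of ours we set β = p with pλ_1 < 1, so the geometric series defining CC converges to the resolvent (I − pA)^{−1} y:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T
y = rng.normal(size=n)

lam1 = np.max(np.abs(np.linalg.eigvalsh(A)))
beta = 0.5 / lam1                        # beta * lambda_1 = 0.5 < 1, so the series converges

# equilibrium actions with heterogeneous marginal benefits: a* = (I - beta A)^{-1} y
a_star = np.linalg.solve(np.eye(n) - beta * A, y)

# CC with p = beta and large T converges to the same vector
cc = sum(np.linalg.matrix_power(beta * A, t) @ y for t in range(200))
assert np.allclose(a_star, cc)
```

The truncation error after T terms shrinks like (βλ_1)^T, so T = 200 is far more than enough at βλ_1 = 0.5.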

Bounds and distribution of contextual centrality in terms of spreadability
In this section, we present an upper bound on the maximum possible contextual centrality. When pλ₁ is larger than 1, CC approaches infinity as T grows. On the other hand, when pλ₁ < 1, CC is finite even for T = ∞, which can be understood as a lack of virality: the diffusion process fizzles out with time. In fact, the specific value of pλ₁ can be used to bound the maximum possible CC given the norm of the score vector y.
Since, for us, ρ(A) = λ₁, we have ||CC(A, p, T, y)|| ≤ ∑_{t=0}^{T} (pλ₁)^t ||y||, which, if pλ₁ < 1, can be further bounded by (1/(1 − pλ₁))||y||. While this result bounds contextual centrality from above, the actual value of CC is highly variable, depending on the structure of the graph and the distribution of the score vector among its nodes.
For a discussion of expected CC among random networks, see the Erdős-Rényi section below. Next, we discuss the behavior of contextual centrality when y is variable.
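A quick numerical sanity check of this bound (a sketch; the random graph and parameter values are our own): for a nonnegative symmetric A, ||A^t||₂ = λ₁^t, so ||CC|| ≤ ∑_{t=0}^{T}(pλ₁)^t ||y|| ≤ ||y||/(1 − pλ₁) whenever pλ₁ < 1.

```python
import numpy as np

rng = np.random.default_rng(1)

def contextual_centrality(A, p, T, y):
    # sum_{t=0}^{T} p^t A^t y, computed iteratively
    cc, term = np.zeros(len(y)), np.asarray(y, float)
    for _ in range(T + 1):
        cc += term
        term = p * (A @ term)
    return cc

# Random symmetric adjacency matrix
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1)
A = A + A.T

lam1 = np.linalg.eigvalsh(A).max()  # spectral radius of a nonnegative symmetric A
y = rng.normal(size=n)
p = 0.5 / lam1                      # ensures p * lambda_1 = 0.5 < 1

cc = contextual_centrality(A, p, 200, y)
bound = np.linalg.norm(y) / (1 - p * lam1)
assert np.linalg.norm(cc) <= bound + 1e-9
```

The inequality is tight only when y is aligned with the leading eigenvector; for a random y there is typically substantial slack.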

Robustness of contextual centrality in response to perturbations in y
As discussed in the main body of the paper, in the real-world data, node characteristics can be noisy, stochastic and biased. Therefore, it is essential to analyze the robustness of contextual centrality in response to small perturbations in y. We first perform a sensitivity analysis, studying bounds on the error in contextual centrality in terms of noise in y, and then study contextual centrality as a random variable assuming a multivariate normal model of y.

Sensitivity Analysis
We let the observed (or estimated) score vector be ŷ and let y be the true score vector. The errors in the score vector are given by the vector ∆y := y − ŷ, and similarly ∆CC := CC(A, p, T, ŷ) − CC(A, p, T, y) is the error between the CC computed from observed and actual data.
We have the following bound on ||∆CC||, which follows directly from Proposition 1 and the fact that CC is linear with respect to the score vector y.
This shows that, when pλ 1 < 1, then as long as the error in y is sufficiently small, the error in CC will be small as well. However, the larger pλ 1 is, the more a small error in y can become amplified as an error in CC.
Next we focus on the case that pλ₁ > 1. In this case, we have shown in the main body of the paper that for large T, contextual centrality is well-approximated by (U₁ᵀy)U₁, where U₁ is the eigenvector with the largest eigenvalue. Thus, in this case, the primary contribution U₁ᵀy is an essential quantity whose sign roughly determines the relative ranking of contextual centrality. Hence, we analyze its sensitivity to noise in y. The error in the primary contribution is simply U₁ᵀ∆y, whose magnitude is bounded by ||∆y||. Thus, if ∆y is small enough that ||∆y|| < |U₁ᵀŷ|, this perturbation will not affect the sign of the primary contribution, so the relative ranking in CC will tend to stay fixed. Otherwise, the relative ranking is at risk of flipping.
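This sign-stability condition can be illustrated directly (a sketch; the graph and scaling constants are illustrative): by Cauchy-Schwarz, |U₁ᵀ∆y| ≤ ||∆y||, so any perturbation with ||∆y|| < |U₁ᵀŷ| cannot flip the sign of the primary contribution.

```python
import numpy as np

rng = np.random.default_rng(2)

def primary_contribution(A, y):
    """U_1^T y, with U_1 the unit eigenvector of A's largest eigenvalue."""
    _, V = np.linalg.eigh(A)   # eigenvalues ascending; last column is U_1
    return float(V[:, -1] @ y)

n = 20
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1)
A = A + A.T

y_hat = rng.normal(size=n)
pc_hat = primary_contribution(A, y_hat)

# Draw a perturbation strictly smaller than |U_1^T y_hat| in norm.
dy = rng.normal(size=n)
dy *= 0.9 * abs(pc_hat) / np.linalg.norm(dy)

pc_true = primary_contribution(A, y_hat + dy)
assert np.sign(pc_true) == np.sign(pc_hat)  # the sign cannot flip
```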
Contextual centrality as a random variable

Next, to study the impact of stochasticity in y, we suppose that y is a multivariate random variable with mean vector ŷ and covariance matrix Σ. Let B := ∑_{t=0}^{T} p^t A^t, so that CC(A, p, T, y) = By. To simplify, consider the case that Σ = σ²I, that is, the y_i are uncorrelated and have the same standard deviation σ. Then the covariance matrix of CC(A, p, T, y) is σ²B².
That is, we have Cov(CC(A, p, T, y)_i, CC(A, p, T, y)_j) = σ²(Be_i)ᵀ(Be_j), where the e_i are the standard basis vectors.
In particular, the coefficients of CC may be positively correlated even when those of y are uncorrelated, and their standard deviations are given by σ(CC(A, p, T, y)_i) = σ||Be_i||. Note that, by definition of B, Be_i = CC(A, p, T, e_i), whose jth coefficient represents the expected number of times node i is reached by the diffusion process if seeded at node j.
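These identities can be checked mechanically. The sketch below (illustrative graph and parameters) builds B = ∑_{t=0}^{T} p^t A^t explicitly and compares σ||Be_i|| with the diagonal of σ²B²:

```python
import numpy as np

def build_B(A, p, T):
    """B = sum_{t=0}^{T} p^t A^t, so that CC(A, p, T, y) = B y."""
    n = A.shape[0]
    B, M = np.zeros((n, n)), np.eye(n)
    for _ in range(T + 1):
        B += M
        M = p * (A @ M)
    return B

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # small star graph
p, T, sigma = 0.3, 6, 0.5

B = build_B(A, p, T)
cov_cc = sigma ** 2 * (B @ B)  # covariance of CC when Cov(y) = sigma^2 I

# Standard deviation of CC_i equals sigma * ||B e_i|| (B is symmetric here).
for i in range(3):
    e_i = np.eye(3)[:, i]
    assert np.isclose(np.sqrt(cov_cc[i, i]), sigma * np.linalg.norm(B @ e_i))
```

Since A is symmetric, B is symmetric as well, which is what collapses BΣBᵀ = σ²BBᵀ into σ²B².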

Differences between contextual centrality and centrality measures developed on weighted networks
There have been some studies which generalize centrality measures to weighted or signed networks.

Expected contextual centrality on Erdős-Rényi random networks

We assume each edge has independent probability q of being present in the graph, where q is a function of n, the number of nodes. Assume that qn grows such that log(n) ≤ qn ≤ √n. We also assume that T and p are functions of n, and let y denote the vector (depending on n) consisting of y_1, …, y_n for some infinite sequence {y_i}. We suppress all dependency on n for ease of notation. We further assume that the mean ȳ has a limit as n approaches infinity, which is reasonable by the law of large numbers if the y_i are sampled from a random variable. With this background, we study the expected behavior of E(CC(A, p, T, y)).
Given two functions f(n), g(n), we will say that f approaches g as n approaches infinity if lim_{n→∞} f(n)/g(n) = 1. Then we have the following result. In other words, if ȳ ≠ 0, then the term E₁ dominates, so the expected CC is uniform across all nodes (in the limit as n approaches infinity). Moreover, ȳ measures the magnitude of the diffusion as compared to DC, and the sign of ȳ determines the expected sign of CC. In contrast, if ȳ = 0, then CC equals E₂y, so, on expectation, CC correlates perfectly with y itself. We note that in practice it is not likely for ȳ to equal 0 exactly. However, if ȳ is close to 0 and n is not too large, then the term E₂ could still be significant, indicating that the expected CC will be correlated with the nodal evaluation vector y.
This result can also be related to the tradeoff in Eq. (6). As implied by the Theorem, as long as ȳ ≠ 0, the expected CC approaches ȳE₁, which in turn approaches ȳE(DC) as n approaches infinity. Thus the second term of the tradeoff in Eq. (6) dominates, on expectation.
We also note that a careful analysis will show that E₂ > 0, but that is beyond the scope of the present paper. Note that the threshold T = log(n)/log(npq) given above is equal to log(n)/log(pE(λ₁)), since E(λ₁) = nq. We also note that the expected diameter of the Erdős-Rényi graph is log(n)/log(nq), which is strictly smaller than the threshold given above.
To prove these theorems, we analyze E(A^t) for any t. Note that E(A^t)_{ij} is the weighted sum over all paths of length t from i to j, with each path π weighted by q^{d(π)}, where d(π) is the number of distinct edges along the path π. Note that by symmetry, the off-diagonal entries of E(A^t) are all the same, as are its diagonal entries; however, the diagonal entries are not necessarily equal to the off-diagonal ones.
We first prove the following lemma to aid our analysis.
Lemma 1. Let i, j, k be distinct numbers ranging from 1 to n. Let Z_{ij,k}(t) be the subset of paths of length t from i to j which visit vertex k at some point. Let z_{ij,k}(t) be its weighted sum ∑_{π∈Z_{ij,k}(t)} q^{d(π)}. Then z_{ij,k}(t) ≤ (t/n) x_{ij}(t). We now move on to the estimates of E(A^t_{ij}).
Lemma 2. For the purposes of this lemma, assume that t/(nq) ≤ r < 1/4 for some r. Then we have the following bounds.
Proof 3. Let us represent a path by the sequence of the vertices it visits. A path π of length t from i to j is represented as i v_1 v_2 ⋯ v_{t−1} j, where i and j will also be labeled v_0 and v_t, respectively.
Next, we calculate the upper bounds. Suppose that t ≥ 1. Let Y_{ij}(t) ⊂ X_{ij}(t) consist of those paths in which edges are never repeated immediately, that is, v_l ≠ v_{l+2} for any index l. Let y_{ij}(t) = ∑_{π∈Y_{ij}(t)} q^{d(π)} be its weighted sum. We further partition Y_{ij}(t) as follows. For each k = 1, …, t − 1, let Y_{ij,k}(t) be the subset of those paths for which k is the smallest index such that the edge v_{k−1}v_k is not revisited later in the path and v_k ≠ j. Then Y_{ij}(t) = ∪_{k=0}^{t−1} Y_{ij,k}(t). Let y_{ij,k}(t) be the weighted sum of Y_{ij,k}(t). Also, let y_diff(t) and y_same(t) denote the values of y_{ij}(t) in the cases i ≠ j and i = j, respectively.
We will use the following properties for paths π ∈ Y_{ij,k}(t). Given π, let π′ ∈ Y_{v_k j}(t − k) be the truncated path v_k, …, v_{t−1}, j. We note that π has at least one edge that π′ does not, namely v_{k−1}v_k, by definition of k. Thus d(π) ≥ d(π′) + 1. Furthermore, we note that every node v_1, …, v_{k−1} must be present in π′. Indeed, for each such vertex v, consider the greatest index l such that v_l = v. If l < k, then, by definition of k, either v_l = j, in which case it appears in π′, or the edge v_{l−1}v_l reappears later in the path. By the assumption that π ∈ Y_{ij}(t), this edge cannot be repeated immediately; hence v = v_l itself must reappear later, contradicting the description of the index l. So l ≥ k, that is, v indeed appears in π′.
These observations imply the following bound: y_{ij,k}(t) ≤ q n t^{k−1} y_diff(t − k). (13) Indeed, to specify a path in Y_{ij,k}(t), we first choose v_k from among ≤ n possibilities. Then we choose the truncated path π′ as described above from Y_{v_k j}(t − k), whose weighted sum is y_{v_k j}(t − k). Then we choose the k − 1 vertices v_1, …, v_{k−1}. Each of them is repeated in π′, hence may be chosen from among the ≤ t vertices of π′. Finally, since d(π) ≥ d(π′) + 1, we introduce the additional factor of q.
Now we focus on the case that i ≠ j.
If k ≥ 1, we can improve our bound further. Notice that, since k ≥ 1, the starting vertex i must appear in the path π′. So either i = v_k or i ≠ v_k. In the former case, we can eliminate a factor of n from (13), and in the latter case, we can introduce a factor of t/n into (13) by Lemma 1 (note the Lemma applies since i, j, and v_k are assumed distinct). We thus obtain a tighter bound. Now we can prove by induction that y_diff(t) ≤ (nq + 2)^t. Indeed, under this inductive hypothesis, the above bounds yield the desired estimate, where we used the facts that t² ≤ (nq)² ≤ n and 1/(1 − r) ≤ 2. Combining these bounds together we obtain, as desired, that y_diff(t) ≤ (nq + 2)^t. Next, we plug this bound for y_diff(t) into (13) to obtain a bound for y_{ij}(t) (even if i = j): we obtain a geometric series with ratio 2t/(nq) ≤ 2r, bounded by (1/(1 − 2r))(nq)^t.
Hence, we obtain the following bound (15) on y_{ij}(t). We emphasize that this inequality holds only if t ≥ 1. Finally, we extend our analysis from the Y_{ij}(t) to all paths. Any arbitrary path from i to j of length t may be obtained by starting with a path in Y_{ij}(t − 2m), for some 0 ≤ m ≤ t/2, and performing a sequence of m insertions, replacing a vertex v with vwv for some vertex w. We obtain the bound (16). Indeed, for each insertion operation, there are two cases: either the inserted vertex w is already present in the path, so it can be chosen from among ≤ t vertices; or it is not already present, in which case it can be chosen from among ≤ n vertices and introduces a new edge, for an additional factor of q. Combining the two possibilities, each insertion operation introduces a factor of (t + nq) ≤ 2nq.
To evaluate this sum, we need to consider the two cases outlined in the statement of the lemma. a) Suppose that either i ≠ j, or i = j and t is odd. In this case, note that the bound (15) can be applied to each y_{ij}(t − 2m), since if t is odd, then t − 2m ≥ 1; and if i ≠ j, we have y_{ij}(0) = 0 regardless. Combining these bounds with (16), we obtain the desired estimate by a geometric series with ratio 2/(nq) < r. This completes the proof of part a) of the lemma.
b) Now suppose that i = j and t is even. The sum in (16) can be analyzed in the same way as in a), but with an extra term of (2nq)^{t/2} corresponding to the case m = t/2.
We are now ready to prove Theorem 1.
Proof 4. By definition, E(CC(A, p, T, y)) = E(∑_{t=0}^{T} p^t A^t y). By linearity of expectation, this equals ∑_{t=0}^{T} p^t E(A^t)y. Now, for each t and each i, we have (E(A^t)y)_i = ∑_{j=1}^{n} y_j E(A^t_{ij}). By separating the terms with i = j from the terms with i ≠ j, and using the symmetry of E(A^t), we can write E(CC(A, p, T, y)) = E₁ȳ1 + E₂y for appropriate scalars E₁ and E₂. a) By Lemma 2, we know that E₁ can be bounded with r = T/(nq). Since we assume this ratio approaches 0, these bounds imply that E₁ indeed approaches ∑_{t=0}^{T}(npq)^t = ((npq)^{T+1} − 1)/(npq − 1) as n tends to infinity.
b) Next, we show that E₂ = o(E₁). Indeed, we again use Lemma 2: the result follows since both terms p^t(2nq)^{t/2} and (npq)^t/n are lower-order than (npq)^t. c) Diffusion centrality is a special case of contextual centrality in which y = 1, which has mean ȳ = 1.
The result follows by part a), together with the fact that E₁ dominates over E₂ whenever ȳ ≠ 0, by part b).
Next, we prove Theorem 2.

For (a), we use village size, savings, self-help group participation, fraction of general caste members, and the fraction of the village that is first-informed, as done in 13. For (b), we use village size, number of first-informed households, and fraction of the village that is first-informed.

Performance relative to other centralities on random networks
Here we show supplementary results corresponding to Fig. 2 with figures for degree, eigenvector, Katz and diffusion centrality.

Average approximated cascade payoff for contextual centrality and the variations of other centrality measures
Here we present the average approximated cascade payoff for contextual centrality and the variations of other centrality measures. Note that the approximation does not hold for degree centrality when pλ₁ > 1 and T is large. However, scaling degree centrality by the primary contribution still improves performance. Hence, we present it here.
Figure 13: Average cascade payoff for contextual centrality and the variations of (a) degree, (b) Katz, and (c) diffusion centrality. The x-axis is pλ₁, and the y-axis is the average cascade payoff, with the shaded region indicating the 95% confidence intervals. For 'degree adjusted', 'Katz adjusted', and 'diffusion adjusted' centrality, we multiply the focal centrality by the primary contribution U₁ᵀy. For 'seed nonnegative', we adapt the original seeding strategy to seed only if the maximum of the centrality measure is nonnegative; otherwise, the strategy is named 'seed always'.

Comparison of seeding strategies when ȳ(U₁ᵀy) < 0
Here we show the effect of using different seeding strategies on the average approximated cascade payoff. For this plot, we generated random networks as before with contributions sampled from a standard normal distribution, but redistributed the contributions to make the signs of ȳ and U₁ᵀy differ if possible.
More specifically, if the average contribution was negative, the individual with the largest eigenvector centrality score was given the most positive contribution, the individual with the second-largest eigenvector centrality score was given the second most positive contribution, and so on. We used an analogous procedure if the average contribution was positive. Fig. 14 shows that seeding according to the contextual centrality score tends to perform the best as long as pλ₁ is not too large, after which seeding according to the average contribution performs the best. For small values of pλ₁, seeding always performs as well as, if not better than, seeding according to contextual centrality. As suggested by Eq. (7), seeding according to the primary contribution yields similar results to seeding according to the contextual centrality score as pλ₁ grows large.

Figure 14: Comparison of seeding strategies when ȳ(U₁ᵀy) < 0. Here we show the average approximated cascade payoff generated by seeding the top-ranked individual according to contextual centrality under different seeding strategies. The x-axis is pλ₁ and the y-axis is the average approximated cascade payoff with a shaded 95% confidence interval. The line marked "always" acts as our baseline, in which we always seed the individual. For "average", we seed only if the average contribution is nonnegative. For "primary", we seed only if the primary contribution is nonnegative. For "contextual", we seed only if the contextual centrality score of the individual is nonnegative.
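The four decision rules compared in Fig. 14 can be summarized in a short sketch (the function and argument names are ours):

```python
import numpy as np

def should_seed(strategy, cc_scores, y, U1):
    """Decide whether to seed the top-ranked individual.

    strategy: 'always'     - always seed (baseline);
              'average'    - seed only if the average contribution is nonnegative;
              'primary'    - seed only if the primary contribution U_1^T y is nonnegative;
              'contextual' - seed only if the top contextual centrality score is nonnegative.
    """
    if strategy == "always":
        return True
    if strategy == "average":
        return float(np.mean(y)) >= 0
    if strategy == "primary":
        return float(U1 @ y) >= 0
    if strategy == "contextual":
        return float(np.max(cc_scores)) >= 0
    raise ValueError(f"unknown strategy: {strategy}")
```

As pλ₁ grows large, the 'primary' and 'contextual' rules converge, since contextual centrality is then well-approximated by (U₁ᵀy)U₁.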