Mass media impact on opinion evolution in biased digital environments: a bounded confidence model

People increasingly shape their opinions by accessing and discussing content shared on social networking websites. These platforms contain a mixture of other users' shared opinions and content from mainstream media sources. While online social networks have fostered information access and diffusion, they also represent optimal environments for the proliferation of polluted information and content, which is argued to be among the co-causes of polarization/radicalization phenomena. Moreover, recommendation algorithms, intended to enhance platform usage, likely amplify such phenomena, generating the so-called Algorithmic Bias. In this work, we study the effects of the combination of social influence and mass media influence on the dynamics of opinion evolution in a biased online environment, using a recent bounded confidence opinion dynamics model with algorithmic bias as a baseline and adding the possibility to interact with one or more media outlets, modeled as stubborn agents. We analyzed four different media landscapes and found that an open-minded population is more easily manipulated by external propaganda, moderate or extremist, while remaining undecided in a more balanced information environment. By reinforcing users' biases, recommender systems appear to help avoid the complete manipulation of the population by external propaganda.

To better understand why multiple measures must be considered simultaneously to correctly characterize the final state of the population and to follow the dynamics as the model parameters vary, in Supplementary Figure 1 we plot different possible final opinion distributions along with the metrics previously described. Starting from the upper-left panel (a): in this case, the final opinions are uniformly distributed, which means that the level of normalized entropy is close to the maximum possible value of 1.0 (in fact, it is 0.88), and the computed number of clusters is C ≈ 40 (however, there are not really 40 clusters, since clusters cannot be correctly identified in such a final state). Instead, we need to interpret this situation as agents being unable to cluster around a few opinion values. Comparing panels (a) and (b), we can see that the values of the average pairwise distance and the standard deviation of the final opinion distribution are very similar, yet the distributions are significantly different. This difference emerges when considering the value of the normalized entropy (0.69) and the number of clusters (2, which indicates two clusters of similar size). Comparing (b) with (c), we can see that polarization arises in both cases; however, in the second it is "stronger", i.e., the two opinion clusters are more distant than in the previous case. For this reason, while the normalized entropy and the number of clusters have similar values, the average pairwise distance and the standard deviation are higher. Finally, in panel (d), we can see that in the case of perfect consensus, entropy, pairwise distance and standard deviation are all equal to 0, and the number of clusters is, as expected, 1.
In the second row, we can observe that, although panels (e) and (f) both show three clusters (the normalized entropy is 0.27 and the number of clusters is 3.0 in both cases), the higher average pairwise distance and standard deviation in the latter panel reveal that these clusters are farther apart than in the former. In panel (g), a consensus around a single opinion does not emerge; however, the effective number of clusters (1.47) accounts for the fact that one of the two clusters is bigger than the other, as does the normalized entropy, which is lower than in panels (b) or (c), where the two clusters had the same size. Even if the two clusters are almost as distant as in panel (c), the average pairwise distance and the standard deviation are lower because the majority of agents belong to one of them. Finally, in panel (h), we can see that, at first glance, the population has reached consensus (except for a few outliers); however, the number of clusters is 5.36. This result indicates that agents could not agree on a single opinion; instead, they clustered within a range of values, which our method identifies as multiple clusters (even though they are not properly so). This situation can be correctly sorted out by considering the number of clusters and the entropy (both fairly high) along with the pairwise distance and the standard deviation, which are lower than in the two-cluster cases.
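As an illustration, the four measures discussed above can be computed in a few lines. This is a minimal sketch, not the paper's exact code: it assumes a histogram-based Shannon entropy and uses the inverse Simpson (participation-ratio) formula as a proxy for the effective number of clusters; the authoritative definitions are those in the Methods section.

```python
import numpy as np

def metrics(opinions, bins=100):
    """Summary metrics for a final opinion distribution over [0, 1]."""
    hist, _ = np.histogram(opinions, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p_nz = p[p > 0]
    # Normalized Shannon entropy: ~1 for a uniform distribution, 0 for consensus.
    entropy = -(p_nz * np.log(p_nz)).sum() / np.log(bins)
    # Effective number of clusters via the inverse Simpson index: bins holding
    # more mass count "more", so unequal clusters give non-integer values.
    n_eff = 1.0 / (p ** 2).sum()
    # Average pairwise opinion distance (O(n^2), fine for small populations).
    diffs = np.abs(opinions[:, None] - opinions[None, :])
    avg_pairwise = diffs.sum() / (len(opinions) * (len(opinions) - 1))
    return entropy, n_eff, avg_pairwise, opinions.std()

consensus = np.full(500, 0.5)                                       # panel (d)-like
polarised = np.concatenate([np.full(250, 0.2), np.full(250, 0.8)])  # panel (b)-like
print(metrics(consensus))  # entropy, distance and std ~ 0; one effective cluster
print(metrics(polarised))  # two equal-sized effective clusters
```

The inverse Simpson index is what makes non-integer values such as 1.47 possible: a dominant cluster plus a small one counts as "fewer than two" effective clusters.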

Moderate media landscape with algorithmic bias
In the following section, we present additional figures and metrics for the first setting, in which we simulated the presence of one moderate media outlet promoting x_m1 = 0.5. From the heatmaps in Supplementary Figure 2 we can see that a higher confidence bound reduces the average normalised entropy of the final opinion distribution, while a higher level of algorithmic bias increases it. More specifically, when ε = 0.2 (Supplementary Figure 2(a)) the average normalised entropy in the Deffuant model is 0.2 (polarisation), and it increases as p_m grows (up to 0.27 for p_m = 0.5). While in the Algorithmic Bias model the average entropy moves from 0.2 to 0.5 when adding the filtering algorithm (γ > 0), in the case of a moderate media outlet the increase is lower for p_m = 0.1 (from 0.23 to 0.41). Moreover, as p_m increases, the effect of the bias becomes weaker, and the values of normalised entropy are similar in all cases. When ε = 0.3 (Supplementary Figure 2(b)) the average normalised entropy in the Deffuant model is 0.09 (consensus); it is lower (0.04) when p_m = 0.0 and then grows with p_m. Adding bias to the picture, the average normalised entropy grows both as a function of p_m and as a function of γ. The same holds for ε = 0.4 and ε = 0.5, but starting from lower values of average normalised entropy. We can also see that the increase due to the bias is weaker for small/intermediate p_m.
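For reference, one interaction step of the model can be sketched as follows. This is an illustrative reconstruction, not the authors' exact implementation: the convergence parameter mu, the small distance offset, and the distance-based selection weights d^(-γ) are assumptions based on the usual formulation of the Algorithmic Bias model with stubborn media agents.

```python
import numpy as np

rng = np.random.default_rng(42)

def step(x, eps, gamma, p_m, x_media, mu=0.5):
    """One interaction of the sketched model: with probability p_m the chosen
    agent meets the (stubborn) media outlet, otherwise a peer selected with
    probability proportional to d^(-gamma); either influence applies only
    within the confidence bound eps."""
    n = len(x)
    i = rng.integers(n)
    if rng.random() < p_m:
        # Media interaction: the outlet never changes its own opinion.
        if abs(x[i] - x_media) < eps:
            x[i] += mu * (x_media - x[i])
    else:
        # Algorithmic bias: closer peers are more likely to be selected.
        d = np.abs(x - x[i])
        w = (d + 1e-4) ** (-gamma)   # small offset avoids division by zero
        w[i] = 0.0
        j = rng.choice(n, p=w / w.sum())
        if abs(x[i] - x[j]) < eps:
            # Bounded-confidence (Deffuant-like) symmetric averaging.
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])

x = rng.random(100)                  # uniform initial opinions
for _ in range(20_000):
    step(x, eps=0.5, gamma=0.0, p_m=0.3, x_media=0.5)
# With a wide confidence bound and a moderate outlet, opinions collapse
# towards the media opinion 0.5.
```

Setting γ = 0 and p_m = 0 recovers the plain Deffuant model; increasing γ reproduces the distance-filtered partner selection described in the text.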
In Supplementary Figure 3 we report the average number of clusters, computed as explained in the Methods section. For ε = 0.2 (Supplementary Figure 3(a)), in the baseline model we go from 2 to 4 clusters when considering a positive algorithmic bias. The number of clusters gradually increases as p_m grows, and the bias has less power for low/intermediate p_m; the average number of clusters is always around 3 or 4 in this case. When ε = 0.3 (Supplementary Figure 3(b)) we can see the same pattern, albeit starting from a situation of consensus in the baseline model, and at most 2 clusters form for high p_m and γ. For ε > 0.3 consensus is always reached.
As we can see from Supplementary Figure 5, the average standard deviation of the final opinion distribution decreases with ε and increases with γ, both in the baseline model and in the extension presented in this work (p_m > 0). The same conclusions on the levels of polarisation of the system can be drawn by looking at the average pairwise distance in Supplementary Figure 4.
As we can see from Supplementary Figure 6, convergence is on average faster when the population interacts with a moderate media outlet, and the algorithmic bias loses its slowing-down effect on the dynamics.
From Supplementary Figure 7 we can see that in the baseline model only a small fraction of agents clusters around the mean opinion (a fraction that increases with ε). If we add a stubborn agent with fixed opinion x_m = 0.5, this fraction increases significantly. However, it is not directly proportional to p_m; on the contrary, it decreases both with p_m and with γ.
As we can see from Supplementary Figure 8, when ε = 0.2 in the baseline model either 2 or 3 final clusters arise in the population when there is no bias; as the bias grows, fragmentation in the final distribution rises and the process of convergence slows down. The effect of placing a moderate media outlet interacting with the population is to attract a portion of the population towards the mean opinion, i.e., 0.5. However, this portion becomes smaller and smaller as the bias grows, because a smaller portion of agents will likely interact with those agents who are influenced by the media. Also, while with a low bias two other polarised clusters form, with a growing bias two clusters still form, but instead of all agents holding a single opinion value, these clusters are more spread out and eventually split into multiple opinion clusters.
From Supplementary Figure 9 we can see that the presence of a moderate media outlet counteracts the slowing-down effect of the algorithmic bias and reduces the levels of fragmentation with respect to the baseline model with no media. A central cluster always forms, and normally two small polarised clusters appear at the extremes of the opinion spectrum. For this reason, fragmentation at low levels of bias, where the baseline model would reach consensus or polarisation, is now higher, because two polarised clusters and a moderate one form. Moreover, the polarised clusters are pushed to the extremes more than in the baseline model. As the bias grows, in the baseline model convergence slows down and fragmentation arises, while with media interaction the population still splits into roughly three clusters, which become more internally spread as the bias grows.
From Supplementary Figure 10 we can see that the behaviour is very similar to the one described for Supplementary Figure 9, but fragmentation is overall reduced due to the higher confidence bound.
Finally, when ε = 0.5 (Supplementary Figure 11), media interaction makes convergence to consensus faster and reduces the fragmentation that would arise in the baseline model.

Extremist media landscape with algorithmic bias
In the following section, we present additional figures and metrics for the second setting, in which we simulated the presence of one extremist media outlet promoting x_m1 = 0.0, i.e., a situation of extremist propaganda. As we can see from Supplementary Figure 12, the average normalised entropy of the final opinion distribution decreases as ε increases, but there is no ε in the considered range for which consensus is always perfect. We can also see that, in this setting, the main driver of fragmentation (i.e., of the increase in average entropy) is the algorithmic bias, especially for ε ≤ 0.3: for ε = 0.2 the average normalised entropy is always around 0.5, with no significant increase/decrease as p_m and γ vary; for ε = 0.3 the entropy values are lower (around 0.2/0.3), but there is still no significant dependence of the entropy on p_m and γ. For higher ε, instead, the entropy level increases gradually as both γ and p_m increase. For ε = 0.5 consensus is always reached (i.e., entropy = 0.0) except for very high levels of p_m and γ.
As we can see from the average number of clusters (Supplementary Figure 13), if we disregard the Deffuant case (i.e., γ = 0.0), the average number of clusters is in general higher when there are no agent-to-media interactions. As in the baseline model, the number of clusters decreases as the bounded confidence ε increases, while it is directly proportional to the algorithmic bias γ. For ε ≤ 0.4 there is always a level of bias for which fragmentation arises in the population, while for ε ≥ 0.5 fragmentation only arises when there are no media interactions; the presence of the mass media either favours consensus (for low bias) or polarisation (for higher bias).
From the average pairwise distance (Supplementary Figure 14) and the average standard deviation (Supplementary Figure 15) we obtain the same information: both measures grow with p_m and γ, i.e., in this case the higher the number of clusters (or the average normalised entropy), the higher the average pairwise distance (or the average standard deviation).
The average number of iterations at convergence, instead, increases with the algorithmic bias (as in the baseline model). However, we can see from Supplementary Figure 16 that the dynamics are slower for low values of p_m (e.g., p_m = 0.1) and that, in this case, the higher ε, the slower the convergence.
While the number of clusters increases with p_m and γ, the same does not happen with the average size of the extremist cluster. The size of this cluster, as we can see from Supplementary Figure 17, depends mainly on ε: for ε = 0.2, 20-30% of agents are extremist; this fraction increases to 30-40% when ε = 0.3, while for ε = 0.4 we can have from 10 to 80% of extremist agents in the final state. Lastly, for ε = 0.5 we have around 80-100% of extremist agents. Additionally, there are cases where the size of the extremist cluster decreases with γ, while in others it grows with γ. The latter behaviour, in particular, can be seen for low p_m (i.e., p_m = 0.1 or p_m = 0.2) and ε = 0.3 or ε = 0.4.

Polarised media landscape with algorithmic bias
In the following section, we present additional figures and metrics for the third setting, in which we simulated the presence of two extremist media outlets promoting opinions at opposite sides of the opinion spectrum, i.e., we set x_m1 = 0.05 and x_m2 = 0.95, to simulate a polarised media landscape. As expected, the presence of two polarised media outlets in a bounded confidence model increases the level of polarisation in the system, which would already arise naturally from the effects of the cognitive biases: a population that would already polarise in the baseline model (ε ≤ 0.3) is pushed towards the media opinions, which are more extreme than the cluster positions that would form in the baseline model (normally around 0.2 and 0.8 and not further), as shown by the fact that the average pairwise distance is higher when p_m > 0. When the population is close-minded (ε = 0.2, see Supplementary Figures 22(a) and 23(a)), in the final state it splits into more than two opinion clusters. As we can see from the heatmaps, in the baseline model with no media and no algorithmic bias there is an average normalised entropy of 0.2 and an average number of clusters of 2, meaning that the population mainly separates into two clusters of similar sizes, creating a polarised final population (as confirmed by the fact that the average standard deviation of the final opinion distribution is 0.24 (see Supplementary Figure 25) and the average pairwise distance is 0.12 (see Supplementary Figure 24)). When γ = 0.0, the average normalised entropy increases with p_m from 0.21 to 0.26, which corresponds to an increase in the average number of clusters from 2.31 to 2.79 (meaning that on average there are three clusters of different sizes, with one bigger than the others).
The average pairwise distance and the average standard deviation do not increase significantly with p_m, because when the population splits from two into three clusters, the average distance between the two extremist clusters and the moderate one is less than the distance between two polarised clusters (so the average pairwise distance ranges from 0.18 to 0.19 and the average standard deviation between 0.35 and 0.36). Given a certain p_m, the fragmenting power of the algorithmic bias rapidly brings the population to split into three clusters, which become four in some cases; however, there is no clear pattern by which the average number of clusters increases as a function of p_m or γ when γ > 0: the number of clusters increases with γ when p_m ≤ 0.3 and then somewhat decreases with γ. Looking at the average normalised entropy for γ > 0, we can see that it increases with γ for p_m = 0.1 (from 0.32 to 0.42), but for any other value of p_m and γ we have similar entropy values, ranging from 0.35 to 0.39 and indicating similar levels of fragmentation. As for the average pairwise distance and the average standard deviation: the average pairwise distance decreases with p_m, and it increases with γ for p_m = 0.2 and then decreases with γ; the same happens with the standard deviation, which in this case decreases both with p_m and γ. Still considering a close-minded population (ε = 0.2), we can see from Supplementary Figure 26 that convergence slows down for p_m = 0.1 with respect to the baseline case, and then speeds up both as the probability of media interaction increases and as the bias increases. We can also see from Supplementary Figure 27 that, with respect to the baseline case with no media, there is always a positive fraction of the population holding an opinion in the cluster of one of the extremist media outlets.
As the filtering power of the recommender system increases, i.e., as the probability of interacting with like-minded individuals grows, this moderate cluster splits into multiple small opinion clusters, still around the centre of the opinion spectrum. Moreover, as the algorithmic bias grows, the two extremist clusters become smaller and smaller and more agents become moderate/neutral. This is because fewer agents interact with the extremist media and/or with the agents that ended up in the extremist clusters in the early stages of the process, so these clusters cannot attract as large a portion of the population as when the filtering power of the recommender system is weaker. As the open-mindedness of the population grows, a stronger and stronger algorithmic bias is needed to maintain the moderate cluster; in most cases the population is polarised and the two sub-populations hold the media opinions, until the level of open-mindedness is high enough for the population to reach consensus, which, however, now forms around the opinion of one of the two media outlets: the population is completely radicalised around very extreme positions (0.05 or 0.95), as happened in the case of one extremist media outlet. Moreover, the recommender system in this case makes the process of polarisation faster than in the baseline model, and there is less coexistence of multiple opinion clusters during the process (Supplementary Figures 29-32).

Balanced media landscape with algorithmic bias
In the following section, we present additional figures and metrics for the fourth setting, in which we simulated the presence of three media outlets, setting x_m1 = 0.05, x_m2 = 0.95 and x_m3 = 0.5, to simulate a more balanced media landscape.
We can see from Supplementary Figure 33 that, without media interactions, the average normalised entropy grows with γ and decreases with ε. More in detail, for ε = 0.2 even low values of γ (e.g., γ = 0.3) are enough to go from polarisation to fragmentation, with a strong increase in entropy, which then remains more or less stable as the bias grows. For higher ε, instead, the increase in entropy is more gradual as the bias grows. When considering media interactions (p_m ≥ 0.1), the increase in entropy is more gradual as the bias and p_m grow in close-minded populations (ε ≤ 0.3), while for open-minded populations (ε ≥ 0.4) the dynamics are different. In particular, the average entropy still increases as the probability of media interaction increases; however, for ε = 0.4 the average normalised entropy decreases as the bias grows, while for ε = 0.5 it increases up to a certain value of γ and then decreases until reaching 0.0, i.e., perfect consensus, for γ ≥ 1.5.
The average number of clusters (Supplementary Figure 34) provides the same insights we obtained from the average normalised entropy. In particular, without media interactions, the number of clusters increases with γ and decreases with ε; from heatmaps (c) and (d) we can see that for ε ≥ 0.4 we always obtain consensus, regardless of the strength of the bias. In any case, the number of clusters grows with p_m. However, for ε ≤ 0.3 we generally obtain at most three clusters in the final state, while for ε ≥ 0.4 we obtain a high number of clusters (although from this metric alone we cannot tell whether these are proper opinion clusters or whether a few "clusters" simply span a wider opinion range). Moreover, for ε ≥ 0.4, as the bias grows, the number of clusters decreases (the higher the confidence bound, the higher the bias needed to bring the population back to a situation of polarisation or consensus).
From the average pairwise distance (see Supplementary Figure 35) we can see that it increases with p_m and γ when ε ≤ 0.3, while otherwise it follows the same pattern as the number of clusters (or the average entropy). However, in some cases, although the number of clusters is lower, the average pairwise distance is higher: this is because fewer clusters form, but they are farther apart from each other in the opinion space. We can also notice that, when we have a high number of clusters, the average pairwise distance is relatively low, since opinions are not well separated into clusters and, on average, agents are closer than they would be if they were polarised.
As we can see from the standard deviation of the final opinion distribution (Supplementary Figure 36), for low open-mindedness it increases both with γ and p_m, while for ε = 0.4 it increases with γ, and for ε = 0.5 it increases with γ until γ = 0.75 and then decreases with γ.
From Supplementary Figure 37 we can see that the speed of the dynamics does not change much for close-minded populations; there is just a slight slowdown for p_m = 0.1 and p_m = 0.2 when ε = 0.3. For open-minded populations, instead, the dynamics are not so much slow as unstable, in the sense that, even with a strong bias, agents keep changing their opinions and never reach equilibrium. Indeed, the cases with fragmentation or the highest number of clusters in the final state show the slowest convergence, and in most cases the simulations stop because they reach the maximum number of iterations set in our experiments, not because the population reaches equilibrium.
We can see from the heatmaps (Supplementary Figure 38) that the average opinion is almost always around the mean of the opinion spectrum. There is just one case in which the population seems to concentrate around a lower opinion range (0.3).
We can see from the heatmaps (Supplementary Figure 39) that for ε = 0.2 the percentage of agents in the moderate cluster is always around 40% when the population can interact with the media. The same cluster is bigger when ε = 0.3, but its size decreases with p_m and γ, because both parameters tend to increase the probability that an agent interacts more often with one media outlet, so that some agents cluster around different opinions. When ε = 0.4 this cluster becomes smaller with p_m, but it instead grows with γ until γ = 1.0 and then slightly decreases. Finally, when ε = 0.5 the size of this cluster decreases with p_m and grows with γ, until the population always reaches consensus around the moderate opinion for γ = 1.5.
In Supplementary Figure 40 we can see that for ε = 0.2 the percentage of agents in the 0.05 cluster is always around 30%; this percentage grows with p_m and γ when ε = 0.3 and ε = 0.4, although in the latter case it remains close to 0.0, while for ε = 0.5 it is always 0.0.

Case Study on Euro2020 echo chamber evolution on Twitter
In the following section we provide details on data collection, labelling and network construction for the dataset employed in the case study, which were omitted from the Materials and Methods section of the manuscript due to space limitations.

Data collection
The dataset covers a period of around one month, starting from June 10th and concluding on July 13th. We filtered the conversations included in the dataset using specific hashtags related to Italy's matches, the competition, and the topic of taking the knee. In total, we collected 38,908 tweets from 16,235 unique users. To analyze the opinions of Twitter users regarding taking the knee during EURO 2020, we employed a hashtag-based approach. We manually annotated 2,304 hashtags from the dataset and assigned a numerical value to each hashtag. Hashtags expressing a clear position for or against taking the knee were assigned a value of ±3, hashtags closely associated with either faction were given a value of ±1, while neutral or irrelevant hashtags were assigned a value of 0. For each tweet, we calculated its classification value (C_t) by averaging the non-neutral hashtag classification values (C_h) within it. Similarly, for each user u, we determined their overall classification value (C_u) by averaging the classification values of their tweets. To utilize this dataset as a case study for our opinion dynamics model, we transformed the initial pro/against score, ranging from −3 to 3, into a normalized range of [0, 1]. We also discretized these leanings into three intervals: "Pro" if C_u ≤ 0.4, "Against" if C_u ≥ 0.6, and "Neutral" otherwise. By incorporating a third label, we ensure that the "Pro" and "Against" groups only include users with markedly polarized viewpoints.
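The labelling pipeline above can be sketched in a few lines. The hashtag scores below are invented placeholders (the real annotation covers 2,304 hashtags), and the sign convention (negative values = in favour of taking the knee, so that "Pro" users fall below 0.4 after normalization) is an assumption made only for illustration.

```python
# Hypothetical hashtag annotations: ±3 clear stance, ±1 leaning, 0 neutral.
# Sign convention assumed: negative values = in favour of taking the knee.
scores = {"#takeaknee": -3, "#noknee": 3, "#kneeling": -1, "#euro2020": 0}

def tweet_score(hashtags):
    """C_t: average of the non-neutral hashtag scores in a tweet (None if none)."""
    vals = [scores[h] for h in hashtags if scores.get(h, 0) != 0]
    return sum(vals) / len(vals) if vals else None

def user_opinion(tweets):
    """C_u rescaled to [0, 1]: average of the user's tweet scores, then (x + 3) / 6."""
    ts = [s for s in (tweet_score(t) for t in tweets) if s is not None]
    return (sum(ts) / len(ts) + 3) / 6 if ts else None

def leaning(c_u):
    """Discretized label, using the thresholds stated in the text."""
    return "Pro" if c_u <= 0.4 else "Against" if c_u >= 0.6 else "Neutral"

print(leaning(user_opinion([["#takeaknee", "#euro2020"]])))  # -> Pro
```

Tweets containing only neutral hashtags contribute nothing to a user's score, mirroring the "non-neutral averaging" described above.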

Network Construction
We built an undirected graph, where nodes are the users and the edges represent their interactions (i.e., retweets, mentions, quotes and replies). The network is composed of N = 15378 nodes and L = 36496 edges. We divided the network into two snapshots, the first corresponding to the group stage and round of 16, and the second corresponding to the period from the quarterfinals to the final, in order to obtain initial and final situations for validating our model. For the purpose of this study, we decided not to consider the temporal evolution of links, as our model does not account for the creation and dissolution of links during the process of opinion exchange. Consequently, we retained only the nodes that are present in both snapshots. The links were flattened, as we actually have a single graph G, and the only temporal element considered is the changing opinion; we thus created two undirected snapshot networks, G_0, with nodes labelled according to their leaning in the first period, and G_1, with nodes labelled according to their leaning in the second period. This approach ensures a baseline sufficiently similar to our model, which considers a static network. The two snapshot graphs consist of N = 2925 users (approximately 20% of the total) and L = 9081 edges. The giant connected component contains 2894 nodes and 9054 edges. The degree distribution of the graph can be effectively approximated by a power law: the estimated exponent α = 2.47 indicates a heavy-tailed degree distribution, and the power-law behavior is observed for degrees equal to or greater than the estimated minimum degree of 5.
The p-value from both the log-likelihood ratio test and the Kolmogorov-Smirnov test is 0.03; although this indicates some deviation from a perfect power law, the power-law model remains a reasonable approximation of the observed degree distribution (see Supplementary Figure 46). The analysis of the opinion distribution across the population, presented in Supplementary Figure 47, reveals noteworthy dynamics between the first (G_0) and second (G_1) period. Initially, there is a relatively balanced distribution between "pro" and "against" users, but in the second period the number of "against" users decreases, while the number of "neutral" and "pro" users increases, as we can see from Supplementary Table 1.
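As a sanity check on the reported exponent, the maximum-likelihood estimator of Clauset, Shalizi and Newman for a discrete power law can be implemented in a few lines. The paper's fit was likely obtained with a dedicated package; this sketch, including the synthetic degree sample, is only illustrative.

```python
import numpy as np

def powerlaw_alpha(degrees, k_min):
    """MLE for the exponent of a discrete power law P(k) ~ k^-alpha,
    using the (k_min - 1/2) approximation, restricted to degrees >= k_min."""
    k = np.asarray([d for d in degrees if d >= k_min], dtype=float)
    return 1.0 + len(k) / np.log(k / (k_min - 0.5)).sum()

# Synthetic degrees drawn from an approximate discrete power law with
# alpha = 2.47 and k_min = 5, the values reported for the Euro2020 network.
rng = np.random.default_rng(0)
u = rng.random(50_000)
degrees = np.floor((5 - 0.5) * (1.0 - u) ** (-1.0 / (2.47 - 1.0)) + 0.5)
print(powerlaw_alpha(degrees, 5))  # close to 2.47
```

The same estimator applied to the empirical degree sequence (with k_min = 5) should reproduce the reported α ≈ 2.47 up to sampling noise.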
We also computed the average opinion for each leaning in G_0:
• avg(pro) = 0.28
• avg(neutral) = 0.49
• avg(against) = 0.87
which we then used to assign an opinion to the mass media in the simulations of the present model on this empirical network. To gain further insights, we examined the conformity distributions. Conformity, defined in [1], is a local measure quantifying the prevalence of homophilic connections among network nodes based on shared attributes. A conformity value of 1.0 indicates that a node is predominantly surrounded by neighbors who hold the same attribute value, while -1.0 suggests the opposite scenario. The analysis of the conformity distributions during the first period, represented by the red histograms in Supplementary Figure 48, reveals a negatively skewed pattern among users against taking the knee. This indicates a prevalence of homophilic connections within this group, where individuals tend to be connected to others with similar viewpoints. Furthermore, as the parameter α increases (α governs how quickly the influence of other nodes diminishes with their distance), homophily becomes more pronounced among users against taking the knee: the impact of nearby nodes outweighs that of distant nodes in determining conformity patterns within this group. Neutral users (0.4 < C_u < 0.6), in contrast, exhibit a disassortative behavior, primarily connecting with users who hold well-defined positions on the matter rather than with other neutral users. Users who support taking the knee display a varied range of conformity levels, with a tendency towards homophily as α increases. However, this cluster of users interacts with both like-minded and opposing profiles without a clear preferential behavior.
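In its simplest, neighbors-only limit (distance 1, where the α-discounting plays no role), the conformity of a node can be sketched as below; the full measure defined in [1] also accounts for nodes at greater distances, discounted by a factor that depends on the distance and on α.

```python
def neighbor_conformity(adj, labels, u):
    """Distance-1 sketch of conformity: +1 if every neighbor of u shares
    its label, -1 if none does, intermediate values otherwise."""
    agree = [1 if labels[v] == labels[u] else -1 for v in adj[u]]
    return sum(agree) / len(agree)

# Tiny illustrative graph: node 0 has one like-minded and one opposing neighbor.
adj = {0: [1, 2], 1: [0], 2: [0]}
labels = {0: "pro", 1: "pro", 2: "against"}
print(neighbor_conformity(adj, labels, 0))  # -> 0.0 (mixed neighborhood)
print(neighbor_conformity(adj, labels, 1))  # -> 1.0 (fully homophilic)
```

A population of users whose values pile up near +1 is homophilic (as observed for the "against" group), while values near -1 indicate the disassortative pattern seen for neutral users.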
A similar analysis conducted on the second period, depicted in Supplementary Figure 49, reveals that neutral users still exhibit a disassortative behavior regarding their stance on taking the knee. Users against taking the knee, on the other hand, demonstrate a randomly mixed connectivity pattern, although it becomes more homophilic for higher values of α; the distribution is nevertheless less skewed compared to the first snapshot. Users who support taking the knee also display an assortative behavior for lower values of α, and their conformity distribution becomes negatively skewed, similar to the previous period. In both cases, these distributions exhibit long tails when α ≥ 2.0, suggesting that, on average, user behavior tends to be homophilic; however, a portion of both subpopulations shows a randomly mixed connectivity pattern, and another portion displays a disassortative behavior. While the opinion distribution appears more balanced in the first period than in the second, an assessment of the population's level of polarization, depicted in Supplementary Figure 50, reveals that the network is less polarized in the second period. This finding contrasts with the analysis in [2], where only the subset of users actively participating in the conversation over the entire duration of the study demonstrates reduced polarization over time, in contrast to the overall population. Supplementary Figures 51-56 show the final opinion distribution of the Algorithmic Bias Model with Mass Media for homogeneous ε. As we can see from Supplementary Figures 51-54, when there are no media in the population or when there is a single media outlet, we have consensus in the final state. An exception is the case with no media and γ = 1.5, where the final opinion distribution is fragmented, with two main clusters around the average leanings of the "pro" and "against" factions.
When there are two polarized media outlets (see Supplementary Figure 55), …

Algorithm 1: Confidence bound estimation algorithm. Notation: G_t = weighted undirected interaction network at time t; V_t = set of nodes at time t; E_t = set of weighted edges at time t; x_u(t) = opinion of agent u at time t; d_u,v = |x_u(t) − x_v(t)| = opinion distance between u, v ∈ V at time t; CB = estimated confidence bounds.

Algorithmic Bias Model with Mass Media on Euro2020 Network
In Supplementary Figure 58(a), with γ = 0.0, the opinion evolution process exacerbates the existing polarization from the initial condition G_0. However, for γ ≥ 0.5, the neighbors of users aligned with the opposing faction exhibit a more diverse range of opinions, thus reducing the polarization within this subpopulation and in the overall network. This effect intensifies as γ increases. Similar outcomes are observed when users can interact with a single mass media source promoting an opinion in favor of "taking the knee" (x_m = 0.28, the average opinion of the pro faction). Notably, the presence of a "stubborn agent" with opinion 0.28 facilitates the dissolution of the echo chamber among users opposing "taking the knee", resulting in outcomes similar to the real situation for γ ≥ 1.0. The correlation between nodes' opinions and the average opinions of their nearest neighbors is 0.87 in the real setting, 0.93 in Supplementary Figure 58(d), and 0.90 in Supplementary Figure 59(d). These values are quite similar, indicating that the algorithmic bias primarily drives this depolarization, although the process is further aided by promoting an opinion aligned with the pro faction. In the case of a mass media source promoting the opinion of the opposing faction (x_m = 0.87), the depolarization of the pro echo chamber is less pronounced, and two distinct echo chambers persist in the network (see Supplementary Figure 60). The presence of a neutral media outlet, as observed in Supplementary Figure 61, leads to the most drastic change in the final distribution of opinions across the network. In the absence of bias (Supplementary Figure 61(a)), the correlation between nodes' opinions and their neighbors' is lower than in the previous scenarios, and users form a single community primarily aligned around moderate, central positions.
However, as the bias γ increases, homophilic behavior emerges in the network, although most users remain in the moderate community. As depicted in Supplementary Figure 62, polarization is higher than in the starting condition in all cases, yet the "against" echo chamber becomes less prominent as the bias grows. Similar outcomes are observed in the case of three media sources (see Supplementary Figure 63), where the presence of a neutral media outlet alone is insufficient to disrupt the two echo chambers.