Main

Increased partisan polarization and hostility are often blamed on online echo chambers on social media3,4,5,6,7, a concern that has grown since the 2016 US presidential election8,9,10. Platforms such as Facebook are thought to fuel extremity by repeatedly showing people congenial content from like-minded sources and limiting exposure to counterarguments that could promote moderation and tolerance11,12,13. Similarly, identity-reinforcing communication on social media could strengthen negative attitudes toward outgroups and bolster attachments to ingroups14.

To assess how often people are exposed to congenial content on social media, we use data from all active adult Facebook users in the USA to analyse how much of what they see on the platform is from sources that we categorize as sharing their political leanings (which we refer to as content from like-minded sources; see Methods, ‘Experimental design’). With a subset of consenting participants, we then evaluate a potential response to concerns about the effects of echo chambers by conducting a large-scale field experiment reducing exposure to content from like-minded sources on Facebook. This research addresses three major gaps in our understanding of the prevalence and effects of exposure to congenial content on social media.

First, we lack systematic measures of content exposure on platforms such as Facebook, which are largely inaccessible to researchers2. Web traffic data suggest that relatively few Americans have heavily skewed information diets15,16,17,18, but less is known about what they see on social media. Prior observational studies of information exposure on platforms focus on Twitter, which is used by only 23% of the public19,20,21,22, or on the news diets of the small minority of active adult users in the US who self-identified as conservative or liberal on Facebook in 2014–201523. Without access to behavioural measures of exposure, studies must instead rely on survey self-reports that are prone to measurement error24,25.

Second, although surveys find associations between holding polarized attitudes and reported consumption of like-minded news26,27, few studies provide causal evidence that consuming like-minded content leads to lasting polarization. These observed correlations may be spurious given that people with extreme political views are more likely to consume like-minded content28,29. In addition, although like-minded information can polarize30,31,32, most experimental tests of theories about potential echo chamber effects are brief and use simulated content, making it difficult to know whether these findings generalize to real-world environments. Previous experimental work also raises questions about whether such polarizing effects are common18,33, how quickly they might decay18,33, and whether they are concentrated among people who avoid news and political content28.

Finally, reducing exposure to like-minded content may not lead to a corresponding increase in exposure to content from sources with different political leanings (which we refer to as cross-cutting) and could also have unintended consequences. Social media feeds are typically limited to content from accounts that users already follow, which include few that are cross-cutting and many that are non-political22. As a result, reducing exposure to like-minded sources may increase the prevalence of content from sources that are politically neutral rather than uncongenial. Furthermore, if content from like-minded sources is systematically different (such as in its tone or topic), reducing exposure to such content may also have other effects on the composition of social media feeds. Reducing exposure to like-minded content could also induce people to seek out such information elsewhere online (that is, not on Facebook34).

In this study, we measure the prevalence of exposure to content from politically like-minded sources among active adult Facebook users in the US. We then report the results of an experiment estimating the effects of reducing exposure to content from politically like-minded friends, Pages and groups among consenting Facebook users (n = 23,377) for three months (24 September to 23 December 2020). By combining on-platform behavioural data from Facebook with survey measures of attitudes collected before and after the 2020 US presidential election, we can determine how reducing exposure to content from like-minded sources changes the information people see and engage with on the platform, as well as test the effects over time of reducing exposure to these sources on users’ beliefs and attitudes.

This project is part of the US 2020 Facebook and Instagram Election Study. Although both Meta researchers and academics were part of the research team, the lead academic authors had final say on the analysis plan, collaborated with Meta researchers on the code implementing the analysis plan, and had control rights over data analysis decisions and the manuscript text. Under the terms of the collaboration, Meta could not block any results from being published. The academics were not financially compensated and the analysis plan was preregistered prior to data availability (https://osf.io/3sjy2); further details are provided in Supplementary Information, section 4.8.

We report several key results. First, the majority of the content that active adult Facebook users in the US see comes from like-minded friends, Pages and groups, although only small fractions of this content are categorized as news or are explicitly about politics. Second, we find that an experimental intervention reducing exposure to content from like-minded sources by about a third reduces total engagement with that content and decreases exposure to content classified as uncivil and content from sources that repeatedly post misinformation. However, the intervention only modestly increases exposure to content from cross-cutting sources. We instead observe a greater increase in exposure to content from sources that are neither like-minded nor cross-cutting. Moreover, although total engagement with content from like-minded sources decreased, the rate of engagement with it increased (that is, the probability of engaging with the content from like-minded sources that participants did see was higher).

Furthermore, despite reducing exposure to content from like-minded sources by approximately one-third over the three-month treatment period, we find no measurable effects on 8 preregistered attitudinal measures, such as ideological extremity and consistency, party-congenial attitudes and evaluations, and affective polarization. We can confidently rule out effects of ±0.12 s.d. or more on each of these outcomes. These precisely estimated effects do not vary significantly by respondents’ political ideology (direction or extremity), political sophistication, digital literacy or pre-treatment exposure to content that is political or from like-minded sources.

Exposure to like-minded sources

Our analysis of platform exposure and behaviour considers the population of US adult Facebook users (aged 18 years and over). We focus primarily on those who use the platform at least once per month, whom we call monthly active users. Aggregated usage levels are measured for the subset of US adults who accessed Facebook at least once in the 30 days preceding 17 August 2020 (see Supplementary Information, section 4.9.4 for details). During the third and fourth quarters of 2020, which encompass this interval as well as the study period for the experiment reported below, 231 million users accessed Facebook every month in the USA.

We used an internal Facebook classifier to estimate the political leaning of US adult Facebook users (see Supplementary Information, section 2.1 for validation and section 1.3 for classifier details; Extended Data Fig. 1 shows the distribution of predicted ideology score by self-reported ideology, party identification and approval of former president Donald Trump). The classifier produces predictions at the user level ranging from 0 (left-leaning) to 1 (right-leaning). Users with predicted values greater than 0.5 were classified as conservative and otherwise classified as liberal, enabling us to analyse the full population of US active adult Facebook users. A Page’s score is the mean score of the users who follow the Page and/or share its content; a group’s score is the mean score of group members and/or users who share its content. We classified friends, Pages or groups as liberal if their predicted value was 0.4 or below and conservative if it was 0.6 or above. This approach allows us to identify sources that are clearly like-minded or cross-cutting with respect to users (friends, Pages and groups with values between 0.4 and 0.6 were treated as neither like-minded nor cross-cutting).
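
To make these classification rules concrete, the following is a minimal Python sketch of the thresholding logic described above, applied to hypothetical score values; the actual classifier, its inputs and the data pipeline are internal to Facebook.

```python
# Sketch of the source-classification rules described above, applied to
# hypothetical score values; the actual classifier is internal to Facebook.

def classify_user(score: float) -> str:
    """Users are classified as conservative if score > 0.5, otherwise liberal."""
    return "conservative" if score > 0.5 else "liberal"

def source_score(audience_scores: list[float]) -> float:
    """A Page's or group's score is the mean score of its audience
    (followers or members and/or users who share its content)."""
    return sum(audience_scores) / len(audience_scores)

def classify_source(score: float) -> str | None:
    """Friends, Pages and groups are liberal at <= 0.4, conservative at >= 0.6,
    and neither otherwise."""
    if score <= 0.4:
        return "liberal"
    if score >= 0.6:
        return "conservative"
    return None  # between 0.4 and 0.6: neither clearly liberal nor conservative

def relation_to_user(user_score: float, src_score: float) -> str:
    """Label a source as like-minded, cross-cutting or neither, relative to a user."""
    source_leaning = classify_source(src_score)
    if source_leaning is None:
        return "neither"
    return "like-minded" if source_leaning == classify_user(user_score) else "cross-cutting"

# Example: a user scored 0.8 (conservative) viewing a Page whose audience
# averages 0.35 (liberal) is seeing content from a cross-cutting source.
print(relation_to_user(0.8, source_score([0.3, 0.4, 0.35])))  # cross-cutting
```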

We begin by assessing the extent to which US Facebook users are exposed to content from politically like-minded users, Pages and groups in their Feed during the period 26 June to 23 September 2020 (see Supplementary Information, section 4.2, for measurement details). We present estimates of these quantities among US adults who logged onto Facebook at least once in the 30 days preceding 17 August 2020.

We find that the median Facebook user received a majority of their content from like-minded sources—50.4% versus 14.7% from cross-cutting sources (the remainder are from friends, Pages and groups that we classify as neither like-minded nor cross-cutting). Like-minded exposure was similar for content classified as ‘civic’ (that is, political) or news (see Supplementary Information, section 4.3 for details on the classifiers used in this study). The median user received 55% of their exposures to civic content and 47% of their exposures to news content from like-minded sources (see Extended Data Table 1 for exact numbers and Supplementary Fig. 3 for a comparison with our experimental participants). Civic and news content make up a relatively small share of what people see on Facebook, however (medians of 6.9% and 6.7%, respectively; Supplementary Table 11).

However, patterns of exposure can vary substantially between users. Figure 1 provides the distribution of exposure to sources that were like-minded, cross-cutting or neither for all content, civic content and news content for Facebook users.

Fig. 1: The distribution of exposure to content among Facebook users.

a, The distribution of the exposure of monthly active adult Facebook users in the USA to content from like-minded sources, cross-cutting sources, and those that fall into neither category in their Facebook Feed. Estimates are presented for all content, content classified as civic (that is, political) and news. b, Cumulative distribution functions of exposure levels by source type. Source and content classifications were created using internal Facebook classifiers (Supplementary Information, section 1.3).

Despite the prevalence of like-minded sources in what people see on Facebook, extreme echo chamber patterns of exposure are infrequent. Just 20.6% of Facebook users get over 75% of their exposures from like-minded sources. Another 30.6% get 50–75% of their exposures on Facebook from like-minded sources. Finally, 25.6% get 25–50% of their exposures from like-minded sources and 23.1% get 0–25% of their exposures from like-minded sources. These proportions are similar for the subsets of civic and news content (Extended Data Table 1). For instance, like-minded sources are responsible for more than 75% of exposures to these types of content for 29% and 20.6% of users, respectively.

However, exposure to content from cross-cutting sources is also relatively rare among Facebook users. Only 32.2% have a quarter or more of their Facebook Feed exposures coming from cross-cutting sources (31.7% and 26.9%, respectively, for civic and news content).

These patterns of exposure are similar for the most active Facebook users, a group that might be expected to consume content from congenial sources more frequently than other groups. Among US adults who used Facebook at least once each day in the 30 days preceding 17 August 2020, 53% of viewed content was from like-minded sources versus 14% for cross-cutting sources, but only 21.1% received more than 75% of their exposures from like-minded sources (see Extended Data Fig. 2 and Extended Data Table 2).

These results are not consistent with the worst fears about echo chambers. Even among those who are most active on the platform, only a minority of Facebook users are exposed to very high levels of content from like-minded sources. However, the data clearly indicate that Facebook users are much more likely to see content from like-minded sources than they are to see content from cross-cutting sources.

Experiment reducing like-minded source exposure

To examine the effects of reducing exposure to information from like-minded sources, we conducted a field experiment among consenting US adult Facebook users. This study combines data on participant behaviour on Facebook with their responses to a multi-wave survey, a design that allows us to estimate the effects of the treatment on the information that participants saw, their on-platform behaviour and their political attitudes (Methods).

Participants in the treatment and control groups were invited to complete five surveys before and after the 2020 presidential election assessing their political attitudes and behaviours. Two surveys were fielded pre-treatment: wave 1 (31 August to 12 September) and wave 2 (8 September to 23 September). The treatment ran from 24 September to 23 December. During the treatment period, three more surveys were administered: wave 3 (9 October to 23 October), wave 4 (4 November to 18 November) and wave 5 (9 December to 23 December). All covariates were measured in waves 1 and 2 and all survey outcomes were measured after the election while treatment was still ongoing (that is, in waves 4 and/or 5). Throughout the experiment, we also collected data on participant content exposure and engagement on Facebook.

In total, the sample for this study consists of 23,377 US-based adult Facebook users who were recruited via survey invitations placed at the top of their Facebook Feeds in August and September 2020, provided informed consent to participate and completed at least one post-election survey wave (see Supplementary Information, sections 4.5 and 4.9).

For participants assigned to treatment, we downranked all content (including, but not limited to, civic and news content) from friends, groups and Pages that were predicted to share the participant’s political leaning (for example, all content from conservative friends and groups and Pages with conservative audiences was downranked for participants classified as conservative; see Supplementary Information, section 1.1).

We note three important features of the design of the intervention. First, the sole objective of the intervention was to reduce exposure to content from like-minded sources. It was not designed to directly alter any other aspect of the participants’ feeds. Content from like-minded sources was downranked using the largest possible demotion strength that a pre-test demonstrated would reduce exposure without making the Feed nearly empty for some users, which would have interfered with usability and thus confounded our results; see Supplementary Information, section 1.1. Second, our treatment limited exposure to all content from like-minded sources, not just news and political information. Because social media platforms blur social and political identities, even content that is not explicitly about politics can still communicate relevant cues14,35. Also, because politics and news account for a small fraction of people’s online information diets18,36,37, restricting the intervention to political and/or news content would yield minimal changes to some people’s Feeds. Third, given the associations between polarized attitudes and exposure to politically congenial content that have been found in prior research, we deliberately designed an intervention that reduces rather than increases exposure to content from like-minded sources to minimize ethical concerns.
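
The demotion itself was applied inside Facebook’s production Feed-ranking system and its exact mechanics are not public; the following is a rough, hypothetical sketch of the idea of downranking (demoting rather than removing) posts from like-minded sources, using an invented demotion factor.

```python
# Hypothetical sketch of downranking posts from like-minded sources. The real
# treatment ran inside Facebook's Feed ranking system with a demotion strength
# chosen via a pre-test (Supplementary Information, section 1.1); the factor
# below is an invented placeholder, not the value actually used.

DEMOTION_FACTOR = 0.1  # invented for illustration

def rerank_feed(candidate_posts, treated):
    """Demote, but do not remove, posts from like-minded sources for treated users.

    candidate_posts: list of dicts with a ranking 'score' and a
    'source_relation' of 'like-minded', 'cross-cutting' or 'neither'.
    """
    adjusted = []
    for post in candidate_posts:
        score = post["score"]
        if treated and post["source_relation"] == "like-minded":
            score *= DEMOTION_FACTOR
        adjusted.append({**post, "adjusted_score": score})
    # Posts with higher adjusted scores appear earlier in the Feed.
    return sorted(adjusted, key=lambda p: p["adjusted_score"], reverse=True)

feed = rerank_feed(
    [{"id": 1, "score": 0.9, "source_relation": "like-minded"},
     {"id": 2, "score": 0.5, "source_relation": "neither"},
     {"id": 3, "score": 0.4, "source_relation": "cross-cutting"}],
    treated=True,
)
print([p["id"] for p in feed])  # [2, 3, 1]: the like-minded post drops to the bottom
```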

Treatment effects on content exposure

The observed effects of the treatment on exposure to content from like-minded sources among participants are plotted in Fig. 2. As intended, the treatment substantially reduced exposure to content from like-minded sources relative to the pre-treatment period. During the treatment period of 24 September to 23 December 2020, average exposure to content from like-minded sources declined to 36.2% in the treatment group while remaining stable at 53.7% in the control group (P < 0.01). Exposure levels were relatively stable during the treatment period in both groups, except for a brief increase in treatment group exposure to content from like-minded sources on 2 November and 3 November, owing to a technical problem in the production servers that implemented the treatment (see Supplementary Information, section 4.11 for details).

Fig. 2: Day-level exposure to content from like-minded sources in the Facebook Feed by experimental group.

Mean day-level share of respondent views of content from like-minded sources by experimental group between 1 July and 23 December 2020. Sources are classified as like-minded on the basis of estimates from an internal Facebook classifier at the individual level for users and friends, and at the audience level for Pages and groups. W1–W5 indicate survey waves 1 to 5; shading indicates wave duration. Extended Data Fig. 3 provides a comparable graph of views of content from cross-cutting sources. Note: exposure levels increased briefly on 2 and 3 November owing to a technical problem; details are provided in Supplementary Information, section 4.11.

Our core findings are visualized in Fig. 3, which shows the effects of the treatment on exposure to different types of content during the treatment period (Fig. 3a), the total number of actions engaging with that content (Fig. 3b), the rate of engagement with content conditional on exposure to it (Fig. 3c), and survey measures of post-election attitudes (Fig. 3d; Extended Data Table 3 reports the corresponding point estimates from Fig. 3; Supplementary Information, section 1.4 provides measurement details).

Fig. 3: Effects of reducing Facebook Feed exposure to like-minded sources.

Average treatment effects of reducing exposure to like-minded sources in the Facebook Feed from 24 September to 23 December 2020. a–c, Sample average treatment effects (SATE) on Feed exposure (a) and engagement (b,c). b, Total engagement (for content, the total number of engagement actions). c, Engagement rate (the probability of engaging conditional on exposure). d, Outcomes of surveys on attitudes, with population average treatment effects (PATEs) estimated using survey weights. Supplementary Information, section 1.4 provides full descriptions of all outcome variables. Non-bolded outcomes that appear below a bolded header are part of that category. For example, in d, ‘issue positions’, ‘group evaluations’ and ‘vote choice and candidate evaluations’ appear below ‘ideologically consistent views’, indicating that all are measured such that higher values indicate greater ideological consistency. Survey outcome measures are standardized scales averaged across surveys conducted between 4 November and 18 November 2020 and/or 9 December and 23 December 2020. Point estimates are provided in Extended Data Table 3. Sample average treatment effect estimates on attitudes are provided in Extended Data Fig. 4. All effects are estimated using ordinary least squares (OLS) with robust standard errors and follow the preregistered analysis plan. Points marked with asterisks indicate findings that are significant (P < 0.05 after adjustment); points marked with open circles indicate P > 0.05 (all tests are two-sided). P values are false-discovery rate (FDR)-adjusted (Supplementary Information, section 1.5.4).

As seen in Fig. 3a, the reduction in exposure to content from like-minded sources from 53.7% to 36.2% represents a difference of −0.77 s.d. (95% confidence interval: −0.80, −0.75). Total views per day also declined by 0.05 s.d. among treated participants (95% confidence interval: −0.08, −0.02). In substantive terms, the average control group participant had 267 total content views on a typical day, of which 143 were from like-minded sources. By comparison, 92 out of 255 total content views for an average participant in the treatment condition were from like-minded sources on a typical day (Supplementary Tables 33 and 40).

This reduction in exposure to information from like-minded sources, however, did not lead to a symmetrical increase in exposure to information from cross-cutting sources, which increased from 20.7% in the control group to 27.9% in the treatment group, a change of 0.43 s.d. (95% confidence interval: 0.40, 0.46). Rather, respondents in the treatment group saw a greater relative increase in exposure to content from sources classified as neither like-minded nor cross-cutting. Exposure to content from these sources increased from 25.6% to 35.9%, a change of 0.68 s.d. (95% confidence interval: 0.65, 0.71).

Figure 3a also indicates that reducing exposure to content from like-minded sources reduced exposure to content classified as containing one or more slur words by 0.04 s.d. (95% confidence interval: −0.06, −0.02), content classified as uncivil by 0.15 s.d. (95% confidence interval: −0.18, −0.13), and content from misinformation repeat offenders (sources identified by Facebook as repeatedly posting misinformation) by 0.10 s.d. (95% confidence interval: −0.13, −0.08). Substantively, the average proportion of exposures decreased from 0.034% to 0.030% for content with slur words (a reduction of 0.01 views per day on average), from 3.15% to 2.81% for uncivil content (a reduction of 1.24 views per day on average), and from 0.76% to 0.55% for content from misinformation repeat offenders (a reduction of 0.62 views per day on average). Finally, the treatment reduced exposure to civic content (−0.05 s.d.; 95% confidence interval: −0.08, −0.03) and increased exposure to news content (0.05 s.d., 95% confidence interval: 0.02, 0.07) (see Supplementary Information, section 1.3 for details on how uncivil content, content with slur words and misinformation repeat offenders are measured).
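
As a rough consistency check, the views-per-day reductions quoted here can be approximated by combining these exposure shares with the average daily view counts reported above (267 views per day in the control group and 255 in the treatment group); the short sketch below does this, with small discrepancies attributable to rounding in the published figures.

```python
# Back-of-the-envelope conversion of exposure shares into approximate views per
# day, using the average daily view counts reported above (267 in control, 255
# in treatment). Rounding in the published shares means the results only
# approximately match the reported reductions.

views_control, views_treatment = 267, 255

shares = {  # (control share, treatment share)
    "slur words": (0.00034, 0.00030),
    "uncivil": (0.0315, 0.0281),
    "misinformation repeat offenders": (0.0076, 0.0055),
}

for label, (p_control, p_treat) in shares.items():
    reduction = views_control * p_control - views_treatment * p_treat
    print(f"{label}: ~{reduction:.2f} fewer views per day")
# uncivil: ~1.25, misinformation repeat offenders: ~0.63, slur words: ~0.01
```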

Treatment effects on content engagement

We next consider the effects of the treatment (reducing exposure to content from like-minded sources) on how participants engage with content on Facebook. We examine content engagement in two ways, which we call ‘total engagement’ and ‘engagement rate’. Figure 3b presents the effects of the treatment on total engagement with content—the total number of actions taken that we define as ‘passive’ (clicks, reactions and likes) or ‘active’ (comments and reshares) forms of engagement. Figure 3c presents effects of the treatment on the engagement rate, which is the probability of engaging with the content that participants did see (that is, engagement conditional on exposure). These two measures do not necessarily move in tandem: as we report below, participants in the treatment group have less total engagement with content from like-minded sources (since they are by design seeing much less of it), but their rate of engagement is higher than that of the control group, indicating that they interacted more frequently with the content from like-minded sources to which they were exposed.
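
To illustrate the distinction between the two measures, the toy example below uses invented exposure and engagement counts: total engagement can fall even while the engagement rate (engagements per exposure) rises.

```python
# Toy illustration of 'total engagement' versus 'engagement rate', with invented
# counts. Treated users see far fewer like-minded posts, so total engagement can
# fall even while the rate of engagement (engagements per exposure) rises.

def engagement_summary(exposures: int, engagements: int) -> tuple[int, float]:
    rate = engagements / exposures if exposures else 0.0
    return engagements, rate

control_total, control_rate = engagement_summary(exposures=140, engagements=14)
treated_total, treated_rate = engagement_summary(exposures=90, engagements=11)

print(f"control:   total={control_total}, rate={control_rate:.3f}")   # total=14, rate=0.100
print(f"treatment: total={treated_total}, rate={treated_rate:.3f}")   # total=11, rate=0.122
# Total engagement is lower under treatment while the engagement rate is higher,
# matching the qualitative pattern described in the text.
```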

Figure 3b shows that the intervention had no significant effect on the time spent on Facebook (−0.02 s.d., 95% confidence interval: −0.050, 0.004) but did decrease total engagement with content from like-minded sources. This decrease was observed for both passive and active engagement with content from like-minded sources, which decreased by 0.24 s.d. (95% confidence interval: −0.27, −0.22) and 0.12 s.d. (95% confidence interval: −0.15, −0.10), respectively. Conversely, participants in the treatment condition engaged more with cross-cutting sources—passive and active engagement increased by 0.11 s.d. (95% confidence interval: 0.08, 0.14) and 0.04 s.d. (95% confidence interval: 0.01, 0.07), respectively. Finally, we observe decreased passive engagement but no decrease in active engagement with content from misinformation repeat offenders (for passive engagement, −0.07 s.d., 95% confidence interval: −0.10, −0.04; for active engagement, −0.02 s.d., 95% confidence interval: −0.05, 0.01).

When people in the treatment group did see content from like-minded sources in their Feed, however, their rate of engagement was higher than in the control group. Figure 3c shows that, conditional on exposure, passive and active engagement with content from like-minded sources increased by 0.04 s.d. (95% confidence interval: 0.02, 0.06) and 0.13 s.d. (95% confidence interval: 0.08, 0.17), respectively. Furthermore, although treated participants saw more content from cross-cutting sources overall, they were less likely to engage with the content that they did see: passive engagement decreased by 0.06 s.d. (95% confidence interval: −0.07, −0.04) and active engagement decreased by 0.02 s.d. (95% confidence interval: −0.04, −0.01). The number of content views per day active on the platform also decreased slightly (−0.05 s.d., 95% confidence interval: −0.08, −0.02).

Treatment effects on attitudes

Finally, we examine the causal effects of reducing exposure to like-minded sources on Facebook on a range of attitudinal outcomes measured in post-election surveys (Fig. 3d). As preregistered, we apply survey weights to estimate PATEs and adjust P values for these outcomes to control the false discovery rate (see Supplementary Information, sections 1.5.4 and 4.7 for details). We observe a consistent pattern of precisely estimated results near zero (open circles in Fig. 3d) for the outcome measures we examine: affective polarization; ideological extremity; ideologically consistent issue positions, group evaluations and vote choice and candidate evaluations; and partisan-congenial beliefs and views about election misconduct and outcomes, views toward the electoral system and respect for election norms (see Supplementary Information, section 1.4 for measurement details). In total, we find that 7 out of the 8 point estimates for our primary outcome measures have values of ±0.03 s.d. or less and are precisely estimated (exploratory equivalence bounds: ±0.1 s.d.; Supplementary Table 60), reflecting high levels of observed power. For instance, the minimum detectable effect in the sample for affective polarization is 0.019 s.d. The eighth result is a less precise null for ideologically consistent vote choice and candidate evaluations (0.056 s.d.; equivalence bounds: 0.001, 0.111).
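
A minimal sketch of how an equivalence check of this kind can be implemented under a normal approximation is shown below; the estimate, standard error and bound are invented values, and the study's exact procedure is described in the Supplementary Information.

```python
# Sketch of a two-one-sided-tests (TOST) style equivalence check under a normal
# approximation, with invented inputs. Equivalence within +/- delta is concluded
# when the 90% confidence interval lies entirely inside (-delta, +delta).
from scipy import stats

def equivalence_check(estimate, se, delta, alpha=0.05):
    z = stats.norm.ppf(1 - alpha)              # one-sided critical value (~1.645)
    lower, upper = estimate - z * se, estimate + z * se
    return (lower, upper), (-delta < lower and upper < delta)

ci, within = equivalence_check(estimate=0.01, se=0.02, delta=0.10)
print(ci, within)   # roughly (-0.023, 0.043), True: inside the +/-0.10 s.d. bound
```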

We also tested the effects of reducing exposure to content from like-minded sources on a variety of attitudinal measures for which we had weaker expectations. Using an exploratory equivalence bounds test, we can again confidently rule out effects of ±0.18 s.d. or larger for these preregistered research questions across 18 outcomes, which are reported in Extended Data Fig. 5 and Supplementary Table 47. An exploratory equivalence bounds analysis also rules out a change of ±0.07 s.d. or larger in self-reported consumption of media outlets outside of Facebook that we categorized as like-minded (Supplementary Tables 59 and 67).

Finally, we examine heterogeneous treatment effects on the attitudes reported in Fig. 3d and the research questions across a number of preregistered characteristics: respondents’ political ideology (direction or extremity), political sophistication, digital literacy, pre-treatment exposure to content that is political, and pre-treatment levels of like-minded exposure both as a proportion of respondents’ information diet and as the total number of exposures (see Supplementary Information, section 3.9). None of the 272 preregistered subgroup treatment effect estimates for our primary outcomes are statistically significant after adjustment to control the false discovery rate. Similarly, an exploratory analysis finds no evidence of heterogeneous effects by age or number of years since joining Facebook (see Supplementary Information, section 3.9.5).
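
The false-discovery-rate adjustment referenced here and in Fig. 3 can be implemented with the standard Benjamini–Hochberg procedure; a minimal sketch with invented P values follows (the study's exact implementation is described in Supplementary Information, section 1.5.4).

```python
# Minimal Benjamini-Hochberg FDR adjustment applied to invented P values;
# statsmodels provides the procedure directly.
from statsmodels.stats.multitest import multipletests

p_values = [0.001, 0.02, 0.04, 0.30, 0.77]   # invented examples
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(list(zip(p_adjusted.round(3), reject)))
```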

Discussion

Many observers share the view that Americans live in online echo chambers that polarize opinions on policy and deepen political divides6,7. Some also argue that social media platforms can and should address this problem by reducing exposure to politically like-minded content38. However, both these concerns and the proposed remedy are based on largely untested empirical assumptions.

Here we provide systematic descriptive evidence of the extent to which social media users disproportionately consume content from politically congenial sources. We find that only a small proportion of the content that Facebook users see explicitly concerns politics or news and relatively few users have extremely high levels of exposure to like-minded sources. However, a majority of the content that active adult Facebook users in the US see on the platform comes from politically like-minded friends or from Pages or groups with like-minded audiences (mirroring patterns of homophily in real-world networks15,39). This content has the potential to reinforce partisan identity even if it is not explicitly political14.

Our field experiment also shows that changes to social media algorithms can have marked effects on the content that users see. The intervention substantially reduced exposure to content from like-minded sources, which also had the effect of reducing exposure to content classified as uncivil and content from sources that repeatedly post misinformation. However, the tested changes to social media algorithms cannot fully counteract users’ proclivity to seek out and engage with congenial information. Participants in the treatment group were exposed to less content from like-minded sources but were actually more likely to engage with such content when they encountered it.

Finally, we found that reducing exposure to content from like-minded sources on Facebook had no measurable effect on a range of political attitudes, including affective polarization, ideological extremity and opinions on issues; our exploratory equivalence bounds analyses allow us to confidently rule out effects of ±0.12 s.d. or larger. We were also unable to reject the null hypothesis in any of our tests for heterogeneous treatment effects across many distinct subgroups of participants.

There are several potential explanations for this pattern of null results. First, congenial political information and partisan news, the types of content that are thought to drive polarization, account for only a small fraction of what people see on Facebook. Similarly, social media consumption represents a small fraction of most people’s information diets37, which include information from many sources (for example, friends, television and so on). Thus, even large shifts in exposure on Facebook may be small as a share of all the information people consume. Second, persuasion is simply difficult: the effects of information on beliefs and opinions are often small and short-lived, and attitudes may be especially hard to shift during a contentious presidential election33,40,41,42,43. Finally, we sought to decrease rather than increase exposure to like-minded information for ethical reasons. Although the results suggest that decreasing exposure to information from like-minded sources has minimal effects on attitudes, the effects of such exposure may not be symmetrical. Specifically, decreasing exposure to like-minded sources might not reduce polarization as much as increasing exposure would exacerbate it.

We note several other areas for future research. First, we cannot rule out the many ways in which social media use may have affected participants’ beliefs and attitudes prior to the experiment. In particular, our design cannot capture the effects of prior Facebook use or cumulative effects over years; experiments conducted over longer periods and/or among new users are needed (we note, however, that we find no evidence of heterogeneous effects by age or years since joining Facebook). Second, although we detect no heterogeneous treatment effects in our data and such effects are rare in persuasion studies in general44, the sample’s characteristics and behaviour deviate in some respects from the Facebook user population. Future research should examine samples that more closely reflect Facebook users and/or oversample subgroups that may be particularly affected by like-minded content. Third, only a minority of Facebook users occupy echo chambers, yet the reach of the platform means that the group in question is large in absolute terms. Future research should seek to better understand why some people are exposed to large quantities of like-minded information and the consequences of this exposure. Fourth, our study examines the prevalence of echo chambers using the estimated political leanings of the users, Pages and groups that share content on social networks. We do not directly measure the slant of the content that is shared; doing so would be a valuable contribution for future research. Finally, replications in other countries with different political systems and information environments will be essential to determine how these results generalize.

Ultimately, these findings challenge popular narratives blaming social media echo chambers for the problems of contemporary American democracy. Algorithmic changes that decrease exposure to like-minded sources do not seem to offer a simple solution for those problems. The information that we see on social media may be more a reflection of our identity than a source of the views that we express.

Methods

Participants

Participants in our field experiment are 73.3% white, 57.3% female, relatively highly educated (50.7% have a college degree), and 54.1% self-identify as Democrats or lean Democrat. They also use Facebook more frequently than the general Facebook population and are exposed to more content from politically like-minded sources (the phenomenon of interest), including civic and news content from like-minded sources, than are other Facebook users (Supplementary Tables 2 and 4–10). Our treatment effect estimates on attitudes therefore apply survey weights created to reflect the population of adult monthly active Facebook users who were eligible for recruitment (see Supplementary Information, section 4.7). The demographic characteristics of the weighted sample are similar to those of self-reported Facebook users in an AmeriSpeak probability sample (Extended Data Table 5).

Experimental design

Respondents were assigned to treatment or control with equal probability using block randomization (see Supplementary Information, section 4.5 for details; participants were blind to assignment). The Feed of participants in the control condition was not systematically altered. Owing to the difficulty of measuring the political leaning or slant of many different types of content at scale, we instead varied exposure to content based on the estimated political leaning of the source of the information. Using a Facebook classifier, we estimate the political leaning of other users directly (see Supplementary Information, section 1.3 for details). Building on prior research16,17,23,45,46, we estimate the political leanings of Pages and groups using the political leanings of their audience (group members and Page followers). We classify all users as liberal or conservative using a binary threshold to maximize statistical power, but results are consistent when we exclude respondents with classifications between 0.4 and 0.6 in an exploratory analysis (see Supplementary Information, sections 3.10 and 3.11).
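
A simplified sketch of block (stratified) randomization with equal assignment probabilities is shown below; the blocking variable used here is hypothetical, and the study's actual blocking scheme is described in Supplementary Information, section 4.5.

```python
# Sketch of block (stratified) randomization with equal assignment probability.
# The blocking variable is hypothetical; the study's actual blocking scheme is
# described in Supplementary Information, section 4.5.
import random

def block_randomize(participants, block_key, seed=0):
    """Assign participants to 'treatment'/'control' in equal numbers within each block."""
    rng = random.Random(seed)
    blocks = {}
    for p in participants:
        blocks.setdefault(block_key(p), []).append(p)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        half = len(members) // 2
        for i, p in enumerate(members):
            assignment[p["id"]] = "treatment" if i < half else "control"
    return assignment

participants = [
    {"id": 1, "leaning": "liberal"}, {"id": 2, "leaning": "liberal"},
    {"id": 3, "leaning": "conservative"}, {"id": 4, "leaning": "conservative"},
]
print(block_randomize(participants, block_key=lambda p: p["leaning"]))
```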

We designed the study to provide statistical power to detect small effects. For instance, our power calculations showed that a final sample size of 24,480 would generate a minimum detectable effect of 1.6 percentage points on vote choice among likely voters (see Supplementary Information, section 4.5).
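
For intuition, a generic minimum-detectable-effect calculation for a difference in proportions between two equal-sized arms is sketched below; the baseline rate, significance level and power are assumptions, and because the study's own calculation incorporated additional design details (Supplementary Information, section 4.5) it yields a somewhat different figure.

```python
# Generic minimum detectable effect (MDE) for a difference in proportions
# between two equal-sized arms at 5% significance (two-sided) and 80% power.
# The inputs are illustrative assumptions; the study's own power calculation
# incorporated additional design details and therefore gives a different value.
from scipy import stats

def mde_two_proportions(n_per_arm, p=0.5, alpha=0.05, power=0.8):
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    se = (2 * p * (1 - p) / n_per_arm) ** 0.5
    return (z_alpha + z_power) * se

print(round(mde_two_proportions(n_per_arm=24480 // 2), 3))  # ~0.018 (about 1.8 percentage points)
```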

Randomization was successful: the treatment and control groups do not differ in their demographic characteristics at a rate above what would be expected by chance (see Supplementary Table 5). In total, 82.6% of experimental participants completed at least one post-election survey (23,377 valid completions out of 28,296 eligible participants; see Supplementary Information, section 2.1.3). The final sample consists of respondents who completed at least one post-election survey and did not delete their account or withdraw from the study before data were de-identified. Those who left the study prior to completing a post-election survey do not significantly differ from our final sample (see Supplementary Information, sections 2.1 and 1.2).

Analyses

All analyses in the main text and in the Supplementary Information follow the preregistration filed at the Open Science Framework (https://osf.io/3sjy2; see Supplementary Information, section 4.10), except for deviations reported in Supplementary Information, section 4.11. Treatment effect estimates use OLS with robust standard errors and control for covariates selected using the least absolute shrinkage and selection operator47 (see Supplementary Information, section 1.5.1). As preregistered, our tests of treatment effects on attitudes also apply survey weights to estimate PATEs (see Supplementary Information, section 4.7). Sample average treatment effects, which are very similar, are provided in Supplementary Information, sections 3.2–3.5.
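
A simplified sketch of this estimation strategy, assuming hypothetical variable names and toy data, is shown below: the lasso selects pre-treatment covariates, and the treatment effect is then estimated by weighted least squares with heteroskedasticity-robust standard errors. The study's exact specification is described in Supplementary Information, section 1.5.1.

```python
# Simplified sketch of the estimation strategy: select pre-treatment covariates
# with the lasso, then estimate the treatment effect by (weighted) least squares
# with heteroskedasticity-robust standard errors. Column names and data are
# hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.linear_model import LassoCV

def estimate_ate(df, outcome, covariates, weights=None):
    # 1. Lasso selects which pre-treatment covariates to retain.
    lasso = LassoCV(cv=5).fit(df[covariates], df[outcome])
    selected = [c for c, coef in zip(covariates, lasso.coef_) if coef != 0]

    # 2. Regress the outcome on treatment plus the selected covariates.
    X = sm.add_constant(df[["treatment"] + selected])
    w = weights if weights is not None else np.ones(len(df))
    fit = sm.WLS(df[outcome], X, weights=w).fit(cov_type="HC2")
    return fit.params["treatment"], fit.bse["treatment"]

# Hypothetical usage with a toy data frame:
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),
    "age": rng.normal(45, 15, n),
    "baseline_attitude": rng.normal(0, 1, n),
})
df["affective_polarization"] = 0.5 * df["baseline_attitude"] + rng.normal(0, 1, n)
ate, se = estimate_ate(df, "affective_polarization",
                       ["age", "baseline_attitude"], weights=np.ones(n))
print(round(ate, 3), round(se, 3))
```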

Ethics

We have complied with all relevant ethical regulations. The overall project was reviewed and approved by the National Opinion Research Center (NORC) Institutional Review Board (IRB). Academic researchers worked with their respective university IRBs to ensure compliance with human subject research regulations in analysing data collected by NORC and Meta and authoring papers based on those findings. The research team also received ethical guidance from Ethical Resolve to inform study designs. More detailed information is provided in Supplementary Information, sections 1.2 and 4.9.

All experimental participants provided informed consent before taking part (see Supplementary Information, section 4.6 for recruitment and consent materials). Participants were given the option to withdraw from the study while the experiment was ongoing as well as to withdraw their data at any time up until their survey responses were disconnected from any identifying information in February 2023. We also implemented a stopping rule, inspired by clinical trials, which stated that we would terminate the intervention before the election if we detected it was generating changes in specific variables related to individual welfare that were much larger than expected. More details are available in Supplementary Information, section 1.2.

None of the academic researchers received financial compensation from Meta for their participation in the project. The analyses were preregistered at the Open Science Framework (https://osf.io/3sjy2). The lead authors retained final discretion over everything reported in this paper. Meta publicly agreed that there would be no pre-publication approval of papers on the basis of their findings. See Supplementary Information, section 4.8 for more details about the Meta–academic collaboration.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.