Introduction

Disagreement over policy is a salient feature of contemporary American politics. A wealth of evidence indicates that Americans strongly disagree on a wide range of policy issues – such as gun control, abortion, healthcare, immigration, and many more – and hold increasingly extreme or “sorted” attitudes on those topics1,2,3,4,5. While disagreement between partisans is not in and of itself a cause for concern, severe levels of issue-based polarization can have adverse consequences for democracy. For example, polarized policy attitudes have been shown to exacerbate people’s feelings of hostility and dislike toward out-partisans6, diminish the likelihood of inter-party interaction7, and increase individuals’ tendency to evaluate policies based on group loyalties rather than evidence8.

Against this background, recent years have seen growing theoretical interest in understanding the foundations of political attitude polarization. One of the most prominent lines of inquiry is derived from the theory of politically motivated reasoning (PMR). According to this perspective, when people process political information, they are often not driven by the desire to arrive at the best conclusion given the available evidence (i.e., an accuracy motivation) but by the desire to arrive at a predetermined conclusion consistent with their political identities (i.e., a directional motivation9,10,11,12).

The PMR account posits that partisan motivations strongly shape both the way people process political information and the downstream effects of that information on their attitudes. According to the theory, when individuals are driven by partisan motivations, they often ignore, downplay, or outright reject information that runs counter to their preexisting beliefs13,14,15. Partisan motivations are also theorized to make people more likely to attend to and learn information that supports their preexisting beliefs, which in turn exacerbates attitude polarization. Consistent with these theoretical claims, influential works have concluded that factual knowledge about an issue has a polarizing effect on people’s attitudes. According to these accounts, more knowledgeable individuals have “a greater facility to discover and use – or explain away – evidence relating to their groups’ positions” (ref. 16, p. 734) and “possess greater ammunition with which to counterargue incongruent facts, figures, and arguments” (ref. 17, p. 757). Similar conclusions have been proposed in various other studies (e.g., refs. 18,19,20).

Despite the causal nature of the arguments described above, prior work relies on observational data to examine the relationship between domain-specific knowledge and attitude polarization. In a typical study exploring this relationship, participants either answer a cross-sectional survey that measures all variables at a single time point (e.g., refs. 16,19) or take part in an experiment that manipulates some other aspect and uses factual knowledge as an observed moderator measured prior to the experimental treatment (e.g., ref. 17). These observational approaches raise a critical inferential challenge, as they do not allow researchers to separate the impact of obtaining factual knowledge from the effect of other variables that are typically correlated with both domain-specific political knowledge and attitude polarization, such as the intensity of one’s political group identity11, one’s pre-existing factual beliefs21, or one’s general interest in politics22.

Several prior studies have utilized random assignment to examine the causal impact of political information on citizens’ issue attitudes. However, these experiments use thin informational treatments consisting of a single piece of information. Researchers typically present participants with a single fact relevant to a topic (e.g., ref. 4) or one argument either supporting or opposing a policy proposal (e.g., ref. 23). While these experiments have contributed important insights to our understanding of information’s effects on political attitudes, neither comprehensively manipulates individuals’ domain-specific knowledge. To address this gap, we have developed a novel experimental design that does just that. Our informational treatment consists of a training session that exposes participants to a high volume of issue-relevant information, covers diverse aspects of the issue, uses objective, verifiable facts, and, critically, includes a mix of facts both for and against people’s initial policy attitudes. This design enables us to assess whether the causal predictions of the PMR literature described above hold empirically. By allowing participants to interact as they wish with an extensive and diverse set of facts on both sides of a salient policy debate, we can examine what type of information people learn and how the learned information affects their attitudes on the issue.

In contrast to the motivated reasoning claim, we argue it is far from obvious that factual knowledge causally increases attitude polarization. In fact, there seem to be good theoretical and empirical reasons to expect the opposite. Recent research suggests that human reasoning is “meaningfully directed toward the formation of accurate, rather than merely identity-confirming, beliefs” (ref. 24, p. 2525) and that individuals act as Bayesian learners who incorporate new information into their evaluations of the political and social environment25,26. Consistent with this view, a growing number of empirical studies have uncovered evidence that people update their political attitudes and beliefs in line with evidence (e.g., refs. 4,13,27,28,29) and arguments23,30,31 presented to them. Building on this line of work, we test the pre-registered hypothesis that learning facts about a contentious political issue has a causal depolarizing effect on political attitudes. We predict that when people are presented with a credible and politically diverse set of facts on an issue and are sufficiently incentivized to learn that content, they learn that there are valid considerations on both sides of the debate. This, in turn, results in the adoption of less extreme attitudes.

In this work, we report on a two-wave study designed to identify the causal effect of policy-relevant knowledge on attitude polarization. In Wave 1 of the study, we recruited a nationally representative, quota-matched sample of N = 1011 participants from Bovitz, Inc.32. Participants were randomized to learn about either gun control or a nonpolitical control topic via a series of training modules that educated them on a variety of facts relevant to the issue. Participants interacted with each factual module as they wished and rated each module for its perceived helpfulness. Importantly, participants were allowed to spend as little or as much time as they liked on any module.

After the training, participants’ knowledge of gun control facts was measured to see if we had successfully manipulated topic-relevant knowledge. Participants then proceeded to answer a series of questions measuring three aspects of their policy attitudes: (a) their position on increasing gun control, (b) their opinion about which side in the gun control debate is most aligned with the evidence, and (c) their view of what the evidence on gun control implies for policy. In line with the pre-registered protocol, the three outcomes were analyzed separately, and all three were recoded such that higher scores indicate greater movement in the opposing direction from one’s initial gun control attitude. Lastly, for exploratory analysis, participants also answered a series of affective polarization items that measured their feelings toward a person holding the opposing gun control attitude.

In Wave 2 of the study, we recontacted all participants a month later with a retention rate of 87% (N = 881). All individuals were first asked the same attitude measures described above, to assess how well the attitude changes endured. Then, they were asked the gun control knowledge items to see how well this topic-relevant information was retained. For further details on the study design and analytic approach, see Methods.

Results show that people engage with policy-relevant facts both for and against their initial attitudes and learn both types of information. This increased factual knowledge shifts individuals toward more moderate policy attitudes, a durable effect that is still visible after one month. These results suggest that the impact of directionally motivated reasoning on the processing of political information might be more limited than previously thought.

Results

The tests presented below are designed to examine three questions: (1) Do individuals ignore and undervalue information that runs counter to their preexisting political attitudes, as PMR theory would predict, or do they attend to such information and see value in it? (2) Is information that supports individuals’ preexisting attitudes preferentially stored in memory over counter-attitudinal information, as PMR theory would suggest, or are people willing to learn both pro- and counter-attitudinal facts? (3) Does learning information result in a bolstering of initial attitudes (i.e., polarization), as suggested in the PMR literature, or in a softening of them (i.e., depolarization)? Questions 1 and 2 can be interpreted as manipulation checks; our main goal with these analyses is to show that the informational treatment was powerful and worked as intended. We present these analyses in detail because we are interested not only in whether people learned but also in what type of information they learned (given that they were exposed to both pro- and counter-attitudinal facts). Question 3 then tests our pre-registered hypothesis: after establishing that the manipulation worked, we directly test how the learned information affected individuals’ attitudes on the issue.

Throughout the Results section, we report intent-to-treat models without control variables. In Supplementary Tables 1A–I, we show that all results are qualitatively equivalent when controlling for participants’ age, gender, race, education, party identification, political ideology, and the amount of political news they consume. In Supplementary Note 2, we also show that the results hold when using instrumental variables to estimate the complier average treatment effects, which take into account non-compliance. All statistical tests reported below are two-tailed.
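
As a concrete illustration of this modeling approach, the sketch below estimates an intent-to-treat effect with and without controls in Python using statsmodels. It is a minimal sketch, not the study’s analysis code: the file name (wave1.csv) and all column names (treated, depolarization, and the controls) are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per participant, with an assigned-
# treatment indicator and a depolarization outcome (all names assumed).
df = pd.read_csv("wave1.csv")

# Intent-to-treat: regress the outcome on assigned treatment, ignoring
# compliance and omitting control variables, with robust standard errors.
itt = smf.ols("depolarization ~ treated", data=df).fit(cov_type="HC2")
print(itt.params["treated"], itt.pvalues["treated"])

# Robustness specification with the controls listed above (as in
# Supplementary Tables 1A-I); again, all column names are assumptions.
adj = smf.ols(
    "depolarization ~ treated + age + C(gender) + C(race) + education"
    " + C(party_id) + ideology + news_consumption",
    data=df,
).fit(cov_type="HC2")
print(adj.summary())
```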

Engagement with factual information

In this section, we focus exclusively on engagement among participants assigned to the experimental condition, because only they were exposed to gun-control facts. To test if these individuals undervalue or even ignore counter-attitudinal information (e.g., ref. 17), we examine the amount of time spent on and value given to the different training modules. We first look at how much time participants spent on politically neutral gun-control facts, pro-attitudinal gun-control facts, and counter-attitudinal gun-control facts. Recall that participants could spend as much, or as little, time as they liked on any given information module. Here, we first find that participants spent a substantial amount of time in both information categories, with the mean log time in a politically neutral module being M = 4.17 (SD = 0.89), and the mean log time in a valenced (or politicized) module being only slightly longer at M = 4.18 (SD = 1.09), a statistically insignificant difference, β = 0.006 (df = 456), p = 0.670, d = − 0.013, 95% CI (− 0.14, 0.12) (see Fig. 1).

Fig. 1: The figure shows time data for participants in the experimental condition.

The left panel shows a density plot of the time spent (in log seconds) on politically neutral gun-control modules (teal) versus politically valenced gun-control modules (red). Each dashed colored line indicates the mean of the respective group. The right panel shows a density plot of log times for pro-attitudinal (gray) and counter-attitudinal modules (purple). Each dashed colored line indicates the mean of the respective group.

The above models compare mean log times for neutral vs. valenced modules. Using medians from the raw scores and a Wilcoxon rank-sum test clustered on participant IDs, we find a significant difference in medians: politicized modules had a larger median (Med = 78.26) than neutral modules (Med = 68.89), b = 0.025 (df = 456), p = 0.005, 95% CI (0.01, 0.04). This indicates participants spent significantly more time on valenced (or politicized) modules.
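
The following is a minimal sketch of the unclustered version of this comparison in Python (scipy); the file and column names are hypothetical, and the paper’s reported test additionally clusters on participant IDs, which this simple version does not do.

```python
import pandas as pd
from scipy import stats

# Hypothetical long-format file: one row per participant-module, with raw
# seconds spent and a module-type flag (all names assumed).
times = pd.read_csv("module_times.csv")
neutral = times.loc[times["module_type"] == "neutral", "seconds"]
valenced = times.loc[times["module_type"] == "valenced", "seconds"]

# Plain Wilcoxon rank-sum (Mann-Whitney U) comparison of the two raw-time
# distributions.
u_stat, p_value = stats.mannwhitneyu(valenced, neutral, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
print(f"median valenced = {valenced.median():.2f} s, "
      f"median neutral = {neutral.median():.2f} s")
```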

When comparing log time spent on counter-attitudinal modules (i.e., gun control modules discussing facts that contradicted participants’ initial attitude on the issue, M = 4.19, SD = 1.12) versus pro-attitudinal modules (i.e., gun control modules discussing facts that supported participants’ initial attitude on the issue, M = 4.17, SD = 1.09), we see no significant difference, β = 0.008 (df = 420), p = 0.541, d = − 0.017, 95% CI (− 0.15, 0.12) (see Fig. 1).

Next, we look at the value participants ascribed to the informational content in the form of helpfulness ratings provided at the end of each module. These ratings could range from 1 (Very unhelpful) to 5 (Very helpful). Looking first at ratings for neutral versus valenced information modules, we see the valenced modules have a slightly higher value rating (M = 4.27, SD = 0.658) compared to neutral modules (M = 4.19, SD = 0.647), β = − 0.056 (df = 456), p = 0.004, d = 0.113, 95% CI (− 0.02, 0.24). However, the difference is small and dependent on model parameters. What is more, both valenced (β = 0.806 [df = 456], p < 0.001, d = − 2.718, 95% CI [− 2.9, − 2.54]) and neutral modules (β = 0.794 [df = 456], p < 0.001, d = − 2.61, 95% CI [− 2.78, − 2.43]) are significantly and substantially above the midpoint of the rating scale (“Neither helpful nor unhelpful”). Moreover, 74.4% of participants gave the neutral modules an average rating of helpful or very helpful, and 80.1% of participants gave the valenced modules an average rating of helpful or very helpful. These results indicate that both categories of information were perceived by participants as valuable for learning about gun control (see Fig. 2).

Fig. 2: Helpfulness ratings of the training modules.

The left panel shows a density plot of the helpfulness ratings given to politically neutral gun-control modules (teal) versus politically valenced gun-control modules (red). The black dashed line indicates the scale midpoint and the colored dashed lines indicate the mean of the respective group. The right panel shows a density plot of the ratings given to counter-attitudinal (purple) and pro-attitudinal modules (gray). Again, the black dashed line indicates the scale midpoint and the colored dashed lines indicate the mean of the respective group.

When comparing pro-attitudinal versus counter-attitudinal information, we also see only a small difference where pro-attitudinal modules are rated as slightly more helpful (M = 4.33, SD = 0.699) than counter-attitudinal modules (M = 4.23, SD = 0.724), β = − 0.069 (df = 420), p = 0.001, d = 0.138, 95% CI (0.01, 0.27). However, as above, both categories were rated significantly and substantially above the scale midpoint (counter-attitudinal: β = 0.769 [df = 420], p < 0.001, d = − 2.41, 95% CI [− 2.59, − 2.23]; pro-attitudinal: β = 0.803 [df = 420], p < 0.001, d = − 2.7, 95% CI [− 2.88, − 2.51]). Further, 86.5% of participants gave the pro-attitudinal modules an average rating of helpful or very helpful, as did 82% of participants for counter-attitudinal modules. This shows again that participants perceived both categories of information as quite valuable when learning about gun control (see Fig. 2).
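
A simplified version of the midpoint comparison can be sketched as follows; this uses a plain one-sample t-test rather than the paper’s regression-based tests, and all file and column names are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical file: one row per participant-module rating on the 1-5
# helpfulness scale (all names assumed).
ratings = pd.read_csv("helpfulness.csv")

# Per-participant mean rating of counter-attitudinal modules.
counter = (ratings.loc[ratings["module_type"] == "counter"]
           .groupby("pid")["rating"].mean())

# One-sample test against the scale midpoint of 3 ("Neither helpful
# nor unhelpful").
t_stat, p_value = stats.ttest_1samp(counter, popmean=3)
print(f"mean = {counter.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4g}")
```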

In sum, when examining participants’ willingness to engage with factual information, we find a result quite contrary to what the PMR theory would predict. Participants did not avoid the factual information presented to them; they attended to it and found it valuable. Importantly, this was the case both when the information confirmed people’s previously held beliefs and when it contradicted their prior beliefs.

Learning of factual information

After showing that individuals are willing to engage with both pro- and counter-attitudinal information, we next consider if and how much of this information was retained in their long-term memory. We examine the fraction of correct answers to the gun-control knowledge items, which were asked right after participants finished the training modules. As expected, participants in the treatment group performed significantly better than participants in the control group, β = 0.402 (df = 1009), p < 0.001, d = 0.878, 95% CI (0.75, 1.01) (see Fig. 3). This improvement in topic-relevant knowledge was still evident in the one-month follow-up, β = 0.15 (df = 879), p < 0.001, d = 0.304, 95% CI (0.17, 0.44), although attenuated in magnitude.
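
For readers who want the effect-size computation spelled out, the sketch below computes a pooled-SD Cohen’s d for the treatment-control knowledge difference; file and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical file: one row per participant, with a treatment indicator
# and the fraction of knowledge items answered correctly (names assumed).
df = pd.read_csv("knowledge.csv")
treat = df.loc[df["treated"] == 1, "frac_correct"]
ctrl = df.loc[df["treated"] == 0, "frac_correct"]

# Cohen's d using the pooled standard deviation.
pooled_sd = np.sqrt(
    ((len(treat) - 1) * treat.var(ddof=1) + (len(ctrl) - 1) * ctrl.var(ddof=1))
    / (len(treat) + len(ctrl) - 2)
)
d = (treat.mean() - ctrl.mean()) / pooled_sd
print(f"Cohen's d = {d:.3f}")
```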

Fig. 3: Effect of treatment on gun-control knowledge, Wave 1.

Violin plots (left) show distributions, interquartile ranges, and medians; bar graphs (right) show means with 95% CIs, overlaid with raw data. Both plots represent the full sample of participants, N = 1011.

According to the motivated reasoning argument, partisans are selectively resistant to learning counter-attitudinal political facts (e.g., ref. 33). However, we find the opposite pattern. Figure 4 breaks the factual knowledge outcome down by type of gun-control fact: politically neutral (e.g., an automatic gun can fire more rapidly than a semi-automatic gun), pro-gun control (e.g., most American gun owners support stricter gun laws), or anti-gun control (e.g., less than 0.8% of murder victims are killed in mass shootings). Results show that individuals who initially identified as pro-gun control exhibit a significantly larger learning effect for anti-gun control facts than for pro-gun control facts (β = − 0.211 [df = 588], p < 0.001, d = − 0.356, 95% CI [− 0.47, − 0.24]); individuals who initially identified as anti-gun control exhibit a larger learning effect for pro-gun control facts than for anti-gun control facts, though this interaction was not statistically significant (β = 0.103 [df = 334], p = 0.096, d = − 0.635, 95% CI [− 0.79, − 0.48]). For details on coding and full model results, see Supplementary Fig. 3A and Supplementary Table 4A, respectively.
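
One way to implement this comparison is an OLS model with a treatment-by-fact-type interaction and participant-clustered standard errors, sketched below with hypothetical file and column names; the paper’s exact specification may differ (see Supplementary Table 4A).

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per participant-fact-category,
# with the fraction correct in that category (all names assumed).
scores = pd.read_csv("knowledge_by_category.csv")

# Among initially pro-gun-control participants, test whether the treatment
# effect is larger for anti- than for pro-gun-control facts; the
# treatment x fact-type interaction carries the test of interest.
pro = scores[scores["initial_attitude"] == "pro"]
model = smf.ols("frac_correct ~ treated * C(fact_type)", data=pro).fit(
    cov_type="cluster", cov_kwds={"groups": pro["pid"]}
)
print(model.summary())
```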

Fig. 4: Effect of treatment on each gun control knowledge category in Wave 1, by pre-treatment gun control attitude.

The left panel looks at politically neutral facts by both groups (control and treatment). The center panel looks at both political fact categories, among those who identified as pro-gun control pre-treatment. The right panel looks at both political fact categories, among those who identified as anti-gun control pre-treatment. Bars show means with 95% CIs, overlaid with raw data. All three plots represent the sample of participants who indicated having a pre-treatment position on gun control (n = 926).

In sum, given a modest incentive, individuals learn all three types of gun-control facts, but especially counter-attitudinal facts. Critically, the treatment seems to have helped close the knowledge gap between the two sides of the gun control debate. As Fig. 4 shows, pro-gun-control individuals became more knowledgeable about anti-gun-control facts after treatment, and anti-gun-control individuals became more knowledgeable about pro-gun-control facts. Below, we test the implications of this learning for people’s attitudes.

Incorporation of factual information

Figure 5 displays the average treatment effect on our three attitudinal outcomes and one affective outcome. Since our pre-registered hypothesis posits that factual knowledge reduces attitude polarization, each point estimate in this figure represents the average difference between the treatment and control groups in how much people’s attitudes have depolarized.

Fig. 5: Average treatment effects on the three attitudinal and one affective outcome are plotted on the left (black for Wave 1, purple for Wave 2).

Higher scores indicate greater movement in the opposing direction. 90, 95, and 99% CIs are depicted via line width; the central dot indicates the average treatment effect. The right panel depicts density plots showing the distribution of each outcome on the left panel (black for Wave 1, purple for Wave 2), split by treatment (darker color) and control (lighter color); n = 1008.

Beginning with the attitudinal outcomes, we first test for treatment effects on gun control attitudes, calculated as the extent to which post-treatment gun control attitudes moved in the opposing direction from pre-treatment gun control attitudes, while excluding the 8.4% of participants whose attitudes were already moderate (i.e., exactly at the midpoint) pre-treatment. As predicted, the gun control attitudes of participants in the treatment group moved further from their initial attitude than the attitudes of participants in the control group, insignificantly in Wave 1, β = 0.057 (df = 922), p = 0.086, d = − 0.113, 95% CI (− 0.24, 0.02); significantly in Wave 2, β = 0.093 (df = 806), p = 0.008, d = − 0.187, 95% CI (− 0.33, − 0.05). Even though we found a significant effect only in Wave 2, there was no significant interaction between Wave and the treatment on this outcome, b = 0.058 (df = 803), p = 0.557, 95% CI (− 0.14, 0.25). Further, we found no significant interaction between the treatment and pre-treatment gun control attitudes (Supplementary Note 5), and our results become even stronger when including control variables (Supplementary Table 1F).
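
To make the pre-registered outcome coding concrete, the sketch below implements one plausible version of it, assuming a 7-point scale with a midpoint of 4 and hypothetical file and column names; the paper’s exact coding rules are documented in Supplementary Fig. 3A.

```python
import numpy as np
import pandas as pd

# Hypothetical file with pre- and post-treatment attitudes on the 7-point
# gun control scale (all names assumed).
df = pd.read_csv("attitudes.csv")
MIDPOINT = 4

# Exclude participants whose pre-treatment attitude was already moderate.
df = df[df["pre_attitude"] != MIDPOINT].copy()

# +1 if the participant started on the pro side, -1 on the anti side.
direction = np.sign(df["pre_attitude"] - MIDPOINT)

# Depolarization: post-treatment movement in the opposing direction
# from the pre-treatment attitude.
df["depolarization"] = direction * (df["pre_attitude"] - df["post_attitude"])
```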

Next, we examine treatment effects on perceptions of which side in the gun control debate is more aligned with the evidence. We use the same coding process as above, such that higher scores indicate greater movement toward the opposite end of the scale from where one began. Since this outcome (as well as the next one below) was only asked after the treatment, moving in the opposing direction here means in the direction counter to one’s pre-treatment gun control attitude (see Supplementary Fig. 3A for details). Also, since this outcome and the following both had each level verbally labeled, we use ordinal logistic regressions to avoid violating the assumptions of OLS (though all results persist when using OLS; Supplementary Tables 6A–D). Here we again see those in the treatment group move further from their initial attitude than those in the control group (significantly in Wave 1, b = 0.437 [df = 916], p < 0.001, d = − 0.185, 95% CI [− 0.32, − 0.06]; insignificantly in Wave 2, b = 0.226 [df = 806], p = 0.072, d = − 0.101, 95% CI [− 0.24, 0.04]). As above, there was no significant interaction between Wave and treatment, b = − 0.227 (df = 796), p = 0.125, 95% CI (− 0.52, 0.06). An interaction of the treatment with initial gun control attitude is again insignificant (Supplementary Note 5), and the effects are similar when adding controls (Supplementary Table 1G).
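
A minimal sketch of such an ordinal logistic regression in Python, using the OrderedModel class from statsmodels, with hypothetical file and column names:

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical file with the integer-coded ordinal outcome (movement in
# the opposing direction) and a treatment indicator (names assumed).
df = pd.read_csv("evidence_alignment.csv")

# Ordered logit of the verbally labeled outcome on assigned treatment.
model = OrderedModel(df["alignment_outcome"], df[["treated"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```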

As a final attitudinal measure, we examine perceptions of what the current evidence on gun control implies for policy. We again see that those in the treatment group moved further from their initial attitude than those in the control group in Wave 1 (b = 0.346 [df = 919], p = 0.004, d = − 0.14, 95% CI [− 0.27, − 0.01]), though this effect was not significant in Wave 2 (b = 0.157 [df = 806], p = 0.22, d = − 0.02, 95% CI [− 0.16, 0.11]). Again, there was no significant interaction between Wave and treatment, b = − 0.149 (df = 799), p = 0.298, 95% CI (− 0.43, 0.13). As before, an interaction of the treatment with initial gun-control attitude is not significant (Supplementary Note 5), and the effects are similar when adding controls (Supplementary Table 1H).

Our pre-registered coding method for the three attitudinal outcomes examines how much participants’ attitudes moved in the opposing direction from the side they started on. An alternative coding, used in some studies of political attitude polarization (e.g., ref. 34), is to “fold” each response scale at the midpoint, effectively recoding each attitudinal variable into a measure of pure extremity (that is, distance from the scale midpoint). Supplementary Note 7 provides details on this alternative coding, which yields similar conclusions to those reported above. In short, people in the treatment group tend to report less extreme (i.e., closer to the midpoint) attitudes than people in the control group.
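
The folded coding reduces to a one-line transformation; the sketch below illustrates it under the assumption of a 1–7 scale with midpoint 4, again with hypothetical file and column names.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file with post-treatment attitudes on the 1-7 scale
# (names assumed).
df = pd.read_csv("attitudes.csv")

# "Fold" the scale at its midpoint: pure extremity, 0 (moderate) to 3.
df["extremity"] = (df["post_attitude"] - 4).abs()

# Under this coding, a depolarizing treatment predicts lower extremity.
fold = smf.ols("extremity ~ treated", data=df).fit(cov_type="HC2")
print(fold.summary())
```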

We also ran (non-pre-registered) mediation analyses (reported in Supplementary Table 8A), in which variables indexing the learning of pro- and counter-attitudinal gun facts are tested as mediators of the treatment effect on attitude depolarization. These models are mostly consistent with our claim that the depolarizing effect of our treatment might be driven by learning counter-attitudinal facts. The mediation results should be interpreted cautiously, however, because we did not experimentally manipulate the mediators35,36.
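
For illustration, a mediation analysis of this kind can be run with the Mediation class in statsmodels, as sketched below with hypothetical file and column names; the models actually reported in Supplementary Table 8A may be specified differently.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

# Hypothetical file: treatment indicator, counter-attitudinal learning
# score, and depolarization outcome (all names assumed).
df = pd.read_csv("wave1.csv")

# Mediator model: does treatment increase counter-attitudinal learning?
mediator_model = sm.OLS.from_formula("counter_learning ~ treated", df)
# Outcome model: depolarization on treatment plus the mediator.
outcome_model = sm.OLS.from_formula(
    "depolarization ~ treated + counter_learning", df
)

med = Mediation(outcome_model, mediator_model,
                exposure="treated", mediator="counter_learning")
result = med.fit(method="parametric", n_rep=1000)
print(result.summary())  # ACME, ADE, total effect, proportion mediated
```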

Finally, we examined treatment effects on affective (rather than attitude) polarization. To do so, we use the overall aggregate of social distance measures (α = 0.88) as our dependent variable. In contrast to the moderating effect on attitudes observed above, we find no significant effect of the knowledge treatment on affective polarization (Wave 1, β = − 0.001 [df = 1006], p = 0.985, d = 0.001, 95% CI [− 0.12, 0.13]; Wave 2 follow-up, β = − 0.015 [df = 879], p = 0.651, d = 0.031, 95% CI [− 0.10, 0.16]). Disaggregating the social distance scale did not change this result. The depolarization effect appears to be specific to policy attitudes and does not generalize to feelings toward people who hold an opposing view.
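
Cronbach’s alpha for such a composite is straightforward to compute directly; the sketch below shows the standard formula, with hypothetical file and item names.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of scale items (rows = respondents)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical file with the five social distance items (names assumed).
df = pd.read_csv("social_distance.csv")
print(f"alpha = {cronbach_alpha(df[['sd1', 'sd2', 'sd3', 'sd4', 'sd5']]):.2f}")
```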

Discussion

The main goal of this study was to experimentally test predictions derived from the theory of politically motivated reasoning (PMR). Under the PMR account, people incorporate pro-attitudinal information into their belief systems and reject counter-attitudinal information, which, in turn, makes them more polarized (e.g., refs. 16,17). To test whether this indeed is the case, we have utilized a novel experimental design that exogenously varies domain-specific factual knowledge and examines its independent causal effect on attitude polarization. We find that learning issue-relevant facts causes people to adopt more moderate policy attitudes, without changing their feelings towards people with opposing views. Exposure to facts about gun control results in attitudes that are closer to the other side of the policy debate, and this effect occurs for those who are predisposed to support gun control as well as those who are predisposed to oppose it. The depolarizing effect of domain-specific knowledge is found for different indicators of attitude, and it is still visible one month after the treatment.

Our results inform ongoing debates about the impact of motivated reasoning on political cognition. Specifically, the findings presented here challenge a common claim in the PMR literature, namely, that acquiring domain-specific knowledge results in the adoption of more extreme attitudes. In contrast to this claim, we find that, given a modest incentive, individuals are willing to learn all types of facts, including a substantial amount of counter-attitudinal content. More importantly, this increased factual knowledge leads people to adopt less extreme attitudes. The latter result is especially telling because, while we incentivized people to learn the facts presented to them, we did not incentivize them to adopt more moderate attitudes.

In line with previous work suggesting people can be motivated to accurately update their political beliefs (e.g., refs. 4,23,25,37), we find that upon learning relevant facts about both sides of an issue, participants adopt, on average, less extreme political attitudes. Individuals in our study used facts not to bolster their prior beliefs, but to revise them. This finding can have important political consequences: while the motivated reasoning account implies that educating people about contentious issues will simply polarize them along political lines, we demonstrate that closing the knowledge gap between the two sides of the gun control debate can result in attitude moderation. Our study thus suggests that if citizens who disagree on policy had the same factual knowledge, their attitudes may converge.

At this point, it seems necessary to rule out that participants’ attitudes depolarized for reasons other than the content they learned. One might argue, for instance, that the monetary incentives to get questions right put our participants in an “accuracy mindset,” which led them to adopt more moderate attitudes independent of content. Our use of a control group addresses this concern because individuals in this group were in the same mindset as those in the experimental group – that is, they also received a bonus for each correct answer – yet their attitudes did not depolarize. Since the only difference between the experimental and control groups is the information people were incentivized to learn, we can rule out that the depolarization effect is driven by an increased motivation to be accurate independent of content.

Interestingly, our fact-based treatment reduces attitude polarization (i.e., the extremity of people’s policy attitudes) but has no statistically significant effect on levels of affective polarization (i.e., people’s feelings of hostility toward the political outgroup). While learning about an issue (in our case, gun control) moves people’s attitudes on that specific issue, there is no evidence that facts impact their affective evaluations of outgroup members. This might be because our measure of affect asked people about their willingness to interact with outgroup individuals. Direct interaction bears higher behavioral costs than self-reported attitudes and might, therefore, be harder to change to a discernible degree. It is also possible that people assume that gun control attitudes are indicative of other attitudes and characteristics of outgroup members they dislike and want little contact with.

Our results also inform research into partisan divides in the evaluation of objective facts. There is an active debate in the literature about whether partisans’ tendency to provide politically favorable responses to factual questions reflects genuine differences in beliefs or insincere survey responding38,39,40. In our study, people on both sides of the gun control debate have gained a substantial amount of counter-attitudinal knowledge following the experimental treatment (see Fig. 4), and their attitudes have depolarized as a result. This finding suggests that accurate political information, along with sufficient incentives for learning, can truly reduce partisan gaps in factual beliefs.

What might account for the discrepancy between our findings and those of previous studies, which have found a positive correlation between domain-specific political knowledge and attitude polarization? One possibility is that a third variable, such as people’s general level of engagement with politics, increases both their domain-specific knowledge and attitude polarization. Perhaps previous observational studies have not accounted for the possibility that as one becomes more politically engaged, both knowledge and polarization increase independently of one another22. A second possible explanation is selective exposure: if individuals with greater political knowledge consume more pro-attitudinal (and less counter-attitudinal) political information in their daily lives (where incentives to attend to and learn counter-attitudinal information, such as those we employed here, are often absent), this may explain why observational studies find that their attitudes are more polarized41. While the data presented here cannot directly test these explanations, our experimental results suggest that factual knowledge is unlikely to causally increase attitude polarization.

Our results also suggest a need for greater clarity and precision in what we mean when using the term “politically motivated reasoning.” In the existing literature, the term is often used to describe different things, including people’s tendency to actively seek pro-attitudinal information and avoid counter-attitudinal information (i.e., selective exposure42) and the way people engage with pro- and counter-attitudinal information once they encounter it (i.e., information processing17). In this paper, we have focused on the latter. We show that once individuals have been exposed to both pro- and counter-attitudinal information, they attend to, internalize, and update on both types of facts. Hence, when PMR is defined as politically biased information processing, it is less prevalent than the catch-all term implies.

While we show here that learning facts can reduce attitude polarization, there are various limitations to our approach that should be acknowledged. First, we incentivized people to learn at various points throughout the experiment. While these incentives seem to have worked in our study, the benefits of learning political facts might not be clear to people in their daily lives. Second, participants in our experiment were not able to choose which facts they would be exposed to. Yet outside the lab, people can – and often do – avoid uncongenial political information (e.g., ref. 43) and may therefore not learn counter-attitudinal facts as readily as they did in our study. Third, our study provided facts and measured policy attitudes on one political issue, namely, gun control. While multiple previous studies have utilized this salient and contentious topic to test the effects on partisan polarization (e.g., refs. 21,44), it remains an open question whether the same results would be obtained for other issues. Finally, since our study only included American respondents, further work is needed to assess whether the results apply cross-nationally.

In sum, we have demonstrated that people are willing to engage with and learn facts both for and against their preexisting attitudes and that the facts they have learned influence their political attitudes in the opposite direction from what is often hypothesized in the politically motivated reasoning literature. The depolarizing effect of policy facts is found for different attitude measures and is still detectable one month after the treatment. Our findings suggest that the impact of directional motivations on the processing of political information may be more limited than previously thought, and that polarization can be countered with accurate, balanced, and credible factual information.

Methods

We pre-registered our design, measures, hypothesis, and analysis plan at Aspredicted.org on August 30, 2021, as study #73660 (see https://aspredicted.org/WGT_BQ2). This research complies with all relevant ethical regulations and received approval from the Institutional Review Board of the Sloan School of Management at the Massachusetts Institute of Technology. Informed consent was obtained from all participants at the beginning of the study.

Wave 1

Sample

For Wave 1, n = 1000 participants were targeted for recruitment via Bovitz, Inc. The sample was quota-matched for age, gender, race, education, and region. Of the 1673 participants who entered the study, 632 were screened out because they were using a cellphone, did not want to complete a long study or the amount of work described, or identified as a True Independent or Political Other. Of the remaining 1041 individuals who continued into the study, 30 (2.96%) dropped out entirely, giving us a final sample of N = 1011 (Control group: 51.93%; Democrats (including leaners): 52.23%; female: 45.54%; median age: 43). The study was fielded between 8–15 September 2021.

Procedure

The study took around 30–35 min to complete. Upon starting the study, participants first provided informed consent after receiving extensive information on the demands of the study, including their compensation level, which could reach up to $8. They then answered questions about their age, gender, ethnicity, education, political party affiliation, political ideology, political news consumption, and attitudes on several policy issues, including gun control. Participants then received an explanation of the study’s setup as well as an “opt-out” option if they wished to end the study early. Critically, to prevent attrition issues, participants were given several opportunities to exit the main sections of the study (at a cost) if they felt they no longer wanted to continue. If they chose to exit early, they advanced directly to the end knowledge and polarization measures, allowing us to preserve observations that would otherwise have been lost to attrition. For details on our efforts to mitigate attrition, see Supplementary Note 9.

Next, participants were presented with a topic – gun control in the experimental condition and dog training in the control condition – and were provided with access to a series of training modules on that topic to learn as much as they could in the time allotted. All module content was researched, and sources were cited at the end of each module to convey to participants that the information was honest and could be verified (this was done to increase participants’ trust in the materials; see Supplementary Note 10). We chose gun control because of prior studies’ focus on the topic, with an understanding that it is a salient and contentious issue (e.g., refs. 21,44). We also confirmed in a pilot study (reported in Supplementary Note 11) that gun control is a highly contentious political issue.

The training session in both conditions included multiple modules that introduced the topic, gave some background content, and presented key concepts and facts about the topic. Each module was one to four paragraphs long and included text that could be highlighted for later review, and the amount of time participants spent in each module was recorded. Further, each module ended with the same evaluation question: “Did you find the above content helpful for your goal of learning as much as you can about your topic?” Participants responded on a five-point rating scale: Very unhelpful, Somewhat unhelpful, Neither helpful nor unhelpful, Somewhat helpful, Very helpful.

Looking specifically at the experimental group, participants first saw several modules focusing on gun control facts that were deliberately designed to be non-partisan. These facts focused on topics such as different categories of firearms, the introduction of the Second Amendment to the Bill of Rights, and more. Participants then saw four modules discussing a variety of facts of clear partisan value. These facts were developed from several high-quality resources, all accessible to participants, and citations to all content were provided at the end of each training module to make vetting the information easier for participants if they wished to do so.

The politically valenced modules depicted facts that supported claims on each side of the gun control debate. For example, facts showing how guns were dangerous under a certain set of conditions, that some gun control laws would provide protection from such dangers, and that a large proportion of gun owners endorse such legal fixes were all labeled as pro-gun control. Conversely, facts showing how, in certain conditions, guns are less dangerous than other common threats society treats as benign, that some gun control laws are unlikely to result in their stated goals, and that guns are a protective feature under some conditions were all labeled as anti-gun control. These “partisan” modules were randomized such that they were presented in staggered order. Note that the above labels (i.e., pro- vs. anti-gun control) were only used for our analyses; at no point did any module use such language when presenting information to participants.

Once finished with the training, participants were given the opportunity to go back and review the material before advancing to the main section. When asked at the end of the study if they used the highlighting function, 78.7% of participants indicated they had. As noted above and in Supplementary Note 10, this effort to provide high-quality, accessible, and informative training content produced high levels of trust from participants in the content presented to them, and there was no difference in trust in content between the experimental and control groups (p = 0.895).

Measures

After the training session, we measured participants’ topic knowledge using an 11-item gun-control knowledge battery. All items were multiple choice, participants were given 20 s to complete each question, and they were paid $0.10 for each correct answer. In addition to an overall gun knowledge measure aggregating the 11 items, we also measured participants’ scores on the three kinds of facts the training modules focused on: neutral gun-control facts, pro-gun control facts, and anti-gun control facts. One additional item was an overly difficult question designed to detect cheating45. All results reported above persist when examining only those who showed no cheating behavior (Supplementary Note 12). To allow comparison, participants in both conditions completed the gun control knowledge test.

After their factual knowledge was measured, participants in both conditions continued to answer the attitude measures. These involved three measures presented in a fixed order on the same page. The first outcome mirrors the initial policy question asked at the beginning of the study: “With regard to increasing gun control, what best describes your position?” (Strongly oppose, Oppose, Somewhat oppose, Neither oppose nor support, Somewhat support, Support, Strongly support).

The second measure is designed to assess which side in the gun-control debate participants think is most aligned with the evidence. It uses an ordinal scale to ensure all individuals understand each response level: “When thinking about the different sides of the gun control debate, what statement below is closest to accurate?” (Those who oppose gun control are aligned with the evidence, those who support gun control are not; Those who oppose gun control are mostly aligned with the evidence, those who support gun control are mostly not; Those who oppose gun control are slightly more aligned with the evidence, those who support gun control are slightly less; Those who oppose gun control are about equally aligned with the evidence as those who support gun control; Those who support gun control are slightly more aligned with the evidence, those who oppose gun control are slightly less; Those who support gun control are mostly aligned with the evidence, those who oppose gun control are mostly not; Those who support gun control are aligned with the evidence, those who oppose gun control are not) (order counterbalanced).

The third measure is designed to assess what participants think the current evidence on gun control implies for policy and also uses an ordinal scale to ensure clarity: “What statement do you think is most true about the current evidence for the gun control debate in the U.S.?” (The available evidence makes it very clear that we need to restrict gun ownership in the U.S.; The available evidence makes it somewhat clear that we should restrict gun ownership in the U.S.; The available evidence suggests it would maybe be better to restrict gun ownership in the U.S.; The available evidence is not clear on if we should restrict gun ownership, or not, in the U.S.; The available evidence suggests it would maybe be better to not restrict gun ownership in the U.S.; The available evidence makes it somewhat clear that we should not restrict gun ownership in the U.S.; The available evidence makes it very clear that we need to not restrict gun ownership in the U.S.) (order counterbalanced).

In line with the pre-registered protocol, the three attitude items described above were recoded into variables indicating depolarization, that is, attitude change in the opposing direction from one’s pre-treatment gun control attitude (anti- or pro-gun control; see Supplementary Fig. 3A). After completing the three policy attitude measures, participants answered a series of “social distance” questions measuring their willingness to interact with people who hold opposing views on the issue. There were five items in total that asked participants about their social distance from individuals holding the opposing gun control attitude. The items varied in personal and social cost and involved (i) a son or daughter marrying, (ii) a serious friend starting to date, (iii) being required to work with, (iv) regularly carpooling and making casual conversation with, and (v) having a conversation with an individual who supports more (less) gun restrictions. The order of items was randomized.

Wave 2

Sample

All 1011 participants who completed Wave 1 of the study were recontacted for Wave 2 a full month later. The recontact was carried out by Bovitz, Inc., and participants were not told that there was any association with the earlier study. Of the initial sample, 881 (87%) participated in Wave 2. The Wave 2 participants were comparable to those from Wave 1 on political ideology, party identification, gender, and condition (see Supplementary Note 13). Further, looking at the Wave 1 results while subsetting just on those who returned in Wave 2 yields substantively identical results (see Supplementary Note 14). Thus, there appears to be no obvious “type” of participant who was available at Wave 1 but not at Wave 2.

Procedure and measures

For Wave 2, participants completed the same set of demographics as in Wave 1. They then answered the three key attitudinal outcomes in the same order as before. Participants then continued to the affective polarization measure, where the order of the five items was again randomized. Lastly, participants completed the same 11-item gun-control knowledge measure from Wave 1. Participants were again given 20 s to complete each item, but unlike before, there was no payment for correct answers. Instead, participants were told to “try their best” and that they would be told at the end how well they did. This non-incentivized context gives us an even clearer estimate of how much participants had internalized the information.

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.