The extent to which belief in (mis)information reflects lack of knowledge versus a lack of motivation to be accurate is unclear. Here, across four experiments (n = 3,364), we motivated US participants to be accurate by providing financial incentives for correct responses about the veracity of true and false political news headlines. Financial incentives improved accuracy and reduced partisan bias in judgements of headlines by about 30%, primarily by increasing the perceived accuracy of true news from the opposing party (d = 0.47). Incentivizing people to identify news that would be liked by their political allies, however, decreased accuracy. Replicating prior work, conservatives were less accurate at discerning true from false headlines than liberals, yet incentives closed the gap in accuracy between conservatives and liberals by 52%. A non-financial accuracy motivation intervention was also effective, suggesting that motivation-based interventions are scalable. Altogether, these results suggest that a substantial portion of people’s judgements of the accuracy of news reflects motivational factors.
Misinformation—which can refer to fabricated news stories, false rumours, conspiracy theories or disinformation—can have serious negative effects on society and democracy1,2. Misinformation exposure can reduce support for climate change3,4 or lead to vaccine hesitancy5,6,7, and the mere repetition of misinformation can increase belief in it8,9. There has thus been a growing interest in understanding the psychology of belief in misinformation and how to mitigate its spread1,2,10,11,12.
There are large partisan differences in how people judge information to be true or false. People are much more likely to believe news with politically congruent content13,14,15,16 or news that comes from politically congruent sources17,18. However, there are multiple possible reasons that can explain why this partisan divide exists. One possible explanation is that people tend to engage in politically motivated cognition19,20: although people are often motivated to be accurate, they also have social goals (for example, group belonging, status and so on) for holding certain beliefs that can interfere with accuracy goals13. Another potential explanation is that partisans have different pre-existing knowledge, or different prior beliefs, as a result of exposure to different partisan news outlets and social media feeds10.
Yet, it is challenging to differentiate between these explanations unless accuracy or social motivations are experimentally manipulated21,22,23,24. If belief in misinformation in part reflects motivational factors, experimentally manipulating people’s accuracy or social motivations should shift people’s judgements of misinformation21,23,24,25. However, if belief in misinformation simply reflects different prior beliefs or exposure to different information sources, these experimental manipulations should not change people’s judgements of misinformation.
Several studies have also found that US conservatives or Republicans tend to believe in and share more misinformation than US liberals or Democrats26,27,28,29,30,31,32. One interpretation behind this asymmetry is that US conservatives are exposed to more low-quality information and thus have less accurate political knowledge, perhaps due to US conservative politicians and news media sources sharing less accurate information33,34. Another interpretation again focuses on motivation, suggesting that US conservatives may, in some contexts, have greater motivations to believe ideologically or identity-consistent claims that could interfere with their motivation to be accurate31,35,36,37. But, again, it is difficult to disentangle these two explanations without experimentally manipulating motivations.
In this Article, we examine the causal role of accuracy motives in shaping judgements of true and false political news via the provision of financial incentives for correctly identifying accurate headlines. Prior research using financial incentives for accuracy has yielded mixed results. For example, previous studies have found that financial incentives to be accurate can reduce partisan bias about politicized issues38,39 and news headlines40, and improve accuracy about scientific information41. However, another study found that incentives for accuracy can backfire, increasing belief in false news stories14. Incentives also do not eliminate people’s tendency to view familiar statements42,43 or positions for which they advocate44 as more accurate, raising questions as to whether incentives can override the heuristics people use to judge truth45. These conflicting results motivate the need for a systematic investigation of when and for whom various motivations influence belief in news.
We also examine whether social identity-based motivations to identify posts that will be liked by one’s political in-group interfere with accuracy motivations. On social media, content that appeals to social-identity motivations, such as expressions of out-group derogation, tends to receive high engagement online46,47,48. False news stories may be good at fulfilling these identity-based motivations, as false content is often negative about out-group members26,49. The incentive structure of the social media environment draws attention to social motivations (for example, receiving social approval in the form of likes and shares), which may lead people to give less weight to accuracy motivations online50,51. As such, it is important to understand how these social motivations might compete with accuracy motivations13.
Finally, we compare the effect of accuracy motivations with the effects of other factors that are regularly invoked to explain the belief and dissemination of misinformation, such as analytic thinking52, political knowledge53, media literacy skills54 and affective polarization49. By including these variables in the same study, we are able to compare different theoretical models of (mis)information belief and sharing2,11.
Across four pre-registered experiments, including a replication with a nationally representative US sample, we test whether incentives to be accurate improve people’s ability to discern between true and false news and reduce partisan bias (experiment 1). Additionally, we test whether increasing partisan identity motivations by paying people to correctly identify posts that they think will be liked by their political in-group (mirroring the incentives of social media) reduces accuracy (experiment 2). Further, we examine whether the effects of incentives are attenuated when partisan source cues are removed from posts (experiment 3). Then, to test the generalizability of these results and help rule out alternate explanations, we test whether increasing accuracy motivations through a non-financial accuracy motivation intervention also improves accuracy (experiment 4). Finally, in an integrative data analysis (IDA), we examine whether motivation helps explain the gap in accuracy between conservatives and liberals, and compare the effects of motivation with the effects of other variables known to predict misinformation susceptibility.
Experiment 1: incentives improve accuracy and reduce bias
In experiment 1, we recruited a politically balanced sample of 462 US adults via the survey platform Prolific Academic55. Participants were shown 16 pre-tested news headlines with an accompanying picture and source (similar to how a news article preview would show up on someone’s Facebook feed). In a pre-test, eight headlines (four false and four true) were rated as more accurate by Democrats than Republicans, and eight headlines (four false and four true) were rated as more accurate by Republicans than Democrats56. An example of a Democrat-leaning true headline was ‘Facebook removes Trump ads with symbols once used by Nazis’ from apnews.com, and an example of a Democrat-leaning false news headline was ‘White House Chef Quits because Trump Has Only Eaten Fast Food For 6 Months’ from halfwaypost.com. After seeing each headline, participants were asked ‘To the best of your knowledge, is the claim in the above headline accurate?’ and were then asked ‘If you were to see the above article on social media, how likely would you be to share it?’ For more details, see Methods.
Half of the participants were randomly assigned to the ‘accuracy incentives’ condition. In this condition, participants were told they would receive a small bonus payment of up to one US dollar based on how many correct answers they could provide regarding the accuracy of the articles. The other half of participants were assigned to a ‘control’ condition in which they were asked the same questions about accuracy and sharing without any incentive to be accurate.
We first examined whether accuracy incentives improved truth discernment, or the number of true headlines participants rated as true minus the number of false headlines participants rated as true15. As predicted, participants in the accuracy incentives condition (mean (M) = 3.01, 95% confidence interval (CI) 2.68–3.34) were better at discerning truth than those in the control condition (M = 2.43, 95% CI 2.12–2.73), t(457.64) = 2.58, P = 0.010, d = 0.24. In other words, participants answered 11.01 (out of 16) questions correctly in the accuracy incentives condition, as opposed to 10.43 (out of 16) questions in the control condition.
We next examined whether incentives decreased partisan bias, or the number of politically congruent headlines participants rated as true minus the number of politically incongruent headlines participants rated as true. This measurement of partisan bias follows recommendations from prior work15,57, yet we discuss alternative ways to measure partisan bias and debates about the term ‘partisan bias’58 in Supplementary Appendix 1. We also re-analysed our data using an alternate measure of partisan bias in Supplementary Appendix 1 and found no changes to our main conclusions.
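For concreteness, both outcome measures reduce to simple difference scores over each participant's 16 ratings. The sketch below illustrates the computation with hypothetical data (the rating vectors and variable names are ours for illustration, not the study's actual analysis pipeline):

```python
# Minimal sketch of the two outcome measures, using made-up ratings for one
# participant. 1 = headline rated true, 0 = headline rated false.

def truth_discernment(rated_true, is_true):
    """True headlines rated true minus false headlines rated true."""
    hits = sum(r for r, t in zip(rated_true, is_true) if t)
    false_alarms = sum(r for r, t in zip(rated_true, is_true) if not t)
    return hits - false_alarms

def partisan_bias(rated_true, is_congruent):
    """Politically congruent headlines rated true minus incongruent ones."""
    congruent = sum(r for r, c in zip(rated_true, is_congruent) if c)
    incongruent = sum(r for r, c in zip(rated_true, is_congruent) if not c)
    return congruent - incongruent

# 16 headlines: first 8 true, last 8 false; congruency labels are illustrative.
rated_true   = [1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
is_true      = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
is_congruent = [1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0]

print(truth_discernment(rated_true, is_true))    # → 2 (5 hits − 3 false alarms)
print(partisan_bias(rated_true, is_congruent))   # → 2 (5 congruent − 3 incongruent)
```

Higher truth discernment indicates better accuracy; higher partisan bias indicates greater belief in identity-consistent claims.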
As predicted, partisan bias, or one’s belief in politically congruent over politically incongruent claims, was 31% smaller in the accuracy incentives condition (M = 1.31, 95% CI 1.04–1.58) as compared with the control condition (M = 1.91, 95% CI 1.62–2.19), t(495.8) = 3.01, P = 0.001, d = 0.28. Results from all four studies are plotted visually in Fig. 1.
Additional analysis (for extended results, see Supplementary Appendix 1) found that the accuracy incentives condition increased the percentage of politically incongruent true headlines rated as true (M = 51.53%, 95% CI 47.36–55.70) as compared with the control condition (M = 38.25%, 95% CI 34.41–42.08), P < 0.001, d = 0.43. Incentives did not statistically significantly impact judgements of politically congruent true news, politically incongruent false news or politically congruent false news when controlling for multiple comparisons with Tukey post-hoc tests (ps >0.444). Thus, the effects of incentives were mainly driven by an increased belief in true news from the opposing party.
Finally, we examined whether the incentives influenced sharing discernment, or the number of true headlines shared minus the number of false headlines people intended to share. Interestingly, even though sharing higher-quality articles was not explicitly incentivized, sharing discernment was slightly higher in the accuracy incentive condition (M = 0.38, 95% CI 0.28–0.48) as compared with the control condition (M = 0.22, 95% CI 0.15–0.30), t(424.8) = 2.49, P = 0.037, d = 0.23.
Experiment 2: social motivations
In experiment 2, we aimed to replicate and extend the results of experiment 1 by examining whether social or partisan motivations to correctly identify articles that would be liked by one's political in-group might interfere with accuracy motives. We recruited another politically balanced sample of 998 US adults (Methods). In addition to the accuracy incentives and control conditions, we added a 'partisan sharing' condition, in which participants were given a financial incentive to correctly identify articles that would appeal to members of their own political party. This condition was meant to mirror the incentive structure of social media, where people try to share content that will be liked by their friends and followers. Specifically, participants were told that they would receive a bonus payment of up to one dollar based on how accurately they identified articles that would be liked by members of their political party if they shared them on social media. Immediately after answering this question, participants were asked about the accuracy of the article and how likely they would be to share it. To examine how partisan identity goals might interfere with accuracy goals, we added a final 'mixed motivation' condition, in which participants received a financial incentive of up to one dollar to identify articles that would be liked by their in-group, followed by an additional financial incentive to accurately identify true and false articles.
We first examined how these motivations influenced truth discernment. Replicating the results of experiment 1, there was a significant main effect of the accuracy incentives manipulation on truth discernment, F(1, 994) = 29.14, P < 0.001, η2G = 0.03. There was also a significant main effect of the partisan sharing manipulation on truth discernment, F(1, 994) = 7.53, P = 0.006, η2G = 0.01, but no significant interaction between the two manipulations (P = 0.237). Tukey honestly significant difference (HSD) post-hoc tests indicated that truth discernment was higher in the accuracy incentives condition (M = 3.01, 95% CI 2.69–3.32) compared with the control condition (M = 2.02, 95% CI 1.74–2.30), P < 0.001, d = 0.41. Truth discernment was also higher in the accuracy incentives condition compared with the partisan sharing condition (M = 1.78, 95% CI 1.49–2.07), P < 0.001, d = 0.50, and the mixed condition (M = 2.42, 95% CI 2.11–2.71), P = 0.029, d = 0.27. However, the mixed condition did not differ from the control condition (P = 0.676), and the partisan sharing condition also did not significantly differ from the control condition (P = 0.241). Taken together, these results suggest that accuracy motivations increase truth discernment, but motivations to share articles that appeal to one's political in-group can decrease truth discernment.
We then examined how these motives influenced partisan bias. Replicating the results from experiment 1, there was a significant main effect of accuracy incentives on partisan bias, F(1, 994) = 9.01, P = 0.003, η2G = 0.01, but no effect of the partisan sharing manipulation, F(1, 994) = 0.60, P = 0.441, η2G = 0.00, and no interaction between the accuracy and the partisan sharing manipulation, F(1, 994) = 0.27, P = 0.606, η2G = 0.00. Post-hoc tests indicated that there was a non-significant difference in partisan bias between the accuracy incentives condition (M = 1.26, 95% CI 1.01–1.51) and the control condition (M = 1.72, 95% CI 1.47–1.98), P = 0.062, d = 0.23, a 27% decrease in partisan bias. There was a significant difference between the accuracy incentives condition and the partisan sharing condition (M = 1.76, 95% CI 1.48–2.03), P = 0.040, d = 0.24. No other post-hoc tests yielded significant differences (ps >0.182).
Follow-up analysis (Supplementary Appendix 1) once again indicated that the incentives primarily impacted the percentage of politically incongruent true headlines rated as accurate (M = 55.61%, 95% CI 51.68–59.54) when compared with the control condition (M = 37.65%, 95% CI 33.83–41.46), P < 0.001, d = 0.58. The incentives again did not impact congruent true news, incongruent false news or congruent false news (ps >0.148).
There was no significant effect of accuracy incentives on sharing discernment (P = 0.996), diverging from the results of experiment 1. However, follow-up analysis (Supplementary Appendix 1) indicated that those in the partisan sharing condition shared more politically congruent news, whether true or false (M = 1.98, 95% CI 1.90–2.05), as compared with the control condition (M = 1.80, 95% CI 1.74–1.87), P = 0.015, d = 0.21. Additionally, those in the mixed condition (M = 2.02, 95% CI 1.94–2.10) shared more politically congruent news than those in the control condition, P < 0.001, d = 0.26. Thus, prompting participants to identify whether an article will be liked by their political allies (whether or not they are also incentivized to be accurate) appears to increase intentions to share both true and false news that appeals to one's own partisan identity.
Experiment 3: accuracy incentives and source cues
In experiment 3, we sought to replicate our prior findings in a nationally representative sample in the United States. We recruited a sample of 921 US participants that was quota matched to the national distribution on age, gender, ethnicity and political party. We also tested a potential psychological process underlying the effects of accuracy incentives. As prior work has found strong effects of source cues17 on judgements of news headlines, we suspected that people were responding to source cues when making judgements about news. As true news often contains more recognizable sources with partisan connotations (for example, ‘nytimes.com’ as opposed to the fake news website ‘yournewswire.com’)59, this may explain why incentives only impacted judgements of true news in experiments 1 and 2. To test this possibility, we examined the effect of incentives with and without source cues (for example, a URL name such as ‘foxnews.com’) present beside the headlines (for more details, see Methods). Because we wanted to compare the effects of accuracy incentives with and without sources, this study had four conditions: accuracy incentives (with sources), control (with sources), accuracy incentives (without sources) and control (without sources).
Replicating the main results from experiments 1 and 2, the accuracy incentives condition significantly improved truth discernment, F(1, 917) = 4.44, P = 0.035, η2G = 0.01, reduced partisan bias, F(1, 917) = 18.21, P < 0.001, η2G = 0.02, and increased the number of politically incongruent true articles rated as accurate, F(1, 917) = 20.94, P < 0.001, η2G = 0.02. Thus, accuracy incentives appear to increase accuracy and reduce partisan bias in a large representative sample, suggesting that the results of these experiments probably generalize to the US population as a whole.
Although effect sizes appeared descriptively smaller when sources were removed from the headlines (for details, see Fig. 1 and Supplementary Appendix 1), we did not find significant interactions between the main outcome variables and the presence or absence of source cues. However, this study was not well powered to detect such interactions, as interaction effects can require up to 16 times as much power as main effects60,61 (for power analysis, see Methods). Additional analysis using Bayes factors62 reported in Supplementary Appendix 1 did not find strong evidence for the absence of interaction effects. As in experiment 2, there was once again no significant impact of accuracy incentives on sharing discernment (P = 0.906).
Experiment 4: the effect of a non-financial intervention
In experiment 4, we replicated the accuracy incentive and control conditions in another politically balanced sample of 983 US adults, and added a non-financial accuracy motivation condition. This condition was designed to rule out alternative interpretations of our earlier findings. One mundane interpretation is that participants are merely reporting what they believe fact-checkers think is true rather than their actual beliefs; because the non-financial intervention offers no payment, it gives participants no incentive to answer in ways that depart from their beliefs. Additionally, because financial incentives are difficult to scale to real-world contexts, the non-financial accuracy motivation condition speaks to the generalizability of these results to other, more scalable ways of motivating accuracy.
In the non-financial accuracy condition, people read a brief text about how most people value accuracy and how people think sharing inaccurate content hurts their reputation63 (see intervention text in Supplementary Appendix 2). People were also told to be as accurate as possible and that they would receive feedback on how accurate they were at the end of the study.
Our main pre-registered hypothesis was that this non-financial accuracy motivation condition would increase belief in politically incongruent true news relative to the control condition. An analysis of variance (ANOVA) found a main effect of the experimental conditions on the amount of politically incongruent true news rated as true, F(2, 980) = 17.53, P < 0.001, η2G = 0.04. Supporting this hypothesis, the non-financial accuracy motivation condition increased the percentage of politically incongruent true news stories rated as true (M = 43.97, 95% CI 40.59–47.34) as compared with the control condition (M = 35.19, 95% CI 31.93–38.45), P < 0.001, d = 0.29. Replicating experiments 1–3, the accuracy incentive condition also increased the perceived accuracy of politically incongruent true news (M = 49.15, 95% CI 45.74–52.55), P < 0.001, d = 0.45. The accuracy incentive and non-financial accuracy motivation conditions were not significantly different from one another (P = 0.083, d = 0.17), though this may be because we lacked the power to detect a difference. In short, the non-financial accuracy motivation manipulation was also effective at increasing belief in politically incongruent true news, with an effect about 63% as large as that of the financial incentive.
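The '63% as large' comparison is consistent with taking each intervention's raw mean difference from the control condition; the back-of-envelope reconstruction below is our assumption about the calculation, using the means reported above:

```python
# Percentages of politically incongruent true news rated true (experiment 4).
control, nonfinancial, financial = 35.19, 43.97, 49.15

# Effect of each intervention relative to control, and their ratio.
ratio = (nonfinancial - control) / (financial - control)
print(round(100 * ratio))  # → 63
```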
Since we expected the non-financial accuracy motivation condition to have a smaller effect than the accuracy incentives condition, we did not pre-register hypotheses for truth discernment and partisan bias, as we did not anticipate having enough power to detect effects on these outcome variables. Indeed, the non-financial accuracy motivation condition did not significantly increase truth discernment (P = 0.221) or reduce partisan bias (P = 0.309). However, replicating experiments 1–3, accuracy incentives once again improved truth discernment (P = 0.001, d = 0.28) and reduced partisan bias (P = 0.003, d = 0.25). The effect of the non-financial accuracy motivation condition was 47% as large as the effect of the accuracy incentive for truth discernment and 45% as large for partisan bias. There was also no overall effect of the experimental conditions on sharing discernment (P = 0.689). For extended results, see Supplementary Appendix 1.
Together, these results suggest that a subtler (and also more scalable) accuracy motivation intervention that does not employ financial incentives is effective at increasing the perceived accuracy of true news from the opposing party, but has a smaller effect size than the stronger financial incentive intervention.
Integrative data analysis
To generate more precise estimates of our effects, we pooled data from all four studies to conduct an IDA64. For the IDA, we included only the 16 news headlines common to all four studies, and only the accuracy incentives and control conditions that appeared in all four studies.
We did not have any studies in the file drawer on this topic, meaning that our estimate was not influenced by publication bias.
Incentives had the largest positive effect on the perceived accuracy of politically incongruent true news, P < 0.001, d = 0.47, and a smaller positive effect on the perceived accuracy of politically congruent true news, P = 0.001, d = 0.17. Incentives did not significantly affect belief in politically incongruent false news, P = 0.163, d = 0.13, or belief in politically congruent false news, P = 0.993, d = −0.04 (Fig. 2), after adjusting for multiple comparisons with Tukey post-hoc tests. Analysis for each individual item revealed that incentives significantly increased belief in all true items, but they did not significantly decrease belief in any false items (though they significantly increased belief in one false item). More details are reported in Supplementary Appendix 1, and an analysis for each individual headline is reported in Supplementary Appendix 3. Additional analysis using Bayes factors reported in Supplementary Appendix 4 found strong evidence that incentives impacted belief in both politically congruent and politically incongruent true news, but found inconsistent evidence that they affected belief in false news.
While effects on sharing discernment were inconsistent across studies, the IDA found that there was a small positive effect of the incentive on sharing discernment, t(2020.20) = 2.19, P = 0.029, d = 0.10. Finally, people spent slightly more time on each headline in the accuracy incentives condition, t(818.53) = 2.34, P = 0.019, d = 0.16, indicating that incentives may have led people to put more effort into their responses.
Replicating prior work26,27,28,29,30,31, conservatives were worse at discerning between true and false headlines than liberals. Conservatives answered about 9.26 (out of 16) questions correctly when not incentivized to be accurate, and liberals answered 10.93 questions out of 16 correctly when unincentivized—a 1.67-point difference, 95% CI 1.41–1.94, t(1035.69) = 12.53, P < 0.001, d = 0.77. However, when conservatives were incentivized to be accurate, they answered 10.12 questions correctly, making the gap between incentivized conservatives and unincentivized liberals 0.81 points, 95% CI 0.53–1.09, t(951.91) = 5.65, P < 0.001, d = 0.35. In other words, paying conservatives less than a dollar to correctly identify news headlines as true or false reduced the gap in performance between conservatives and (unincentivized) liberals by 51.50%. Incentives also considerably reduced the gap between conservatives and liberals in terms of partisan bias, sharing discernment and belief in politically incongruent true news. More detail is reported in Supplementary Appendix 1 and plotted visually in Fig. 3. Altogether, these results suggest that a substantial portion of US conservatives’ tendency to believe and share less accurate news reflects a lack of motivation to be accurate rather than lack of knowledge alone.
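The reported 51.50% reduction follows directly from the means above:

```python
# Accuracy gap (questions correct out of 16) between unincentivized liberals
# and conservatives, with and without incentives for conservatives.
gap_unincentivized = 10.93 - 9.26   # liberals vs unincentivized conservatives
gap_incentivized = 10.93 - 10.12    # liberals vs incentivized conservatives

reduction = (gap_unincentivized - gap_incentivized) / gap_unincentivized
print(f"{100 * reduction:.2f}%")    # → 51.50%
```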
Importantly, the incentives improved truth discernment for both liberals, d = 0.23, P < 0.001, and conservatives, d = 0.40, P < 0.001 (for a table of effect sizes broken down by political affiliation, see Supplementary Appendix 5). Descriptively, the effect sizes for our intervention were larger for conservatives than liberals, which diverges from other misinformation interventions that tend to show larger effect sizes for liberals65,66. Furthermore, political ideology (liberal versus conservative) was a significant moderator of belief in incongruent true news, P = 0.033, and partisan bias, P = 0.029 (though these moderation effects were not significant for truth discernment, P = 0.095, or sharing discernment, P = 0.061), such that the effects of incentives appeared to be larger for conservatives than liberals. The effect of the incentives on truth discernment was not significantly moderated by cognitive reflection, political knowledge or affective polarization (ps > 0.182). However, even though we had a large sample, we were still slightly underpowered to detect these interaction effects (see power analysis in Methods), and supplemental Bayesian analyses also did not find strong evidence for the significant moderation effects (Supplementary Appendix 11), so these interaction effects should be interpreted with caution.
Relative importance of accuracy incentives
In each experiment, we measured other individual difference variables known to predict truth discernment, such as cognitive reflection, political knowledge and partisan animosity, as well as demographic variables, such as age, education and gender. We ran a multiple regression analysis on our IDA with all of these variables included in the model (Fig. 4a). To compare the relative importance of each of these predictors, we also ran a relative importance analysis using the 'lmg' method67, which calculates the relative contribution of each predictor to the R2 (Fig. 4b). Full models and relative importance analyses are reported in Supplementary Appendices 6 and 7.
Political conservatism and accuracy incentives were among the most important predictors for many of the key outcome variables, although confidence intervals were large and overlapping for the relative importance analysis (Supplementary Appendix 4). While prominent accounts claim that partisanship and politically motivated cognition play a limited role in the belief and sharing of misinformation as compared with other factors (such as cognitive reflection or inattention)10,68, our results indicate that motivation and partisan identity or ideology are very important factors. Our data point to the importance of broad theoretical accounts of (mis)information belief and sharing that integrate motivation and partisan identity with other variables2,10,11,24,69. Indeed, an investigation using cognitive modelling found that a broad model of misinformation belief that included multiple factors (such as partisan identity, cognitive reflection and more) performed better at predicting acceptance of misinformation than other models that focused exclusively on cognitive or emotional factors70.
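The 'lmg' method (implemented in R's relaimpo package) averages each predictor's incremental contribution to R2 over all orders in which predictors could enter the model. The sketch below is a minimal reimplementation on simulated data; the predictor names and effect sizes are illustrative, not estimates from the actual dataset:

```python
# Illustrative 'lmg' relative-importance decomposition: average each
# predictor's marginal R^2 contribution over all orderings of entry.
import itertools
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit with intercept; 0 for an empty predictor set."""
    if X.shape[1] == 0:
        return 0.0
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def lmg(X, y, names):
    """Average marginal R^2 contribution of each predictor over all orderings."""
    p = X.shape[1]
    contrib = dict.fromkeys(names, 0.0)
    perms = list(itertools.permutations(range(p)))
    for perm in perms:
        entered = []
        for j in perm:
            before = r_squared(X[:, entered], y)
            entered.append(j)
            contrib[names[j]] += r_squared(X[:, entered], y) - before
    return {n: c / len(perms) for n, c in contrib.items()}

# Simulated data loosely mimicking the design (coefficients are made up).
rng = np.random.default_rng(0)
n = 500
incentive = rng.integers(0, 2, n).astype(float)      # 0 = control, 1 = incentive
conservatism = rng.normal(size=n)
reflection = rng.normal(size=n)
y = 0.5 * incentive - 0.6 * conservatism + 0.3 * reflection + rng.normal(size=n)

X = np.column_stack([incentive, conservatism, reflection])
shares = lmg(X, y, ["incentive", "conservatism", "reflection"])
print({k: round(v, 3) for k, v in shares.items()})   # shares sum to full-model R^2
```

A useful property of this decomposition is that the shares sum exactly to the full model's R2, which makes the predictors directly comparable.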
Discussion
Increasing motivations to be accurate via a small financial incentive improved people’s accuracy in discerning between true and false news and decreased the partisan divide in belief in news by about 30%. These effects were observed across four experiments (n = 3,364), and were primarily driven by an increase in the perceived accuracy of politically incongruent true news (d = 0.47). No significant effects were found for false news, which people encounter relatively infrequently online71. Additionally, providing people with an incentive to identify articles that would be liked by their political in-group reduced accuracy and increased intentions to share politically congruent true and false news. Thus, social or partisan identity goals appear to interfere with accuracy goals. Furthermore, a non-financial accuracy motivation intervention that provided people feedback about their accuracy, emphasized social norms about accuracy and highlighted the reputational benefits of being accurate significantly increased the perceived accuracy of politically incongruent true news (d = 0.29). This illustrates that accuracy motivation interventions that do not involve financial incentives can be applied at scale.
These results make a number of key theoretical contributions. First, they suggest that partisan differences in news judgements do not simply reflect differences in factual knowledge10. Instead, our data suggest that a substantial portion of this partisan divide can be attributed to a lack of motivation to be accurate. While there have been debates about whether partisan differences in belief reflect differing prior beliefs versus politically motivated cognition21,22, our studies provide robust causal evidence for the effect of motivation on belief.
Additionally, while a number of studies have observed that American conservatives tend to be more susceptible to misinformation than liberals26,27,28,29,30,31, incentives closed the gap in truth discernment between incentivized conservatives and (unincentivized) liberals by more than half. This suggests that a significant portion of partisan differences in (mis)information belief can be attributed to motivational factors, rather than reflecting knowledge gaps alone.
Along with other research39,72,73, these findings suggest that survey data about belief in (mis)information should not be taken at face value. People respond differently when they are highly motivated to be accurate compared with when they are motivated to appeal to their in-group50. However, this does not mean that prior beliefs are not also important, or that motivation is relevant in every context. Indeed, judgements of false headlines appeared to be unaffected by accuracy motivations, suggesting that other factors may play a more prominent role in people’s assessment of false news as compared with true news. Future work can explore why incentives have different effects for true and false news. However, since people encounter fake news websites rarely, some have argued that it is more important to increase trust in reliable news than decrease belief in false news74.
These results also have practical implications for interventions to improve the accuracy of people’s beliefs and sharing decisions75,76. Accuracy incentives improved the accuracy of people’s judgements, and an IDA found that this effect may have spilled over into intentions to share more accurate articles (though this effect was small and inconsistent across studies). Further, making people think about which headlines would be liked by their in-group increased people’s intentions to share politically congruent news, both true and false. Thus, interventions and social media design features should aim both to increase accuracy motivations and to decrease motivations to share content that receives high social reward at the cost of accuracy. In line with this, experimental studies have found that providing social rewards for sharing high-quality content and punishments for sharing low-quality content77 improves the quality of news people report intending to share. Additionally, making people publicly endorse that the news that they share is accurate78, or showing people that fellow in-group members believe content is misleading79, also improves people’s sharing intentions. Future work should continue to explore how to incentivize people to engage with more accurate content online by, for example, emphasizing social norms around accuracy or emphasizing the reputational benefits of sharing accurate content (as in experiment 4).
One limitation of this work is that survey experiments have unknown ecological validity. To maximize ecological validity, we used real, pre-tested news headlines in the format in which they would be regularly encountered on social media websites such as Facebook. Additionally, self-reported sharing intentions are highly correlated with real online news sharing80, and a field experiment suggests that priming accuracy can improve news sharing decisions on Twitter68, illustrating that results from survey experiments on misinformation can translate to the field. Another potential limitation is that there are multiple ways to interpret the effects of financial incentives. For instance, people may be guessing what they think fact-checkers believe to earn money, rather than expressing their true beliefs. However, this interpretation is unlikely to explain the full effect, since a subtle non-financial accuracy motivation intervention had similar (albeit smaller) effects. Furthermore, supplementary analysis found that few participants reported answering in ways that did not accord with their true beliefs to receive money (Supplementary Appendix 1).
There is a sizeable partisan divide in the kind of news liberals and conservatives believe in, and conservatives tend to believe in and share more false news than liberals. Our research suggests these differences are not immutable. Motivating people to be accurate improves accuracy about the veracity of true (but not false) news headlines, reduces partisan bias and closes a substantial portion of the gap in accuracy between liberals and conservatives. Theoretically, these results identify accuracy and social motivations as key factors in driving news belief and sharing. Practically, these results suggest that shifting motivations may be a useful strategy for creating a shared reality across the political spectrum.
We report how we determined our sample size, all data exclusions, all manipulations and all measures in the experiment. The research methods were approved by the University of Cambridge Psychology Ethics Committee (Protocol #PRE.2020.110). These studies were pre-registered. Stimuli, Qualtrics survey files, anonymized data, analysis code and all pre-registrations are available on our Open Science Framework (OSF) page: https://osf.io/75sqf.
The experiment launched on 30 November 2020. We recruited 500 participants via the survey platform Prolific Academic55. Specifically, we recruited 250 conservative participants and 250 liberal participants from the United States via Prolific Academic’s demographic pre-screening service to ensure the sample was politically balanced. Our a priori power analysis indicated that we would need 210 participants to detect a medium effect size of d = 0.50 at 95% power, though we doubled this sample size to account for partisan differences and oversampled to account for exclusions. A total of 511 participants took our survey. Following our pre-registered exclusion criteria, we excluded 32 participants who failed our attention check (or did not get far enough in the experiment to reach our attention check), and an additional 17 participants who said they responded randomly at some time during the experiment. This left us with a total of 462 participants (194 M, 255 F, 12 trans/non-binary; age: M = 35.85, standard deviation (s.d.) 13.66; politics: 253 Democrats, 201 Republicans). The experiment 1 pre-registration is available at https://aspredicted.org/blind.php?x=gk9xg5.
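The reported sample-size target can be roughly reproduced with a standard power formula for a two-sided, two-sample t-test. This is a sketch using a normal approximation plus a small-sample correction term; the authors' exact power-analysis software is not stated:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.95, alpha=0.05):
    """Approximate per-group n for a two-sided, two-sample t-test
    (normal approximation with a small t-correction term)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)
    z_power = z(power)
    n = 2 * ((z_alpha + z_power) / d) ** 2 + z_alpha ** 2 / 4
    return ceil(n)

# d = 0.50 at 95% power: ~105 per group, i.e. ~210 participants in total,
# matching the a priori target reported above.
print(2 * n_per_group(0.50, power=0.95))  # 210
```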
The materials were 16 pre-tested true and false news headlines from a large pre-tested sample of 225 news headlines56. In total, eight of these news headlines were false, and eight of the news headlines were true. Because we were interested in whether accuracy incentives would reduce partisan bias, we specifically selected headlines that had a sizeable gap in perceived accuracy between Republicans and Democrats as reported in the pre-test, as well as headlines that were not outdated (the pre-test was conducted a few months before the first experiment). Specifically, we chose eight headlines (four false and four true) that Democrats rated as more accurate than Republicans in the pre-test, and eight headlines (four false and four true) that Republicans rated as more accurate than Democrats. For example stimuli, see Supplementary Appendix 8, and for full materials, see the OSF page.
News evaluation task
Participants were shown these 16 news headlines, along with an accompanying picture and source (similar to how a news article preview would show up on someone’s Facebook feed), and asked ‘To the best of your knowledge, is the claim in the above headline accurate?’ on a scale from 1 (‘extremely inaccurate’) to 6 (‘extremely accurate’). Afterwards, they were asked ‘If you were to see the above article on social media, how likely would you be to share it?’ on a scale from 1 (‘extremely unlikely’) to 6 (‘extremely likely’).
Accuracy incentives manipulation
Half of the participants were randomly assigned to a control condition, in which we explained the news evaluation task, but we did not provide any information about a bonus payment. The other half were assigned to an accuracy incentives condition. In this condition, we explained the news evaluation task, and then told participants they would receive a ‘bonus payment of up to $1.00 based on how many correct answers [they] provide regarding the accuracy of the articles. Correct answers are based on the expert evaluations of non-partisan fact-checkers.’ Specifically, they received one dollar for answering 15 out of 16 questions correctly, and 50 cents for answering 13 out of 16 questions correctly. Since we measured accuracy on a continuous scale, we told participants that ‘if the headline describes a true event, either ‘slightly accurate’, ‘moderately accurate’ or ‘extremely accurate’ constitute correct responses. Similarly, if the headline describes a false event, either ‘extremely inaccurate’, ‘moderately inaccurate’ or ‘slightly inaccurate’ constitute ‘correct’ responses.’ In other words, the continuous scale was measured dichotomously for the purposes of giving financial incentives. Participants were also notified that all other questions would not affect their bonus payment. For full manipulation text, see Supplementary Materials 2 or the OSF.
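The dichotomous scoring rule can be sketched as follows. This is a minimal illustration assuming the bonus tiers apply at 15-or-more and 13-or-more correct answers; the headline data passed in are invented:

```python
def is_correct(rating, headline_is_true):
    """A 1-6 accuracy rating counts as correct if it falls on the
    true side of the scale (4-6) for a true headline, or on the
    false side (1-3) for a false headline."""
    return rating >= 4 if headline_is_true else rating <= 3

def bonus(ratings, truth_values):
    """Bonus tiers from the text: $1.00 for 15+/16 correct, $0.50 for 13+/16."""
    n_correct = sum(is_correct(r, t) for r, t in zip(ratings, truth_values))
    if n_correct >= 15:
        return 1.00
    if n_correct >= 13:
        return 0.50
    return 0.00
```

For example, a hypothetical participant who rates 14 of 16 headlines on the correct side of the scale would earn the 50-cent tier.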
We gave participants a three-item cognitive reflection task52. We measured participants’ political knowledge using a five-item scale49 and in-group love/out-group hate with feeling thermometers81. For question text, see Supplementary Appendix 9 and the OSF. These measures were repeated across all studies.
For truth discernment, partisan bias and sharing discernment, two-sided independent samples t-tests were used. While we asked participants to rate the truth of headlines on a continuous scale, these variables were recoded as dichotomous for analysis because the financial incentive only rewarded participants on the basis of whether they correctly identified a headline as true or false. Since we did not clearly specify this in the experiment 1 pre-registration (but did for experiments 2–4), we show the results with a continuous coding in Supplementary Appendix 10. The continuous coding did not change the conclusions of our studies.
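Under this dichotomous coding, truth discernment can be computed as the difference between the proportion of true headlines rated true and the proportion of false headlines rated true. This is a sketch of one common operationalization; the paper's exact formula may differ:

```python
def truth_discernment(ratings, truth_values):
    """ratings: 1-6 accuracy judgements; truth_values: True/False per headline.
    Discernment = P(rated true | true headline) - P(rated true | false headline)."""
    rated_true = [r >= 4 for r in ratings]
    hits = [rt for rt, t in zip(rated_true, truth_values) if t]
    fas = [rt for rt, t in zip(rated_true, truth_values) if not t]
    return sum(hits) / len(hits) - sum(fas) / len(fas)
```

A participant who rates every headline as accurate scores 0 (no discernment), while one who correctly classifies every headline scores 1.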
To test what types of headlines were affected by the incentives, we ran a 2 (accuracy incentive versus no incentive) × 2 (politically congruent versus politically incongruent) × 2 (true headlines versus false headlines) mixed-design ANOVA with the percentage of articles rated as accurate as the dependent variable, and then followed up with Tukey HSD post-hoc tests. Extended analyses are in Supplementary Appendix 1.
The experiment launched on 22 January 2021. We aimed to recruit 1,000 total participants (250 per condition) via the survey platform Prolific Academic, though we oversampled and recruited 1,100 to account for exclusion criteria. We chose this sample size because a power analysis revealed that we needed at least 216 participants per condition to detect the smallest effect size (d = 0.24) at 80% power using a one-tailed t-test (although two-tailed tests were used for all analysis). Once again, we used Prolific’s pre-screening platform to recruit 550 liberals and 550 conservatives from the United States, and 1,113 participants took our survey. Following our pre-registered exclusion criteria, we excluded 76 participants who failed our attention check (or did not finish enough of the survey to reach the attention check) and an additional 39 participants who said they responded randomly at some point during the experiment. This left us with a total of 998 participants (463 M, 505 F, 30 transgender/non-binary/other; age: M = 36.17, s.d. 13.94; politics: 568 liberals, 430 conservatives). This experiment was also pre-registered (pre-registration at https://aspredicted.org/blind.php?x=/FKF_15L).
Partisan sharing and mixed incentives manipulations
In the new partisan sharing condition, participants were first asked before the experiment to report the political party with which they identify. Then, they were told that they would receive a bonus payment of up to $1.00 based on how accurately they identified information that would be liked by members of their political party if they shared it on social media. Bonuses were awarded on the basis of how closely participants’ answers matched partisan alignment scores from a pre-test48. Before each question about accuracy and sharing, participants were asked ‘If you shared this article on social media, how likely is it that it would receive a positive reaction from [your political party] (for example, likes, shares, and positive comments)?’ In the mixed condition, participants were given a financial incentive for correctly identifying whether the article would be liked by members of their political party, and were then asked about accuracy and given an incentive for correctly identifying whether the article was accurate. For full intervention text, see Supplementary Appendix 2.
To understand the impact of accuracy and partisan sharing motivations on truth discernment and partisan bias, we ran 2 (accuracy incentive versus control) × 2 (partisan sharing versus control) ANOVAs and followed up on the results using Tukey HSD post-hoc tests. To test what types of headlines were affected by the incentives, we ran a 2 (accuracy versus control) × 2 (partisan sharing versus control) × 2 (politically congruent versus politically incongruent) × 2 (true headlines versus false headlines) mixed-design ANOVA with the percentage of articles rated as accurate as the dependent variable, and then followed up with Tukey HSD post-hoc tests.
The experiment launched on 13 June 2021. We aimed to recruit a nationally representative sample (quota matched to the US population distribution by age, ethnicity and gender) of 1,000 participants via the survey platform Prolific. As in studies 1 and 2, we ensured that the nationally representative sample was politically balanced, or half liberal and half conservative. A total of 1,055 participants took the survey. Then, we once again excluded 95 participants who failed our attention check (or did not make it to that point in the survey), as well as 39 participants who said they were responding randomly at some point in the survey. This left us with a total of 921 participants (439 M, 470 F, 12 transgender/non-binary/other; age: M = 40.07, s.d. 14.67; politics: 542 liberals, 379 conservatives). This experiment was also pre-registered (pre-registration available at https://aspredicted.org/7M2_9K9).
We once again used the same 16 pre-tested true and false news headlines in addition to eight extra true and false news items from the same pre-test. For consistency, we report the results of the 16 news items in the paper, but we also report the results for the full set of 24 items in Supplementary Appendix 3, which did not change our conclusions.
In addition to the accuracy incentive and control condition, participants were assigned to identical accuracy incentive and control conditions without source cues present on the stimuli. In these conditions, the sources (for example, ‘nytimes.com’) were greyed out, so participants could only make assessments of the stimuli based on the photo and headline alone (for examples, see Supplementary Materials 8).
To understand the impact of accuracy incentives and source cues on truth discernment and partisan bias, we ran 2 (accuracy versus control) × 2 (source versus no source) ANOVAs and followed up on the results using Tukey HSD post-hoc tests. To test what types of headlines were affected by the incentives, we ran a 2 (accuracy versus control) × 2 (source versus no source) × 2 (politically congruent versus politically incongruent) × 2 (true headlines versus false headlines) mixed-design ANOVA with the percentage of articles rated as accurate as the dependent variable, and then followed up with Tukey HSD post-hoc tests.
Power analysis for interaction effects
On the basis of the effect sizes of study 2 and the principle that 16 times the sample size is needed to detect an attenuated interaction effect60,61, a power analysis conducted after we ran the study found that we needed roughly 1,536 participants to detect an interaction for the amount of politically incongruent news rated as true, 2,560 participants to detect an interaction effect for truth discernment and 7,488 participants to detect an interaction effect for partisan bias with 80% power. Thus, this particular design was underpowered to detect whether accuracy incentives interacted with source cues.
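The 16× heuristic can be made concrete with a back-of-the-envelope calculation. The effect size below is hypothetical, chosen only for illustration, and the power formula is a normal approximation:

```python
from math import ceil
from statistics import NormalDist

def total_n_main_effect(d, power=0.80, alpha=0.05):
    """Total N (two equal groups) to detect a main effect of size d
    with a two-sided, two-sample test (normal approximation)."""
    z = NormalDist().inv_cdf
    per_group = 2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2
    return 2 * ceil(per_group)

d_hypothetical = 0.5
n_main = total_n_main_effect(d_hypothetical)  # 126 in total
n_interaction = 16 * n_main                   # 2016: ~16x the main-effect N
```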
This experiment launched on 25 May 2022. We aimed to recruit a total of 1,000 participants (roughly 333 per condition) via the platform Prolific Academic. We chose this sample size as a power analysis found that we would need 312 per condition to detect the smallest effect size found in the previous study (d = 0.26) with 90% power. Additionally, we wanted relatively high power because we expected the effect of the non-financial accuracy motivation condition to be smaller than that of the financial incentive condition. We used Prolific’s pre-screening platform to recruit a sample that was balanced by politics and gender. A total of 1,007 participants took our survey. Following our pre-registered exclusion criteria, we excluded 17 participants who failed our attention check (or did not finish enough of the survey to reach the attention check) and an additional 8 participants who said they responded randomly at some point during the experiment. This left us with a total of 993 participants (486 M, 483 F, 30 transgender/non-binary/other; age: M = 41.46, s.d. 15.06; politics: 507 liberals, 476 conservatives). This experiment was also pre-registered (pre-registration available at https://aspredicted.org/86W_BY4).
We once again used the same 16 pre-tested true and false news headlines, along with extra ‘misleading’ news headlines.
Following our pre-registered analysis plan, we ran a one-way (accuracy versus control versus non-financial accuracy motivation) ANOVA with the percentage of incongruent-true articles rated as true as the dependent variable, followed up by Tukey post-hoc tests. We also ran one-way ANOVAs with truth discernment and partisan bias as dependent variables and followed up with post-hoc tests.
We conducted moderation analysis on the pooled dataset by testing for an interaction between the condition and political ideology (liberal versus conservative) in a linear regression. To test the relative importance of each predictor, we ran a relative importance analysis using the ‘relaimpo’ package in R. Bootstrapped confidence intervals were calculated for the ‘lmg’ relative importance metrics using 1,000 bootstraps.
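The moderation test is equivalent to fitting a linear regression that includes a condition × ideology interaction term. A minimal sketch via ordinary least squares on synthetic data (all variable names and effect sizes here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic data: the outcome depends on condition, ideology,
# and their interaction -- the moderation effect of interest.
condition = rng.integers(0, 2, n)      # 1 = accuracy incentive
conservative = rng.integers(0, 2, n)   # 1 = conservative
y = (0.3 * condition - 0.4 * conservative
     + 0.25 * condition * conservative + rng.normal(0, 1, n))

# Design matrix: intercept, condition, ideology, interaction term.
X = np.column_stack([np.ones(n), condition, conservative,
                     condition * conservative])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the condition x ideology interaction,
# i.e. whether the incentive effect differs by ideology.
```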
Power analysis for moderation effects
Using effect sizes from the IDA and the principle that 16 times the sample size is needed to detect an attenuated interaction effect60,61, a post-hoc power analysis found that we needed 2,336 participants to detect an interaction effect for the amount of politically incongruent news rated as true, 5,984 participants to detect an interaction effect for truth discernment, 7,488 for partisan bias and 50,336 to detect an interaction for sharing discernment. Thus, moderation effects should be interpreted with caution.
Signal detection analysis
As another robustness check, we also conducted supplemental analysis using signal detection modelling15. This analysis found that incentives increased participants’ discrimination between true and false news (for both politically congruent and politically incongruent headlines), and also increased the threshold by which people accepted politically incongruent headlines as true (Supplementary Appendix 12). In sum, analysis using signal detection modelling yielded highly similar results to our main analysis.
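The two signal detection quantities can be sketched as follows: sensitivity d′ is the difference of z-transformed hit and false-alarm rates (rating a true headline as true is a hit; rating a false headline as true is a false alarm), and the criterion c indexes the threshold for accepting headlines as true. This is a generic equal-variance sketch, not necessarily the paper's exact model:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Equal-variance Gaussian signal detection:
    d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2.
    Higher c means a stricter threshold for calling headlines true."""
    z = NormalDist().inv_cdf
    zh, zfa = z(hit_rate), z(fa_rate)
    return zh - zfa, -(zh + zfa) / 2

# Hypothetical example: 84% hits, 31% false alarms.
d_prime, c = dprime_and_criterion(0.84, 0.31)
```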
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Anonymized data, Qualtrics files and stimuli are available on the OSF at https://osf.io/75sqf.
The R code necessary to reproduce our results is available on the OSF at https://osf.io/75sqf.
Lewandowsky, S., Ecker, U. K. & Cook, J. Beyond misinformation: understanding and coping with the ‘post-truth’ era. J. Appl. Res. Mem. Cogn. 6, 353–369 (2017).
Van Bavel, J. J. et al. Political psychology in the digital (mis)information age: a model of news belief and sharing. Soc. Issues Policy Rev. 15, 84–113 (2021).
Biddlestone, M., Azevedo, F. & van der Linden, S. Climate of conspiracy: a meta-analysis of the consequences of belief in conspiracy theories about climate change. Curr. Opin. Psychol. 46, 101390 (2022).
Van der Linden, S., Leiserowitz, A., Rosenthal, S. & Maibach, E. Inoculating the public against misinformation about climate change. Glob. Chall. 1, 1600008 (2017).
Pierri, F. et al. Online misinformation is linked to early COVID-19 vaccination hesitancy and refusal. Sci. Rep. 12, 5966 (2022).
Loomba, S., de Figueiredo, A., Piatek, S. J., de Graaf, K. & Larson, H. J. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat. Hum. Behav. 5, 337–348 (2021).
Rathje, S., He, J. K., Roozenbeek, J., Van Bavel, J. J. & van der Linden, S. Social media behavior is associated with vaccine hesitancy. PNAS Nexus 1, pgac207 (2022).
Dechêne, A., Stahl, C., Hansen, J. & Wänke, M. The truth about the truth: a meta-analytic review of the truth effect. Pers. Soc. Psychol. Rev. 14, 238–257 (2010).
Pennycook, G., Cannon, T. D. & Rand, D. G. Prior exposure increases perceived accuracy of fake news. J. Exp. Psychol. Gen. 147, 1865–1880 (2018).
Pennycook, G. & Rand, D. G. The psychology of fake news. Trends Cogn. Sci. 25, 388–402 (2021).
van der Linden, S. et al. How can psychological science help counter the spread of fake news? Span. J. Psychol. 24, e25 (2021).
Robertson, C. E., Pretus, C., Rathje, S., Harris, E. & Van Bavel, J. J. How social identity shapes conspiratorial belief. Curr. Opin. Psychol. 47, 101423 (2022).
Van Bavel, J. J. & Pereira, A. The partisan brain: an identity-based model of political belief. Trends Cogn. Sci. 22, 213–224 (2018).
Aslett, K. et al. Measuring belief in fake news in real-time. In Proc. Workshop on Misinformation Integrity in Social Networks 2021 (eds Pueyo, L. G. et al.) (CEUR-WS, 2021).
Batailler, C., Brannon, S. M., Teas, P. E. & Gawronski, B. A signal detection approach to understanding the identification of fake news. Perspect. Psychol. Sci. 17, 78–98 (2022).
Gawronski, B. Cognitive sciences. Trends Cogn. Sci. 25, 723 (2021).
Traberg, C. S. & van der Linden, S. Birds of a feather are persuaded together: perceived source credibility mediates the effect of political bias on misinformation susceptibility. Pers. Individ. Differ. 185, 111269 (2022).
van der Linden, S., Panagopoulos, C. & Roozenbeek, J. You are fake news: political bias in perceptions of fake news. Media Cult. Soc. 42, 460–470 (2020).
Kunda, Z. The case for motivated reasoning. Psychol. Bull. 108, 480–498 (1990).
Taber, C. S. & Lodge, M. Motivated skepticism in the evaluation of political beliefs. Am. J. Polit. Sci. 50, 755–769 (2006).
Druckman, J. N. & McGrath, M. C. The evidence for motivated reasoning in climate change preference formation. Nat. Clim. Change 9, 111–119 (2019).
Tappin, B. M., Pennycook, G. & Rand, D. G. Thinking clearly about causal inferences of politically motivated reasoning: why paradigmatic study designs often undermine causal inference. Curr. Opin. Behav. Sci. 34, 81–87 (2020).
Bayes, R., Druckman, J. N., Goods, A. & Molden, D. C. When and how different motives can drive motivated political reasoning. Polit. Psychol. 41, 1031–1052 (2020).
van der Linden, S. Misinformation: susceptibility, spread, and interventions to immunize the public. Nat. Med. 28, 460–467 (2022).
Druckman, J. N. The politics of motivation. Crit. Rev. 24, 199–216 (2012).
Garrett, R. K. & Bond, R. M. Conservatives’ susceptibility to political misperceptions. Sci. Adv. 7, eabf1234 (2021).
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B. & Lazer, D. Fake news on Twitter during the 2016 US presidential election. Science 363, 374–378 (2019).
Guess, A., Nagler, J. & Tucker, J. Less than you think: prevalence and predictors of fake news dissemination on Facebook. Sci. Adv. 5, eaau4586 (2019).
Lawson, M. A. & Kakkar, H. Of pandemics, politics, and personality: the role of conscientiousness and political ideology in the sharing of fake news. J. Exp. Psychol. Gen. 151, 1154–1177 (2022).
Pereira, A. & Van Bavel, J. Identity concerns drive belief in fake news. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/7vc5d (2018).
van der Linden, S., Panagopoulos, C., Azevedo, F. & Jost, J. T. The paranoid style in American politics revisited: an ideological asymmetry in conspiratorial thinking. Polit. Psychol. 42, 23–51 (2021).
Roozenbeek, J. et al. Susceptibility to misinformation is consistent across question framings and response modes and better explained by myside bias and partisanship than analytical thinking. Judgm. Decis. Mak. 17, 547–573 (2022).
Pereira, A., Harris, E. & Van Bavel, J. J. Identity concerns drive belief: the impact of partisan identity on the belief and dissemination of true and false news. Group Process. Intergroup Relat. 26, 24–47 (2023).
Mosleh, M. & Rand, D. G. Measuring exposure to misinformation from political elites on Twitter. Nat. Commun. 13, 7144 (2022).
Jost, J. T., Glaser, J., Kruglanski, A. W. & Sulloway, F. J. Political conservatism as motivated social cognition. Psychol. Bull. 129, 339–375 (2003).
Baron, J. & Jost, J. T. False equivalence: are liberals and conservatives in the United States equally biased? Perspect. Psychol. Sci. 14, 292–303 (2019).
Imhoff, R. et al. Conspiracy mentality and political orientation across 26 countries. Nat. Hum. Behav. 6, 392–403 (2022).
Bullock, J. G. & Lenz, G. Partisan bias in surveys. Annu. Rev. Polit. Sci. 22, 325–342 (2019).
Prior, M., Sood, G. & Khanna, K. You cannot be serious: the impact of accuracy incentives on partisan bias in reports of economic perceptions. Q. J. Polit. Sci. 10, 489–518 (2015).
Jakesch, M., Koren, M., Evtushenko, A. & Naaman, M. The role of source and expressive responding in political news evaluation. In Computation and Journalism Symposium (2019).
Panizza, F. et al. Lateral reading and monetary incentives to sort out scientific (dis)information. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/vuqd3 (2021).
Speckmann, F. & Unkelbach, C. Monetary incentives do not reduce the repetition-induced truth effect. Psychon. Bull. Rev. 29, 1045–1052 (2022).
Brashier, N. & Rand, D. Illusory truth occurs even with incentives for accuracy. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/83m9y (2021).
Melnikoff, D. E. & Strohminger, N. The automatic influence of advocacy on lawyers and novices. Nat. Hum. Behav. 4, 1258–1264 (2020).
Brashier, N. M. & Marsh, E. J. Judging truth. Annu. Rev. Psychol. 71, 499–515 (2020).
Rathje, S., Van Bavel, J. J. & van der Linden, S. Out-group animosity drives engagement on social media. Proc. Natl Acad. Sci. USA 118, e2024292118 (2021).
Yu, X., Wojcieszak, M. & Casas, A. Partisanship on social media: in-party love among American politicians, greater engagement with out-party hate among ordinary users. Polit. Behav. https://doi.org/10.1007/s11109-022-09850-x (2023).
Rathje, S., Robertson, C., Brady, W. & Van Bavel, J. J. People think that social media platforms do (but should not) amplify divisive content. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/gmun4 (2022).
Osmundsen, M., Bor, A., Vahlstrup, P. B., Bechmann, A. & Petersen, M. B. Partisan polarization is the primary psychological motivation behind political fake news sharing on Twitter. Am. Polit. Sci. Rev. 115, 999–1015 (2021).
Ren, Z. B., Dimant, E. & Schweitzer, M. Beyond belief: how social engagement motives influence the spread of conspiracy theories. J. Exp. Soc. Psychol. 104, 104421 (2023).
Brady, W. J., Crockett, M. J. & Van Bavel, J. J. The MAD model of moral contagion: the role of motivation, attention, and design in the spread of moralized content online. Perspect. Psychol. Sci. https://doi.org/10.1177/1745691620917336 (2019).
Pennycook, G. & Rand, D. G. Lazy, not biased: susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition https://doi.org/10.1016/j.cognition.2018.06.011 (2018).
Vegetti, F. & Mancosu, M. The impact of political sophistication and motivated reasoning on misinformation. Polit. Commun. 37, 678–695 (2020).
Guess, A. M. et al. A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proc. Natl Acad. Sci. USA 117, 15536–15545 (2020).
Peer, E., Rothschild, D., Gordon, A., Evernden, Z. & Damer, E. Data quality of platforms and panels for online behavioral research. Behav. Res. Methods 54, 1643–1662 (2022).
Pennycook, G., Binnendyk, J., Newton, C. & Rand, D. A practical guide to doing behavioural research on fake news and misinformation. Collabra Psychol. 7, 25293 (2021).
Gawronski, B. Partisan bias in the identification of fake news. Trends Cogn. Sci. 25, 723–724 (2021).
Pennycook, G. & Rand, D. G. Lack of partisan bias in the identification of fake (versus real) news. Trends Cogn. Sci. 25, 725–726 (2021).
Pennycook, G. & Rand, D. G. Fighting misinformation on social media using crowdsourced judgments of news source quality. Proc. Natl Acad. Sci. USA 116, 2521–2526 (2019).
Gelman, A. You need 16 times the sample size to estimate an interaction than to estimate a main effect. Statistical Modeling, Causal Inference, and Social Science https://statmodeling.stat.columbia.edu/2018/03/15/need-16-times-sample-size-estimate-interaction-estimate-main-effect/#comment-685111/ (2018).
Blake, K. R. & Gangestad, S. On attenuated interactions, measurement error, and statistical power: guidelines for social and personality psychologists. Pers. Soc. Psychol. Bull. 46, 1702–1711 (2020).
Wetzels, R., van Ravenzwaaij, D. & Wagenmakers, E.-J. in The Encyclopedia of Clinical Psychology (eds Cautin, R. L. & Lilienfeld, S. O.) 1–11 (Wiley, 2015).
Altay, S., Hacquin, A.-S. & Mercier, H. Why do so few people share fake news? It hurts their reputation. New Media Soc. 24, 1303–1324 (2022).
Curran, P. J. & Hussong, A. M. Integrative data analysis: the simultaneous analysis of multiple data sets. Psychol. Methods 14, 81 (2009).
Rathje, S. et al. Letter to the Editors of Psychological Science: Meta-analysis Reveals That Accuracy Nudges Have Little to No Effect for U.S. Conservatives: Regarding Pennycook et al. (2020) (OSF, 2022).
Pretus, C. et al. The role of political devotion in sharing partisan misinformation. Preprint at Research Square https://doi.org/10.21203/rs.3.rs-1665189/v1 (2021).
Tonidandel, S. & LeBreton, J. M. Relative importance analysis: a useful supplement to regression analysis. J. Bus. Psychol. 26, 1–9 (2011).
Pennycook, G. et al. Shifting attention to accuracy can reduce misinformation online. Nature 592, 590–595 (2021).
Robertson, C. E., Pretus, C., Rathje, S., Harris, E. & Van Bavel, J. J. How social identity shapes conspiratorial belief. Curr. Opin. Psychol. 47, 101423 (2022).
Borukhson, D., Lorenz-Spreen, P. & Ragni, M. When does an individual accept misinformation? An extended investigation through cognitive modeling. Comput. Brain Behav. 5, 244–260 (2022).
Guess, A. M., Nyhan, B. & Reifler, J. Exposure to untrustworthy websites in the 2016 US election. Nat. Hum. Behav. 4, 472–480 (2020).
Bishop, G. F. The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Rowman & Littlefield Publishers, 2004).
Edwards, A. L. The Social Desirability Variable in Personality Assessment and Research (Dryden Press, 1957).
Acerbi, A., Altay, S. & Mercier, H. Research note: fighting misinformation or fighting for information? Harvard Kennedy School (HKS) Misinformation Review https://doi.org/10.37016/mr-2020-87 (2022).
Roozenbeek, J., van der Linden, S., Goldberg, B., Rathje, S. & Lewandowsky, S. Psychological inoculation improves resilience against misinformation on social media. Sci. Adv. 8, eabo6254 (2022).
Bak-Coleman, J. B. et al. Combining interventions to reduce the spread of viral misinformation. Nat. Hum. Behav. 6, 1372–1380 (2022).
Globig, L. K., Holtz, N. & Sharot, T. Changing the incentive structure of social media platforms to halt the spread of misinformation. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/26j8w (2022).
Capraro, V. & Celadin, T. “I think this news is accurate”: endorsing accuracy decreases the sharing of fake news and increases the sharing of real news. Pers. Soc. Psychol. Bull. https://doi.org/10.1177/01461672221117691 (2022).
Pretus, C. et al. The misleading count: an identity-based intervention to mitigate the spread of partisan misinformation. Preprint at PsyArXiv https://doi.org/10.31234/osf.io/7j26y (2022).
Mosleh, M., Pennycook, G. & Rand, D. G. Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. PLoS ONE 15, e0228882 (2020).
Druckman, J. N. & Levendusky, M. S. What do we measure when we measure affective polarization? Public Opin. Q. 83, 114–122 (2019).
We are grateful for support from a Gates Cambridge Scholarship awarded to S.R. (grant #OPP1144), a British Academy Postdoctoral Fellowship awarded to J.R. (#PF21\210010), a John Templeton Foundation Grant (#61378) awarded to J.J.V.B., a Russell Sage Foundation Grant awarded to S.R. and J.J.V.B., and an Infodemic grant awarded to S.V.L. (UK Government, #SCH-00001-3391). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the paper.
The authors declare no competing interests.
Peer review information
Nature Human Behaviour thanks Jiyoung Lee, Dustin Calvillo, Alexander Bor and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rathje, S., Roozenbeek, J., Van Bavel, J.J. et al. Accuracy and social motivations shape judgements of (mis)information. Nat Hum Behav 7, 892–903 (2023). https://doi.org/10.1038/s41562-023-01540-w