Introduction

Misinformation — which we define as any information that turns out to be false — poses an inevitable challenge for human cognition and social interaction because it is a consequence of the fact that people frequently err and sometimes lie1. However, this fact is insufficient to explain the rise of misinformation, and its subsequent influence on memory and decision-making, as a major challenge in the twenty-first century2,3,4. Misinformation has been identified as a contributor to various contentious events, ranging from elections and referenda5 to political or religious persecution6 and to the global response to the COVID-19 pandemic7.

The psychology and history of misinformation cannot be fully grasped without taking into account contemporary technology. Misinformation helped bring Roman emperors to power8; these emperors used messages on coins as a form of mass communication9. Nazi propaganda likewise relied heavily on the printed press, radio and cinema10. Today, misinformation campaigns can leverage digital infrastructure that is unparalleled in its reach. The internet reaches billions of individuals and enables senders to tailor persuasive messages to the specific psychological profiles of individual users11,12. Moreover, social media users’ exposure to information that challenges their worldviews can be limited when communication environments foster confirmation of previous beliefs — so-called echo chambers13,14. Although there is some controversy about echo chambers and their impact on people’s beliefs and behaviours12,15, the internet is an ideal medium for the fast spread of falsehoods at the expense of accurate information16. However, the prevalence of misinformation cannot be attributed only to technology: conventional efforts to combat misinformation have also not been as successful as hoped2 — these include educational efforts that focus on merely conveying factual knowledge and corrective efforts that merely retract misinformation.

For decades, science communication has relied on an information deficit model when responding to misinformation, focusing on people’s misunderstanding of, or lack of access to, facts17. According to this model, a thorough and accessible explanation of the facts should be sufficient to overcome the impact of misinformation. However, the information deficit model ignores the cognitive, social and affective drivers of attitude formation and truth judgements18,19,20. For example, some individuals deny the existence of climate change or reject vaccinations despite being aware of a scientific consensus to the contrary21,22. This rejection of science is not the result of mere ignorance but is driven by factors such as conspiratorial mentality, fears, identity expression and motivated reasoning — reasoning driven more by personal or moral values than objective evidence19,23,24,25,26. Thus, to understand the psychology of misinformation and how it might be countered, it is essential to consider the cognitive architecture and social context of individual decision makers.

In this Review, we describe the cognitive, social and affective processes that make misinformation stick and leave people vulnerable to the formation of false beliefs. We review the theoretical models that have been proposed to explain misinformation’s resistance to correction. We provide guidance on countering misinformation, including educational and pre-emptive interventions, refutations and psychologically informed technological solutions. Finally, we return to the broader societal trends that have contributed to the rise of misinformation and discuss practical implications for journalism, education and policymaking.

Different types of misinformation exist — for example, misinformation that goes against scientific consensus or misinformation that contradicts simple, objectively true facts. Moreover, the term disinformation is often specifically used for the subset of misinformation that is spread intentionally27. More research is needed on the extent to which different types of misinformation might be associated with differential psychological impacts and barriers to revision, and to establish the extent to which people infer intentionality and how this might affect their processing of the false information. Thus, in this Review we do not draw a sharp distinction between misinformation and disinformation, or different types of misinformation. We use the term misinformation as an umbrella term referring to any information that turns out to be false and reserve the term disinformation for misinformation that is spread with intention to harm or deceive.

Drivers of false beliefs

The formation of false beliefs all but requires exposure to false information. However, lack of access to high-quality information is not necessarily the primary precursor to false-belief formation; a range of cognitive, social and affective factors influence the formation of false beliefs (Fig. 1). False beliefs generally arise through the same mechanisms that establish accurate beliefs28,29. When deciding what is true, people are often biased to believe in the validity of information30, and to ‘go with their gut’ rather than deliberate31,32. For example, in March 2020, 31% of Americans agreed that COVID-19 was purposefully created and spread33, despite the absence of any credible evidence for its intentional development. People are likely to have encountered conspiracy theories about the source of the virus multiple times, which might have contributed to this widespread belief because simply repeating a claim makes it more believable than presenting it only once34,35. This illusory truth effect arises because people use peripheral cues such as familiarity (a signal that a message has been encountered before)36, processing fluency (a signal that a message is either encoded or retrieved effortlessly)37,38 and cohesion (a signal that the elements of a message have references in memory that are internally consistent)39 as signals for truth, and the strength of these cues increases with repetition. Thus, repetition increases belief in both misinformation and facts40,41,42,43. Illusory truth can persist months after first exposure44, regardless of cognitive ability45 and despite contradictory advice from an accurate source46 or accurate prior knowledge18,47.

Fig. 1: Drivers of false beliefs.
figure 1

Some of the main cognitive (green) and socio-affective (orange) factors that can facilitate the formation of false beliefs when individuals are exposed to misinformation. Not all factors will always be relevant, but multiple factors often contribute to false beliefs.

Another ‘shortcut’ for truth might involve defaulting to one’s own personal views. Overall belief in news headlines is higher when the headlines are consistent with the reader’s worldview48. Political partisanship can also contribute to false memories for made-up scandals49. However, difficulties discerning true from false news headlines can also arise from intuitive (or ‘lazy’) thinking rather than the impact of worldviews48. In one study, participants received questions (‘If you’re running a race and you pass the person in second place, what place are you in?’) with intuitive, but incorrect, answers (‘first place’). Participants who answered these questions correctly were better able to discern fake from real headlines than participants who answered these questions incorrectly, independently of whether the headlines aligned with their political ideology50. A link has also been reported between intuitive thinking and greater belief in COVID-19 being a hoax, and reduced adherence to public health measures51.

Similarly, allowing people to deliberate can improve their judgements. If quick evaluation of a headline is followed by an opportunity to rethink, belief in fake news — but not factual news — is reduced52. Likewise, encouraging people to ‘think like fact checkers’ leads them to rely more on their own prior knowledge instead of heuristics. For example, prior exposure to statements such as ‘Deer meat is called veal’ makes these statements seem truer than similar statements encountered for the first time, even when people know the truth (in this case that the correct term is venison47). However, asking people to judge whether the statement is true at initial exposure protects them from subsequently accepting contradictions of well-known facts53.

The information source also provides important social cues that influence belief formation. In general, messages are more persuasive and seem more true when they come from sources perceived to be credible rather than non-credible42. People trust human information sources more if they perceive the source as attractive, powerful and similar to themselves54. These source judgements are naturally imperfect — people believe in-group members more than out-group members55, tend to weigh opinions equally regardless of the competence of those expressing them56 and overestimate how much their beliefs overlap with other people’s, which can lead to the perception of a false consensus57. Experts and political elites are trusted by many and have the power to shape public perceptions58,59; therefore, it can be especially damaging when leaders make false claims. For example, false claims about public health threats such as COVID-19 made by political leaders can reduce the perceived threat of the virus as well as the perceived efficacy of countermeasures, decreasing adherence to public health measures60,61.

Moreover, people often overlook, ignore, forget or confuse cues about the source of information62. For example, for online news items, a logo banner specifying the publisher (for example, a reputable media outlet or a dubious web page) has been found not to decrease belief in fake news or increase belief in factual news63. In the aggregate, groups of laypeople perform as well as professional fact checkers at categorizing news outlets as trustworthy, hyper-partisan or fake64. However, when acting alone, individuals — unlike fact checkers — tend to disregard the quality of the news outlet and judge a headline’s accuracy based primarily on the plausibility of the content63. Similarly, although people are quick to distrust others who share fake news65, they frequently forget information sources66. This tendency is concerning: a small number of social media accounts spread an outsized amount of misleading content67,68,69, and if consumers do not remember the dubious origin of that content, they might not discount it accordingly.

The emotional content of the information shared also affects false-belief formation. Misleading content that spreads quickly and widely (‘virally’) on the internet often contains appeals to emotion, which can increase persuasion. For example, messages that aim to generate fear of harm can successfully change attitudes, intentions and behaviours under certain conditions if recipients feel they can act effectively to avoid the harm70. Moreover, according to a preprint that has not been peer-reviewed, ‘happy thoughts’ are more believable than neutral ones71. People seem to understand the association between emotion and persuasion, and naturally shift towards more emotional language when attempting to convince others72. For example, anti-vaccination activists frequently use emotional language73. Emotion can be persuasive because it distracts readers from potentially more diagnostic cues, such as source credibility. In one study, participants read positive, neutral and negative headlines about the actions of specific people; social judgements about the people featured in the headlines were strongly determined by emotional valence of the headline but unaffected by trustworthiness of the news source74.

Inferences about information are also affected by one’s own emotional state. People tend to ask themselves ‘How do I feel about this claim?’, which can lead to influences of a person’s mood on claim evaluation75. Using feelings as information can leave people susceptible to deception76, and encouraging people to ‘rely on their emotions’ increases their vulnerability to misinformation77. Likewise, some specific emotional states such as a happy mood can make people more vulnerable to deception78 and illusory truth79. Thus, one functional feature of a sad mood might be that it reduces gullibility80. Anger has also been shown to promote belief in politically concordant misinformation81 as well as COVID-19 misinformation82. Finally, social exclusion, which is likely to induce a negative mood, can increase susceptibility to conspiratorial content83,84.

In sum, the drivers of false beliefs are manifold and largely overlooked by a simple information deficit model. The drivers include cognitive factors, such as use of intuitive thinking and memory failures; social factors, such as reliance on source cues to determine truth; and affective factors, such as the influence of mood on credulity. Although we have focused on false-belief formation here, the psychology behind sharing misinformation is a related area of active study (Box 1).

Barriers to belief revision

A tacit assumption of the information deficit model is that false beliefs can easily be corrected by providing relevant facts. However, misinformation can often continue to influence people’s thinking even after they receive a correction and accept it as true. This persistence is known as the continued influence effect (CIE)85,86,87,88.

In the typical CIE laboratory paradigm, participants are presented with a report of an event (for example, a fire) that contains a critical piece of information related to the event’s cause (‘the fire was probably caused by arson’). That information might be subsequently challenged by a correction, which can take the form of a retraction (a simple negation, such as ‘it is not true that arson caused the fire’) or a refutation (a more detailed correction that explains why the misinformation was false). When reasoning about the event later (for example, responding to questions such as ‘what should authorities do now?’), individuals often continue to rely on the critical information even after receiving — and being able to recall — a correction89. Variants of this paradigm have used false real-world claims or urban myths90,91,92. Corrected misinformation can also continue to influence the amount a person is willing to pay for a consumer product or their propensity to promote a social media post93,94,95. The CIE might be an influential factor in the persistence of beliefs that there is a link between vaccines and autism despite strong evidence discrediting this link96,97 or that weapons of mass destruction were found in Iraq in 2003 despite no supporting evidence98. The CIE has been conceptualized primarily as a cognitive phenomenon, although social and affective factors also contribute.

Cognitive factors

Theoretical accounts of the CIE draw heavily on models of memory in which information is organized in interconnected networks and the availability of information is determined by its level of activation99,100 (Fig. 2). When information is encoded into memory and then new information that discredits it is learned, the original information is not simply erased or replaced101. Instead, misinformation and corrective information coexist and compete for activation. For example, misinformation that a vaccine has caused an unexpectedly large number of deaths might become integrated with existing knowledge related to diseases, vaccinations and causes of death. A subsequent correction that the information about vaccine-caused deaths was inaccurate will also be added to memory and is likely to result in some knowledge revision. However, the misinformation will remain in memory and can potentially be reactivated and retrieved later on.

Fig. 2: Integration and retrieval accounts of continued influence.
figure 2

a | Integration account of continued influence. The correction had the representational strength to compete with or even dominate the misinformation (‘myth’) but was not integrated into the relevant mental model. Depending on the available retrieval cues, this lack of integration can lead to unchecked misinformation retrieval and reliance. b | Retrieval account of continued influence. Integration has taken place but the myth is represented in memory more strongly, and thus dominates the corrective information in the competition for activation and retrieval. Note that the two situations are not mutually exclusive: avoiding continued influence might require both successful integration and retrieval of the corrective information.

One school of thought — the integration account — suggests that the CIE arises when a correction is not sufficiently encoded and integrated with the misinformation in the memory network (Fig. 2a). There is robust evidence that integration of the correction and misinformation is a necessary, albeit not sufficient, condition for memory updating and knowledge revision100. This view implies that a successful revision requires detecting a conflict between the misinformation and the correction, the co-activation of both representations in memory, and their subsequent integration102. Evidence for this account comes from the success of interventions that bolster conflict detection, co-activation, and integration of misinformation and correction103,104. Assuming that information integration relies on processing in working memory (the short-term store used to briefly hold and manipulate information in the service of thinking and reasoning), the finding that lower working memory capacity predicts greater susceptibility to the CIE is also in line with this account105 (although this finding has not been replicated106). This theory further assumes that as the amount of integrated correct information increases, memory for the correction becomes stronger, at the expense of memory for the misinformation102. Thus, both the interconnectedness and the amount of correct information can influence the success of memory revision.

An alternative account is based on the premise that the CIE arises from selective retrieval of the misinformation even when corrective information is present in memory (Fig. 2b). For example, it has been proposed that a retraction causes the misinformation representation to be tagged as false107. The misinformation can be retrieved without the false tag, but the false tag cannot be retrieved without concurrent retrieval of the misinformation. One instantiation of this selective-retrieval view appeals to a dual-process mechanism, which assumes that retrieval can occur based on an automatic, effortless process signalling information familiarity (‘I think I have heard this before’) or a more strategic, effortful process of recollection that includes contextual detail (‘I read about this in yesterday’s newspaper’)108. According to this account of continued influence, the CIE can arise if there is automatic, familiarity-driven retrieval of the misinformation (for example, in response to a cue), without explicit recollection of the corrective information and associated post-retrieval suppression of the misinformation107,109.

Evidence for this account comes from studies demonstrating that the CIE increases as a function of factors associated with increased familiarity (such as repetition)107 and reduced recollection (such as advanced participant age and longer study-test delays)92. Neuroimaging studies have suggested that brain activity during retrieval, when participants answer inference questions about an encoded event (but not activity during encoding of the correction), is associated with continued reliance on corrected misinformation110,111. This preliminary neuroimaging evidence generally supports the selective-retrieval account of the CIE, although it suggests that the CIE is driven by misinformation recollection rather than misinformation familiarity, which is at odds with the dual-process interpretation.

Both of these complementary theoretical accounts of the CIE can explain the superiority of detailed refutations over retractions92,112,113. Provision of additional corrective information can strengthen the activation of correct information in memory or provide more detail to support recollection of the correction89,103, which makes a factual correction more enduring than the misinformation90. Because a simple retraction will create a gap in a person’s mental model, especially in situations that require a causal explanation (for example, a fire must be caused by something), a refutation that can fill in details of a causal, plausible, simple and memorable alternative explanation will reduce subsequent reliance on the retracted misinformation.
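To make these two accounts concrete, the toy simulation below (written in Python; the Luce-style choice rule, the strength values and the integration probabilities are illustrative assumptions rather than parameters from the reviewed studies) treats the myth and the correction as competing memory traces. An integration parameter captures whether the correction is co-activated with the myth at retrieval, and trace strength captures how much corrective detail was encoded: a bare retraction corresponds to a weak, poorly integrated correction trace, whereas a detailed refutation corresponds to a strong, well-integrated one.

```python
import random

def p_myth_response(myth_strength, correction_strength, integration,
                    n_trials=100_000, seed=0):
    """Toy model of continued influence: probability that reasoning about an
    event relies on the myth rather than on the correction.

    integration: probability that the correction is co-activated with the myth
    at retrieval (integration account); trace strengths determine which
    representation wins when both are active (retrieval account). All values
    are illustrative assumptions, not empirical estimates.
    """
    rng = random.Random(seed)
    myth_responses = 0
    for _ in range(n_trials):
        if rng.random() > integration:
            # Correction not co-activated: the myth is relied on unchecked.
            myth_responses += 1
        elif rng.random() < myth_strength / (myth_strength + correction_strength):
            # Both traces active: they compete in proportion to their strength
            # (a simple Luce-choice rule, used here purely for illustration).
            myth_responses += 1
    return myth_responses / n_trials

# Bare retraction: weak correction trace, poorly integrated with the event model.
print(p_myth_response(myth_strength=1.0, correction_strength=0.4, integration=0.5))
# Detailed refutation: stronger correction trace, well integrated.
print(p_myth_response(myth_strength=1.0, correction_strength=1.6, integration=0.9))
```

Under these assumptions the refutation configuration roughly halves myth-based responses relative to the bare retraction, mirroring the qualitative advantage of refutations described above; the sketch is a didactic illustration, not a fitted model of any particular data set.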

Social and affective factors

These cognitive accounts do not explicitly consider the influence of social and affective mechanisms on the CIE. One socio-affective factor is source credibility, the perceived trustworthiness and expertise of the sources providing the misinformation and correction. Although source credibility has been found to exert little influence on acceptance of misinformation if the source is a media outlet63,114, there is generally strong evidence that credibility has a significant impact on acceptance of misinformation from non-media sources42,88,115.

The credibility of a correction source also matters for (post-correction) misinformation reliance116, although perhaps less than the credibility of the misinformation source88. The effectiveness of factual corrections might depend on perceived trustworthiness rather than perceived expertise of the correction source117,118, although perceived expertise might matter more in science-related contexts, such as health misinformation119,120. It can also be quite rational to discount a correction if the correction source is low in credibility121,122. Further complicating matters, the perceived credibility of a source varies across recipients. In extreme cases, people with strong conspiratorial ideation tendencies might mistrust any official source (for example, health authorities)19,26. More commonly, people tend to trust sources that are perceived to share their values and worldviews54,55.

A second key socio-affective factor is worldview — a person’s values and belief system that grounds their personal and sociocultural identity. Corrections attacking a person’s worldview can be ineffective123 or backfire25,124. Such corrections can be experienced as attacking one’s identity, resulting in a chain reaction of appraisals and emotional responses that hinder information revision19,125. For example, if a message is appraised as an identity threat (for example, a correction that the risks of a vaccine do not outweigh the risks of a disease might be perceived as an identity threat by a person identifying as an anti-vaxxer), this can lead to intense negative emotions that motivate strategies such as discrediting the source of the correction, ignoring the worldview-inconsistent evidence or selectively focusing on worldview-bolstering evidence24,126. However, how a person’s worldview influences misinformation corrections is still hotly debated (Box 2), and there is a developing consensus that even worldview-inconsistent corrections typically have some beneficial impact91,127,128,129,130,131.

The third socio-affective factor that influences the CIE is emotion. One study found that corrections can produce psychological discomfort, which motivates people to disregard the correction in order to reduce that discomfort132. Misinformation conveying negative emotions such as fear or anger might be particularly likely to evoke a CIE133,134. This influence might be due to a general negativity bias11,135 or more specific emotional influences. For example, misinformation damaging the reputation of a political candidate might spark outrage or contempt, which might promote continued influence of this misinformation (in particular among non-supporters)134. However, there seems to be little continued influence of negative misinformation on impression formation when the person subjected to the false allegation is not a disliked politician, perhaps because reliance on corrected misinformation might be seen as biased or judgemental (that is, it might be frowned upon to judge another person based on allegations that have been proven false)136.

Other studies have compared emotive and non-emotive events — for example, a plane crash falsely assumed to have been caused by a terror attack, resulting in many fatalities, versus a technical fault, resulting in zero fatalities — and found no impact of misinformation emotiveness on the magnitude of the CIE137. Moreover, just as a sad mood can protect against initial misinformation belief80, it also seems to facilitate knowledge revision when a correction is encountered138. People who exhibit both subclinical depression and rumination tendencies have even been shown to correct negative misinformation particularly efficiently relative to control individuals, presumably because the salience of negative misinformation to this group facilitates revision139.

Finally, there is evidence that corrections can also benefit from emotional recalibration. For example, when misinformation downplays a risk or threat (for example, misinformation that a serious disease is relatively harmless), corrections that provide a more accurate risk evaluation operate partly through their impact on emotions such as hope, anger and fear. This emotional mechanism might help correction recipients realign their understanding of the situation with reality (for example, to realize they have underestimated the real threat)113,140. Likewise, countering disinformation that seeks to fuel fear or anger can benefit from a downward adjustment of emotional arousal; for example, refutations of vaccine misinformation can reduce anti-vaccination attitudes by mitigating misinformation-induced anger141.

Interventions to combat misinformation

As discussed in the preceding section, interventions to combat misinformation must overcome various cognitive, social and affective barriers. The most common type of correction is a fact-based correction that directly addresses inaccuracies in the misinformation and provides accurate information90,102,112,142 (Fig. 3). A second approach is to address the logical fallacies common in some types of disinformation — for example, corrections that highlight inherently contradictory claims such as ‘global temperature cannot be measured accurately’ and ‘temperature records show it has been cooling’ (Fig. 4). Such logic-based corrections might offer broader protection against different types of misinformation that use the same fallacies and misleading tactics21,143. A third approach is to undermine the plausibility of the misinformation or the credibility of its source144. Multiple approaches can be combined into a single correction — for example, highlighting both the factual and logical inaccuracies in the misinformation or undermining source credibility and underscoring factual errors94,95,145. However, most research to date has considered each approach separately and more research is required to test synergies between these strategies.

Fig. 3: Barriers to belief updating and strategies to overcome them (part 1).
figure 3

How various barriers to belief updating can be overcome by specific communication strategies applied during correction, using event and health misinformation as examples. Colour shading is used to show how specific strategies are applied in the example corrections.

Fig. 4: Barriers to belief updating and strategies to overcome them (part 2).
figure 4

How various barriers to belief updating can be overcome by specific communication strategies applied during correction, using climate change misinformation as an example. Colour shading is used to show how specific strategies are applied in the example corrections.

More generally, two strategies can be distinguished: pre-emptive intervention (prebunking) and reactive intervention (debunking). Prebunking seeks to help people recognize and resist subsequently encountered misinformation, even if it is novel. Debunking emphasizes responding to specific misinformation after exposure to demonstrate why it is false. The effectiveness of these corrections is influenced by a range of factors, and there are mixed results regarding their relative efficacy. For example, in the case of anti-vaccination conspiracy theories, prebunking has been found to be more effective than debunking146. However, other studies have found debunking to outperform prebunking87,95,142. Reconciling these findings might require considering both the specific type of correction and its placement in time. For example, when refuting climate misinformation, one study found that fact-based debunking outperformed fact-based prebunking, whereas logic-based prebunking and debunking were equally effective147.

Some interventions, particularly those in online contexts, are hybrid or borderline cases. For example, if a misleading social media post is tagged with ‘false’148 and appears alongside a comment with a corrective explanation, this might count as both prebunking (owing to the tag, which is likely to have been processed before the post) and debunking (owing to the comment, which is likely to have been processed after the post).

Prebunking interventions

The simplest prebunking interventions involve presenting factually correct information149,150, a pre-emptive correction142,151 or a generic misinformation warning99,148,152,153 before the misinformation. More sophisticated interventions draw on inoculation theory, a framework for pre-emptive interventions154,155,156. This theory applies the principle of vaccination to knowledge, positing that ‘inoculating’ people with a weakened form of persuasion can build immunity against subsequent persuasive arguments by engaging people’s critical-thinking skills (Fig. 5).

Fig. 5: Inoculation theory applied to misinformation.
figure 5

‘Inoculation’ treatment can help people prepare for subsequent misinformation exposure. Treatment typically highlights the risks of being misled, alongside a pre-emptive refutation. The refutation can be fact-based, logic-based or source-based. Inoculation has been shown to increase misinformation detection and facilitate counterarguing and dismissal of false claims, effectively neutralizing misinformation. Additionally, inoculation can build immunity across topics and increase the likelihood of people talking about the issue targeted by the refutation (post-inoculation talk).

An inoculation intervention combines two elements. The first element is warning recipients of the threat of misleading persuasion. For example, a person could be warned that many claims about climate change are false and intentionally misleading. The second element is a pre-emptive refutation of forthcoming misinformation that identifies the techniques used to mislead or the fallacies that underlie the false arguments157,158. For example, a person might be taught that techniques used to mislead include selective use (‘cherry-picking’) of data (for example, only showing temperatures from outlier years to create the illusion that global temperatures have dropped) or the use of fake experts (for example, scientists with no expertise in climate science). Understanding how those misleading persuasive techniques are applied equips a person with the cognitive tools to ward off analogous persuasion attempts in the future.

Because one element of inoculation is highlighting misleading argumentation techniques, its effects can generalize across topics, providing an ‘umbrella’ of protection159,160. For example, an inoculation against a misleading persuasive technique used to cast doubt on science demonstrating harm from tobacco was found to convey resistance against the same technique when used to cast doubt on climate science143. Moreover, inoculated people are more likely to talk about the target issue than non-inoculated people, an outcome referred to as post-inoculation talk161. Post-inoculation talk is more likely to be negative than talk among non-inoculated people, which promotes misinformation resistance both within and between individuals because people’s evaluations tend to weight negative information more strongly than positive information162.

Inoculation theory has also been used to explain how strategies designed to increase information literacy and media literacy could reduce the effects of misinformation. Information literacy — the ability to effectively find, understand, evaluate and use information — has been linked to the ability to detect misleading news163 and reduced sharing of misinformation164. Generally, information literacy and media literacy (which focuses on knowledge and skills for the reception and dissemination of information through the media) interventions are designed to improve critical thinking165 and the application of such interventions to spaces containing many different types of information might help people identify misinformation166.

One successful intervention focused on lateral reading — consulting external sources to examine the origins and plausibility of a piece of information, or the credibility of an information source115,167,168. A separate non-peer-reviewed preprint suggests that focusing on telltale signs of online misinformation (including lexical cues, message simplicity and blatant use of emotion) can help people identify fake news169. However, research to date suggests that literacy interventions do not always mitigate the effects of misinformation170,171,172,173. Whereas most work has used relatively passive inoculation and literacy interventions, applications that engage people more actively have shown promise — specifically, app-based or web-based games174,175,176,177. More work is needed to consider what types of literacy interventions are most effective for conferring resistance to different types of misinformation in the contemporary media and information landscape178.

In sum, the prebunking approach provides a valuable tool for acting pre-emptively and helping people build resistance to misinformation in a relatively general manner. However, the advantage of generalizability can also be a weakness, because it is often specific pieces of misinformation that cause concern, and these call for more specific responses.

Debunking interventions

Whereas pre-emptive interventions can equip people to recognize and resist misinformation, reactive interventions retrospectively target concrete instances of misinformation. For example, if a novel falsehood that a vaccine can lead to life-threatening side effects in pregnant women begins to spread, then this misinformation must be addressed using specific counter-evidence. Research broadly finds that direct corrections are effective in reducing — although frequently not eliminating — reliance on the misinformation in a person’s reasoning86,87. The beneficial effects of debunking can last several weeks92,100,179, although the effects can wear off more quickly145. There is also evidence that corrections that reduce misinformation belief can have downstream effects on behaviours or intentions94,95,180,181 — such as a person’s inclination to share a social media post or their voting intentions — but not always91,96,182.

Numerous best practices for debunking have emerged90,145,183. First, the most important element of a debunking correction is to provide a factual account that ideally includes an alternative explanation for why something happened85,86,99,102,184. For example, if a fire was thought to have been caused by negligence, then providing a causal alternative (‘there is evidence for arson’) is more effective than a retraction (‘there was no negligence’). In general, more detailed refutations work better than plain retractions that do not provide any detail on why the misinformation is incorrect92,100,112,113. It can be beneficial to lead with the correction rather than the misinformation, in order to prioritize the correct information and set a factual frame for the issue. However, a preprint that has not been peer-reviewed suggests that leading with the misinformation can be just as, or even more, effective if no pithy fact is available150.

Second, the misinformation should be repeated to demonstrate how it is incorrect and to make the correction salient. However, the misinformation should be prefaced with a warning99,148 and repeated only once in order not to boost its familiarity unnecessarily104. It is also good to conclude by repeating and emphasizing the accurate information to reinforce the correction185.

Third, even though credibility matters less for correction sources compared with misinformation sources88, corrections are ideally delivered by or associated with high-credibility sources116,117,118,119,120,186. There is also emerging evidence that corrections are more impactful when they come from a socially connected source (for example, a connection on social media) rather than a stranger187.

Fourth, corrections should be paired with relevant social norms, including injunctive norms (‘protecting the vulnerable by getting vaccinated is the right thing to do’) and descriptive norms (‘over 90% of parents are vaccinating their children’)188, as well as expert consensus (‘doctors and medical societies around the world agree that vaccinations are important and safe’)189,190,191,192. One study found a benefit to knowledge revision if corrective evidence was endorsed by many others on social media, thus giving the impression of normative backing193.

Fifth, the language used in a correction is important. Simple language and informative graphics can facilitate knowledge revision, especially if fact comprehension might be otherwise difficult or if the person receiving the correction has a strong tendency to counterargue194,195,196,197. When speaking directly to misinformed individuals, communicators should use empathic language rather than wield their expertise to issue directives198,199.

Finally, it has been suggested that worldview-threatening corrections can be made more palatable by concurrently providing an identity affirmation145,200,201. Identity affirmations involve a message or task (for example, writing a brief essay about one’s strengths and values) that highlights important sources of self-worth. These exercises are assumed to protect and strengthen the correction recipient’s self-esteem and the value of their identity, thereby reducing the threat associated with the correction and associated processing biases. However, evidence for the utility of identity affirmations in the context of misinformation corrections is mixed194, so firm recommendations cannot yet be made.

In sum, debunking is a valuable tool to address specific pieces of misinformation and largely reduces misinformation belief. However, debunking will not eliminate the influence of misinformation on people’s reasoning at a group level. Furthermore, even well-designed debunking interventions might not have long-lasting effects, thus requiring repeated intervention.

Corrections on social media

Misinformation corrections might be especially important in social media contexts because they can reduce false beliefs not just in the target of the correction but among everyone who sees the correction — a process termed observational correction119. Best practices for corrections on social media echo many best practices offline112, but also include linking to expert sources and correcting quickly and early202. There is emerging evidence that online corrections can work both pre-emptively and reactively, although this might depend on the type of correction147.

Notably, social media corrections are more effective when they are specific to an individual piece of content rather than a generalized warning148. Social media corrections are effective when they come from algorithmic sources203, from expert organizations such as a government health agency119,204,205 or from multiple other users on social media206. However, particular care must be taken to avoid ostracizing people when correcting them online. To prevent potential adverse effects on people’s online behaviour, such as sharing of misleading content, gentle accuracy nudges that prompt people to consider the accuracy of the information they encounter or highlight the importance of sharing only true information might be preferable to public corrections that might be experienced as embarrassing or confrontational181,207.

In sum, social media users should be aware that corrections can be effective in this arena and have the potential to reduce false beliefs in people they are connected with as well as bystanders. By contrast, confronting strangers is less likely to be effective. Given the effectiveness of algorithmic corrections, social media companies and regulators should promote implementation and evaluation of technical solutions to misinformation on social media.

Practical implications

Even if optimal prebunking or debunking interventions are deployed, no intervention can be fully effective or reach everyone with the false belief. The contemporary information landscape brings particular challenges: the internet and social media have enabled an exponential increase in misinformation spread and targeting to precise audiences14,16,208,209. Against this backdrop, the psychological factors discussed in this Review have implications for practitioners in various fields — journalists, legislators, public health officials and healthcare workers — as well as information consumers.

Implications for practitioners

Combatting misinformation involves a range of decisions regarding the optimal approach (Fig. 6). When preparing to counter misinformation, it is important to identify likely sources. Although social media is an important misinformation vector210, traditional news organizations can promote misinformation via opinion pieces211, sponsored content212 or uncritical repetition of politicians’ statements213. Practitioners must anticipate the misinformation themes and ensure suitable fact-based alternative accounts are available for either prebunking or a quick debunking response. Organizations such as the International Fact-Checking Network or the World Health Organization often form coalitions in pursuit of this goal214.

Fig. 6: Strategies to counter misinformation.
figure 6

Different strategies for countering misinformation are available to practitioners at different time points. If no misinformation is circulating but there is potential for it to emerge in the future, practitioners can consider possible misinformation sources and anticipate misinformation themes. Based on this assessment, practitioners can prepare fact-based alternative accounts, and either continue monitoring the situation while preparing for a quick response, or deploy pre-emptive (prebunking) or reactive (debunking) interventions, depending on the traction of the misinformation. Prebunking can take various forms, from simple warnings to more involved literacy interventions. Debunking can start either with a pithy counterfact that recipients ought to remember or with dismissal of the core ‘myth’. Debunking should provide a plausible alternative cause for an event or factual details, preface the misinformation with a warning and explain any logical fallacies or persuasive techniques used to promote the misinformation. Debunking should end with a factual statement.

Practitioners must be aware that simple retractions will be insufficient to mitigate the impact of misinformation, and that the effects of interventions tend to wear off over time92,145,152. If possible, practitioners must therefore be prepared to act repeatedly179. Creating engaging, fact-based narratives can provide a foundation for effective correction215,216. However, a narrative format is not a necessary ingredient140,217, and anecdotes and stories can also be misleading218.

Practitioners can also help audiences discriminate between facts and opinion, which is a teachable skill170,219. Whereas most news consumers do not notice or understand content labels forewarning that an article is news, opinion or advertising220,221, more prominent labelling can nudge readers to adjust their comprehension and interpretation accordingly. For example, labelling can lead readers to be more sceptical of promoted content220. However, even when forewarnings are understood, they do not reliably eliminate the content’s influence99,153.

If pre-emptive correction is not possible or ineffective, practitioners should take a reactive approach. However, not every piece of misinformation needs to be a target for correction. Due to resource limitations and opportunity costs, corrections should focus on misinformation that circulates among a substantive portion of the population and carries potential for harm183. Corrections do not generally increase false beliefs among individuals who were previously unfamiliar with the misinformation222. However, if the risk of harm is minimal, there is no need to debunk misinformation that few people are aware of, which could potentially raise the profile of its source.

Implications for information consumers

Information consumers also have a role to play in combatting misinformation by avoiding contributing to its spread. For instance, people must be aware that they might encounter not only relatively harmless misinformation, such as reporting errors, outdated information and satire, but also disinformation campaigns designed to instil fear or doubt, discredit individuals, and sow division2,26,223,224. People must also recognize that disinformation can be psychologically targeted through profit-driven exploitation of personal data and social media algorithms12. Thoughtless sharing can amplify misinformation that might confuse and deceive others. Sharing misinformation can also contribute to the financial rewards sought by misinformation producers, and deepen ideological divides that disenfranchise voters, encourage violence and, ultimately, harm democratic processes2,170,223,225,226.

Thus, while engaged with content, individuals should slow down, think about why they are engaging and interrogate their visceral response. People who thoughtfully seek accurate information are more likely to successfully avoid misinformation compared with people who are motivated to find evidence to confirm their pre-existing beliefs50,227,228. Attending to the source and considering its credibility and motivation, along with lateral reading strategies, also increase the likelihood of identifying misinformation115,167,171. Given the benefits of persuading onlookers through observational correction, everyone should be encouraged to civilly, carefully and thoughtfully correct online misinformation where they encounter it (unless they deem it a harmless fringe view)119,206. All of these recommendations are also fundamental principles of media literacy166. Indeed, a theoretical underpinning of media literacy is that understanding the aims of media protects individuals from some adverse effects of being exposed to information through the media, including the pressure to adopt particular beliefs or behaviours170.

Implications for policymakers

Ultimately, even if practitioners and information consumers apply all of these strategies to reduce the impact of misinformation, their efforts will be stymied if media platforms continue to amplify misinformation14,16,208,209,210,211,212,213. These platforms include social media platforms such as YouTube, which are geared towards maximizing engagement even if this means promoting misinformation229, and traditional media outlets such as television news channels, where misinformation can negatively impact audiences. For example, two non-peer-reviewed preprints have found that COVID-19 misinformation on Fox News was causally associated with reduced adherence to public health measures and a larger number of COVID-19 cases and deaths230,231. It is, therefore, important to scrutinize whether the practices and algorithms of media platforms are optimized to promote misinformation or truth.

In this space, policymakers should consider enhanced regulation. Such regulation might include penalties for creating and disseminating disinformation where intentionality and harm can be established, and mandating platforms to be more proactive, transparent and effective in their dealings with misinformation. With regard to social media specifically, companies should be encouraged to ban repeat offenders from their platforms, and to generally make engagement with and sharing of low-quality content more difficult12,232,233,234,235. Regulation must not result in censorship, and proponents of freedom of speech might disagree with attempts to regulate content. However, freedom of speech does not include the right to amplification of that speech. Furthermore, being unknowingly subjected to disinformation can be seen as a manipulative attack on freedom of choice and the right to be well informed236. These concerns must be balanced. A detailed summary of potential regulatory interventions can be found elsewhere237,238.

Other strategies have the potential to reduce the impact of misinformation without regulation of media content. Undue concentration of ownership and control of both social and traditional media facilitates the dissemination of misinformation239. Thus, policymakers are advised to support a diverse media landscape and adequately fund independent public broadcasters. Perhaps the most important approach to slowing the spread of misinformation is substantial investment in education, particularly to build information literacy skills in schools and beyond240,241,242,243. Another tool in the policymaker’s arsenal is interventions targeted more directly at behaviour, such as nudging policies and public pledges to honour the truth (also known as self-nudging) for policymakers and consumers alike12,244,245.

Overall, solutions to misinformation spread must be multipronged and target both the supply (for example, more efficient fact-checking and changes to platform algorithms and policies) and the consumption (for example, accuracy nudges and enhanced media literacy) of misinformation. Individually, each intervention might only incrementally reduce the spread of misinformation, but one preprint that has not been peer-reviewed suggests that combinations of interventions can have a substantial impact246.

More broadly speaking, any intervention to strengthen public trust in science, journalism, and democratic institutions is an intervention against the impacts of misinformation247,248. Such interventions might include enhancing transparency in science249,250 and journalism251, more rigorous fact-checking of political advertisements252, and reducing the social inequality that breeds distrust in experts and contributes to vulnerability to misinformation253,254.

Summary and future directions

Psychological research has built solid foundational knowledge of how people decide what is true and false, form beliefs, process corrections, and might continue to be influenced by misinformation even after it has been corrected. However, much work remains to fully understand the psychology of misinformation.

First, in line with general trends in psychology and elsewhere, research methods in the field of misinformation should be improved. Researchers should rely less on small-scale studies conducted in the laboratory or a small number of online platforms, often on non-representative (and primarily US-based) participants255. Researchers should also avoid relying on one-item questions with relatively low reliability256. Given the well-known attitude–behaviour gap — that attitude change does not readily translate into behavioural effects — researchers should also attempt to use more behavioural measures, such as information-sharing measures, rather than relying exclusively on self-report questionnaires93,94,95. Although existing research has yielded valuable insights into how people generally process misinformation (many of which will translate across different contexts and cultures), an increased focus on diversification of samples and more robust methods is likely to provide a better appreciation of important contextual factors and nuanced cultural differences7,82,205,257,258,259,260,261,262,263.

Second, most existing work has focused on explicit misinformation and text-based materials. Thus, the cognitive impacts of other types of misinformation, including subtler types of misdirection such as paltering (misleading while technically telling the truth)95,264,265,266, doctored images267, deepfake videos268 and extreme patterns of misinformation bombardment223, are currently not well understood. Non-text-based corrections, such as videos or cartoons, also deserve more exploration269,270.

Third, additional translational research is needed to explore questions about causality, including the causal impacts of misinformation and corrections on beliefs and behaviours. This research should also employ non-experimental methods230,231,271, such as observational causal inference (research aiming to establish causality in observed real-world data)272, and test the impact of interventions in the real world145,174,181,207. These studies are especially needed over the long term — weeks to months, or even years — and should test a range of outcome measures, for example those that relate to health and political behaviours, in a range of contexts. Ultimately, the success of psychological research into misinformation should be linked not only to theoretical progress but also to societal impact273.
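As a minimal illustration of what observational causal inference involves, the sketch below (Python with NumPy; the variables, effect sizes and the confounder are invented for this example) simulates data in which a third variable drives both misinformation exposure and a downstream behaviour. A naive comparison of exposed and unexposed groups overstates the causal effect, whereas adjusting for the measured confounder recovers a value close to the effect built into the simulation; real observational studies face the harder problem that relevant confounders are often unknown or unmeasured.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder: prior institutional distrust (standardized score).
distrust = rng.normal(size=n)

# Exposure to misinformation is more likely among high-distrust individuals
# (this is the confounding pathway).
exposure = (rng.normal(size=n) + distrust > 0).astype(float)

# Outcome (for example, a risky-behaviour score): the true causal effect of
# exposure is 0.2, but distrust also raises the outcome directly.
outcome = 0.2 * exposure + 0.5 * distrust + rng.normal(size=n)

# Naive estimate: simple difference in mean outcome between exposed and
# unexposed groups.
naive = outcome[exposure == 1].mean() - outcome[exposure == 0].mean()

# Adjusted estimate: ordinary least squares controlling for the confounder.
X = np.column_stack([np.ones(n), exposure, distrust])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive estimate:    {naive:.2f}")    # inflated by confounding
print(f"adjusted estimate: {beta[1]:.2f}")  # close to the true effect of 0.2
```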

Finally, even though the field has a reasonable understanding of the cognitive mechanisms and social determinants of misinformation processing, knowledge of the complex interplay between cognitive and social dynamics is still limited, as is insight into the role of emotion. Future empirical and theoretical work would benefit from development of an overarching theoretical model that aims to integrate cognitive, social and affective factors, for example by utilizing agent-based modelling approaches. Such modelling might also offer opportunities for more interdisciplinary work257 at the intersection of psychology, political science274 and social network analysis275, and support the development of a more sophisticated psychology of misinformation.
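A minimal agent-based sketch of the kind alluded to above might look as follows (Python; the network structure, seeding, adoption probability and correction probability are arbitrary placeholder assumptions, not estimates from the literature). Agents hold one of three states and update them through local interactions; comparing runs with and without a correction process illustrates how such models can be used to explore how individual-level mechanisms aggregate into population-level misinformation prevalence.

```python
import random

def simulate(n_agents=2000, n_contacts=8, p_adopt=0.05, p_correct=0.0,
             n_steps=50, seed=1):
    """Minimal agent-based sketch of misinformation spread on a random network.

    Agents are 'susceptible', 'believer' or 'resistant'. Susceptible agents may
    adopt the false belief from believing contacts (p_adopt per believing
    contact per step); believers may encounter a correction and become
    resistant (p_correct per step). All parameters are illustrative.
    """
    rng = random.Random(seed)
    # Each agent listens to a fixed random set of contacts (a crude stand-in
    # for a social network).
    contacts = [rng.sample(range(n_agents), n_contacts) for _ in range(n_agents)]
    state = ["susceptible"] * n_agents
    for i in rng.sample(range(n_agents), 20):  # seed a handful of believers
        state[i] = "believer"

    for _ in range(n_steps):
        new_state = list(state)
        for i, s in enumerate(state):
            if s == "susceptible":
                believing = sum(state[j] == "believer" for j in contacts[i])
                if rng.random() < 1 - (1 - p_adopt) ** believing:
                    new_state[i] = "believer"
            elif s == "believer" and rng.random() < p_correct:
                new_state[i] = "resistant"
        state = new_state
    return state.count("believer") / n_agents

print("final believer share, no corrections:  ", simulate(p_correct=0.0))
print("final believer share, with corrections:", simulate(p_correct=0.10))
```

Richer versions of such a model could make source trust depend on agents' worldviews, add emotional states or include inoculated agents, which is exactly where an integrated cognitive, social and affective theory would supply the model's assumptions.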