Open Access

Psychological Underpinnings of Misinformation Countermeasures

A Systematic Scoping Review

Published Online: https://doi.org/10.1027/1864-1105/a000407

Abstract

There has been substantial scholarly effort to (a) investigate the psychological underpinnings of why individuals believe in misinformation, and (b) develop interventions that hamper its acceptance and spread. However, there is a lack of systematic integration of these two research lines. We conducted a systematic scoping review of empirically tested psychological interventions (N = 176) to counteract misinformation. We developed an intervention map and analyzed boosting, inoculation, identity management, nudging, and fact-checking interventions as well as various subdimensions. We further examined how these interventions are theoretically derived from the two most prominent psychological accounts of misinformation susceptibility: classical and motivated reasoning. We find that the majority of misinformation studies examined fact-checking interventions, are poorly linked to basic psychological theory, and are not geared toward reducing motivated reasoning. Based on this, we outline future research avenues for effective psychological countermeasures against misinformation.

The Russian war against Ukraine, COVID-19, election outcomes, and many other political events of the recent past have all been linked to one topic: misinformation. While scholars heavily debate whether misinformation threatens the intellectual wellbeing of a society (Lewandowsky et al., 2017) or whether these concerns are best understood as a moral panic unsupported by empirical findings (Jungherr & Schroeder, 2021), substantial indications that it was at least partly responsible for the aforementioned events (see, e.g., House of Commons, 2019; Mueller, 2019) have generated massive research efforts. Much of psychological misinformation research has focused on two questions: What makes individuals susceptible to misinformation? And which interventions can be developed to help individuals identify misinformation correctly or change existing misperceptions? Although both questions are conceptually related, there have been only a few attempts to systematically connect basic and applied research perspectives (see, e.g., Pennycook & Rand, 2021; van der Linden, 2022).

In the present study, we conducted a systematic scoping review to map and systematize the field of psychological misinformation interventions, refine the connection between basic and applied research, and identify potential shortcomings of the current state of intervention research. We categorize existing intervention approaches and analyze the intervention landscape with a focus on (a) the theoretical foundation of intervention research, (b) the intercultural generalizability of existing findings, and (c) the long-term orientation of intervention evaluation. Our paper is structured in three parts. First, we outline the two most prominent psychological accounts of misinformation susceptibility, classical and motivated reasoning, to set a foundation for the theoretical understanding of potential intervention mechanisms. Second, we map and structure existing psychological misinformation intervention research based on a systematic scoping review. Third, we analyze the intervention landscape, identify shortcomings, and provide recommendations for future research.

Misinformation and Information Disorders

Not only the societal and political impact of misinformation but also its conceptualization remain important and contested issues (Wardle & Derakhshan, 2017). At its core, misinformation refers to misleading, inaccurate, or false information that is spread unintentionally. However, many scholars use “misinformation” as an umbrella term for different information disorders, including disinformation, fake news, and propaganda (Pennycook & Rand, 2021). Misinformation also shares common elements with conspiracy myths: both can be false and misleading, but misinformation is not necessarily embedded in an ideology (Faragó et al., 2019).

Psychological Drivers of Misinformation Susceptibility

The identification of misinformation in social media, but also in other, nondigital environments, poses a crucial challenge for individuals. Recent scholarly work has emphasized two psychological processes that can enhance individuals’ susceptibility to misinformation: superficial information processing and selective information processing (Bryanov & Vziatysheva, 2021; Chen et al., 2021; Nyhan & Reifler, 2019; Pennycook & Rand, 2021; van Bavel et al., 2021; van der Linden, 2022). While superficial processing is well encapsulated in the framework of classical reasoning, selective processing is the core process of motivated reasoning.

Based on dual-process theories such as the elaboration likelihood model (Petty & Cacioppo, 1986), the classical reasoning account posits that information processing is shaped by two modes that differ in the amount of cognitive effort invested: In an effortful processing mode, information is systematically used and scrutinized to deliberatively arrive at conclusions. In an effortless processing mode, information is used in a more superficial and parsimonious way to arrive at conclusions via cognitive shortcuts or heuristics. The individual’s motivation and ability to invest cognitive resources determine the processing mode and, eventually, the outcome. Numerous studies indicate that processing information in an effortless and superficial way makes it more likely that individuals falsely believe in misinformation, whereas more deliberate information processing increases the likelihood that individuals spot errors or inconsistencies (Chen et al., 2021; Pennycook & Rand, 2021; van der Linden, 2022).

On the other hand, the motivated reasoning account posits that information processing is selective, non-truth-convergent, and driven by goals that are distinct from and independent of accuracy motivation (Kahan, 2015; Kunda, 1990). Motivated reasoning can be the consequence of experiencing high levels of defense motivation due to cognitive dissonance, resistance to change, or identity threat (see Jonas et al., 2014). When individuals are motivated to defend their personal or social identity, they are more inclined to believe information that is in line with existing, identity-relevant beliefs and less inclined to believe information contradicting them (Kunda, 1990). Motivated reasoning is discussed as a major cause of misinformation susceptibility (Nyhan & Reifler, 2019; van Bavel et al., 2021) as well as a major psychological barrier to combating misinformation (MacFarlane et al., 2020). Since classical and motivated reasoning are described as the main reasons why individuals are susceptible to misinformation, they should also be reflected in research on attempts to attenuate this susceptibility. One main focus of this systematic scoping review therefore lies in examining how closely misinformation intervention research draws on these two accounts.

Psychological Interventions Against Misinformation

The scope of interventions and tools addressing misinformation (susceptibility) has been growing constantly over recent years as a result not only of academic research but also of efforts from governments and nongovernmental organizations as well as civil society initiatives. Our systematic scoping review focuses on empirically tested psychological interventions against misinformation at the individual micro-level. This includes all efforts before, during, or after individuals come into contact with misinformation. With this systematic review we want to illuminate (a) which psychological interventions exist at the micro-level, (b) how well they are connected to theoretical accounts of misinformation susceptibility, especially classical and motivated reasoning, (c) where these interventions have been tested, and (d) whether the temporal stability of their effects has been evaluated. These foci are driven by several reasons. First, existing reviews lack systematic data collection and thus fail to provide a comprehensive overview of the psychological misinformation intervention landscape. Second, the current state of interventions against misinformation is heterogeneous in terms of their effectiveness (for example, the discordant findings of Aslett et al., 2022, and Kim & Dennis, 2019). We anticipate a deeper comprehension of why, when, and for whom interventions do and do not work if we can explain their functioning with theories that address why individuals are susceptible to misinformation in the first place. Classical and motivated reasoning are our primary focus due to their prominence across many scholarly disciplines. Third, there are ongoing concerns questioning the generalizability of research findings from the Global North for countries of the Global South (Altay et al., 2023; Wassermann & Madrid-Morales, 2022). Interventions that seem to work robustly in countries of the Global North, such as inoculation, can fail completely when applied to countries of the Global South (Harjani et al., 2023). While various factors contribute to differing findings, regional context appears significant. Our aim is to offer an initial overview of potential research bias in studied regions, encouraging research beyond Western areas. Lastly, misinformation demands durable solutions. Micro-interventions targeting individuals can help, but we must gauge their lasting impact beyond experiments. Our aim is therefore to outline the extent to which interventions have been tested for long-term effects.

Method

In order to map and analyze psychological interventions against misinformation, we conducted a systematic scoping literature review according to the PRISMA 2020 statement. This included three steps (literature search, systematic screening, categorization), which will be outlined in the following.

Literature Search

As a starting point for our literature search, we identified existing narrative reviews about misinformation interventions in order to inform our search string (Kozyreva et al., 2020; Lorenz-Spreen et al., 2020; Lyons et al., 2021; Treen et al., 2020). The search string we used was the following: ((misinform* OR disinform* OR fake news) AND (debunk* OR *warn* OR verify* OR enhance* OR skill OR inoculat* OR correct* OR fact-check* OR countermeasure* OR prevent* OR intervent* OR mitigat* OR litera* OR affirm* OR boost* OR nudg* OR counterargu* OR persuas* OR norm* OR “motivated reasoning” OR “classical reasoning” OR “dual process”) AND (media OR journalis* OR online OR “public opinion”)). We retrieved empirical works from the three interdisciplinary databases Web of Science, PsycInfo, and Scopus. We carried out our literature collection on October 4, 2021, which yielded an initial result of 4,708 references. Although we believe that the choice of databases as well as our extensive search string covered the majority of empirical studies on misinformation interventions, we complemented our dataset by manually adding references that we identified through meta-analyses and reviews. This added another 91 references. After removal of duplicates, 3,088 references remained in total.
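To illustrate this step, a minimal R sketch of how such database exports could be merged and deduplicated is shown below. The file names and the doi and title columns are hypothetical placeholders, not taken from the original study materials.

```r
# Minimal sketch (assumed file and column names): merge the reference
# exports of the three databases plus the manually added records and
# drop duplicates by DOI and normalized title.
library(dplyr)

wos    <- read.csv("web_of_science_export.csv")
psyc   <- read.csv("psycinfo_export.csv")
scopus <- read.csv("scopus_export.csv")
manual <- read.csv("manual_additions.csv")   # the 91 hand-added references

refs <- bind_rows(wos, psyc, scopus, manual) |>
  mutate(title_norm = tolower(trimws(title))) |>
  distinct(doi, title_norm, .keep_all = TRUE)

nrow(refs)   # with the original data, 3,088 unique references remained
```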

Systematic Screening

We aimed to include empirical research containing a prevention or intervention attempt against individual misperceptions stemming from misinformation, or fostering competencies that help individuals identify such misinformation. The inclusion and exclusion criteria are depicted in Table 1.

Table 1 Inclusion and exclusion criteria

The systematic screening was conducted with Covidence, a systematic review management tool (https://www.covidence.org). Four research assistants and two PhD researchers conducted a two-stage screening process: title/abstract screening and full-text screening. Following extensive training on the inclusion and exclusion criteria, all 3,088 references were double-screened in Stage 1. References with differing votes were jointly reviewed; 347 references moved to Stage 2. In Stage 2, the full texts of these references were independently screened, and 176 were deemed eligible for extraction and categorization (see Figure 1).

Figure 1 PRISMA flow chart.

Categorization

After the final set of articles was determined, the following variables were coded for each paper: journal discipline, sample size, sample country, sample age, study design, main intervention category, subintervention categories, longevity of success, and theoretical foundation (ESM 2). To analyze the theoretical foundation, we investigated whether the authors referred to at least one theory in the theory section in order to explain the operating principle of their tested intervention. We took the interdisciplinarity of the research field into account and categorized theories regardless of their disciplinary background. Analyses were conducted in R.
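As an illustration, the following R sketch shows how such a coded dataset could be summarized descriptively; the file name and the variables main_category and longevity_tested are hypothetical stand-ins for the coding scheme described above.

```r
# Minimal sketch (hypothetical variable names): one row per experimental
# intervention, coded with its main category and whether a long-term
# follow-up test was reported.
library(dplyr)

coded <- read.csv("coded_interventions.csv")

# Share of interventions per main intervention category (cf. Figure 2)
coded |>
  count(main_category, sort = TRUE) |>
  mutate(share = round(100 * n / sum(n), 1))

# Percentage of interventions tested for long-term effects
mean(coded$longevity_tested, na.rm = TRUE) * 100
```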

Results

Our final dataset consisted of 176 papers, including 254 studies with a total of 375 experimental interventions. The resulting misinformation intervention map with main and subcategories is outlined in Figure 2. Overall, 73.3% of the papers were published between 2019 and 2021, indicating that research on misinformation interventions has developed rapidly in recent years. Most studies investigated fact-checking interventions (62.1%), followed by boosting (14.9%), nudging (10.1%), inoculation (6.1%), and identity management (1.6%). Of all the interventions in our dataset, 5.1% include a mix of different intervention approaches. The vast majority of studies on misinformation interventions have been conducted using US samples (71.0%), followed by studies conducted in Germany (5.1%), Australia (4.5%), and the UK (2.8%); 5.7% of the papers operated with samples from different countries. An overrepresentation of German samples due to the inclusion criterion of papers written in German is unlikely, since all nine papers operating with German samples are written in English. In terms of sample age, 76% of substudies were conducted with adult populations, 21% with students, and 3% with adolescents. One substudy tested its intervention with a senior sample. Only 8.3% of the interventions examined long-term effectiveness, most of them studies on inoculation (6.1%). Of the journals in which misinformation intervention research is published, 37% fall into the area of communication science, followed by psychology (24%) and political science (16%). Whereas papers on inoculation and nudging are more often published in psychology journals, works on boosting and fact-checking are more likely to appear in communication science outlets. For an overview of all disciplines involved in misinformation intervention research, see the Electronic Supplementary Material, ESM 1.

Figure 2 Intervention map.

With 126 papers and 177 studies including 233 interventions, fact-checking is by far the most researched intervention against misinformation in our sample (62.1% of all interventions). Fact-checking groups together interventions that rebut inaccurate claims or existing misperceptions and take place after individuals have been in contact with misinformation. We differentiate between flagging (n = 49; 21.0%), social invalidation (n = 21; 9.0%), and expert correction (n = 163; 70.0%) as subcategories of fact-checking. Flags are graphical elements that visually highlight misinformation at the time of exposure (Garrett & Poulsen, 2019). In contrast to other forms of fact-checks, flags only indicate that a claim is false or disputed but do not offer an alternative explanation or further background information. Flags typically refer to either expert sources (“Article disputed by [fact-checking organization name]”, n = 41; 83.7%) or peer sources (“Article identified as false by [x number of social media] friends”, n = 8; 16.3%; Garrett & Poulsen, 2019). Social invalidation denounces misinformation in the comment section underneath a posting, not only by pointing to falsehoods but also by offering corrections. It is provided by a fellow network user, not the social media platform (e.g., Martel et al., 2021). The third and most extensive subcategory of fact-checking interventions is expert correction. Under this term, we summarize all attempts by a professional entity, such as a fact-checking or scientific organization, to refute misinformation in greater detail with contextual information. Expert correction can take the form of a simple rebuttal (n = 140; 86.0%), as in many texts published by fact-checking organizations like Snopes (United States), where misinformation is corrected and underlined with contextual information (Chung & Kim, 2021). In narrative correction (n = 9; 5.5%), the corrective information is embedded in a compelling story in order to immerse the recipient in its content and minimize psychological reactance (Huang & Wang, 2022). Moreover, existing misperceptions can be corrected with consensus corrections (n = 13; 8.0%), which emphasize the existing expert consensus after individuals have been confronted with misinformation. By communicating, for example, that 97% of climate scientists agree on the existence of anthropogenic climate change, existing misperceptions about the true causes of climate change should be corrected (Chockalingam et al., 2021). The most elaborate form of expert correction is debunking (n = 1; 0.6%), which follows a defined sequence of stating the correct facts, warning about the misinformation, explaining its fallacy, and closing with facts (Lewandowsky et al., 2020). As with many terms in misinformation research, debunking has been used loosely to describe a broad range of corrective attempts, most of which are not in line with the debunking process described by Lewandowsky et al. (2020). Our sample includes four papers using the term in their title, but only one of their interventions mirrors the debunking logic described above (Yousuf et al., 2021).

Overall, 34 papers with a total of 41 studies examined 56 boosting interventions. This accounts for 14.9% of all researched interventions against misinformation in our dataset and represents the second most researched category of misinformation interventions. Boosting aims at strengthening individuals’ skills and knowledge structures in order to lower their susceptibility to misinformation (Lorenz-Spreen et al., 2020). These interventions are typically implemented before individuals come into contact with misinformation. We distinguish between knowledge enhancement (n = 13; 23.2%) and literacy interventions (n = 43; 76.8%). Knowledge enhancement denotes the acquisition of facts about specific topics such as climate change, vaccinations, economics, or politics, independently of the occurrence of misinformation. For example, Cook et al. (2017) and van der Linden et al. (2017) educated their participants about the scientific consensus regarding anthropogenic climate change prior to presenting misinformation. Guan et al. (2021) showed their participants a short video mini-lecture containing either information by the World Health Organization about the origins and characteristics of COVID-19 or information about the concept of conspiracy myths as well as reasons why people believe them.

In contrast to knowledge enhancement, literacy interventions teach meta-skills that can be applied to different topics. We define literacy as the ability to access, analyze, evaluate, create, and use all forms of communication. It serves as an umbrella term for different forms of interventions: information literacy (n = 16; 38.1%), news literacy (n = 17; 40.5%), digital literacy (n = 2; 4.8%), and science literacy interventions (n = 5; 11.9%). Two additional interventions (4.8%) combine different literacy interventions in one intervention (Badrinathan, 2021; Nygren et al., 2021). Information literacy describes the ability to understand, find, evaluate, and use information (Association for College and Research Libraries, 2000). Interventions summarized under this term teach participants to critically read and evaluate information by providing simple guidelines (Guess et al., 2020), larger curricula (McGrew, 2020), or educational games (Yang et al., 2021). News literacy covers the understanding of the role news plays in a society, the motivation to seek out news, to critically evaluate news, and also to produce it (Malik et al., 2013). News literacy interventions remind users to deliberately select their media environment as well as to critically evaluate the news they consume (Tully et al., 2020; Vraga et al., 2021). They further educate participants about fake news and related concepts such as deepfakes (Hwang et al., 2021) and provide tips on how to spot misleading advertisement techniques (Burls et al., 2019). Digital literacy describes the proficiency in applying and using digital devices and tools (Jones-Jang et al., 2021). During an educational lecture consisting of different literacy modules, Nygren et al. (2021) familiarized participants with an image verification tool to strengthen digital literacy skills. Finally, science literacy interventions teach how scientific information is produced, how the media reshapes and communicates scientific evidence, and how individuals encounter that information (Howell & Brossard, 2021). In a study by Salvatore and Morton (2021), participants received a brief tutorial on the ecological fallacy, explaining that probabilistic information does not need to pertain to every single case in order to be true, before being confronted with misinformation, while Tseng et al. (2021) tested a reading guide that taught students to critique scientific claims.

Overall, 15 papers including 29 studies tested a total of 38 nudging interventions, which covers 10.1% of all interventions in our dataset. Nudging interventions provide small incentives in the communicative environment that enhance the likelihood that individuals identify misinformation. To be considered a nudge, the intervention must be small-scale and cheap to avoid, meaning that alternative options must not be excluded (Thaler & Sunstein, 2008). Nudging interventions are presented at the same time as the (mis-)information. We distinguish between social norm nudges (n = 7; 18.4%), credibility nudges (n = 5; 13.2%), accuracy nudges (n = 19; 50%), and lateral reading nudges (n = 7; 18.4%). Social norm nudges remind participants about normative standards or desired behavior in the evaluation of information. In a study by Andı and Akesson (2020), participants were nudged with the statement that “…most responsible people think twice before sharing content with their friends and followers” in order to improve their sharing discernment (p. 8). Gimpel et al. (2021) tested the effects of injunctive and descriptive norms on the amount of fake news reported by participants. Credibility nudges draw the user’s attention to the credibility of the platform, source, or content where they encounter a message. Contrary to flags, they are attached to all information regardless of its veracity. Kim and Dennis (2019) as well as Dias et al. (2020) did this by highlighting the source of an article in order to make it more salient to the user. Accuracy nudges prompt the user to examine the content in question more deliberatively. They can be realized by reminding social media users about the importance of sharing only content that they perceive as accurate (Pennycook et al., 2020), or by letting them rate and reason about the accuracy of a headline (Jahanbakhsh et al., 2021). In a field experiment, Tsipursky et al. (2018) tested the effect of a “pro-truth pledge” that participants had to declare on their subsequent news-sharing accuracy on Facebook.

Finally, lateral reading nudges motivate users to seek alternative sources in order to verify a piece of information in question. This can happen in the form of simple reminders (Kobayashi et al., 2021) or with tools such as BalancedView, which automatically displays related articles from alternative sources next to the article in question (Thornhill et al., 2019). Usually, lateral reading is conceptualized as an integral part of media literacy (see, e.g., McGrew et al., 2019). However, because we only included studies explicitly dealing with misinformation, studies that cover lateral reading boosts to generally improve credibility assessments of information were excluded from our dataset.

Overall, 13 papers with a total of 19 studies in our dataset examined 23 inoculation interventions. In comparison to other intervention categories, inoculation covers 6.1% of all researched interventions against misinformation and therefore makes up the fourth most researched category. Developed in the field of persuasion research, inoculation is a psychological strategy to build resistance to attitudinal change in individuals (McGuire, 1964). We differentiate between classic inoculation (n = 4; 17.4%), warnings (n = 8; 34.8%), and strategic inoculation interventions (n = 11; 47.8%). Classic inoculation consists of two elements: (1) a forewarning, which announces that held beliefs are about to be questioned, and (2) a refutational preemption, in which mild versions of counterarguments against the held belief are presented but directly refuted (McGuire, 1964). The combination of forewarning and refutational preemption is supposed to unleash internal counterarguing, in which an individual bolsters existing attitudes and eventually reaches a state of strengthened resistance against persuasive attacks (Pfau et al., 2006). Applying a classic inoculation treatment, Zerback et al. (2021) displayed a forewarning about an upcoming persuasive attempt and subsequently informed participants about the exact arguments that would be used during the eventual misinformation attack (refutational preemption). Over the past few years, the necessity of a forewarning for successful inoculation has been questioned, since many studies are run without it (Roozenbeek & van der Linden, 2019). Jolley and Douglas (2017) as well as Xiao and Su (2021) used refutational preemptions to prebunk conspiracy myths and vaccine misperceptions without explicitly stating a forewarning. Warnings are single inoculation elements. In our dataset, we find eight interventions testing warnings without any additions. Wojdynski et al. (2019) warned their participants about the existence of fake news before assessing the perceived credibility of true and incorrect articles. Strategic inoculations rely on the assumption that inoculation can also be successful when people already hold misperceptions. Zerback et al. (2021) warned and informed their participants about persuasive strategies in astroturfing comments, while Guan et al. (2021) first confronted participants with mild versions of misinformation, then explained common loopholes and fallacies before presenting a more persuasive version of misinformation.

Of the 11 strategic inoculation interventions in our sample, eight are gamified. Bad News, an inoculation game, puts players in the role of a social media misinformation troll who collects as many followers as possible by applying typical misinformation strategies to make posts sensational and potentially viral (e.g., Basol et al., 2020; Roozenbeek & van der Linden, 2019).

Finally, identity management studies are relatively rare in misinformation intervention research. Only three papers with six studies in our sample examined six identity interventions as a tool against misinformation, which covers 1.6% of our total intervention sample. Identity management interventions aim to reduce biased information processing by altering the way a person perceives themselves. They take place before the person is confronted with belief-incongruent information that would usually elicit feelings of personal or social identity threat (Lyons et al., 2021). We differentiate between perspective taking (n = 1; 16.7%) and self-affirmation (n = 5; 83.3%) interventions.

Perspective taking prompts participants to walk in the shoes of an outgroup member and imagine their position. In the context of COVID-19 misinformation, Guan et al. (2021) asked participants to imagine a pleasant conversation with a Chinese person and subsequently assessed conspiratorial beliefs against China and its residents. Self-affirmation interventions aim at securing a person’s sense of self-integrity in another domain prior to the confrontation with belief-incongruent information, in order to minimize perceived identity threat and reactance (Nyhan & Reifler, 2019). This can be implemented by inviting participants to reflect on a value that is personally relevant to them (Reavis et al., 2017).

Finally, our sample contains 12 papers with 16 studies that combine different kinds of misinformation interventions (n = 19). The majority makes use of a combination of nudging and fact-checking. Kim and Dennis (2019) tested the effectiveness of heightened visibility of the source (by displaying it larger) in combination with a source rating. Boosting and inoculation were combined by van der Linden et al. (2017) and Williams and Bond (2020), who both taught participants about the scientific consensus on climate change before subjecting them to an inoculation treatment. Lastly, Carnahan et al. (2018) combined self-affirmations (identity management) with expert corrections (fact-checking) in order to make participants less reactant when confronted with attitude-inconsistent corrections.

Theoretical Foundation of Interventions

In total, 278 (77.2%) of all interventions in our dataset are not linked to any basic theory about susceptibility to misinformation. This means that there is no explicit reference to any theoretical model or assumption explaining why the intervention is expected to have an effect. A total of 25 (7.2%) interventions are linked to the classical reasoning account. For example, flags are described as heuristic cues that can help to guide superficial information processing in an overloaded environment (Gaozhao, 2021). Another argument linked to classical reasoning is that literacy interventions motivate students to deliberately process information in media environments (Tseng et al., 2021). The most direct link to the classical reasoning account can be found in studies on accuracy nudges, which are thought to enhance deliberation by increasing accuracy motivation (Kobayashi et al., 2021; Martel et al., 2021). Only five interventions (0.5%) are linked to the motivated reasoning account. The most explicit example is an identity-affirmation intervention that derives its assumptions about the efficacy of the intervention from self-affirmation theory and from the motivated reasoning account (Nyhan & Reifler, 2019). By strengthening self-worth in another domain, the urge to engage in motivated reasoning should be reduced. Other links are more indirect, for example, ascribing the success of expert fact-checking conducted by members of one’s own social group to a decline in identity threat and, thus, motivated reasoning (Chockalingam et al., 2021). Finally, 48 (15.1%) interventions are linked to other theories (see Figure 3). Most importantly, 12 (52.2%) inoculation interventions are directly linked to inoculation theory. Links to other theories are less frequent. For example, flags are also linked to reputation theory (Kim et al., 2019), and an information literacy tutorial is theorized to rely on observational learning (Axelsson et al., 2021).

Figure 3 Theoretical foundation of interventions per main category. Numerals depict total number of interventions per main and subcategory. Lifted chart sections represent interventions linked to classical (diagonal lines) or motivated reasoning (grid). Proportion of interventions linked to other theories is depicted with dots. Blank proportions indicate no theoretical foundation.

Discussion

In the present paper, we conducted a systematic scoping review of misinformation intervention studies and mapped an intervention landscape based on 375 experimental tests. Our review highlights significant variation in research coverage across different intervention types. Fact-checking interventions garnered the most attention, encompassing 62.1% of our dataset. By contrast, identity management interventions were represented by only 1.6%. On the basis of this overview, we posit six challenges for future research on misinformation interventions.

Challenge 1: Lack of Theoretical Connection

Three out of four interventions in our dataset are not linked to classical or motivated reasoning or to any other theoretical model that would explain their effects. Although classical and motivated reasoning are the leading theories for explaining misinformation susceptibility, interventions tackling this susceptibility are rarely explicitly built on this evidence. Other theories, too, are only rarely referred to in intervention research. The intervention category best grounded in theory, inoculation, links its intervention mechanism to inoculation theory, a prominent theory of persuasion research (Ivanov et al., 2015), which has not yet been theoretically integrated with other theoretical accounts such as classical or motivated reasoning. In the case of fact-checking interventions, studies linking intervention effectiveness to theory reference a diverse range of theories, for instance, Bayesian updating (Carnahan et al., 2021), social learning (Guilbeault et al., 2021), and narrative transportation (Huang & Wang, 2022). This could either indicate that fact-checking subcategories are distinct in their functioning or that scholars chose theories rather arbitrarily to substantiate their interventions. These findings all indicate a problematic relation between applied and basic research on misinformation and underline the need for scholars to examine not only whether an intervention is working but also why it is working. Providing reasonable explanations for the effect mechanisms underlying treatments enables us to understand not only the interventions with significant effects but also failed ones, as well as to stimulate the design of innovative approaches.

Challenge 2: Little to No Means Against Motivated Reasoning

Although motivated reasoning is considered an important cause of misinformation susceptibility in basic research (Altay et al., 2023), it is hardly addressed by interventions. One promising intervention category in this regard is identity management, which aims at decreasing defense motivation. An exemplary study is the work by Nyhan and Reifler (2019), who tested a self-affirmation intervention that reduced identity threat and defense motivation. Identity management interventions originally come from social psychology and were designed to decrease prejudice against outgroups. It might be fruitful to transfer other social psychology concepts, such as bias awareness (Perry et al., 2015), in order to craft novel and powerful anti-misinformation tools that decrease defense motivation and, consequently, motivated reasoning. The currently small number of identity management studies could be explained by the relative novelty of misinformation interventions as a research field as well as the heterogeneous effects reported in existing studies (see, e.g., Lyons, 2018).

Challenge 3: Limited Generalizability of Results Outside the United States

We believe it is crucial to examine misinformation interventions with more culturally diverse samples, since 71.0% of the studies in our sample have been conducted with US samples and only two studies in countries of the Global South, namely Indonesia and India (Guess et al., 2020; Rustan, 2020). Our finding empirically underlines what experts of the field agree upon: Misinformation research concentrates too much on the United States (Altay et al., 2023). We must keep in mind that misinformation is not an individual malaise but is embedded in a specific cultural and political context with varying norms, traditions, rules, and needs. Because the United States is a polarized society with a two-party system, findings from a US context might hardly be transferable to other WEIRD (western, educated, industrialized, rich, and democratic) countries with multiparty political systems, not to mention non-Western countries. In Global South societies, corrections of misinformation might need different features than in the West. Cultural norms, such as refraining from correcting elders, could hinder fact-checking effectiveness. Moreover, in some countries, such as Russia, the state itself is a spreader of misinformation. This severely complicates the implementation and maintenance of public media services with high journalistic and ethical standards, literacy education programs, or fact-checking institutions (Wassermann & Madrid-Morales, 2022).

Challenge 4: Limited Studies for Age Groups Other Than Adults

Most of the interventions (76%) in our dataset have been tested with adult populations. Since there is preliminary evidence about the specific vulnerability of older citizens to misinformation (Shu et al., 2018), we need more intervention research for this target group.

Challenge 5: Limited Knowledge About the Longevity of Effects

Our findings indicate that only 8.3% of interventions examine long-term effects. This is unproblematic for interventions that are specifically geared toward short-term effects, such as nudging interventions. However, other categories of interventions (e.g., boosting) are conceptualized to produce changes that extend beyond a specific situation. Future research needs to include follow-up tests to measure the longevity of effects and to identify reasonable intervals for booster sessions based on the theoretical logic of the interventions.

Challenge 6: Interdisciplinarity Might Lead to Disconnected Research Strands

Misinformation intervention research primarily involves communication science, followed by psychology and political science. However, we identified nine disciplines striving to combat misinformation. This interdisciplinary nature has dual implications. On the one hand, diverse perspectives foster innovative interventions. On the other hand, varied disciplinary approaches could lead to disjointed research, resulting in an inconsistent state of knowledge. To harness interdisciplinarity, we advocate increased collaborative efforts across disciplines.

Limitations

Although the findings of our systematic scoping review rely on a rich and profound set of data, they represent a snapshot in time. Given that most papers in our sample were published between 2019 and 2021, the number of publications is likely to be even higher in the upcoming years.

Another limitation arises from aligning psychological theories with interdisciplinary research. To address this, we categorized theoretical foundations regardless of their disciplinary origins. While a small portion (15.1%) of the research referenced theories beyond classical and motivated reasoning, no discernible pattern emerged. Notably, classical reasoning and motivated reasoning are significant explanations for misinformation susceptibility in fields beyond psychology, including communication science. Also, we did not examine the interventions regarding their effectiveness. A meta-analysis of the effectiveness of the interventions displayed in the misinformation intervention map is an important task for future research.

Finally, we would like to emphasize that an individual’s resilience against misinformation is likely to benefit from a combination of different interventions. Some interventions might even unfold their full potential only in combination with other interventions. Self-affirmation, an intervention technique that usually performs highly inconsistently in studies (Lyons et al., 2021), shows significant effects when combined with corrections (Carnahan et al., 2021). The effectiveness of psychological interventions also depends on boundary conditions at the meso- and macro-levels, such as social media platform regulations, legal frameworks, and political systems. A joint approach of many fields and disciplines with a rich toolkit is the way forward.

We wish to thank Jasmin Richter, Yasmin Mergen, Clara Stoll, Ines Elzer, Paula Heidemeyer, Emily Ahrens, and Vladimir Bojarskich for their valuable contributions to the data collection and categorization, the visualizations, as well as the preparation of the manuscript.

Author Biographies

Carolin-Theresa Ziemer is a PhD student at Friedrich Schiller University Jena, Germany, working on psychological interventions to counteract misinformation, disinformation, and ideological bias.

Tobias Rothmund is professor of psychology of communication and media use at Friedrich Schiller University Jena, Germany. His research focuses on ideologies, motivated reasoning, and radicalization.

References

References marked with an asterisk indicate studies included in the systematic scoping review (see ESM 1 for a complete list of references included in the systematic scoping review).

• Altay, S., Berriche, M., Heuer, H., Farkas, J., & Rathje, S. (2023). A survey of expert views on misinformation: Definitions, determinants, solutions, and future of the field. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-119

• *Andı, S., & Akesson, J. (2020). Nudging away false news: Evidence from a social norms experiment. Digital Journalism, 9(1), 106–125. https://doi.org/10.1080/21670811.2020.1847674

• Aslett, K., Guess, A. M., Bonneau, R., Nagler, J., & Tucker, J. A. (2022). News credibility labels have limited average effects on news diet quality and fail to reduce misperceptions. Science Advances, 8(18), Article eabl3844.

• Association for College and Research Libraries (ACRL). (2000). Information literacy competency standards for higher education. American Library Association. http://www.ala.org/acrl/standards/informationliteracycompetency

• *Axelsson, C.-A. W., Guath, M., & Nygren, T. (2021). Learning how to separate fake from real news: Scalable digital tutorials promoting students’ civic online reasoning. Future Internet, 13(3), 1–18. https://doi.org/10.3390/fi13030060

• *Badrinathan, S. (2021). Educative interventions to combat misinformation: Evidence from a field experiment in India. The American Political Science Review, 115(4), 1325–1341. https://doi.org/10.1017/S0003055421000459

• *Basol, M., Roozenbeek, J., & van der Linden, S. (2020). Good news about bad news: Gamified inoculation boosts confidence and cognitive immunity against fake news. Journal of Cognition, 3(1). https://doi.org/10.5334/joc.91

• Bryanov, K., & Vziatysheva, V. (2021). Determinants of individuals’ belief in fake news: A scoping review. PLoS One, 16(6). https://doi.org/10.1371/journal.pone.0253717

• *Burls, N., Pegion, K., & Cook, J. (2019). Misconception-based learning to cement learning. Innovations in Teaching & Learning Conference Proceedings, 11. https://doi.org/10.13021/itlcp.2019.2502

• *Carnahan, D., Bergan, D. E., & Lee, S. (2021). Do corrective effects last? Results from a longitudinal experiment on beliefs toward immigration in the US. Political Behavior, 43(3), 1227–1246. https://doi.org/10.1007/s11109-020-09591-9

• *Carnahan, D., Hao, Q., Jiang, X., & Lee, H. (2018). Feeling fine about being wrong: The influence of self-affirmation on the effectiveness of corrective information. Human Communication Research, 44(3), 274–298. https://doi.org/10.1093/hcr/hqy001

• Chen, C.-Y., Kearney, M., & Chang, S.-L. (2021). Comparative approaches to mis/disinformation | Belief in or identification of false news according to the elaboration likelihood model. International Journal of Communication, 15, 1263–1285.

• *Chockalingam, V., Wu, V., Berlinski, N., Chandra, Z., Hu, A., Jones, E., Kramer, J., Li, X. S., Monfre, T., Ng, Y. S., Sach, M., Smith-Lopez, M., Solomon, S., Sosanya, A., & Nyhan, B. (2021). The limited effects of partisan and consensus messaging in correcting science misperceptions. Research & Politics, 8(2). https://doi.org/10.1177/20531680211014980

• *Chung, M., & Kim, N. (2021). When I learn the news is false: How fact-checking information stems the spread of fake news via third-person perception. Human Communication Research, 47(1). https://doi.org/10.1093/hcr/hqaa010

• *Cook, J., Lewandowsky, S., & Ecker, U. K. H. (2017). Neutralizing misinformation through inoculation: Exposing misleading argumentation techniques reduces their influence. PLoS One, 12(5). https://doi.org/10.1371/journal.pone.0175799

• *Dias, N., Pennycook, G., & Rand, D. G. (2020). Emphasizing publishers does not effectively reduce susceptibility to misinformation on social media. Harvard Kennedy School Misinformation Review, 1(1). https://doi.org/10.37016/mr-2020-001

• Faragó, L., Kende, A., & Krekó, P. (2019). We only believe in news that we doctored ourselves: The connection between partisanship and political fake news. Social Psychology, 51(2), 1–14.

• *Gaozhao, D. (2021). Flagging fake news on social media: An experimental study of media consumers’ identification of fake news. Government Information Quarterly, 38(3). https://doi.org/10.1016/j.giq.2021.101591

• *Garrett, R. K., & Poulsen, S. (2019). Flagging Facebook falsehoods: Self-identified humor warnings outperform fact checker and peer warnings. Journal of Computer-Mediated Communication, 24(5), 240–258. https://doi.org/10.1093/jcmc/zmz012

• *Gimpel, H., Heger, S., Olenberger, C., & Utz, L. (2021). The effectiveness of social norms in fighting fake news on social media. Journal of Management Information Systems, 38(1), 196–221. https://doi.org/10.1080/07421222.2021.1870389

• *Guan, T., Liu, T., & Yuan, R. (2021). Facing misinformation: Five methods to counter conspiracy theories amid the Covid-19 pandemic. Comunicar, 29(69), 71–83. https://doi.org/10.3916/C69-2021-06

• *Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., & Sircar, N. (2020). A digital literacy intervention increases discernment between mainstream and false news in the United States and India. Proceedings of the National Academy of Sciences of the United States of America, 117(27), 15536–15545. https://doi.org/10.1073/pnas.1920498117

• Guilbeault, D., Woolley, S., & Becker, J. (2021). Probabilistic social learning improves the public’s judgments of news veracity. PLoS One, 16(3), Article e0247487. https://doi.org/10.1371/journal.pone.0247487

• Harjani, T., Basol, M.-S., Roozenbeek, J., & van der Linden, S. (2023). Gamified inoculation against misinformation in India: A randomized control trial. Journal of Trial and Error. https://doi.org/10.36850/e12

• House of Commons. (2019). Misinformation and “fake news”: Final report. House of Commons Digital, Culture, Media and Sport Committee. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf

• Howell, E. L., & Brossard, D. (2021). (Mis)informed about what? What it means to be a science-literate citizen in a digital world. Proceedings of the National Academy of Sciences of the United States of America, 118(5). https://doi.org/10.1073/pnas.1912436117

• *Huang, Y., & Wang, W. R. (2022). When a story contradicts: Correcting health misinformation on social media through different message formats and mechanisms. Information, Communication and Society, 25(8), 1192–1209. https://doi.org/10.1080/1369118X.2020.1851390

• *Hwang, Y., Ryu, J. Y., & Jeong, S.-H. (2021). Effects of misinformation using deepfake: The protective effect of literacy education. Cyberpsychology, Behavior and Social Networking, 24(3), 188–193. https://doi.org/10.1089/cyber.2020.0174

• Ivanov, B., Sims, J. D., Compton, J., Miller, C. H., Parker, K. A., Parker, J. L., Harrison, K. J., & Averbeck, J. M. (2015). The general content of postinoculation talk: Recalled issue-specific conversations following inoculation treatments. Western Journal of Communication, 79(2), 218–238. https://doi.org/10.1080/10570314.2014.943423

• *Jahanbakhsh, F., Zhang, A. X., Berinsky, A. J., Pennycook, G., Rand, D. G., & Karger, D. R. (2021). Exploring lightweight interventions at posting time to reduce the sharing of misinformation on social media. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1–42. https://doi.org/10.1145/3449092

• *Jolley, D., & Douglas, K. M. (2017). Prevention is better than cure: Addressing anti-vaccine conspiracy theories. Journal of Applied Social Psychology, 47(8), 459–469. https://doi.org/10.1111/jasp.12453

• Jonas, E., McGregor, I., Klackl, J., Agroskin, D., Fritsche, I., Holbrook, C., Nash, K., Proulx, T., & Quirin, M. (2014). Threat and defense: From anxiety to approach. In M. P. Zanna & J. M. Olson (Eds.), Advances in experimental social psychology (Vol. 49, pp. 219–286). Elsevier. https://doi.org/10.1016/B978-0-12-800052-6.00004-4

• Jones-Jang, S. M., Mortensen, T., & Liu, J. (2021). Does literacy help identification of fake news? Information literacy helps, but other literacies don’t. The American Behavioral Scientist, 65(2), 371–388. https://doi.org/10.1177/0002764219869406

• Jungherr, A., & Schroeder, R. (2021). Misinformation and the structural transformations of the public arena: Addressing the actual challenges to democracy. Social Media + Society, 7(1), Article 2056305121988928. https://doi.org/10.1177/2056305121988928

• Kahan, D. M. (2015). The politically motivated reasoning paradigm, part 1: What politically motivated reasoning is and how to measure it. In R. A. Scott, S. M. Kosslyn, & N. Pinkerton (Eds.), Emerging trends in the social and behavioral sciences: An interdisciplinary, searchable, and linkable resource (pp. 1–16). John Wiley & Sons.

• *Kim, A., & Dennis, A. R. (2019). Says who? The effects of presentation format and source rating on fake news in social media. MIS Quarterly, 43(3), 1025–1039. https://doi.org/10.25300/MISQ/2019/15188

  • *Kim, A., Moravec, P. L., & Dennis, A. R. (2019). Combating fake news on social media with source ratings: The effects of user and expert reputation ratings. Journal of Management Information Systems, 36(3), 931–968. https://doi.org/10.1080/07421222.2019.1628921 First citation in articleCrossrefGoogle Scholar

  • *Kobayashi, T., Taka, F., & Suzuki, T. (2021). Can “Googling” correct misbelief? Cognitive and affective consequences of online search. PLoS One, 16(9). https://doi.org/10.1371/journal.pone.0256575 First citation in articleCrossrefGoogle Scholar

  • Kozyreva, A., Lewandowsky, S., & Hertwig, R. (2020). Citizens versus the internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest, 21(3), 103–156. https://doi.org/10.1177/1529100620946707 First citation in articleCrossrefGoogle Scholar

  • Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480 First citation in articleCrossrefGoogle Scholar

  • Lewandowsky, S., Cook, J., Ecker, U. K. H., Albarracín, D., Amazeen, M. A., Kendeou, P., Lombardi, D., Newman, E. J., Pennycook, G., Porter, E., Rand, D. G., Rapp, D. N., Reifler, J., Roozenbeek, J., Schmid, P., Seifert, C. M., Sinatra, G. M., Swire-Thompson, B., van der Linden, S., Vraga, E. K., … Zaragoza, M. S. (2020). The debunking handbook 2020. https://sks.to/db2020. https://doi.org/10.17910/b7.1182 First citation in articleCrossrefGoogle Scholar

  • Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008 First citation in articleCrossrefGoogle Scholar

  • Lorenz-Spreen, P., Lewandowsky, S., Sunstein, C. R., & Hertwig, R. (2020). How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour, 4(11), 1102–1109. https://doi.org/10.1038/s41562-020-0889-7 First citation in articleCrossrefGoogle Scholar

  • Lyons, B. (2018). Reducing group alignment in factual disputes? The limited effects of social identity interventions. Science Communication, 40(6), 789–807. First citation in articleCrossrefGoogle Scholar

  • Lyons, B. A., Farhart, C. E., Hall, M. P., Kotcher, J., Levendusky, M., Miller, J. M., Nyhan, B., Raimi, K. T., Reifler, J., Saunders, K. L., Skytte, R., & Zhao, X. (2021). Self-affirmation and identity-driven political behavior. Journal of Experimental Political Science, 1–16. https://doi.org/10.1017/XPS.2020.46 First citation in articleCrossrefGoogle Scholar

  • MacFarlane, D., Hurlstone, M. J., & Ecker, U. K. H. (2020). Protecting consumers from fraudulent health claims: A taxonomy of psychological drivers, interventions, barriers, and treatments. Social Science & Medicine, 259. https://doi.org/10.1016/j.socscimed.2020.112790 First citation in articleCrossrefGoogle Scholar

  • Malik, M., Cortesi, S., & Gasser, U. (2013). The challenges of defining “news literacy”. Berkman Center for Internet & Society. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2342313 First citation in articleCrossrefGoogle Scholar

  • *Martel, C., Mosleh, M., & Rand, D. G. (2021). You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online. Media and Communication, 9(1), 120–133. https://doi.org/10.17645/mac.v9i1.3519 First citation in articleCrossrefGoogle Scholar

  • *McGrew, S. (2020). Learning to evaluate: An intervention in civic online reasoning. Computers & Education, 145. https://doi.org/10.1016/j.compedu.2019.103711 First citation in articleCrossrefGoogle Scholar

  • McGrew, S., Smith, M., Breakstone, J., Ortega, T., & Wineburg, S. (2019). Improving university students’ web savvy: An intervention study. British Journal of Educational Psychology, 89(3), 485–500. First citation in articleCrossrefGoogle Scholar

  • McGuire, W. J. (1964). Inducing resistance to persuasion: Some contemporary approaches. In L. BerkowitzEd., Advances in experimental social psychology (Vol. 1, pp. 191–229). Academic Press. https://doi.org/10.1016/S0065-2601(08)60052-0 First citation in articleCrossrefGoogle Scholar

  • Mueller, R. S. (2019). Report on the investigation into Russian interference in the 2016 presidential election. US Department of Justice. https://www.justice.gov/archives/sco/file/1373816/download First citation in articleGoogle Scholar

  • *Nygren, T., Guath, M., Axelsson, C.-A. W., & Frau-Meigs, D. (2021). Combatting visual fake news with a professional fact-checking tool in education in France, Romania, Spain and Sweden. Information, 12(5). https://doi.org/10.3390/info12050201 First citation in articleCrossrefGoogle Scholar

  • *Nyhan, B., & Reifler, J. (2019). The roles of information deficits and identity threat in the prevalence of misperceptions. Journal of Elections, Public Opinion and Parties, 29(2), 222–244. https://doi.org/10.1080/17457289.2018.1465061 First citation in articleCrossrefGoogle Scholar

  • *Pennycook, G., McPhetres, J., Zhang, Y., Lu, J. G., & Rand, D. G. (2020). Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy-nudge intervention. Psychological Science, 31(7), 770–780. https://doi.org/10.1177/0956797620939054

  • Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388–402. https://doi.org/10.1016/j.tics.2021.02.007

  • Perry, S. P., Murphy, M. C., & Dovidio, J. F. (2015). Modern prejudice: Subtle, but unconscious? The role of Bias Awareness in Whites’ perceptions of personal and others’ biases. Journal of Experimental Social Psychology, 61, 64–78. https://doi.org/10.1016/j.jesp.2015.06.007

  • Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 19, pp. 123–205). Academic Press. https://doi.org/10.1016/S0065-2601(08)60214-2

  • Pfau, M., Compton, J., Parker, K. A., An, C., Wittenberg, E. M., Ferguson, M., Horton, H., & Malyshev, Y. (2006). The conundrum of the timing of counterarguing effects in resistance: Strategies to boost the persistence of counterarguing output. Communication Quarterly, 54(2), 143–156.

  • *Reavis, R. D., Ebbs, J. B., Onunkwo, A. K., & Sage, L. M. (2017). A self-affirmation exercise does not improve intentions to vaccinate among parents with negative vaccine attitudes (and may decrease intentions to vaccinate). PLoS One, 12(7), Article e0181368. https://doi.org/10.1371/journal.pone.0181368

  • *Roozenbeek, J., & van der Linden, S. (2019). The fake news game: Actively inoculating against the risk of misinformation. Journal of Risk Research, 22(5), 570–580. https://doi.org/10.1080/13669877.2018.1443491

  • *Rustan, A. (2020). Communication through Indonesian social media: Avoiding hate speeches, intolerance, and hoaxes. Journal of Social Studies Education Research, 11(2), 174–185.

  • *Salvatore, J., & Morton, T. A. (2021). Evaluations of science are robustly biased by identity concerns. Group Processes & Intergroup Relations, 24(4), 568–582. https://doi.org/10.1177/1368430221996818

  • Shu, K., Wang, S., & Liu, H. (2018). Understanding user profiles on social media for fake news detection. 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), 430–435.

  • Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. Yale University Press.

  • *Thornhill, C., Meeus, Q., Peperkamp, J., & Berendt, B. (2019). A digital nudge to counter confirmation bias. Frontiers in Big Data, 2(11). https://doi.org/10.3389/fdata.2019.00011

  • Treen, K. M. D., Williams, H. T. P., & O’Neill, S. J. (2020). Online misinformation about climate change. Wiley Interdisciplinary Reviews: Climate Change, 11(5). https://doi.org/10.1002/wcc.665

  • *Tseng, A. S., Bonilla, S., & MacPherson, A. (2021). Fighting “bad science” in the information age: The effects of an intervention to stimulate evaluation and critique of false scientific claims. Journal of Research in Science Teaching, 58(8), 1152–1178. https://doi.org/10.1002/tea.21696

  • *Tsipursky, G., Votta, F., & Roose, K. M. (2018). Fighting fake news and post-truth politics with behavioral science: The pro-truth pledge. Behavior and Social Issues, 27, 47–70. https://doi.org/10.5210/bsi.v27i0.9127

  • *Tully, M., Vraga, E. K., & Bode, L. (2020). Designing and testing news literacy messages for social media. Mass Communication and Society, 23(1), 22–46. https://doi.org/10.1080/15205436.2019.1604970

  • van Bavel, J. J., Harris, E. A., Pärnamets, P., Rathje, S., Doell, K. C., & Tucker, J. A. (2021). Political psychology in the digital (mis)information age: A model of news belief and sharing. Social Issues and Policy Review, 15(1), 84–113. https://doi.org/10.1111/sipr.12077

  • van der Linden, S. (2022). Misinformation: Susceptibility, spread, and interventions to immunize the public. Nature Medicine, 28(3), 460–467. https://doi.org/10.1038/s41591-022-01713-6

  • *van der Linden, S., Leiserowitz, A., Rosenthal, S., & Maibach, E. (2017). Inoculating the public against misinformation about climate change. Global Challenges (Hoboken, NJ), 1(2). https://doi.org/10.1002/gch2.201600008

  • *Vraga, E. K., Bode, L., & Tully, M. (2021). The effects of a news literacy video and real-time corrections to video misinformation related to sunscreen and skin cancer. Health Communication. https://doi.org/10.1080/10410236.2021.1910165

  • Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking (Council of Europe report DGI(2017)09). Council of Europe. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c

  • Wassermann, H., & Madrid-Morales, D. (2022). Misinformation in the Global South. Wiley.

  • *Williams, M. N., & Bond, C. M. C. (2020). A preregistered replication of “Inoculating the public against misinformation about climate change”. Journal of Environmental Psychology, 70. https://doi.org/10.1016/j.jenvp.2020.101456

  • *Wojdynski, B. W., Binford, M. T., & Jefferson, B. N. (2019). Looks real, or really fake? Warnings, visual attention and detection of false news articles. Open Information Science, 3(1), 166–180. https://doi.org/10.1515/opis-2019-0012

  • *Xiao, X., & Su, Y. (2021). Integrating reasoned action approach and message sidedness in the era of misinformation: The case of HPV vaccination promotion. Journal of Health Communication, 26(6), 371–380. https://doi.org/10.1080/10810730.2021.1950873

  • *Yang, S., Lee, J. W., Kim, H.-J., Kang, M., Chong, E., & Kim, E.-M. (2021). Can an online educational game contribute to developing information literate citizens? Computers & Education, 161. https://doi.org/10.1016/j.compedu.2020.104057

  • *Yousuf, H., van der Linden, S., Bredius, L., Ted van Essen, G. A., Sweep, G., Preminger, Z., van Gorp, E., Scherder, E., Narula, J., & Hofstra, L. (2021). A media intervention applying debunking versus non-debunking content to combat vaccine misinformation in elderly in the Netherlands: A digital randomised trial. EClinicalMedicine, 35. https://doi.org/10.1016/j.eclinm.2021.100881

  • *Zerback, T., Töpfl, F., & Knöpfle, M. (2021). The disconcerting potential of online misinformation: Persuasive effects of astroturfing comments and three strategies for inoculation against them. New Media & Society, 23(5), 1080–1098. https://doi.org/10.1177/1461444820908530

  • Ziemer, C.-T., & Rothmund, T. (2023). Psychological Underpinnings of Disinformation Countermeasures: A systematic scoping review [Data, materials]. https://osf.io/6sf9z