Editorial

On the Death of Implicit Association Tests (IATs)

Published Online: https://doi.org/10.1027/1015-5759/a000778

“The IAT Is Dead, Long Live the IAT […]” is the title of an article that might reflect the impression of many researchers who are unsure about whether IATs¹ are useful measures (Jost, 2019, p. 10). A Web of Science search on the number of IAT-related publications shows that although IAT research has increased over the years, there have been several ups and downs (Figure 1). This editorial provides insights into some issues that may partly explain this pattern and encourages researchers to use in-depth analyses to help identify the conditions under which IATs may be useful.

Figure 1. IAT-related publications per year (1980–2022).

IATs

IATs (Greenwald et al., 1998) have attracted an enormous amount of research interest. They were designed to assess automatic implicit associations between two target concepts and an attribute dimension by using participants’ reaction times (and, depending on the specific scoring algorithm, also errors); the race IAT, for example, addresses implicit racial preferences (Greenwald et al., 1998). IATs ask participants to sort stimuli that appear one at a time in the middle of the computer screen into four categories: (a) two contrasted target concept categories that form the target dimension (e.g., in the race IAT: White people vs. Black people) and (b) two contrasted attribute categories that form the attribute dimension (e.g., in the race IAT: Good vs. Bad).

IATs consist of several blocks (usually five or seven). Using the race IAT with seven blocks as an example, the procedure can be explained as follows. Blocks 1, 2, and 5 are the so-called single or practice blocks, which introduce the target or attribute discrimination. In these blocks, the category labels of either the target dimension or the attribute dimension are presented in the upper left and upper right corners of the display screen.

Participants are instructed to respond to exemplars of each category by pressing a key on the same side as the label. Blocks 3 and 4 as well as Blocks 6 and 7 are the so-called combined blocks, in which the attribute discrimination is paired with the target discrimination (in these blocks, participants must assign stimuli from all four categories). Thus, on the race IAT, in Blocks 3 and 4 (compatible phase), participants must respond to WHITE FACES and GOOD WORDS with one key and to BLACK FACES and BAD WORDS with the other key. In Blocks 6 and 7 (incompatible phase), participants must respond to BLACK FACES and GOOD WORDS with one key and to WHITE FACES and BAD WORDS with the other key.²
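To make the block structure concrete, the following sketch encodes the seven-block race IAT described above as a simple data structure (Python is used here purely for illustration; the side assignments follow the description above, but real implementations differ in details such as trial counts).

```python
# Hypothetical encoding of the seven-block race IAT described above.
# Side assignments are arbitrary; trial counts and other details vary
# across implementations and are therefore omitted.
RACE_IAT_BLOCKS = [
    {"block": 1, "kind": "practice",               "left": ["White people"],         "right": ["Black people"]},
    {"block": 2, "kind": "practice",               "left": ["Good"],                 "right": ["Bad"]},
    {"block": 3, "kind": "combined, compatible",   "left": ["White people", "Good"], "right": ["Black people", "Bad"]},
    {"block": 4, "kind": "combined, compatible",   "left": ["White people", "Good"], "right": ["Black people", "Bad"]},
    {"block": 5, "kind": "practice, reversed",     "left": ["Black people"],         "right": ["White people"]},
    {"block": 6, "kind": "combined, incompatible", "left": ["Black people", "Good"], "right": ["White people", "Bad"]},
    {"block": 7, "kind": "combined, incompatible", "left": ["Black people", "Good"], "right": ["White people", "Bad"]},
]
```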

The rationale behind IATs is that the sorting task should be easier, and thus completed more quickly, if the two concepts that share one response key are strongly associated. If the two concepts are only weakly associated, sorting them with one key should be more difficult and therefore slower. Traditionally, IAT effects are computed with one of 10 scoring algorithms (Greenwald et al., 2003a, 2003b) that, in the case of the so-called D measures, for example, represent the difference in reaction times between the incompatible and the compatible phase divided by their overall standard deviation (see Röhner & Thoss, 2019, for a comprehensive overview and a tutorial). IAT effects are used to indicate the strength of the association between the concepts (e.g., preferences for Whites over Blacks in the race IAT).
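As a concrete illustration, the following minimal Python sketch computes a simplified D-type effect from the latencies of the two combined phases. It deliberately omits the preprocessing steps of the published algorithms (e.g., trial exclusions and error penalties; Greenwald et al., 2003a; see Röhner & Thoss, 2019, for a faithful implementation in R), so it should be read as a didactic simplification rather than a full implementation.

```python
import numpy as np

def d_effect(compatible_rts, incompatible_rts):
    """Simplified D-type IAT effect: the mean latency difference between
    the incompatible and the compatible phase, divided by the pooled
    ("inclusive") SD of all trials from both phases. Preprocessing steps
    of the published algorithms are omitted here."""
    pooled_sd = np.concatenate([compatible_rts, incompatible_rts]).std(ddof=1)
    return (np.mean(incompatible_rts) - np.mean(compatible_rts)) / pooled_sd

# Hypothetical latencies in ms: slower responding in the incompatible
# phase yields a positive effect (interpreted, e.g., as a preference
# for Whites over Blacks on the race IAT).
rng = np.random.default_rng(1)
compatible = rng.normal(700, 120, size=60)
incompatible = rng.normal(850, 150, size=60)
print(round(d_effect(compatible, incompatible), 2))
```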

What Killed IATs? Potentially Lethal Diseases of IATs

Huge interest and enthusiasm for using IATs emerged after their introduction. One reason may be that researchers hoped to be able to overcome some of the limitations of explicit measures (e.g., response biases or biases due to a lack of self-insight; e.g., Gregg et al., 2013) by using implicit measures and specifically IATs. However, criticism of the central characteristics of IATs may explain why researchers sometimes refrain from using them or are at least unsure about whether they should.

Before we introduce the potentially lethal diseases of IATs, we would like to note that although we follow the previous literature in speaking about the validity or reliability of IATs, IATs represent, in principle, a method (like forced-choice tests) and not a single measure. It is not possible to specify the validity or reliability of the method as such; one can do so only for a specific IAT. Thus, the results of studies may reflect general trends, but each IAT nevertheless has to be considered as an individual case (e.g., self-esteem IATs may have quality criteria that differ from those of race IATs; see, e.g., Kurdi et al., 2021; Schimmack, 2021).

Disease Number One: Difficulties in Defining the Implicitness of IATs

What is implicit in IATs is still a subject of discussion among researchers (e.g., Gawronski et al., 2022). For example, implicit measures as assessed with IATs have sometimes been defined by their introspective inaccessibility and, relatedly, by being caused by processes people are unaware of (Greenwald & Banaji, 1995). This assumption was underpinned mainly by meta-analyses showing weak associations between implicit and explicit measures (e.g., Cameron et al., 2012; Greenwald et al., 2009). However, many reasons other than unawareness can contribute to these weak associations (Gawronski et al., 2007). Moreover, recent research has shown that people are somewhat aware of their own implicit attitudes (Hahn et al., 2014) and can predict their own IAT effects with remarkable accuracy across a broad range of attitude targets, and with far more accuracy than third-party observers can (Morris & Kurdi, 2022). This finding speaks against the assumption that people are unaware of their implicit attitudes.

Thus, whether IAT effects are introspectively inaccessible and caused by processes people are unaware of is questionable. In light of these findings, this definition (Greenwald & Banaji, 1995) should probably be reworked. Similarly, the claim that IAT effects result from uncontrollable processes (for an introduction, see De Houwer, 2006) has been empirically disproved by research showing that participants can fake on IATs (e.g., Röhner et al., 2022). This lack of clarity about what is implicit in IATs can be viewed as the first disease that has contributed to the death of IATs.

Disease Number Two: Doubts Concerning the Validity of IATs

Several meta-analyses have provided evidence of the criterion-related validity of IATs in predicting behavior (e.g., Forscher et al., 2019; Greenwald et al., 2009). Meta-analyses have also demonstrated that interventions that produced immediate changes in IAT effects had no durable effects persisting beyond a couple of days (Röhner & Lai, 2021), and the changes were often relatively weak (|ds| < .30; Forscher et al., 2019). Thus, research has indicated that IAT effects seem quite robust, at least with respect to long-term change.

Nevertheless, researchers have asked what IATs actually measure. Changes in IAT effects do not necessarily translate into changes in behavior (Forscher et al., 2019), a result indicating that IATs do not exclusively measure implicit associations. This finding calls into question the construct-related validity of IATs. Although the construct-related validity of IATs for measuring automatic associations has been documented in a number of studies (e.g., Bar-Anan & Nosek, 2014; Greenwald et al., 1998), confounds in IAT effects have also been revealed, such as task-switching costs (e.g., Mierke & Klauer, 2001), figure-ground asymmetries (e.g., Rothermund & Wentura, 2001), and items’ cross-category associations (e.g., Steffens & Plewe, 2001). Such method-specific variance is combined with construct-specific variance in traditional IAT effects, although the two types of variance are based on different processes (e.g., Klauer et al., 2007). Further, IATs and explicit measures (e.g., self-reports) are highly correlated (Schimmack, 2021) and can even be used as indicators of the same latent construct, a fact that seems to contradict the claim that IATs have discriminant validity with respect to explicit measures. Some authors argue that these results do not contradict the discriminant validity of IATs per se but rather support well-established theoretical approaches (e.g., De Houwer et al., 2020) suggesting that implicit and explicit measures reflect overlapping constructs (e.g., Kurdi et al., 2021). Nevertheless, the related concerns about the validity of IATs may have worsened their health.

Disease Number Three: Doubts Concerning the Reliability of IATs

The reliability of IATs has also been questioned (e.g., Schimmack, 2021). It is important to note that the reliability of IATs is not low per se. First, it depends on the type of reliability under investigation: Greenwald and Lai (2020) reported meta-analytic estimates of high internal consistency (α = .80) but only moderate test-retest reliability (r = .50). Second, test-retest reliability is not generally low but varies greatly across IATs (e.g., it is larger for IATs measuring political attitudes, r = .70, Greenwald et al., 2009, than for IATs measuring stereotypes, r = .50, Greenwald et al., 2020). Third, research circumstances affect the reliability of IATs (e.g., Greenwald et al., 2022). However, considering the moderate test-retest reliabilities of IATs, it would be inadequate to use a single IAT observation as an accurate diagnostic of an individual’s implicit association. This restriction on the use of IATs may also have contributed to their death.
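To make the distinction between these types of reliability concrete: internal consistency can be estimated by scoring two halves of the same session (e.g., odd vs. even trials) and correlating the half-scores across participants, whereas test-retest reliability correlates scores from separate sessions. Below is a minimal sketch using simulated data and the simplified scoring from above; all numbers are illustrative, not empirical estimates.

```python
import numpy as np

def d(compat, incompat):
    # Simplified D-type effect (see the earlier sketch).
    pooled = np.concatenate([compat, incompat]).std(ddof=1)
    return (incompat.mean() - compat.mean()) / pooled

rng = np.random.default_rng(2)
n_participants, n_trials = 100, 60
odd_scores, even_scores = [], []
for shift in rng.normal(150, 60, n_participants):  # stable per-person effect (ms)
    compat = rng.normal(700, 120, n_trials)
    incompat = rng.normal(700 + shift, 120, n_trials)
    odd_scores.append(d(compat[::2], incompat[::2]))     # odd-trial half
    even_scores.append(d(compat[1::2], incompat[1::2]))  # even-trial half

r_half = np.corrcoef(odd_scores, even_scores)[0, 1]
# Spearman-Brown correction of the split-half correlation to full length:
print(round(2 * r_half / (1 + r_half), 2))
```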

Disease Number Four: Doubts Concerning the Noncontrollability/Nonfakeability of IATs

The specific kind of measurement used in IATs inspired the suggestion that IATs are uncontrollable and thus immune to faking (e.g., Greenwald et al., 1998). This assumption has failed to withstand empirical evidence that IATs are actually fakeable (e.g., Röhner, 2014). Although fakeability is a problem that affects many measures and is not a unique disadvantage of IATs, it calls into question the interpretation of IAT effects as indicators of uncontrollable or unconscious processes (e.g., Gawronski et al., 2022).

Early research on the fakeability of IATs assumed that although faking might be possible, it could easily be controlled because people were thought to apply only a single, quite obvious faking strategy (responding more slowly in a certain IAT phase; Cvencek et al., 2010). However, recent research has demonstrated that fakers apply a variety of faking strategies (slowing down, accelerating, increasing errors, and reducing errors) in different IAT phases (compatible and incompatible). Therefore, the detection of faking is more complex than previously assumed (e.g., Röhner et al., 2023), and faking might impair the validity of IATs. The disappointment about the controllability and fakeability of IATs, along with the abovementioned diseases, may have helped kill IATs.

Attempts to Resurrect IATs

Several attempts have been made to improve IAT research and thus to resurrect IATs. Two general approaches are replacing IATs with other measures and following best practice recommendations when using them.

It might be tempting to think of replacing IATs with other measures as an easy way to handle their potential problems. Alternatives and variations of IATs have been developed (e.g., Single Category Implicit Association Tests, SC-IATs, Karpinski & Steinman, 2006; Go/No-Go Association Tasks, GNATs, Nosek & Banaji, 2001). However, IATs have not only repeatedly outperformed these alternatives, but the measures also cannot be used interchangeably (e.g., Bar-Anan & Nosek, 2014).

Several problems related to IATs can be avoided when best practice recommendations for their use are followed (Greenwald et al., 2022). These recommendations cover the construction of IATs, their administration, and the reporting of IAT-related research findings. Regarding construction, it should be self-evident that IATs are built by experts following best practices (e.g., selecting categories of comparable familiarity). Regarding administration, several recommendations should likewise be followed (e.g., counterbalancing the combined IAT phases). Regarding reporting, the recommendations may help increase the interpretability and understanding of research findings (e.g., correct reporting of scoring procedures).

Besides these general attempts to strengthen IATs, specific attempts related to the diseases described above have also been made.

Resurrection Number One: Tackle Definition Issues

There is a lively debate about what IATs do and do not measure and about what exactly is meant by “implicit” (e.g., implicit in the sense of something uncontrollable vs. something unconscious vs. both; e.g., Gawronski et al., 2022). Clarifying these definitions and using them strictly can improve research and the understanding of research results. For example, because evidence contradicts the assumptions of uncontrollability and unconsciousness, the term “unintentionally revealed associations” may better capture the current state of knowledge (Morris & Kurdi, 2022). Working on clear definitions might be the first way to help resurrect IATs.

Resurrection Number Two: Tackle Validity Issues

Using traditional scoring algorithms (Greenwald et al., 2003a, 2003b) to compute IAT effects intermingles different aspects of participants’ performance because all cognitive processes associated with this performance are collapsed into a single score (overall IAT performance; e.g., the D2 score). The complexity underlying participants’ performance should instead be reflected in scoring approaches that allow the different processes to be decomposed. Several models have been suggested to disentangle the underlying processes in IATs; for example, the Quadruple process model (Quad model; Conrey et al., 2005) and the diffusion model (e.g., Klauer et al., 2007; Röhner & Ewers, 2016b) have been applied to IATs.³ Disentangling IAT-related processes may help improve the validity of IATs, thereby helping to cure them further.
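To illustrate what such a decomposition can look like, the following sketch implements the closed-form EZ-diffusion equations (Wagenmakers et al., 2007), which recover a drift rate, a boundary separation, and a nondecision time from the accuracy and the mean and variance of correct response times in one IAT phase. This is a didactic stand-in: the cited IAT studies used more elaborate fitting procedures (e.g., fast-dm; Röhner & Ewers, 2016a), and the input values below are hypothetical.

```python
import math

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """Closed-form EZ-diffusion equations: recover drift rate v, boundary
    separation a, and nondecision time t0 from the proportion of correct
    responses and the variance and mean of correct response times (in
    seconds). Assumes 0.5 < prop_correct < 1 (edge corrections omitted)."""
    L = math.log(prop_correct / (1 - prop_correct))  # logit of accuracy
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_var
    v = math.copysign(s * x**0.25, prop_correct - 0.5)   # drift rate
    a = s**2 * L / v                                     # boundary separation
    y = -v * a / s**2
    mean_decision_time = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))
    t0 = rt_mean - mean_decision_time                    # nondecision time
    return v, a, t0

# Hypothetical summary statistics for one combined IAT phase:
v, a, t0 = ez_diffusion(prop_correct=0.94, rt_var=0.031, rt_mean=0.76)
print(f"drift = {v:.3f}, boundary = {a:.3f}, nondecision = {t0:.3f} s")
```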

Resurrection Number Three: Tackle Reliability Issues

With respect to the moderate test-retest reliability, it has been suggested that IATs be repeated and the results averaged (a procedure that parallels measuring blood pressure; Greenwald et al., 2022). Research has demonstrated that this procedure increases the test-retest reliability to r = .89 (Greenwald et al., 2020). However, repeated measurement increases people’s ability to fake on IATs (e.g., Röhner et al., 2011) as well as the risk of dropout. Also, recent research using an IAT modeling approach based on the geometric similarity representation (GSR) model has demonstrated that unreliability in IATs is almost entirely attributable to the scoring of IAT effects (Kvam et al., 2022) and that using the GSR model increases the test-retest reliability of IATs from r = .80 to r = .90. Using such approaches to increase the reliability of IATs might help revitalize them.
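The classical psychometric rationale for this repeat-and-average recommendation is the Spearman-Brown prophecy formula, sketched below with illustrative numbers (these are not the figures of the cited studies, and the formula assumes parallel administrations).

```python
def spearman_brown(r_single, k):
    """Reliability of the average of k parallel administrations,
    given a single-administration reliability of r_single."""
    return k * r_single / (1 + (k - 1) * r_single)

# Starting from a single-administration test-retest reliability of .50,
# averaging 2, 4, or 8 administrations yields about .67, .80, and .89.
for k in (1, 2, 4, 8):
    print(k, round(spearman_brown(0.50, k), 2))
```

Under this formula, roughly eight parallel administrations would be needed to move a reliability of .50 to the r = .89 reported by Greenwald et al. (2020), although the empirical aggregation procedure need not match these idealized assumptions.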

Resurrection Number Four: Tackle Noncontrollability/Nonfakeability Issues

With respect to the fakeability of IATs, researchers have developed several faking indices (e.g., Röhner et al., 2023). However, research using machine learning has demonstrated that, for detecting faking, these indices were outperformed by process parameters disentangled with diffusion models, such as participants’ speed-accuracy tradeoff (Röhner et al., 2022). These results are consistent with theory and earlier empirical findings showing that faking on IATs involves deliberately adapting speed and accuracy (e.g., Röhner et al., 2013). Tutorials and software have rendered the computation of such in-depth analyses much easier (e.g., Röhner & Ewers, 2016a; Röhner & Thoss, 2018). Developing and refining such methods for detecting faking may improve the health of IATs.
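As a toy illustration of such a machine-learning approach, the following sketch trains a classifier to separate honest from faked IAT protocols on the basis of simulated diffusion-model parameters. The feature set, the simulated group differences, and the choice of classifier are hypothetical stand-ins and do not reproduce the design of Röhner et al. (2022).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # participants per group

# Simulated features per participant: drift rate and boundary separation
# in the incompatible phase. Fakers are assumed (hypothetically) to trade
# speed for accuracy: lower drift, wider boundaries.
honest = rng.normal(loc=[2.0, 1.0], scale=0.3, size=(n, 2))
faking = rng.normal(loc=[1.2, 1.6], scale=0.3, size=(n, 2))
X = np.vstack([honest, faking])
y = np.r_[np.zeros(n), np.ones(n)]  # 0 = honest, 1 = faking

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```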

Conclusion

IATs have been suggested as measures of implicit associations, but they are not the solution to all measurement problems: They have problems too. Several issues need further clarification (e.g., the definition of implicitness and the disentangling of the processes that are intermingled in traditional IAT effects). With this editorial, we have tried to give a comprehensive overview of the most prominent criticisms IATs face and to complement them with suggestions for improving IAT-related research. On the basis of the current state of empirical findings, it would be a mistake to declare IATs a cure-all, but it would also be a mistake to hastily abandon them without understanding all the processes and factors that might explain the contradictory findings on IAT characteristics, such as quality criteria. In this sense, we want to repeat: “The IAT Is Dead, Long Live the IAT […]” (Jost, 2019, p. 10), and we encourage researchers to submit studies that further uncover the processes contributing to IAT effects, investigate the quality criteria of IATs, and help define the circumstances under which IATs can provide useful insights.

¹We agree with Schnabel and colleagues’ (2008) suggestion to refer to IATs in the plural to indicate that the term IAT refers to different applications of one general procedure rather than to one specific test.

²The presentation of the combined phases can be counterbalanced across participants. Hence, researchers can decide whether participants work on the compatible phase first and the incompatible phase afterward (the sequence explained above) or whether the phases are presented the other way around.

³A detailed description of these models is beyond the scope of this editorial.

References

  • Bar-Anan, Y., & Nosek, B. A. (2014). A comparative investigation of seven indirect attitude measures. Behavior Research Methods, 46, 668–688. https://doi.org/10.3758/s13428-013-0410-6

  • Cameron, C. D., Brown-Iannuzzi, J. L., & Payne, B. K. (2012). Sequential priming measures of implicit social cognition: A meta-analysis of associations with behavior and explicit attitudes. Personality and Social Psychology Review, 16, 330–350. https://doi.org/10.1177/1088868312440047

  • Conrey, F. R., Sherman, J. W., Gawronski, B., Hugenberg, K., & Groom, C. J. (2005). Separating multiple processes in implicit social cognition: The Quad model of implicit task performance. Journal of Personality and Social Psychology, 89, 469–487. https://doi.org/10.1037/0022-3514.89.4.469

  • Cvencek, D., Greenwald, A. G., Brown, A. S., Gray, N. S., & Snowden, R. J. (2010). Faking of the Implicit Association Test is statistically detectable and partly correctable. Basic and Applied Social Psychology, 32, 302–314. https://doi.org/10.1080/01973533.2010.519236

  • De Houwer, J. (2006). What are implicit measures and why are we using them? In R. W. Wiers & A. W. Stacy (Eds.), The handbook of implicit cognition and addiction (pp. 11–28). Sage.

  • De Houwer, J., Teige-Mocigemba, S., Spruyt, A., & Moors, A. (2009). Implicit measures: A normative analysis and review. Psychological Bulletin, 135, 347–368. https://doi.org/10.1037/a0014211

  • De Houwer, J., Van Dessel, P., & Moran, T. (2020). Attitudes beyond associations: On the role of propositional representations in stimulus evaluation. In B. Gawronski (Ed.), Advances in experimental social psychology (Vol. 61, pp. 127–184). Academic Press. https://doi.org/10.1016/bs.aesp.2019.09.004

  • Forscher, P. S., Lai, C. K., Axt, J. R., Ebersole, C. R., Herman, M., Devine, P. G., & Nosek, B. A. (2019). A meta-analysis of procedures to change implicit measures. Journal of Personality and Social Psychology, 117, 522–559. https://doi.org/10.1037/pspa0000160

  • Gawronski, B., LeBel, E. P., & Peters, K. R. (2007). What do implicit measures tell us? Scrutinizing the validity of three common assumptions. Perspectives on Psychological Science, 2, 181–193. https://doi.org/10.1111/j.1745-6916.2007.00036.x

  • Gawronski, B., Ledgerwood, A., & Eastwick, P. W. (2022). Implicit bias ≠ bias on implicit measures. Psychological Inquiry, 33, 139–155. https://doi.org/10.1080/1047840X.2022.2106750

  • Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102, 4–27. https://doi.org/10.1037/0033-295X.102.1.4

  • Greenwald, A. G., Brendl, M., Cai, H., Cvencek, D., Dovidio, J. F., Friese, M., Hahn, A., Hehman, E., Hofmann, W., Hughes, S., Hussey, I., Jordan, C., Jost, J., Kirby, T. A., Lai, C. K., Lang, J. W. B., Lindgren, K. P., Maison, D., Ostafin, B. D., … Wiers, R. W. (2020). The Implicit Association Test at age 20: What is known and what is not known about implicit bias. https://doi.org/10.31234/osf.io/bf97c

  • Greenwald, A. G., Brendl, M., Cai, H., Cvencek, D., Dovidio, J. F., Friese, M., Hahn, A., Hehman, E., Hofmann, W., Hughes, S., Hussey, I., Jordan, C., Kirby, T. A., Lai, C. K., Lang, J. W. B., Lindgren, K. P., Maison, D., Ostafin, B. D., Rae, J. R., … Wiers, R. W. (2022). Best research practices for using the Implicit Association Test. Behavior Research Methods, 54, 1161–1180. https://doi.org/10.3758/s13428-021-01624-3

  • Greenwald, A. G., & Lai, C. K. (2020). Implicit social cognition. Annual Review of Psychology, 71, 419–445. https://doi.org/10.1146/annurev-psych-010419-050837

  • Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74, 1464–1480. https://doi.org/10.1037/0022-3514.74.6.1464

  • Greenwald, A. G., Nosek, B. A., & Banaji, M. R. (2003a). Understanding and using the Implicit Association Test: I. An improved scoring algorithm. Journal of Personality and Social Psychology, 85, 197–216. https://doi.org/10.1037/0022-3514.85.2.197

  • Greenwald, A. G., Nosek, B. A., & Banaji, M. R. (2003b). “Understanding and using the Implicit Association Test: I. An improved scoring algorithm”: Correction to Greenwald et al. (2003). Journal of Personality and Social Psychology, 85, 481. https://doi.org/10.1037/h0087889

  • Greenwald, A. G., Poehlman, T. A., Uhlmann, E. L., & Banaji, M. R. (2009). Understanding and using the Implicit Association Test: III. Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97, 17–41. https://doi.org/10.1037/a0015575

  • Gregg, A. P., Klymowsky, J., Owens, D., & Perryman, A. (2013). Let their fingers do the talking? Using the Implicit Association Test in market research. International Journal of Market Research, 55, 487–503. https://doi.org/10.2501/IJMR-2013-013

  • Hahn, A., Judd, C. M., Hirsh, H. K., & Blair, I. V. (2014). Awareness of implicit attitudes. Journal of Experimental Psychology: General, 143, 1369–1392. https://doi.org/10.1037/a0035028

  • Jost, J. T. (2019). The IAT is dead, long live the IAT: Context-sensitive measures of implicit attitudes are indispensable to social and political psychology. Current Directions in Psychological Science, 28, 10–19. https://doi.org/10.1177/0963721418797309

  • Karpinski, A., & Steinman, R. B. (2006). The Single Category Implicit Association Test as a measure of implicit social cognition. Journal of Personality and Social Psychology, 91, 16–32. https://doi.org/10.1037/0022-3514.91.1.16

  • Klauer, K. C., Voss, A., Schmitz, F., & Teige-Mocigemba, S. (2007). Process components of the Implicit Association Test: A diffusion-model analysis. Journal of Personality and Social Psychology, 93, 353–368. https://doi.org/10.1037/0022-3514.93.3.353

  • Kurdi, B., Ratliff, K. A., & Cunningham, W. A. (2021). Can the Implicit Association Test serve as a valid measure of automatic cognition? A response to Schimmack (2021). Perspectives on Psychological Science, 16, 422–434. https://doi.org/10.1177/1745691620904080

  • Kvam, P. D., Smith, C., Irving, L. H., & Sokratous, K. (2022). Improving the reliability and validity of the IAT with a dynamic model driven by associations. https://doi.org/10.31234/osf.io/ke7cp

  • Mierke, J., & Klauer, K. C. (2001). Implicit association measurement with the IAT: Evidence for effects of executive control processes. Experimental Psychology, 48, 107–122. https://doi.org/10.1026//0949-3946.48.2.107

  • Morris, A., & Kurdi, B. (2022). Awareness of implicit attitudes: Large-scale investigations of mechanism and scope. https://doi.org/10.31234/osf.io/dmjfq

  • Nosek, B. A., & Banaji, M. R. (2001). The go/no-go association task. Social Cognition, 19, 625–666. https://doi.org/10.1521/soco.19.6.625.20886

  • Röhner, J. (2014). Faking the Implicit Association Test (IAT): Predictors, processes, and detection (Doctoral dissertation). TU Chemnitz.

  • Röhner, J., & Ewers, T. (2016a). How to analyze (faked) Implicit Association Test data by applying diffusion model analyses with the fast-dm software: A companion to Röhner & Ewers (2016). The Quantitative Methods for Psychology, 12, 220–231. https://doi.org/10.20982/tqmp.12.3.p220

  • Röhner, J., & Ewers, T. (2016b). Trying to separate the wheat from the chaff: Construct- and faking-related variance on the Implicit Association Test (IAT). Behavior Research Methods, 48, 243–258. https://doi.org/10.3758/s13428-015-0568-1

  • Röhner, J., Holden, R. R., & Schütz, A. (2023). IAT faking indices revisited: Aspects of replicability and differential validity. Behavior Research Methods, 55, 670–693. https://doi.org/10.3758/s13428-022-01845-0

  • Röhner, J., & Lai, C. K. (2021). A diffusion model approach for understanding the impact of 17 interventions on the Race Implicit Association Test. Personality and Social Psychology Bulletin, 47, 1374–1389. https://doi.org/10.1177/0146167220974489

  • Röhner, J., Schröder-Abé, M., & Schütz, A. (2011). Exaggeration is harder than understatement, but practice makes perfect! Experimental Psychology, 58, 464–472. https://doi.org/10.1027/1618-3169/a000114

  • Röhner, J., Schröder-Abé, M., & Schütz, A. (2013). What do fakers actually do to fake the IAT? An investigation of faking strategies under different faking conditions. Journal of Research in Personality, 47, 330–338. https://doi.org/10.1016/j.jrp.2013.02.009

  • Röhner, J., & Thoss, P. (2018). EZ: An easy way to conduct a more fine-grained analysis of faked and nonfaked Implicit Association Test (IAT) data. The Quantitative Methods for Psychology, 14, 17–37. https://doi.org/10.20982/tqmp.14.1.p017

  • Röhner, J., & Thoss, P. J. (2019). A tutorial on how to compute traditional IAT effects with R. The Quantitative Methods for Psychology, 15, 134–147.

  • Röhner, J., Thoss, P. J., & Schütz, A. (2022). Lying on the dissection table: Anatomizing faked responses. Behavior Research Methods, 54, 2878–2904. https://doi.org/10.3758/s13428-021-01770-8

  • Rothermund, K., & Wentura, D. (2001). Figure-ground asymmetries in the Implicit Association Test (IAT). Zeitschrift für Experimentelle Psychologie, 48, 94–106. https://doi.org/10.1026/0949-3946.48.2.94

  • Schimmack, U. (2021). The Implicit Association Test: A method in search of a construct. Perspectives on Psychological Science, 16, 396–414. https://doi.org/10.1177/1745691619863798

  • Schnabel, K., Asendorpf, J. B., & Greenwald, A. G. (2008). Using Implicit Association Tests for the assessment of implicit personality self-concept. In G. J. Boyle, G. Matthews, & D. H. Saklofske (Eds.), The SAGE handbook of personality theory and assessment: Vol. 2. Personality measurement and testing (pp. 508–528). Sage.

  • Steffens, M. C., & Plewe, I. (2001). Items’ cross-category associations as a confounding factor in the Implicit Association Test. Zeitschrift für Experimentelle Psychologie, 48, 123–134. https://doi.org/10.1026/0949-3946.48.2.123