
Bending Our Ethics Code

Avoidable Deception and Its Justification in Psychological Research

Published Online: https://doi.org/10.1027/1016-9040/a000431

Abstract

Deception of research participants has long been and remains a hot-button issue in the behavioral sciences. At the same time, the field of psychology is fortunate to have an ethics code to rely on in determining whether and how to use and report on deception of participants. Despite ongoing normative controversies, the smallest common denominator among psychologists is that deception ought to be a last resort – to be used only when there is no other defensible way to study a question or phenomenon. Going beyond previous normative discussions or inquiries into the mere prevalence of deception, we ask the fundamental question of whether common practice is compatible with this interpretation of our field’s ethical standards. Findings from an empirical literature review – focusing on the feasibility of nondeceptive alternative procedures and the presence of explicit justifications for the use of deception – demonstrate that there is a notable gap between the last resort interpretation of our ethical standards and common practice in psychological research. The findings are discussed with the aim of identifying viable ways in which researchers, journal editors, and the scientific associations crafting our ethics codes may narrow this gap.

For any professional group or individual, ethical standards and codes can be an important framework and guidance. A corresponding need among psychologists was already recognized by the American Psychological Association (APA) shortly after World War II. Nicholas Hobbs, the head of the committee responsible for drafting the very first APA ethics code, stated back in 1948 that “psychologists as a group feel the need for a formulation of standards for professional practice to encourage the highest endeavor of members of the group, to ensure public welfare, to promote sound relationships with allied professions, to reduce intra-group misunderstandings, to promote professional standing of the group as a whole” (p. 80). Undoubtedly, psychology as a scientific field has matured since then, as have the field’s ethical standards, which now represent “a bedrock of the profession” (Joyce & Rankin, 2010, p. 466).

Among the most fundamental principles of this ethics code – that is, “the topmost aspirational level of ethical behavior” (Francis, 2009, p. 65) – is honesty. Correspondingly, the reliance on deception of participants in psychological research is among the practices explicitly addressed by the ethics code, which spells out the conditions under which deception may be justifiable. Our goal herein is to scrutinize whether common practice in psychological research is aligned with these conditions, that is, whether the use of deception is clearly limited to the specified conditions and justified accordingly. As such, we do not reiterate or engage in ongoing extensive discussions on normative aspects of deception (i.e., whether and when deception may be justifiable), nor are we primarily concerned with the prevalence of deception in psychological research.1 Instead, we address and dissect empirically whether the use of and reporting on deception – once it is relied on – is compatible with our field’s ethical standards. To this end, we define deception in line with the “consensus [that] has emerged across disciplinary borders that intentional and explicit provision of erroneous information – in other words, lying – is deception, whereas withholding information about research hypotheses, the range of experimental manipulations, or the like ought not to count as deception” (Hertwig & Ortmann, 2008, p. 222). In what follows, we thus only consider acts of commission (but not acts of omission) as deception (Ortmann, 2019).

The Rules on and Use of Deception in Psychological Research

Prominently, the most recent APA ethics code states that “psychologists do not conduct a study involving deception unless they have determined that the use of deceptive techniques is justified by the study’s significant prospective scientific, educational, or applied value and that effective nondeceptive alternative procedures are not feasible” (American Psychological Association, 2017, section 8.07). Of note, other psychological societies take a similar stand – for example, the European Federation of Psychologists’ Associations (EFPA; see section 3.4 of their Meta-Code of Ethics). A somewhat less detailed but to all intents and purposes equivalent statement can already be found in the ethics code published by the APA in 1959: “Only when a problem is significant and can be investigated in no other way is the psychologist justified in giving misinformation to research subjects” (American Psychological Association, 1959, principle 16). Thus, the ethics code in our field explicitly limits the use of deception to cases of significant prospective value in which nondeceptive alternatives are not feasible – and it has done so since before most of the psychologists active in research today even studied psychology (indeed, statistically speaking, before most of them were born).

Over these decades, there have been recurring debates about whether these rules are sufficient, that is, whether deceptive research practices should be used and can be justified at all. Although very different positions have been taken on the two most prominently debated dimensions – namely, the extent of “harm done to the subject” and “harm done to the profession”2 (Baumrind, 1985) – there is at least implicit consensus among many psychologists that deception can be necessary under very specific circumstances to uphold validity or avoid still more serious ethical breaches. As such, even those arguing that deception can be necessary (Bortolotti & Mameli, 2006; Bröder, 1998; Christensen, 1988; Cook & Yamagishi, 2008; Pittenger, 2002) consistently acknowledge that it must be a well-justified “last resort” (Kimmel, 2011; Kimmel et al., 2011) and interpret the ethics code correspondingly.

However, some have argued that the “last resort” interpretation of the ethics code is undermined by the observation that deception has long been and remains a practice that is not limited to a few exceptions. Although estimates of its prevalence vary greatly depending on the subdiscipline(s) considered and the exact definition of what constitutes deception, the average estimates have virtually never been notably below 20% of studies (Seeman, 1969) and there is no indication of a decline (Hertwig & Ortmann, 2001; Kimmel, 2001; Smith et al., 2009) – leading some to go so far as to argue that deception must be justifiable because it is so prevalent both in research and everyday life (Benham, 2008). Indeed, as early as the mid-1960s, deception had become “an integral part of psychological research” (Stricker, 1967, p. 13) and it has been argued that it remains “difficult to reconcile the still relatively high prevalence of its use with the notion that deception is reserved for those cases in which the study’s prospective value justifies its use and effective alternatives are not feasible” (Hertwig & Ortmann, 2008, p. 223).

However, we argue that the mere prevalence of deception in psychological research actually offers limited insight. For one, there is too much subjectivity involved in attaching a specific expected prevalence to the term “last resort.” Second, and more importantly, it is at least conceivable that the use of deception, albeit more prevalent than some might expect a “last resort” to be, is always fully aligned with our ethics code in that all these studies are of significant value and, more crucially still, nondeceptive alternatives were never feasible. Stated simply, the prevalence of deception does little, in and of itself, to answer the fundamental question of whether psychologists, as a group, actually abide by their own rules. Indeed, one could argue that, in the long run, the trust placed in psychologists by other psychologists, other scientists, policymakers, and the public at large is determined less by whether we use deception in research at all than by whether or not we do so – prevalently or not – in line with our own ethics code. In giving psychologists the benefit of the doubt, one would thus expect that deception is always explicitly justified by (i) a study’s significant value and (ii) a thorough explanation of why nondeceptive alternatives were unavailable or clearly inferior.

The first of these two necessary conditions, a study’s significant value (i), is undeniably highly subjective and thus difficult, if not impossible, to judge in general. Also, one could take the lenient stance that the condition must have been fulfilled to a sufficient extent if a study is deemed worthy of publication. In any case, in considering studies published in peer-reviewed journals, we are going to start out with the most lenient assumption possible, namely that all researchers resorting to deception have determined that their study’s value is sufficient to warrant the use of deception and that a sufficient proportion of peers (some representative sample of whom served as editors and reviewers) agree. We will therefore not consider this criterion further and instead scrutinize whether the second condition is commonly met: the presence of a convincing, explicit justification detailing why deception was the only viable option in a given study.

Is Deception a Well-Justified “Last Resort”?

To gain insight into whether the use of deception in psychological research is aligned with our ethics code, we reviewed published studies involving deception from two active fields of research, namely dishonesty/behavioral ethics and (individual differences in) prosocial behavior. Besides the practical advantage of being able to draw on two recent meta-analyses, one from each of these fields (Gerlach et al., 2019; Thielmann et al., 2020), which had already coded whether studies used deception, a crucial advantage is that both areas incorporate several disciplines within psychology (e.g., applied psychology, evolutionary psychology, experimental/cognitive psychology, methodology, personality psychology, social psychology) and beyond. Correspondingly, the studies we reviewed were published in a wide range of renowned journals within psychology (e.g., Journal of Applied Psychology, Journal of Environmental Psychology, Journal of Experimental Psychology: General, Journal of Personality and Social Psychology, Organizational Behavior and Human Decision Processes, Psychological Science) and beyond (e.g., Academy of Management Journal, Administrative Science Quarterly, Journal of Marketing Research, Proceedings of the National Academy of Sciences of the United States of America, Psychiatry Research). As such, even though the studies reviewed are limited to two particular research areas, they are not merely representative of one narrow subdiscipline within psychology.

From both meta-analyses, we sampled published articles that had been coded as using deception by the respective authors of the meta-analyses. Specifically, we considered all 35 published articles involving deception from the meta-analysis by Gerlach et al. (2019) and additionally drew a random sample of 50 articles (out of 183 in total) from the meta-analysis by Thielmann et al. (2020). The total sample thus comprised 85 articles reporting on 120 studies that used deception.3 For each study, in turn, we coded the two criteria discussed in detail below, namely (i) whether nondeceptive alternatives were feasible and (ii) whether deception was explicitly justified in the corresponding publication. As a first step, all studies were coded by one of two research assistants with regard to whether deception was explicitly acknowledged or a justification provided (ii), given that this could alter how one judges the feasibility of nondeceptive alternatives. Next, we (the authors) coded the availability of feasible nondeceptive alternatives (i), after thoroughly discussing the criteria for coding and resolving any disagreement. The full coding table across all studies and variables coded is provided on the Open Science Framework (https://osf.io/6cmnr/).
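As a rough illustration of the sampling and coding structure just described, consider the following minimal sketch; the article identifiers, the random seed, and the coding field names are purely hypothetical placeholders (the authors’ actual coding table is available at https://osf.io/6cmnr/):

```python
import random

# Illustrative sketch only: IDs, seed, and field names are assumptions,
# not the authors' actual materials.
gerlach_articles = [f"gerlach_{i:02d}" for i in range(1, 36)]   # all 35 articles
thielmann_pool = [f"thielmann_{i:03d}" for i in range(1, 184)]  # 183 articles in total

random.seed(1)                                        # assumed seed, for reproducibility
thielmann_sample = random.sample(thielmann_pool, 50)  # random sample of 50 articles

articles = gerlach_articles + thielmann_sample        # 85 articles in the review
assert len(articles) == 85

# Each study within each article is then coded on the two criteria:
# (i) feasibility of nondeceptive alternatives, (ii) explicit justification.
coding = {
    article: {"alternatives_feasible": None, "explicit_justification": None}
    for article in articles
}
```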

Feasibility of Nondeceptive Alternatives

In judging the availability of nondeceptive alternatives, we distinguished cases in which one could at least argue that deception served to uphold validity or to avoid other ethically problematic procedures from those cases in which alternatives were clearly available, though potentially increasing researchers’ costs in terms of time, money, and/or effort – all of which may certainly represent practical hurdles, but none of which are generally sufficient to render a procedure impossible. We return to the question of such increased costs below.

In reviewing all studies, we found that in 98 (82%) studies, nondeceptive alternatives were clearly feasible. Note that this is a lower-bound estimate given that we typically classified borderline and debatable cases as rendering alternatives unfeasible. For example, we considered real interaction between individuals to be unfeasible in fMRI studies (Bereczkei et al., 2015), even though exactly this has been done before and thus evidently can be done (Bilek et al., 2015).4 Similarly, we considered a real rather than fake game of Cyberball (Kouchaki & Wareham, 2015) – involving actual other players and thus the potential occurrence of real ostracism – to be ethically no more defensible than the deceptive variant (with no actual other players). Despite these rather lenient judgment criteria, deception could have been avoided in the clear majority of cases. To give some examples, Table 1 lists some of the more frequently recurring types of deception among the studies reviewed, along with nondeceptive alternatives.

Table 1 Common examples of deception in the studies reviewed and potential nondeceptive alternatives

Explicit Justifications

Given the finding that deception was commonly avoidable, one would expect to find explicit, thorough justifications for the use of deception in the corresponding publications, that is, arguments detailing why nondeceptive alternatives were not a viable option. In stark contrast to this expectation, a statement even resembling such a justification could be identified in only 26 (22%) studies (indeed, most did not even explicitly acknowledge the use of deception). If one further requires justifications to be actually defensible ones – for instance, arguing that deception served to uphold some aspect of validity (e.g., Schurr & Ritov, 2016) or to avoid other ethically problematic procedures, rather than admitting that one was saving time, money, or effort – the total number of cases providing a defensible justification for deceiving participants amounted to 9 (8%). If one focuses only on the 98 studies identified above as involving clearly available nondeceptive alternatives, a total of 17 (17%) provided any statement resembling a justification, and 5 (5%) provided a defensible one.
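For transparency, the rounded percentages follow directly from the counts reported above; a minimal arithmetic check (counts taken from the text, variable names ours):

```python
# Recompute the reported proportions from the stated counts.
n_studies = 120
print(round(100 * 98 / n_studies, 1))  # 81.7 -> reported as 82% (feasible alternatives)
print(round(100 * 26 / n_studies, 1))  # 21.7 -> 22% (any statement resembling a justification)
print(round(100 * 9 / n_studies, 1))   # 7.5  -> 8% (defensible justification)

# Among the 98 studies with clearly feasible nondeceptive alternatives:
print(round(100 * 17 / 98, 1))         # 17.3 -> 17% (any justification)
print(round(100 * 5 / 98, 1))          # 5.1  -> 5% (defensible justification)
```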

Summary and Discussion

In summary, our brief review of published studies involving deception reveals that the modal case is deceiving participants despite available nondeceptive (albeit sometimes more costly) alternatives with no (actual) justification provided whatsoever. Thus, even leniently assuming that all studies reviewed were of sufficient value and importance to warrant deception, virtually none are clearly and indubitably aligned with the “last resort” interpretation of our profession’s ethics code. Specifically, we identified four studies (3%) in which nondeceptive alternatives were arguably not feasible and which provided a viable justification for the use of deception.

Before turning to some thoughts on where to go from here, some counterarguments deserve attention. First, one can come across the (often implicit) argument that some institutional review board (IRB) must have approved the study and that the use of deception was therefore aligned with the ethics code, alleviating the need for any further explicit justification. However, this argument remains at odds with our observation that nondeceptive alternatives would have been available in a clear majority of cases. Whereas an IRB may have received some explicit justification (despite its absence in the corresponding publication, see above) and may have positively assessed the significant value of any study, we are hard-pressed to see how IRBs could have agreed that no alternative nondeceptive procedures were feasible when the exact opposite was most commonly the case and often quite obviously so. In fact, given the inordinate extent of paperwork involved for both researchers and IRB members, it borders on the absurd that IRBs do not appear to fulfill this very basic gatekeeping function.

Second, some may argue that feasibility is also a matter of financial resources (which, in turn, determine time, effort, etc.). By implication, since such resources are often scarce, saving them might justify deception. Indeed, some of the statements found in publications may be interpreted in the corresponding spirit of not wanting to “waste” participants, for example, “Because we were interested in the number of [monetary units] that the proposer was willing to share with the responder as a measure of fair behavior, all participants were assigned the role of ‘proposer’.” (van der Schalk et al., 2012, p. 3) or “Because the focus of this research was on deception, all participants played the advisor role against a computer program.” (Zhong, 2011, p. 10). Such arguments may seem particularly convincing whenever the to-be-studied behavior is extremely rare, so that a nondeceptive variant may ultimately require running thousands of participants. However, lack of time and/or money are very delicate candidates for justifying ethically questionable behavior – which, we suspect, is exactly why they are not among the exceptions mentioned in the ethics code. It would arguably undermine the purpose of an ethics code if one were to accept any lack of resources (or indeed the mere possibility of saving resources), per se, as a viable justification for unethical conduct. For one, especially if the to-be-studied behavior is extremely rare, there need to be particularly strong arguments for a study’s “significant prospective scientific, educational, or applied value” (a necessary condition for the use of deception as per the APA ethics code). More crucially still, we argue that if a study truly yields such value, funding and resources are most unlikely to constitute a severely limiting factor.5

Third, one may point out that, in training psychologists and future researchers, ethical considerations do not exactly play a prominent role, thus leaving many quite unaware of the actual rules. Indeed, we are regularly confronted with third-year psychology students who are not only visibly shocked when shown the verbatim statement on deception from the ethics code but who respond that the one lesson they learned from their undergraduate laboratory courses was the true art of crafting believable (yet entirely false) cover stories to mislead participants. Although we must thus acknowledge that ethical standards are not exactly enforced consistently by teachers and supervisors, we do point out that every single study reviewed above was conducted in a field of research that is concerned, sometimes exclusively, with honesty, fairness, and social, moral, or ethical norms and dilemmas. Clearly, lack of awareness of ethical aspects does not appear to be a viable excuse in this field.

Fourth, some may object that deception can come in more or less severe forms, with the former causing relatively limited harm (Cook & Yamagishi, 2008; Kimmel, 2011) and thus possibly not even counting as deception. First off, we must note again that we herein followed the consensual definition that only acts of commission count as deception (see Ortmann, 2019), whereas acts of omission do not (necessarily). But even setting aside that we are relying on a well-established definition of what constitutes deception (and relied on meta-analyses which had previously coded which studies involved deception), this argument is questionable: The severity of deception is highly subjective and ultimately so difficult to judge that it would represent a gaping loophole. In any case, we can attest to the fact that a majority of the studies reviewed herein actively provided false information to participants, typically in a way that fundamentally changed participants’ representation of the task at hand (see Table 1). Consider, for instance, the commonplace example of studies falsely claiming that participants were interacting with another participant (e.g., Cohen et al., 2011, Study 2; Cornelissen et al., 2011, Study 1; Kouchaki & Smith, 2014, Study 3; Schönbrodt & Gerstenberg, 2012, Study 4; Utz, 2004, Study 2; Wang et al., 2017), typically in allocating resources (money) between the two, as is the case in the widely studied Dictator Game (Forsythe et al., 1994). If no such other participant actually exists or one does not actually interact with someone, this is not “only” an active and intentional lie; it also fundamentally alters the representation of the task: Knowing that one is not actually sharing an endowment with another participant (who was merely unlucky in being randomly assigned the role of the recipient), but essentially deciding how much of one’s endowment to return to the experimenter, would arguably change one’s behavior entirely. Other recurring examples are the use of confederates (e.g., Gino et al., 2009, Studies 1 and 2; Kato et al., 2012; Piff et al., 2012, Study 6; Sandoval et al., 2016; Uziel & Hefetz, 2014, Study 3; Velez, 2015) and the provision of false/bogus feedback (e.g., Gu et al., 2013, Studies 1–3; Joireman et al., 2009, Studies 1 and 2; Tazelaar et al., 2004, Studies 1 and 2; Wood et al., 1973) or other types and forms of completely false claims (e.g., Gino et al., 2010, Study 1; Gino & Galinsky, 2012, Studies 1–4; Gino & Wiltermuth, 2014, Studies 2 and 3; Van Lange & Visser, 1999; Wood et al., 1973). We maintain that none of these can reasonably be argued to be “mild” forms of deception (whatever that may be exactly), and thus invoking the present counterargument cannot change the conclusion that deception is often used despite available nondeceptive alternatives and without explicit justification.

Finally, one may argue that our brief review of the literature is simply not representative of other research areas or subdisciplines within psychology, especially those that rely on deception only very rarely. For one, we do not claim that the problem identified applies equally to all areas of behavioral research, but merely provide an existence proof: There is a non-trivial discrepancy between the “last resort” interpretation of our ethics code and documented research practices. Any such discrepancy calls for a discussion of its potential consequences for trust in the profession and of whether and which steps may be taken to reduce it. Moreover, independent of how broad and thus representative the reviewed research areas are, their practices are legitimized through publication in some of the most highly respected outlets in the behavioral sciences. Thus, even if the two research areas we studied happened to be the only two bad apples in an otherwise unblemished barrel, they are the ones on display and thus most likely to trigger imitation by other (future) behavioral scientists (as the saying goes, one bad apple may spoil the bunch).

Conclusions and Where to Go From Here?

Even those arguing that deception can be necessary and thus cannot be abandoned altogether typically consider it a “last resort.” However, even assuming that psychologists exclusively conduct studies of “significant prospective scientific, educational, or applied value,” our findings demonstrate a gap between our ethics code and published studies. This very fact, we maintain, may be detrimental to trust in our profession. Indeed, one of the first insights we appear to instill in our undergraduates (be it as participants or experimenter trainees) is that we cannot be trusted to always/fully abide by our own ethical standards. How, then, can we expect to be trusted by our peers, let alone the general public?

Although ethical dilemmas are, by definition, fraught with difficulties, we actually have a positive outlook to offer from our brief literature review: In many cases, deception can be avoided. Very often, this will require only a second thought and/or the willingness to invest some (more) time and money (see Table 1). This is actually very good news because it implies that we can render common practice compatible with our ethics code, in turn limiting deception to the cases it is actually reserved for and thus truly making it a “last resort.” Initially, this will require that researchers and authors take their responsibility to the ethics code more seriously and that IRBs require a far more thorough explanation of why nondeceptive alternatives are truly unfeasible rather than (we suspect) accepting scarce resources as sufficient.

As an aside, concerning the problem of resources, let us also point out that hypothetical scenarios or non-monetary incentives can constitute a viable alternative that requires neither deception nor extensive resources. Often enough, hypothetical situations produce highly comparable results both in economic games (Thielmann et al., 2020) and in more complex paradigms – even those pointed to as necessitating deception, such as Milgram’s infamous obedience study (Geller, 1978). Of course, hypothetical scenarios have limitations compared with assessing truly consequential behavior (Hertwig & Ortmann, 2001), simply because participants may not be able or willing to anticipate how they would behave – even for very simple choices such as which of two chocolates to buy (Klein & Hilbig, 2019). Thus, we do not recommend hypothetical designs over truly consequential ones (on the contrary), but we do recommend them over the use of deception – especially because hypothetical versus consequential is a matter of validity that can be criticized by reviewers and editors and likely fixed in a replication, unlike a breach of ethical standards. Moreover, whenever incentives are strictly necessary (as is the case in the cheating paradigms studied in one of the meta-analyses we considered; Gerlach et al., 2019), non-monetary incentives can be a viable low-cost option. For example, experiments have used the possibility to skip a few boring tasks as an alternative, non-monetary incentive and found results highly comparable to those obtained with monetary incentives (Hilbig & Zettler, 2015; Moshagen et al., 2020).

However, mere appeals encouraging researchers to think more carefully about alternatives to deception or reminding IRBs of the rules they ought to be enforcing may admittedly change very little. Realistically, a true shift in practice can only be achieved if the gatekeepers of our science – the editors and reviewers – start demanding explicit justifications for the use of deception and pushing back against poorly justified cases. At the very least, whenever reviewers do spot unnecessary use of deception, editors should take the reviewers’ concerns seriously. By contrast, mere lip service will not do the trick: Flagship journals of our profession, such as the Journal of Applied Psychology, Journal of Experimental Psychology: General, or Journal of Personality and Social Psychology, require adherence to the ethics code,6 even asking authors to certify (by signature) that they “have complied with the APA ethical principles regarding research with human participants,” but nonetheless publish studies using deception despite clearly available nondeceptive alternatives without any explicit justifications provided (as our review demonstrates).

Possibly, rather than or in addition to asking authors to sign a whole host of documents (the details of which are probably read by many as thoroughly as the notorious terms and conditions of a product one urgently needs), journals should require – upon submission – an explicit statement in every article that no deception was used (e.g., “This study did not involve deception of participants.”) or an explanation of why it was indeed an unavoidable “last resort.”7 Take, for example, the Zeitschrift für Psychologie, which has recently implemented the requirement that authors confirm the following upon submitting manuscripts: “In dealing with (human) participants, I/we have obtained appropriate informed consent, refrained from deception of participants, and fully debriefed participants. Any deviation from these rules is based on an explicit justification which is given in the manuscript and, additionally, the submission cover letter” (see Ethics and research transparency statement, https://www.hgf.io/zfp). This essentially imitates the approach recently implemented by a growing number of journals to enforce adherence to open science practices, for example, by asking authors to explicitly confirm that they have specified how they determined their sample size or that they reported all data exclusions (if any). We predict that such an approach will reduce the prevalence of deception overall and especially the cases in which it is used needlessly, eventually leading to more research conducted in line with our profession’s ethics code.

Finally, as the empirical fact of authors certifying their compliance with the ethics code by signature arguably also implies, an alternative conclusion to our entire case is that researchers are actually in the clear because the rules entail sufficient interpretative wiggle room. Indeed, one could reconcile the practice demonstrated above with the ethical rules by assigning “significant value” to most if not all studies and, more importantly, allowing “not feasible” to include reasons other than upholding validity/experimental control or avoiding still more serious ethical breaches (e.g., reasons involving lack of resources or lack of ideas). Ultimately, then, deception is not a “last resort” and the ethics code itself would actually uphold its fundamental principle of honesty best by acknowledging that when it comes to deception, anything goes. By contrast, we remain optimistic that few would accept the argument that lack of money rendered the alternative of reporting one’s actual income on a tax form “unfeasible” or that lack of time rendered adhering to the speed limit “unfeasible.” Thus, at least by implication, we argue that one ought to expect nothing less from the ethics code of a profession that defines itself by wanting to understand people and improve their lives.

We thank Arndt Bröder and Stefan Pfattheicher for extremely helpful and thorough feedback on earlier versions of the manuscript as well as Oliver Lowack and Alexander Nicolay for assistance in coding studies.

Benjamin E. Hilbig is a professor of psychology and head of the Cognitive Psychology Lab, University of Koblenz-Landau. His research focuses on judgment and decision-making, social and ethical behavior, statistical modeling, and personality.

Isabel Thielmann is a postdoctoral researcher at the Cognitive Psychology Lab, University of Koblenz-Landau. Her research focuses on understanding individual differences in ethical and prosocial behavior.

Robert Böhm is a professor of applied social psychology and behavioral science at the Department of Psychology and Department of Economics, University of Copenhagen. His research interests are at the intersection of social psychology and behavioral economics, with a main focus on human decisions in social interactions.

1When it comes to these aspects, our case only requires (i) that one agrees on the minimal normative position that the use of deception should be limited in some way and (ii) that deception is actually used by some psychologists sometimes (though potentially varying greatly depending on discipline, research topic, etc.).

2Of note, even setting aside ethical considerations, it is likely also self-serving for researchers to avoid deception because it may well endanger (rather than serve to uphold) validity: Once participants are accustomed to and indeed expect deception in our laboratories, one cannot reasonably argue that they will still be willing to take the rules and instructions of any experiment at face value. In turn, it is questionable what one is actually measuring if, based on their prior experiences, participants construct an entirely different (and unknown) perception of the situation at hand.

3The proportion of articles reporting at least one study coded to have used deception was 21% and 33% in the two meta-analyses, respectively. Note, again, that the prevalence is not essential for our investigation, which requires only that deception was used in a particular study.

4Note, also, that there are still more feasible nondeceptive alternatives than using two fMRI scanners at the same time (as in Bilek et al., 2015). For example, for most paradigms used in the study of prosociality one can rely on pre-assessment of partner behavior or the so-called strategy method, that is, assess behavioral responses to all potential partner behaviors, with the actual matching taking place post hoc (Selten, 1967).
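To make the post hoc matching concrete, here is a minimal sketch of the strategy method in an assumed binary, sequential trust-game-like setting; the participant IDs, action labels, and stated responses are illustrative assumptions, not any reviewed study’s actual design:

```python
import random

# Second movers state a response to *every* possible first-mover action,
# so no fake partner is ever needed.
second_mover_profiles = {
    "p1": {"send": "return_half", "keep": "return_nothing"},
    "p2": {"send": "return_nothing", "keep": "return_nothing"},
}

# First movers decide normally (possibly in a separate, earlier session).
first_mover_choices = {"p3": "send", "p4": "keep"}

# Post hoc matching: pairs of real participants are formed after the fact,
# and outcomes follow from the first mover's actual choice combined with
# the second mover's stated response to exactly that choice.
first_movers = list(first_mover_choices)
second_movers = list(second_mover_profiles)
random.shuffle(second_movers)
for first, second in zip(first_movers, second_movers):
    action = first_mover_choices[first]
    response = second_mover_profiles[second][action]
    print(f"{first} chose '{action}'; {second} had committed to '{response}'")
```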

5We do acknowledge that this line of argument may be questionable whenever financially or structurally disadvantaged researchers (e.g., from low income countries) are responsible. However, this was not the case for any of the studies included in our review as first authors’ affiliations were all from high-income or upper-middle-income countries (according to per capita gross national income threshold levels established by the World Bank).

6All state in their submission guidelines that “Authors are required to state in writing that they have complied with APA ethical standards in the treatment of their sample, human or animal, or to describe the details of treatment.”

7To further “nudge” authors and thus avoid mindless confirmation of such a requirement, journals could request authors to indicate the exact position of the “deception statement” within the manuscript. This approach has been successfully implemented with regard to information on Open Science practices, for example, by the Journal of Experimental Social Psychology.

References

  • American Psychological Association. (1959). Ethical standards of psychologists. American Psychologist, 14(6), 279–282. https://doi.org/10.1037/h0048469

  • American Psychological Association. (2017). Ethical principles of psychologists and code of conduct. https://www.apa.org/ethics/code/ethics-code-2017.pdf

  • Barends, A. J., de Vries, R. E., & van Vugt, M. (2019). Power influences the expression of Honesty-Humility: The power-exploitation affordances hypothesis. Journal of Research in Personality, 82, Article 103856. https://doi.org/10.1016/j.jrp.2019.103856

  • Baumrind, D. (1985). Research using intentional deception: Ethical issues revisited. American Psychologist, 40(2), 165–174. https://doi.org/10.1037/0003-066X.40.2.165

  • Benham, B. (2008). The ubiquity of deception and the ethics of deceptive research. Bioethics, 22, 147–156.

  • Bereczkei, T., Papp, P., Kincses, P., Bodrogi, B., Perlaki, G., Orsi, G., & Deak, A. (2015). The neural basis of the Machiavellians’ decision making in fair and unfair situations. Brain and Cognition, 98, 53–64. https://doi.org/10.1016/j.bandc.2015.05.006

  • Bilek, E., Ruf, M., Schafer, A., Akdeniz, C., Calhoun, V. D., Schmahl, C., Demanuele, C., Tost, H., Kirsch, P., & Meyer-Lindenberg, A. (2015). Information flow between interacting human brains: Identification, validation, and relationship to social expertise. Proceedings of the National Academy of Sciences of the United States of America, 112(16), 5207–5212. https://doi.org/10.1073/pnas.1421831112

  • Bortolotti, L., & Mameli, M. (2006). Deception in psychology: Moral costs and benefits of unsought self-knowledge. Accountability in Research, 13(3), 259–275. https://doi.org/10.1080/08989620600848561

  • Bröder, A. (1998). Deception can be acceptable. American Psychologist, 53(7), 805–806. https://doi.org/10.1037/h0092168

  • Bucciol, A., & Piovesan, M. (2011). Luck or cheating? A field experiment on honesty with children. Journal of Economic Psychology, 32(1), 73–78. https://doi.org/10.1016/j.joep.2010.12.001

  • Christensen, L. (1988). Deception in psychological research: When is its use justified? Personality and Social Psychology Bulletin, 14(4), 664–675. https://doi.org/10.1177/0146167288144002

  • Cohen, T. R., Wolf, S. T., Panter, A. T., & Insko, C. A. (2011). Introducing the GASP scale: A new measure of guilt and shame proneness. Journal of Personality and Social Psychology, 100(5), 947–966. https://doi.org/10.1037/a0022641

  • Cook, K. S., & Yamagishi, T. (2008). A defense of deception on scientific grounds. Social Psychology Quarterly, 71(3), 215–221. https://doi.org/10.1177/019027250807100303

  • Cornelissen, G., Dewitte, S., & Warlop, L. (2011). Are social value orientations expressed automatically? Decision making in the dictator game. Personality and Social Psychology Bulletin, 37(8), 1080–1090. https://doi.org/10.1177/0146167211405996

  • Fiedler, S., Glöckner, A., Nicklisch, A., & Dickert, S. (2013). Social value orientation and information search in social dilemmas: An eye-tracking analysis. Organizational Behavior and Human Decision Processes, 120(2), 272–284. https://doi.org/10.1016/j.obhdp.2012.07.002

  • Fischbacher, U., & Föllmi-Heusi, F. (2013). Lies in disguise – An experimental study on cheating. Journal of the European Economic Association, 11(3), 525–547. https://doi.org/10.1111/jeea.12014

  • Forsythe, R., Horowitz, J. L., Savin, N. E., & Sefton, M. (1994). Fairness in simple bargaining experiments. Games and Economic Behavior, 6(3), 347–369.

  • Francis, R. D. (2009). Ethics for psychologists (2nd ed.). British Psychological Society.

  • Geller, D. M. (1978). Involvement in role-playing simulations: A demonstration with studies on obedience. Journal of Personality and Social Psychology, 36, 219–235.

  • Gerlach, P., Teodorescu, K., & Hertwig, R. (2019). The truth about lies: A meta-analysis on dishonest behavior. Psychological Bulletin, 145(1), 1–44. https://doi.org/10.1037/bul0000174

  • Gerpott, F. H., Balliet, D., Columbus, S., Molho, C., & De Vries, R. E. (2018). How do people think about interdependence? A multidimensional model of subjective outcome interdependence. Journal of Personality and Social Psychology, 115(4), 716–742. https://doi.org/10.1037/pspp0000166

  • Gino, F., Ayal, S., & Ariely, D. (2009). Contagion and differentiation in unethical behavior: The effect of one bad apple on the barrel. Psychological Science, 20(3), 393–398. https://doi.org/10.1111/j.1467-9280.2009.02306.x

  • Gino, F., & Galinsky, A. D. (2012). Vicarious dishonesty: When psychological closeness creates distance from one’s moral compass. Organizational Behavior and Human Decision Processes, 119(1), 15–26. https://doi.org/10.1016/j.obhdp.2012.03.011

  • Gino, F., Norton, M. I., & Ariely, D. (2010). The counterfeit self: The deceptive costs of faking it. Psychological Science, 21(5), 712–720. https://doi.org/10.1177/0956797610366545

  • Gino, F., & Wiltermuth, S. S. (2014). Evil genius? How dishonesty can lead to greater creativity. Psychological Science, 25(4), 973–981. https://doi.org/10.1177/0956797614520714

  • Gross, J., & De Dreu, C. K. W. (2019). Individual solutions to shared problems create a modern tragedy of the commons. Science Advances, 5(4), Article eaau7296. https://doi.org/10.1126/sciadv.aau7296

  • Gu, J., Zhong, C.-B., & Page-Gould, E. (2013). Listen to your heart: When false somatic feedback shapes moral behavior. Journal of Experimental Psychology: General, 142(2), 307–312. https://doi.org/10.1037/a0029549

  • Hertwig, R., & Ortmann, A. (2001). Experimental practices in economics: A methodological challenge for psychologists? Behavioral and Brain Sciences, 24(3), 383–451.

  • Hertwig, R., & Ortmann, A. (2008). Deception in social psychological experiments: Two misconceptions and a research agenda. Social Psychology Quarterly, 71(3), 222–227. https://doi.org/10.1177/019027250807100304

  • Hilbig, B. E., & Zettler, I. (2015). When the cat’s away, some mice will play: A basic trait account of dishonest behavior. Journal of Research in Personality, 57, 72–88.

  • Hobbs, N. (1948). The development of a code of ethical standards for psychology. American Psychologist, 3(3), 80–84. https://doi.org/10.1037/h0060281

  • Joireman, J., Posey, D. C., Truelove, H. B., & Parks, C. D. (2009). The environmentalist who cried drought: Reactions to repeated warnings about depleting resources under conditions of uncertainty. Journal of Environmental Psychology, 29(2), 181–192. https://doi.org/10.1016/j.jenvp.2008.10.003

  • Joyce, N. R., & Rankin, T. J. (2010). The lessons of the development of the first APA ethics code: Blending science, practice, and politics. Ethics & Behavior, 20(6), 466–481. https://doi.org/10.1080/10508422.2010.521448

  • Kato, T. A., Watabe, M., Tsuboi, S., Ishikawa, K., Hashiya, K., Monji, A., Utsumi, H., & Kanba, S. (2012). Minocycline modulates human social decision-making: Possible impact of microglia on personality-oriented social behaviors. PLoS One, 7(7), Article e40461. https://doi.org/10.1371/journal.pone.0040461

  • Kimmel, A. J. (2001). Ethical trends in marketing and psychological research. Ethics & Behavior, 11(2), 131–149. https://doi.org/10.1207/S15327019EB1102_2

  • Kimmel, A. J. (2011). Deception in psychological research – A necessary evil? The Psychologist, 24(8), 580–585.

  • Kimmel, A. J., Smith, N. C., & Klein, J. G. (2011). Ethical decision making and research deception in the behavioral sciences: An application of social contract theory. Ethics & Behavior, 21(3), 222–251.

  • Klein, S. A., & Hilbig, B. E. (2019). On the lack of real consequences in consumer choice research. Experimental Psychology, 66(1), 68–76. https://doi.org/10.1027/1618-3169/a000420

  • Kouchaki, M., & Smith, I. H. (2014). The morning morality effect: The influence of time of day on unethical behavior. Psychological Science, 25(1), 95–102. https://doi.org/10.1177/0956797613498099

  • Kouchaki, M., & Wareham, J. (2015). Excluded and behaving unethically: Social exclusion, physiological responses, and unethical behavior. The Journal of Applied Psychology, 100(2), 547–556. https://doi.org/10.1037/a0038034

  • Kurzban, R., & Houser, D. (2001). Individual differences in cooperation in a circular public goods game. European Journal of Personality, 15(1), S37–S52. https://doi.org/10.1002/per.420

  • McClure, M. J., Bartz, J. A., & Lydon, J. E. (2013). Uncovering and overcoming ambivalence: The role of chronic and contextually activated attachment in two-person social dilemmas. Journal of Personality, 81(1), 103–117. https://doi.org/10.1111/j.1467-6494.2012.00788.x

  • Mischkowski, D., & Glöckner, A. (2016). Spontaneous cooperation for prosocials, but not for proselfs: Social value orientation moderates spontaneous cooperation behavior. Scientific Reports, 6, Article 21555. https://doi.org/10.1038/srep21555

  • Moshagen, M., & Hilbig, B. E. (2017). The statistical analysis of cheating paradigms. Behavior Research Methods, 49, 724–732. https://doi.org/10.3758/s13428-016-0729-x

  • Moshagen, M., Zettler, I., & Hilbig, B. E. (2020). Measuring the dark core of personality. Psychological Assessment, 32, 182–196.

  • Müller, S., & Moshagen, M. (2019). True virtue, self-presentation, or both? A behavioral test of impression management and overclaiming. Psychological Assessment, 31(2), 181–191. https://doi.org/10.1037/pas0000657

  • Ortmann, A. (2019). Deception. In A. Schram & A. Ule (Eds.), Handbook of research methods and applications in experimental economics (pp. 28–38). Edward Elgar Publishing.

  • Paz, V., Nicolaisen-Sobesky, E., Collado, E., Horta, S., Rey, C., Rivero, M., Berriolo, P., Díaz, M., Otón, M., Pérez, A., Fernández-Theoduloz, G., Cabana, Á., & Gradin, V. B. (2017). Effect of self-esteem on social interactions during the Ultimatum Game. Psychiatry Research, 252, 247–255. https://doi.org/10.1016/j.psychres.2016.12.063

  • Pfattheicher, S., & Böhm, R. (2018). Honesty-Humility under threat: Self-uncertainty destroys trust among the nice guys. Journal of Personality and Social Psychology, 114(1), 179–194. https://doi.org/10.1037/pspp0000144

  • Piff, P. K., Stancato, D. M., Cote, S., Mendoza-Denton, R., & Keltner, D. (2012). Higher social class predicts increased unethical behavior. Proceedings of the National Academy of Sciences of the United States of America, 109(11), 4086–4091. https://doi.org/10.1073/pnas.1118373109

  • Pittenger, D. J. (2002). Deception in research: Distinctions and solutions from the perspective of utilitarianism. Ethics & Behavior, 12(2), 117–142. https://doi.org/10.1207/S15327019EB1202_1

  • Rockenbach, B., & Milinski, M. (2006). The efficient interaction of indirect reciprocity and costly punishment. Nature, 444(7120), 718–723. https://doi.org/10.1038/nature05229

  • Sandoval, E. B., Brandstetter, J., Obaid, M., & Bartneck, C. (2016). Reciprocity in human-robot interaction: A quantitative approach through the Prisoner’s Dilemma and the Ultimatum Game. International Journal of Social Robotics, 8(2), 303–317. https://doi.org/10.1007/s12369-015-0323-x

  • Schlenker, B. R., Helm, B., & Tedeschi, J. T. (1973). The effects of personality and situational variables on behavioral trust. Journal of Personality and Social Psychology, 25(3), 419–427. https://doi.org/10.1037/h0034088

  • Schönbrodt, F. D., & Gerstenberg, F. X. R. (2012). An IRT analysis of motive questionnaires: The Unified Motive Scales. Journal of Research in Personality, 46(6), 725–742. https://doi.org/10.1016/j.jrp.2012.08.010

  • Schurr, A., & Ritov, I. (2016). Winning a competition predicts dishonest behavior. Proceedings of the National Academy of Sciences of the United States of America, 113(7), 1754–1759. https://doi.org/10.1073/pnas.1515102113

  • Seeman, J. (1969). Deception in psychological research. American Psychologist, 24(11), 1025–1028. https://doi.org/10.1037/h0028839

  • Selten, R. (1967). Die Strategiemethode zur Erforschung des eingeschränkt rationalen Verhaltens im Rahmen eines Oligopolexperimentes [The strategy method as a tool to analyze bounded rationality in oligopoly experiments]. In H. Sauermann (Ed.), Beiträge zur Experimentellen Wirtschaftsforschung (pp. 136–168). J. C. B. Mohr.

  • Shalvi, S., Dana, J., Handgraaf, M. J. J., & De Dreu, C. K. W. (2011). Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organizational Behavior and Human Decision Processes, 115(2), 181–190. https://doi.org/10.1016/j.obhdp.2011.02.001

  • Smith, N. C., Kimmel, A. J., & Klein, J. G. (2009). Social contract theory and the ethics of deception in consumer research. Journal of Consumer Psychology, 19(3), 486–496. https://doi.org/10.1016/j.jcps.2009.04.007

  • Stricker, L. J. (1967). The true deceiver. Psychological Bulletin, 68(1), 13–20. https://doi.org/10.1037/h0024698

  • Tazelaar, M. J. A., Van Lange, P. A. M., & Ouwerkerk, J. W. (2004). How to cope with “noise” in social dilemmas: The benefits of communication. Journal of Personality and Social Psychology, 87(6), 845–859. https://doi.org/10.1037/0022-3514.87.6.845

  • Thielmann, I., Spadaro, G., & Balliet, D. (2020). Personality and prosocial behavior: A theoretical framework and meta-analysis. Psychological Bulletin, 146(1), 30–90. https://doi.org/10.1037/bul0000217

  • Utz, S. (2004). Self-activation is a two-edged sword: The effects of I primes on cooperation. Journal of Experimental Social Psychology, 40(6), 769–776. https://doi.org/10.1016/j.jesp.2004.03.001

  • Uziel, L., & Hefetz, U. (2014). The selfish side of self-control. European Journal of Personality, 28(5), 449–458. https://doi.org/10.1002/per.1972

  • van der Schalk, J., Bruder, M., & Manstead, A. S. R. (2012). Regulating emotion in the context of interpersonal decisions: The role of anticipated pride and regret. Frontiers in Psychology, 3, Article 513. https://doi.org/10.3389/fpsyg.2012.00513

  • Van Lange, P. A. M., & Visser, K. (1999). Locomotion in social dilemmas: How people adapt to cooperative, tit-for-tat, and noncooperative partners. Journal of Personality and Social Psychology, 77(4), 762–773. https://doi.org/10.1037/0022-3514.77.4.762

  • Velez, J. A. (2015). Extending the theory of Bounded Generalized Reciprocity: An explanation of the social benefits of cooperative video game play. Computers in Human Behavior, 48, 481–491. https://doi.org/10.1016/j.chb.2015.02.015

  • Wang, Y., Jing, Y., Zhang, Z., Lin, C., & Valadez, E. A. (2017). How dispositional social risk-seeking promotes trusting strangers: Evidence based on brain potentials and neural oscillations. Journal of Experimental Psychology: General, 146(8), 1150–1163. https://doi.org/10.1037/xge0000328

  • Wood, D., Pilisuk, M., & Uren, E. (1973). The martyr’s personality: An experimental investigation. Journal of Personality and Social Psychology, 25(2), 177–186. https://doi.org/10.1037/h0033969

  • Zhong, C.-B. (2011). The ethical dangers of deliberative decision making. Administrative Science Quarterly, 56(1), 1–25. https://doi.org/10.2189/asqu.2011.56.1.001